CN116109657A - Geographic information data acquisition processing method, system, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116109657A
CN116109657A (application CN202310181121.XA)
Authority
CN
China
Prior art keywords
geographic
target geographic
unmanned aerial
target
aerial vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310181121.XA
Other languages
Chinese (zh)
Other versions
CN116109657B (en)
Inventor
茹国成 (Ru Guocheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongyu Beijing New Technology Development Co ltd Of China Academy Of Civil Aviation Science And Technology
Original Assignee
Zhongyu Beijing New Technology Development Co ltd Of China Academy Of Civil Aviation Science And Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongyu Beijing New Technology Development Co ltd Of China Academy Of Civil Aviation Science And Technology filed Critical Zhongyu Beijing New Technology Development Co ltd Of China Academy Of Civil Aviation Science And Technology
Priority to CN202310181121.XA
Publication of CN116109657A
Application granted
Publication of CN116109657B
Current legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 7/00: Image analysis
                    • G06T 7/10: Segmentation; Edge detection
                        • G06T 7/11: Region-based segmentation
                        • G06T 7/13: Edge detection
                        • G06T 7/187: involving region growing; region merging; connected component labelling
                • G06T 3/00: Geometric image transformations in the plane of the image
                    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
                        • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
                • G06T 5/00: Image enhancement or restoration
                    • G06T 5/20: Image enhancement or restoration using local operators
                        • G06T 5/30: Erosion or dilatation, e.g. thinning
                • G06T 2207/00: Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10: Image acquisition modality
                        • G06T 2207/10032: Satellite or aerial image; Remote sensing
                    • G06T 2207/20: Special algorithmic details
                        • G06T 2207/20081: Training; Learning
                    • G06T 2207/30: Subject of image; Context of image processing
                        • G06T 2207/30204: Marker
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
                • Y02T 10/00: Road transport of goods or passengers
                    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
                        • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • Instructional Devices (AREA)

Abstract

The invention discloses a geographic information data acquisition and processing method, system, electronic device and storage medium. The method comprises the following steps: S1, planning and dividing a target geographic area to obtain a plurality of target geographic sub-areas; S2, acquiring, by unmanned aerial vehicle, a bird's-eye view of each target geographic sub-area according to a preset unmanned aerial vehicle acquisition rule; S3, fusing and stitching all bird's-eye views of each target geographic sub-area to obtain a fused bird's-eye view covering the whole sub-area; S4, constructing and training an instance segmentation model, performing instance segmentation on the cropped unit images of each target geographic sub-area with the model, and then arranging the segmentation entities of all cropped unit images according to the geographic coordinate map and fusing them to obtain a geographic entity distribution map. The invention obtains a comprehensive and accurate geographic entity distribution map of the whole target geographic area, from which the real situation of the geographic distribution can be accurately obtained.

Description

Geographic information data acquisition processing method, system, electronic equipment and storage medium
Technical Field
The present invention relates to the field of data acquisition and processing technologies, and in particular, to a geographic information data acquisition and processing method, system, electronic device, and storage medium.
Background
With the development of the big-data age, human-earth interaction big data plays an increasingly important role in human-earth system research, national economic construction and national strategic requirements, which has promoted geographic information systems that integrate big-data acquisition, processing and analysis. A geographic information system is a special and very important spatial information system: a technical system that, supported by computer hardware and software, collects, stores, manages, computes, analyzes, displays and describes geographic distribution data over all or part of the earth's surface (including the atmosphere).
A geographic information system usually needs to collect a large amount of geographic information. In the prior art, manual acquisition involves a heavy workload and is generally suitable only for small, important areas. Acquisition by remote sensing equipment can cover large areas, but everything is acquired at a uniform resolution or acquisition quality, so the collected data are of low quality, mainly suited to macroscopic trend analysis, and difficult to process; the acquired geographic information data may be poor or even unusable and cannot accurately reflect the real situation of the geographic distribution.
Disclosure of Invention
The invention aims to overcome the technical problems identified in the background art and provides a geographic information data acquisition and processing method: the target geographic area is planned, an unmanned aerial vehicle is controlled according to the plan to obtain a comprehensive, high-quality fused bird's-eye view of the target geographic area, and instance segmentation and boundary fusion are then performed on the fused bird's-eye view to achieve complete and accurate geographic entity division. A comprehensive and accurate geographic entity distribution map of the whole target geographic area is thereby obtained, from which the real situation of the geographic distribution can be accurately obtained.
The aim of the invention is achieved by the following technical scheme:
in a first aspect, a method for collecting and processing geographic information data includes the following steps:
s1, acquiring a top view image of a target geographic area containing geographic data, planning and dividing the target geographic area to obtain a plurality of target geographic sub-areas, and constructing a geographic coordinate map corresponding to the target geographic area;
s2, presetting an unmanned aerial vehicle acquisition rule for each target geographic subarea, and acquiring a bird' S eye view of each target geographic subarea by the unmanned aerial vehicle according to the preset unmanned aerial vehicle acquisition rule;
s3, performing fusion splicing processing on all the aerial views of the target geographic subregion to obtain a fusion aerial view covering the whole target geographic subregion, correspondingly filling the fusion aerial view into a geographic coordinate map, and cutting the fusion aerial view to obtain a plurality of cutting unit images;
s4, an example segmentation model is built and trained, example segmentation is carried out on the cutting unit diagram of the target geographic subregion through the example segmentation model, each segmentation entity of the cutting unit diagram is obtained, and then the segmentation entities of all the cutting unit diagrams are correspondingly arranged according to the geographic coordinate map and are fused to obtain a geographic entity distribution diagram.
The step S1 in the geographic information data acquisition and processing method comprises the following steps:
s11, detecting edge area lines of a top view image of a target geographic area, extracting vertexes of all convex parts of the edge area lines, and connecting the vertexes to form an initial area to be divided;
and S12, planning the area of the initial area to be divided and dividing the initial area to obtain a plurality of target geographical sub-areas, wherein the target geographical sub-areas are arc-shaped.
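Step S11 can be sketched as follows. The patent does not name a specific algorithm for "extracting the vertices of all convex parts and connecting them", so a convex hull (Andrew's monotone chain) is used here as an assumed, standard realization; the sample edge points are illustrative.

```python
# Sketch of step S11: connect the convex vertices of the detected edge
# area line into an initial area. A convex hull is one standard way to do
# this; the choice of algorithm is an assumption, not stated in the patent.

def convex_hull(points):
    """Return hull vertices in counter-clockwise order (monotone chain)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Edge-line sample points of a hypothetical target area: a square outline
# plus points that are not convex vertices.
edge_points = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (2, 0)]
initial_region = convex_hull(edge_points)
print(initial_region)   # the four square corners
```

Only the four corners survive as convex vertices; interior and collinear points are dropped, which matches the intent of forming the initial area from convex-part vertices.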
In the geographic information data acquisition and processing method, the first preset unmanned aerial vehicle acquisition rule of step S2 is as follows:
a plurality of unmanned aerial vehicles are deployed in a target geographic sub-area, the flight access point of each unmanned aerial vehicle being a point on the edge line of the target geographic sub-area, with an included angle of α = 360°/M between adjacent flight access points, where M is the total number of unmanned aerial vehicles; the target geographic sub-area is divided into tasks among the unmanned aerial vehicles according to the flight access points, the unmanned aerial vehicles shoot with overlap, and together they fully cover the target geographic sub-area to obtain its bird's-eye view.
In the geographic information data acquisition and processing method, the second preset unmanned aerial vehicle acquisition rule of step S2 is as follows:
one unmanned aerial vehicle is deployed in the target geographic sub-area; it performs full-coverage shooting of the sub-area in an equidistant spiral flight mode to obtain its bird's-eye view, moving a distance ΔR1 toward the center point after completing each spiral loop.
In the geographic information data acquisition and processing method, the stitching in step S3 uses the RANSAC algorithm; after the cropping in step S3, each cropped unit image is position-coded according to its location in the geographic coordinate map, so that its position in the map is recorded.
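The cropping and position coding of step S3 can be sketched as follows; the tile size and the `row_col` code format are assumptions for illustration, not specified by the patent.

```python
import numpy as np

# Sketch of step S3's cropping: the fused bird's-eye view is cut into
# fixed-size unit images, each tagged with a position code recording where
# it sits in the geographic coordinate map grid.

def crop_with_position_codes(fused_view, tile):
    h, w = fused_view.shape[:2]
    tiles = {}
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            code = f"{r // tile}_{c // tile}"   # position code in the map grid
            tiles[code] = fused_view[r:r + tile, c:c + tile]
    return tiles

fused = np.arange(64).reshape(8, 8)        # stand-in for a fused bird's-eye view
tiles = crop_with_position_codes(fused, 4)
print(sorted(tiles))                        # ['0_0', '0_1', '1_0', '1_1']
```

Recording the code with each cropped unit image lets the segmentation entities of step S4 be placed back at the correct map position later.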
The segmentation entity processing of step S4 in the geographic information data acquisition and processing method comprises the following steps:
the instance segmentation model is constructed based on the HTC (Hybrid Task Cascade) instance segmentation algorithm; the trained model performs instance segmentation on the cropped unit images of the fused bird's-eye view of a target geographic sub-area to obtain the segmentation entities of the sub-area, and each segmentation entity is coded to build a coding table;
for the target geographic sub-area, an all-zero mask of the same size as the fused bird's-eye view, with all pixels zero, is generated; the codes and position information of the sub-area's segmentation entities are read from the coding table, the segmentation instances are refilled on the all-zero mask at the positions given by the position information, and all segmentation entities are restored onto the mask, so that each target geographic sub-area is represented by a mask;
the boundary lines of the masks of all segmentation entities of the target geographic sub-area are detected and a dilation operation is applied to them; the codes of segmentation entities whose boundary lines coincide are unified, entities with the same code are fused, the mask boundary lines between them are removed, and the masks of segmentation entities sharing an instance code are merged into one, combining the segmentation instances. In this way all segmentation entities of the target geographic area are completely and accurately divided, giving a geographic entity distribution map of the whole target geographic area.
In a second aspect, the invention provides a geographic information data acquisition and processing system comprising a target geographic area division module, an unmanned aerial vehicle control module and a segmentation fusion processing module;
the target geographic area division module has a geographic coordinate map corresponding to the target geographic area built in, and is used to acquire a top-view image of the target geographic area containing geographic data and to plan and divide the target geographic area into a plurality of target geographic sub-areas;
the unmanned aerial vehicle control module presets an unmanned aerial vehicle acquisition rule for each target geographic sub-area and controls the unmanned aerial vehicle to acquire the bird's-eye view of each target geographic sub-area according to the preset rule;
the segmentation fusion processing module fuses and stitches all bird's-eye views of each target geographic sub-area into a fused bird's-eye view covering the whole sub-area, fills the fused bird's-eye view into the corresponding position of the geographic coordinate map, and crops it into a plurality of cropped unit images; the module has a trained instance segmentation model built in, performs instance segmentation on the cropped unit images of each target geographic sub-area with the model to obtain the segmentation entities of each cropped unit image, and then arranges the segmentation entities of all cropped unit images according to the geographic coordinate map and fuses them to obtain a geographic entity distribution map.
In a third aspect, the present invention provides an electronic device comprising at least one processor, at least one memory and a data bus, wherein the processor and the memory communicate with each other through the data bus; the memory stores program instructions executable by the processor, and the processor calls the program instructions to perform the steps of the geographic information data acquisition and processing method of any one of claims 1 to 6.
In a fourth aspect, the present invention provides a storage medium comprising a memory and a processor, the memory storing an executable program, characterized in that the processor, when executing the executable program, implements the steps of the geographic information data acquisition and processing method of any one of claims 1 to 6.
Compared with the prior art, the invention has the following advantages:
(1) By performing region planning on the target geographic area, controlling the unmanned aerial vehicle according to the plan to obtain a comprehensive, high-quality fused bird's-eye view of the target geographic area, and then performing instance segmentation and boundary fusion on the fused bird's-eye view, the invention achieves complete and accurate geographic entity division and obtains a comprehensive and accurate geographic entity distribution map of the whole target geographic area, from which the real situation of the geographic distribution can be accurately obtained.
(2) The target geographic area to be processed is planned and divided into a plurality of target geographic sub-areas, and in each sub-area the unmanned aerial vehicle is controlled to acquire a bird's-eye view according to a preset acquisition rule, so that one shooting pass of the unmanned aerial vehicles yields comprehensive, high-quality image information of the target geographic area and facilitates subsequent data processing. The bird's-eye views of each sub-area are then stitched and cropped, instance segmentation is performed with an instance segmentation model to obtain the segmentation masks of all entities in the sub-area, and finally dilation and boundary fusion of the entity segmentation masks achieve complete and accurate entity division, yielding a geographic entity distribution map of the whole target geographic area, from which the real situation of the geographic distribution can be accurately obtained.
Drawings
FIG. 1 is a flowchart of a geographic information data acquisition and processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a geographic information data acquisition and processing method according to a second embodiment of the present invention;
FIG. 3 is a flowchart showing a step of performing region planning on a target geographic region and dividing the target geographic region into a plurality of target geographic sub-regions according to a second embodiment of the geographic information data acquisition and processing method of the present invention;
FIG. 4 is a specific flowchart of the step of controlling an unmanned aerial vehicle, in each target geographic sub-area, to obtain a bird's-eye view of the sub-area according to a preset acquisition rule, in the second embodiment of the geographic information data acquisition and processing method of the present invention;
FIG. 5 is a specific flowchart illustrating a step of preprocessing an aerial view of an acquired target geographic sub-area in a second embodiment of a geographic information data acquisition and processing method according to the present invention;
FIG. 6 is a specific flowchart illustrating steps of performing example segmentation and boundary fusion on a bird's eye view of a target geographic region to obtain geographic entity distribution information of the target geographic region in a second embodiment of a geographic information data acquisition and processing method of the present invention;
FIG. 7 is a schematic block diagram of a geographic information data acquisition and processing system according to the present invention;
fig. 8 is a block diagram of an electronic device according to an embodiment of the present invention.
Wherein, the names corresponding to the reference numerals in the drawings are:
the system comprises a 1-target geographic area dividing module, a 2-unmanned plane control module, a 3-segmentation fusion processing module, a 4-processor, a 5-memory and a 6-data bus.
Detailed Description
The invention is further illustrated by the following examples:
example 1
As shown in fig. 1, a geographic information data acquisition and processing method, system, electronic device and storage medium are provided; the method is as follows:
s1, acquiring a top view image of a target geographic area containing geographic data, planning and dividing the target geographic area to obtain a plurality of target geographic sub-areas, and constructing a geographic coordinate map corresponding to the target geographic area.
In some embodiments, step S1 comprises the following method:
s11, detecting edge area lines of a top view image of a target geographic area, extracting vertexes of all convex parts of the edge area lines, and connecting the vertexes to form an initial area to be divided;
and S12, planning the area of the initial area to be divided and dividing the initial area to obtain a plurality of target geographical sub-areas, wherein the target geographical sub-areas are arc-shaped.
S2, presetting an unmanned aerial vehicle acquisition rule for each target geographic sub-area, and acquiring, by the unmanned aerial vehicle, a bird's-eye view of each target geographic sub-area according to the preset rule.
The unmanned aerial vehicle acquisition rule can be preset in either of the following two ways:
First way: the preset unmanned aerial vehicle acquisition rule of step S2 is as follows:
a plurality of unmanned aerial vehicles are deployed in a target geographic sub-area, the flight access point of each unmanned aerial vehicle being a point on the edge line of the target geographic sub-area, with an included angle of α = 360°/M between adjacent flight access points, where M is the total number of unmanned aerial vehicles; the target geographic sub-area is divided into tasks among the unmanned aerial vehicles according to the flight access points, the unmanned aerial vehicles shoot with overlap, and together they fully cover the target geographic sub-area to obtain its bird's-eye view.
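The access-point geometry above can be sketched as follows, assuming a circular sub-area; the center, radius and M = 4 are illustrative values.

```python
import math

# Sketch of the first acquisition rule: M unmanned aerial vehicles enter a
# circular target geographic sub-area at flight access points spaced by
# alpha = 360 deg / M along the sub-area's edge line.

def flight_access_points(center, radius, m):
    cx, cy = center
    alpha = 2 * math.pi / m                 # 360 degrees / M, in radians
    return [(cx + radius * math.cos(i * alpha),
             cy + radius * math.sin(i * alpha)) for i in range(m)]

points = flight_access_points((0.0, 0.0), 100.0, 4)
for px, py in points:
    print(f"({px:.1f}, {py:.1f})")   # access points 90 degrees apart on the edge
```

Each unmanned aerial vehicle then covers the sector of the sub-area nearest its own access point, so the M tasks partition the sub-area evenly.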
Second way: the preset unmanned aerial vehicle acquisition rule of step S2 is as follows:
one unmanned aerial vehicle is deployed in the target geographic sub-area; it performs full-coverage shooting of the sub-area in an equidistant spiral flight mode to obtain its bird's-eye view, moving a distance ΔR1 toward the center point after completing each spiral loop.
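The equidistant spiral above can be sketched as waypoint generation; the starting radius, ΔR1 value and eight samples per loop are illustrative assumptions.

```python
import math

# Sketch of the second acquisition rule: a single UAV covers the circular
# sub-area along an equidistant (Archimedean) spiral, moving a distance
# delta_r1 toward the center point after each completed loop.

def spiral_waypoints(center, start_radius, delta_r1, samples_per_loop=8):
    cx, cy = center
    pts = []
    r = start_radius
    while r > 0:
        for k in range(samples_per_loop):
            theta = 2 * math.pi * k / samples_per_loop
            # radius shrinks continuously so consecutive loops stay delta_r1 apart
            rr = r - delta_r1 * k / samples_per_loop
            if rr <= 0:
                return pts
            pts.append((cx + rr * math.cos(theta), cy + rr * math.sin(theta)))
        r -= delta_r1                # one loop done: move delta_r1 inward
    return pts

wps = spiral_waypoints((0.0, 0.0), 30.0, 10.0)
print(len(wps))   # 3 loops of 8 waypoints each
```

As long as ΔR1 does not exceed the ground footprint of one photograph, consecutive loops overlap and the spiral gives full coverage of the sub-area.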
S3, fusing and stitching all bird's-eye views of each target geographic sub-area to obtain a fused bird's-eye view covering the whole sub-area, filling the fused bird's-eye view into the corresponding position of the geographic coordinate map, and cropping the fused bird's-eye view to obtain a plurality of cropped unit images.
In some embodiments, the stitching in step S3 uses the RANSAC algorithm; after the cropping in step S3, each cropped unit image is position-coded according to its location in the geographic coordinate map, so that its position in the map is recorded.
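The RANSAC consensus idea used in the stitching can be sketched as follows. A full mosaic would estimate a homography between overlapping views (for example with OpenCV's findHomography using the RANSAC flag); the pure-Python version below estimates only a translation between matched keypoints, which is a deliberate simplification to show the principle, and all data values are made up for illustration.

```python
import random

# Sketch of RANSAC for stitching: given keypoint matches between two
# overlapping bird's-eye views, repeatedly fit a model from a minimal
# sample and keep the model with the largest inlier consensus, so that
# outlier matches cannot corrupt the alignment.

def ransac_translation(matches, iters=200, tol=1.0, seed=0):
    rng = random.Random(seed)
    best, best_inliers = (0.0, 0.0), -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(matches)   # minimal sample: 1 match
        dx, dy = x2 - x1, y2 - y1
        inliers = sum(1 for (a, b), (c, d) in matches
                      if abs(c - a - dx) <= tol and abs(d - b - dy) <= tol)
        if inliers > best_inliers:
            best, best_inliers = (dx, dy), inliers
    return best, best_inliers

# True shift (5, -3) between the two views, with two outlier matches mixed in.
matches = [((x, y), (x + 5, y - 3)) for x in range(5) for y in range(3)]
matches += [((0, 0), (40, 40)), ((1, 1), (-20, 7))]
shift, n_in = ransac_translation(matches)
print(shift, n_in)
```

The two outlier matches never gather more than one inlier, so the true shift wins the consensus vote; the same principle protects the homography estimate in real mosaicing.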
S4, constructing and training an instance segmentation model, performing instance segmentation on the cropped unit images of each target geographic sub-area through the model to obtain the segmentation entities of each cropped unit image, and then arranging the segmentation entities of all cropped unit images according to the geographic coordinate map and fusing them to obtain a geographic entity distribution map.
In some embodiments, the segmentation entity processing of step S4 comprises:
the instance segmentation model is constructed based on the HTC (Hybrid Task Cascade) instance segmentation algorithm; the trained model performs instance segmentation on the cropped unit images of the fused bird's-eye view of a target geographic sub-area to obtain the segmentation entities of the sub-area, and each segmentation entity is coded to build a coding table;
for the target geographic sub-area, an all-zero mask of the same size as the fused bird's-eye view, with all pixels zero, is generated; the codes and position information of the sub-area's segmentation entities are read from the coding table, the segmentation instances are refilled on the all-zero mask at the positions given by the position information, and all segmentation entities are restored onto the mask, so that each target geographic sub-area is represented by a mask;
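The restore-onto-mask step can be sketched as follows; the canvas size, entity codes and positions are illustrative, and the coding table is modeled as a simple dictionary for the sketch.

```python
import numpy as np

# Sketch of restoring segmentation entities onto an all-zero mask: a mask
# the size of the fused bird's-eye view starts at zero, and each entity's
# pixels are refilled with its code at the recorded position.

canvas = np.zeros((6, 6), dtype=np.int32)           # all-zero mask

# coding table: entity code -> (top-left row/col position, local entity mask)
coding_table = {
    1: ((0, 0), np.ones((3, 3), dtype=bool)),
    2: ((3, 3), np.ones((2, 2), dtype=bool)),
}

for code, ((r, c), local) in coding_table.items():
    h, w = local.shape
    region = canvas[r:r + h, c:c + w]
    region[local] = code                            # refill entity at its position

print(int((canvas == 1).sum()), int((canvas == 2).sum()))   # 9 4
```

After the loop, the single canvas carries every entity of the sub-area at its true map position, ready for the boundary-fusion step.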
the boundary lines of the masks of all segmentation entities of the target geographic sub-area are detected and a dilation operation is applied to them; the codes of segmentation entities whose boundary lines coincide are unified, entities with the same code are fused, the mask boundary lines between them are removed, and the masks of segmentation entities sharing an instance code are merged into one, combining the segmentation instances. In this way all segmentation entities of the target geographic area are completely and accurately divided, giving a geographic entity distribution map of the whole target geographic area.
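The dilation-and-merge step can be sketched as follows; the 4-connected dilation and the example masks split across a tile seam are assumptions chosen to illustrate the mechanism.

```python
import numpy as np

# Sketch of boundary fusion: each entity mask is dilated; entities whose
# dilated boundaries overlap get one unified code and their masks are
# fused into a single entity, removing the seam between them.

def dilate(mask):
    """One-pixel binary dilation with a 4-connected (cross) kernel."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

# Two halves of one geographic entity, split across a crop seam:
# adjacent but not overlapping before dilation.
a = np.zeros((5, 8), dtype=bool); a[1:4, 1:4] = True
b = np.zeros((5, 8), dtype=bool); b[1:4, 4:7] = True
assert not (a & b).any()                 # disjoint before dilation

fused = a | b if (dilate(a) & b).any() else a   # coincident boundary -> unify
print(int(fused.sum()))                  # pixels in the merged entity
```

An entity cut in two by the cropping step is thus reassembled into one instance, which is exactly what the boundary-fusion step of S4 is for.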
Example 2
As shown in figs. 2 to 6, in the geographic information data acquisition and processing method, the target geographic area to be processed is planned and divided into a plurality of target geographic sub-areas, and in each sub-area the unmanned aerial vehicle is controlled to acquire a bird's-eye view according to a preset acquisition rule, for example by planning the number of unmanned aerial vehicles and designing their shooting paths; in this way one shooting pass of the unmanned aerial vehicles yields comprehensive, high-quality image information of the target geographic area and facilitates subsequent data processing. The bird's-eye views of each target geographic sub-area are then stitched and cropped, instance segmentation is performed with an instance segmentation model to obtain the segmentation masks of all entities in the sub-area, and finally dilation and boundary fusion of the entity segmentation masks achieve complete and accurate entity division, yielding a geographic entity distribution map of the whole target geographic area, from which the real situation of the geographic distribution can be accurately obtained.
As shown in fig. 2, the above-mentioned geographic information data acquisition and processing method includes the following steps:
step S101: and planning the target geographic area before collecting the geographic information data, and dividing the target geographic area into a plurality of target geographic subareas.
As shown in fig. 3, in the step S101, the step of planning a target geographic area and dividing the target geographic area into a plurality of target geographic sub-areas specifically includes:
step S1011: acquiring a top view image of the target geographic area, and acquiring an edge area line of the target geographic area through the top view image;
step S1012: extracting vertexes of all convex parts of the edge area line, and connecting the vertexes of the convex parts to form an initial area;
step S1013: and according to the shape of the initial region, approximately dividing the shape of the initial region into a plurality of circular or elliptical region ranges including the target geographic region according to a preset dividing rule, wherein each circular or elliptical region range is the target geographic region.
In the above steps, an overall top-view image of the whole target geographic area can be obtained by remote sensing satellite or aerial photography; edge detection is then performed on the top-view image to obtain the edge area line, and the vertices of all convex parts extracted from the edge area line are connected to form the initial area. Finally, the initial area shape is approximately divided, according to a preset dividing rule, into a plurality of circular or elliptical region ranges that together cover the target geographic area, each of which is a target geographic sub-area. The preset dividing rule is: generate N sub-area radii according to the actual acquisition requirements (to guarantee the completeness of the images obtained later, the N sub-areas may overlap each other, i.e. two adjacent sub-areas may overlap and cross, ensuring that the N sub-areas completely cover the whole target geographic area), and divide the target geographic area according to the N generated radii into N-1 annular partition blocks (elliptical region ranges) and 1 central circular partition block (circular region range), each partition block being a target geographic sub-area. Through these steps, rapid and accurate region division can be performed according to the actual situation of the target geographic area, making it convenient to deploy unmanned aerial vehicles reasonably for effective and comprehensive data acquisition according to the division result and improving the comprehensiveness of data acquisition.
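The N-radius dividing rule above can be sketched as follows; the outer radius, N = 4 and the 10% overlap margin are illustrative assumptions, since the patent leaves the radii to be "planned according to actual acquisition requirements".

```python
# Sketch of the preset dividing rule: generate N sub-area radii, then form
# N-1 annular partition blocks plus 1 central circular block. An overlap
# margin keeps adjacent sub-areas crossing so their union covers the area.

def ring_partition(outer_radius, n, overlap=0.1):
    step = outer_radius / n
    radii = [step * (i + 1) for i in range(n)]          # N sub-area radii
    blocks = []
    for i, r_out in enumerate(radii):
        r_in = 0.0 if i == 0 else radii[i - 1] - overlap * step
        blocks.append((r_in, r_out))   # (inner, outer): ring, or disc when inner=0
    return blocks

blocks = ring_partition(100.0, 4)
for inner, outer in blocks:
    print(f"{inner:.1f} - {outer:.1f}")
```

The first block is the central disc; every later ring starts slightly inside the previous ring's outer radius, so adjacent sub-areas overlap and no annular gap is left uncovered.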
Step S102: in each target geographic sub-area, control the unmanned aerial vehicle to acquire a bird's-eye view of the sub-area according to a preset acquisition rule.
As shown in fig. 3, in step S102, the step of controlling the unmanned aerial vehicle, in each target geographic sub-region, to acquire the aerial view of the target geographic sub-region according to a preset acquisition rule specifically includes the following steps:
step S1021: setting the number of unmanned aerial vehicles according to the number of the target geographic subareas;
step S1022: planning an unmanned aerial vehicle shooting path according to the number of unmanned aerial vehicles and the shape of a target geographical sub-area;
step S1023: and controlling the unmanned aerial vehicle to fly according to the shooting path of the unmanned aerial vehicle, and controlling the unmanned aerial vehicle to acquire the aerial view of the target geographic subregion.
In some embodiments, at least one unmanned aerial vehicle may be set for each target geographic sub-region. When the target geographic sub-region is a circular region range, a flight access point is set for each unmanned aerial vehicle, the flight access points being located on the outer boundary line of the target geographic sub-region corresponding to each unmanned aerial vehicle, with an included angle α = 360°/M between every two adjacent flight access points, where M represents the total number of unmanned aerial vehicles. Each unmanned aerial vehicle enters its corresponding target geographic sub-region along its flight access point (all unmanned aerial vehicles jointly share the shooting task of the target geographic sub-region: the total task may first be determined according to an equidistant spiral over the target geographic sub-region, and then divided evenly according to the total number of unmanned aerial vehicles, so that each unmanned aerial vehicle enters its corresponding flight task area through its flight access point). The unmanned aerial vehicle then moves circumferentially along the cut-in direction, in an equidistant spiral, towards the inner boundary line or the geometric central point of the target geographic sub-region, and after completing each spiral revolution moves a distance ΔR1 towards the inner boundary line or central point, where ΔR1 represents a preset first inward movement distance per revolution of the circumferential movement of the unmanned aerial vehicle;
in some embodiments, when the target geographic sub-region is an elliptical region range, a flight access point is set for each unmanned aerial vehicle; the flight access points are chosen from among the two intersection points between the major axis of the elliptical region and the region boundary and the two intersection points between the minor axis of the elliptical region and the region boundary. When the unmanned aerial vehicle performs image acquisition on the central circular dividing block of the elliptical region range, it enters the central circular target geographic sub-region from its corresponding flight access point, performs circular motion in the central circular region in an equidistant spiral towards the geometric central point along the cut-in direction, and after completing each spiral revolution moves a distance ΔR2 towards the inner boundary line or the geometric central point of the target geographic sub-region, where ΔR2 represents a preset second inward movement distance per revolution of the circumferential spiral motion of the unmanned aerial vehicle. When the unmanned aerial vehicle performs image acquisition on a side dividing block of the elliptical region range, after entering the side dividing block at its flight access point, it flies back and forth along the longitudinal direction of the elliptical region range between the outer edge boundaries of the side dividing range, and after completing each single pass moves a distance ΔR3 towards the outer edge boundary of the side dividing range, where ΔR3 represents the preset movement distance of the unmanned aerial vehicle towards the outer edge boundary of the side dividing range after each completed single pass of the reciprocating flight. Through the above unmanned aerial vehicle flight route planning, the comprehensiveness of unmanned aerial vehicle shooting and of the data acquisition can be effectively improved, it can be guaranteed that complete, higher-quality image information of the target geographic area is obtained after a single shooting pass, subsequent data processing is facilitated, and the efficiency of data acquisition is improved.
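The equidistant-spiral sweep for a circular sub-region can be sketched as waypoint generation. This is a minimal single-drone sketch; the division of the spiral among M drones and the angular sampling density (`pts_per_turn`) are assumptions not fixed by the patent.

```python
import math

def spiral_waypoints(r_outer, delta_r, pts_per_turn=36):
    """Generate waypoints for the equidistant-spiral sweep described
    above: fly one revolution at the current radius, then step the
    preset distance delta_r (ΔR1) inward, until the centre is reached."""
    waypoints = []
    r = r_outer
    while r > 0:
        for k in range(pts_per_turn):
            theta = 2 * math.pi * k / pts_per_turn
            waypoints.append((r * math.cos(theta), r * math.sin(theta)))
        r -= delta_r                     # preset inward step per revolution
    waypoints.append((0.0, 0.0))         # finish at the sub-region centre
    return waypoints

# sub-region of radius 50 m, ΔR1 = 10 m: 5 revolutions, then the centre
wps = spiral_waypoints(50.0, 10.0)
```

Splitting `wps` into M contiguous chunks, one per flight access point, would correspond to the "average division of the total task" described above.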
Step S103: and preprocessing the obtained aerial view of the target geographic subregion, and then carrying out instance segmentation and boundary fusion to obtain geographic entity distribution information of the target geographic subregion.
As shown in fig. 5, in the step S103, the step of preprocessing the acquired aerial view of the target geographic subregion specifically includes:
step S1031: splicing images of a plurality of target geographic subregions acquired by the unmanned aerial vehicle into a bird's eye view of the target geographic subregions;
step S1032: and performing overlapping cutting on the aerial view of the target geographic subregion to obtain a cutting image, and recording the position information of the cutting image in the aerial view.
Firstly, a plurality of images of the target geographic sub-region under different viewing angles are shot by the unmanned aerial vehicle according to the planned unmanned aerial vehicle shooting path; the plurality of images under different viewing angles are then spliced by an image splicing algorithm (the RANSAC algorithm is adopted in this embodiment) to obtain a bird's eye view of the whole target geographic sub-region. Further, the obtained bird's eye view of the target geographic sub-region is cut with overlap: for example, with a cutting size of 1024 x 1024 and a cutting step length of 512 pixels, the whole bird's eye view is cut into a plurality of 1024 x 1024 cut images, while the initial position information (such as the horizontal and vertical coordinates) of each cut image in the original bird's eye view is recorded.
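The overlap-cutting step with the 1024 x 1024 / stride-512 example can be sketched as follows; the crop origins double as the recorded position information. Border handling when the image size is not a multiple of the stride is an assumption, since the patent does not specify it.

```python
def overlap_crop(h, w, tile=1024, stride=512):
    """Return the top-left (y, x) positions of overlapping tile x tile
    crops of an (h, w) bird's-eye view; each position is recorded so
    the crop can later be placed back into the original view.
    Assumes tile <= h and tile <= w."""
    positions = []
    ys = list(range(0, h - tile + 1, stride))
    xs = list(range(0, w - tile + 1, stride))
    # cover the bottom/right border even when h, w are not
    # multiples of the stride
    if ys[-1] != h - tile:
        ys.append(h - tile)
    if xs[-1] != w - tile:
        xs.append(w - tile)
    for y in ys:
        for x in xs:
            positions.append((y, x))
    return positions

# a 2048 x 3000 bird's-eye view cut into 1024 x 1024 crops, stride 512
pos = overlap_crop(2048, 3000)
```

Recording `pos` alongside the crops is what later lets each segmentation entity be refilled at its original coordinates.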
Further, as shown in fig. 6, in the step S103, the steps of performing instance segmentation and boundary fusion on the aerial view of the target geographic sub-area to obtain the geographic entity distribution information of the target geographic sub-area specifically include:
step S1033: performing instance segmentation on the aerial view of each target geographic subarea through a trained instance segmentation model to obtain segmentation entities of each target geographic subarea;
step S1034: coding and mask filling are carried out on each division entity according to the position information, and a mask of each division entity is obtained;
step S1035: and (3) performing expansion operation on the masks of the divided entities, and then fusing the masks according to boundary lines to obtain a geographic entity distribution map of the whole target geographic area.
In the above steps, the instance segmentation model is constructed based on the HTC instance segmentation algorithm and is supervised-trained on pre-acquired or existing geographic information data. Instance segmentation is performed on the cut images of the bird's eye view of each target geographic sub-region through the trained instance segmentation model, obtaining the segmentation entities of each target geographic sub-region. Each segmentation entity is then encoded, for example marked with Arabic numerals 1, 2, 3 ... or letters a, b, c ..., and the mark of each segmentation entity and its position information (such as coordinates) in the bird's eye view are written in one-to-one correspondence into a coding table (such as a database table), which is stored for convenient subsequent reading and use.
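The coding-table step can be sketched as below. The schema (sub-region id plus bounding box per entity) is an assumption; the patent only requires that each entity receive a mark and that mark and position be stored in one-to-one correspondence.

```python
def build_coding_table(entities):
    """Assign each segmentation entity a sequential numeric code and
    record its position, mirroring the coding-table step above.
    `entities` is a list of (sub_region_id, bbox) pairs, where bbox is
    the entity's recorded position in the bird's-eye view (assumed)."""
    table = {}
    for code, (region, bbox) in enumerate(entities, start=1):
        table[code] = {"region": region, "bbox": bbox}
    return table

# two entities detected in sub-region "a", marked 1 and 2
table = build_coding_table([("a", (0, 0, 40, 40)), ("a", (30, 30, 80, 80))])
```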
Then, for each target geographic sub-region, an image of the same size as the bird's eye view is generated and the value of every pixel is set to 0, giving an all-black all-zero mask. The code (mark) and position information of each segmentation entity of the target geographic sub-region are then read from the coding table, and the segmentation instance of each entity is refilled at the corresponding position on the all-zero mask according to the position information; that is, the mask of each segmentation entity is restored onto the all-zero mask to obtain a new mask for each segmentation entity, and once all segmentation entities are restored, each target geographic sub-region is represented by masks.
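The mask-restoration step can be sketched with NumPy as follows; storing the entity code itself as the pixel value is an assumption (any per-entity labelling scheme would do).

```python
import numpy as np

def restore_masks(shape, coding_table, entity_masks):
    """Rebuild each entity on an all-black (all-zero) canvas of the
    bird's-eye-view size, writing the entity's code at its recorded
    position, as described above."""
    canvas = np.zeros(shape, dtype=np.int32)   # the all-zero mask
    for code, info in coding_table.items():
        y, x = info["pos"]                     # recorded position
        m = entity_masks[code]                 # binary crop-level mask
        h, w = m.shape
        region = canvas[y:y + h, x:x + w]      # view into the canvas
        region[m > 0] = code                   # refill at its position
    return canvas

# two 2x2 entities restored onto a 4x4 all-zero mask
table = {1: {"pos": (0, 0)}, 2: {"pos": (2, 2)}}
masks = {1: np.ones((2, 2), np.uint8), 2: np.ones((2, 2), np.uint8)}
canvas = restore_masks((4, 4), table, masks)
```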
Finally, boundary line detection is carried out on the masks of all segmentation entities of all target geographic sub-regions, and a dilation (expansion) operation is applied to the boundary lines. The codes of segmentation entities whose boundary lines coincide are unified, the entities with the same code are fused, the mask boundary lines between them are removed, and the masks of segmentation entities sharing an instance code are fused into one, realizing the combination of segmentation instances. Repeating this operation yields a complete and accurate entity division of all segmentation entities of the target geographic area, and thus an accurate geographic entity distribution map of the whole target geographic area, from which the real information of the geographic distribution can be accurately obtained.
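The dilation-and-fusion step can be sketched in pure Python on pixel sets. A single 4-neighbourhood dilation step and union-by-smallest-code stand in for the morphological expansion and code unification; the actual embodiment would operate on full image masks.

```python
def dilate(pixels, shape):
    """One 4-neighbourhood dilation step of a pixel set, clipped to
    the (h, w) canvas: a minimal stand-in for the expansion operation."""
    h, w = shape
    out = set(pixels)
    for (y, x) in pixels:
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out.add((ny, nx))
    return out

def fuse_entities(entity_pixels, shape):
    """Merge entities whose dilated masks touch, unifying their codes
    as in the boundary-fusion step above (sketch: entities that
    intersect after dilation take the smallest participating code)."""
    codes = sorted(entity_pixels)
    dilated = {c: dilate(entity_pixels[c], shape) for c in codes}
    merged = {c: c for c in codes}             # code -> unified code
    for i, a in enumerate(codes):
        for b in codes[i + 1:]:
            if dilated[a] & dilated[b]:        # boundary lines coincide
                merged[b] = merged[a]          # unify the codes
    fused = {}
    for c in codes:
        fused.setdefault(merged[c], set()).update(entity_pixels[c])
    return fused

# two halves of one entity, split across adjacent crops, merge into one
fused = fuse_entities({1: {(0, 0), (0, 1)}, 2: {(0, 2), (0, 3)}}, (1, 4))
```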
It should be noted that technical content not specifically described in the embodiments of the present invention may be implemented by the existing related technology, which belongs to the prior art and is not described in detail here.
Example III
As shown in fig. 7, a geographic information data acquisition and processing system is characterized in that: the system comprises a target geographic region dividing module 1, an unmanned aerial vehicle control module 2 and a segmentation fusion processing module 3;
the target geographic region dividing module is internally constructed with a geographic coordinate map corresponding to the target geographic region, and is used for acquiring a top view image of the target geographic region containing geographic data, and planning and dividing the target geographic region to obtain a plurality of target geographic subregions;
the unmanned aerial vehicle control module presets an unmanned aerial vehicle acquisition rule for each target geographic sub-region and controls the unmanned aerial vehicle to acquire the aerial view of each target geographic sub-region according to the preset unmanned aerial vehicle acquisition rule;
the segmentation fusion processing module is used for carrying out fusion splicing processing on all the aerial views of the target geographic sub-regions to obtain a fused aerial view covering the whole target geographic area, correspondingly filling the fused aerial view into the geographic coordinate map, and cutting the fused aerial view to obtain a plurality of cutting unit images; the segmentation fusion processing module is internally constructed with a trained instance segmentation model, instance segmentation is carried out on the cutting unit diagrams of the target geographic sub-regions through the instance segmentation model to obtain each segmentation entity of the cutting unit diagrams, and then the segmentation entities of all the cutting unit diagrams are correspondingly arranged according to the geographic coordinate map and fused to obtain a geographic entity distribution map.
The specific implementation process of the above system refers to the geographic information data acquisition processing method provided in the first embodiment or the second embodiment, and is not described herein again.
Example IV
As shown in fig. 8, an electronic device comprises at least one processor 4, at least one memory 5 and a data bus 6; wherein: the processor and the memory complete communication with each other through a data bus; the memory stores program instructions for execution by the processor, which are invoked by the processor to perform the steps of implementing the geographic information data acquisition processing method of embodiment one or embodiment two.
For example, the processor implements the following: before geographic information data acquisition, carrying out regional planning on a target geographic region, and dividing the target geographic region into a plurality of target geographic subregions; in each target geographic subarea, controlling the unmanned aerial vehicle to acquire a bird's eye view of the target geographic subarea according to a preset acquisition rule; and preprocessing the obtained aerial view of the target geographic subregion, and then carrying out instance segmentation and boundary fusion to obtain geographic entity distribution information of the target geographic subregion.
The Memory may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), etc.
The processor may be an integrated circuit chip having signal processing capabilities. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc.; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
Example five
A storage medium (i.e., a computer-readable storage medium) comprising a memory storing an executable program (i.e., a computer program) and a processor implementing the steps of the geographic information data acquisition processing method of the present invention when the executable program is executed by the processor. Before geographic information data acquisition, carrying out regional planning on a target geographic region, and dividing the target geographic region into a plurality of target geographic subregions; in each target geographic subarea, controlling the unmanned aerial vehicle to acquire a bird's eye view of the target geographic subarea according to a preset acquisition rule; and preprocessing the obtained aerial view of the target geographic subregion, and then carrying out instance segmentation and boundary fusion to obtain geographic entity distribution information of the target geographic subregion.
In the embodiments provided in the present invention, it should be understood that the disclosed system, module and method may be implemented in other manners. The above-described system, module, method embodiments are merely illustrative, and the flowcharts and block diagrams in the figures, for example, illustrate the architecture, functionality, and operation of possible implementations of systems, modules, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (9)

1. A geographic information data acquisition and processing method is characterized in that: the method comprises the following steps:
s1, acquiring a top view image of a target geographic area containing geographic data, planning and dividing the target geographic area to obtain a plurality of target geographic sub-areas, and constructing a geographic coordinate map corresponding to the target geographic area;
s2, presetting an unmanned aerial vehicle acquisition rule for each target geographic subarea, and acquiring a bird's eye view of each target geographic subarea by the unmanned aerial vehicle according to the preset unmanned aerial vehicle acquisition rule;
s3, performing fusion splicing processing on all the aerial views of the target geographic subregions to obtain a fused aerial view covering the whole target geographic region, correspondingly filling the fused aerial view into the geographic coordinate map, and cutting the fused aerial view to obtain a plurality of cutting unit images;
s4, an example segmentation model is built and trained, example segmentation is carried out on the cutting unit diagram of the target geographic subregion through the example segmentation model, each segmentation entity of the cutting unit diagram is obtained, and then the segmentation entities of all the cutting unit diagrams are correspondingly arranged according to the geographic coordinate map and are fused to obtain a geographic entity distribution diagram.
2. The geographical information data collecting and processing method according to claim 1, wherein: step S1 comprises the following steps:
s11, detecting edge area lines of a top view image of a target geographic area, extracting vertexes of all convex parts of the edge area lines, and connecting the vertexes to form an initial area to be divided;
and S12, planning the area of the initial area to be divided and dividing the initial area to obtain a plurality of target geographical sub-areas, wherein the target geographical sub-areas are arc-shaped.
3. The geographical information data collecting and processing method according to claim 1, wherein: the unmanned aerial vehicle in step S2 is as follows according to a preset unmanned aerial vehicle acquisition rule:
setting a plurality of unmanned aerial vehicles in a target geographic subarea, wherein flight access points of the unmanned aerial vehicles are points intersected with edge lines of the target geographic subarea, and an included angle alpha=360 degrees/M between every two adjacent unmanned aerial vehicles represents the total number of the unmanned aerial vehicles; and carrying out task division on the target geographic subareas by each unmanned aerial vehicle according to the flight access points, respectively carrying out overlapping shooting by the unmanned aerial vehicles, and fully covering the target geographic subareas by a plurality of unmanned aerial vehicles to shoot and obtain a bird's eye view of the target geographic subareas.
4. The geographical information data collecting and processing method according to claim 1, wherein: the unmanned aerial vehicle in step S2 is as follows according to a preset unmanned aerial vehicle acquisition rule:
and setting an unmanned aerial vehicle in the target geographic subarea, wherein the unmanned aerial vehicle performs full coverage shooting on the target geographic subarea according to an equidistant spiral flight mode, and obtains a bird's eye view of the target geographic subarea, and the unmanned aerial vehicle moves by delta R1 distance to a central point after each spiral movement.
5. The geographical information data collecting and processing method according to claim 1, wherein: the splicing processing in the step S3 adopts a RANSAC algorithm; after the clipping processing of step S3, the clipping unit map is position-coded in accordance with the position information located in the geographic coordinate map to record the position information of the clipping unit map in the geographic coordinate map.
6. The geographical information data collecting and processing method according to claim 1, wherein: the segmentation entity processing method in the step S4 comprises the following steps:
the instance segmentation model is constructed based on an HTC instance segmentation algorithm; instance segmentation is carried out on the cutting unit diagrams of the fused bird's eye view of the target geographic subregion through the trained instance segmentation model to obtain the segmentation entities of the target geographic subregion, and each segmentation entity is encoded to construct a coding table;
generating, for the target geographic subarea, an all-zero mask of the same size as the fused aerial view with all pixel values zero, reading the codes and the position information of the segmentation entities of the target geographic subarea from the coding table, refilling the segmentation instances at the corresponding positions of the segmentation entities on the all-zero mask according to the position information, and restoring all the segmentation entities to the all-zero mask to obtain each target geographic subarea represented by the mask;
detecting boundary lines of masks of all division entities of a target geographic subregion, then completing expansion operation on the boundary lines, unifying codes of division entities with coincident boundary lines, fusing the division entities with the same codes, removing the boundary lines of the masks among the division entities, fusing the masks of the division entities with the same instance codes into one, and realizing the combination of the division instances; according to the method, complete and accurate entity division is carried out on all the divided entities of the target geographic area, and then a geographic entity distribution map of the whole target geographic area is obtained.
7. A geographic information data acquisition and processing system, characterized in that: the system comprises a target geographic area dividing module (1), an unmanned aerial vehicle control module (2) and a segmentation fusion processing module (3);
the target geographic region dividing module is internally constructed with a geographic coordinate map corresponding to the target geographic region, and is used for acquiring a top view image of the target geographic region containing geographic data, and planning and dividing the target geographic region to obtain a plurality of target geographic subregions;
the unmanned aerial vehicle control module presets an unmanned aerial vehicle acquisition rule for each target geographic subarea and controls the unmanned aerial vehicle to acquire the aerial view of each target geographic subarea according to the preset unmanned aerial vehicle acquisition rule;
the segmentation fusion processing module is used for carrying out fusion splicing processing on all the aerial views of the target geographic subregions to obtain a fused aerial view covering the whole target geographic region, correspondingly filling the fused aerial view into the geographic coordinate map, and cutting the fused aerial view to obtain a plurality of cutting unit images; the segmentation fusion processing module is internally constructed with a trained instance segmentation model, instance segmentation is carried out on the cutting unit diagrams of the target geographic subregion through the instance segmentation model to obtain each segmentation entity of the cutting unit diagrams, and then the segmentation entities of all the cutting unit diagrams are correspondingly arranged according to the geographic coordinate map and fused to obtain a geographic entity distribution map.
8. An electronic device, characterized in that: comprising at least one processor, at least one memory and a data bus; wherein: the processor and the memory complete communication with each other through a data bus; the memory stores program instructions for execution by the processor, the processor invoking the program instructions to perform steps for implementing the geographic information data collection processing method of any of claims 1-6.
9. A storage medium comprising a memory and a processor, the memory storing an executable program, characterized in that the processor implements the steps of the geographical information data collecting and processing method of one of claims 1 to 6 when executing the executable program.
CN202310181121.XA 2023-02-16 2023-02-16 Geographic information data acquisition processing method, system, electronic equipment and storage medium Active CN116109657B (en)

Publications (2)

Publication Number Publication Date
CN116109657A true CN116109657A (en) 2023-05-12
CN116109657B CN116109657B (en) 2023-07-07

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117612046A (en) * 2024-01-23 2024-02-27 青岛云世纪信息科技有限公司 Method and system for realizing ground object identification of target area based on AI and GIS interaction
CN117827815A (en) * 2024-03-01 2024-04-05 江西省大地数据有限公司 Quality inspection method and system for geographic information data

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190113349A1 (en) * 2017-10-13 2019-04-18 Kohl's Department Stores, lnc. Systems and methods for autonomous generation of maps
CN110062871A (en) * 2016-12-09 2019-07-26 通腾全球信息公司 Method and system for positioning and mapping based on video
CN112287824A (en) * 2020-10-28 2021-01-29 杭州海康威视数字技术股份有限公司 Binocular vision-based three-dimensional target detection method, device and system
CN114445593A (en) * 2022-01-30 2022-05-06 重庆长安汽车股份有限公司 Aerial view semantic segmentation label generation method based on multi-frame semantic point cloud splicing
US20220415059A1 (en) * 2019-11-15 2022-12-29 Nvidia Corporation Multi-view deep neural network for lidar perception


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CUI Yunpeng: "Multi-UAV collaborative photography for complex dynamic scenes", Journal of Computer-Aided Design & Computer Graphics, pages 1113-1125 *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant