CN113342914B - Data set acquisition and automatic labeling method for detecting terrestrial globe area


Info

Publication number: CN113342914B
Authority: CN (China)
Prior art keywords: globe, model, coordinates, data, data set
Legal status: Active
Application number: CN202110672555.0A
Other languages: Chinese (zh)
Other versions: CN113342914A
Inventors: 董爽 (Dong Shuang), 陈恒鑫 (Chen Hengxin), 陈鑫润 (Chen Xinrun)
Current Assignee: Chongqing University
Original Assignee: Chongqing University
Priority date: 2021-06-17
Filing date: 2021-06-17
Application filed by Chongqing University
Priority to CN202110672555.0A
Publication of CN113342914A: 2021-09-03
Publication of CN113342914B (grant): 2023-04-25
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval of structured data, e.g. relational data
    • G06F 16/29: Geographical information databases
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05: Geographic models

Abstract

The invention relates to a method for acquiring and automatically labeling a data set for detecting a globe region, belonging to the technical field of computer vision. The method comprises the following steps: S1: converting coordinates to generate planar world maps of the same specification but with different textures; S2: loading a globe model with a 3D virtual engine and mapping the corresponding texture map onto it by parameterized texture mapping, then selecting target regions on the textured sphere model and marking them; S3: acquiring a new texture map and setting it on the globe model; S4: setting the environmental parameters, rotation mode, and screenshot frequency of the globe model, and capturing pictures during rotation; S5: determining the visible marker points and counting them; S6: judging whether the ratio of the number of visible marker points to the total number of marker points reaches a preset threshold. The invention can automatically acquire a large data set and improve the accuracy of model training.

Description

Data set acquisition and automatic labeling method for detecting terrestrial globe area
Technical Field
The invention belongs to the technical field of computer vision, and relates to a method for acquiring and automatically labeling a data set for detecting a globe area.
Background
Deep learning has greatly advanced computer vision, but its drawback is that it requires large amounts of labeled data, and acquiring such data sets through manual photographing and manual annotation consumes considerable time and labor.
In particular, when a deep-learning-based object detection algorithm is used to identify globe regions, existing methods require substantial manpower and time to collect a large labeled data set, which hinders progress of the work.
Disclosure of Invention
In view of the above, the present invention aims to provide a method for acquiring and automatically labeling a data set for detecting a globe region. It uses a 3D-virtual-engine-based system that automatically acquires and automatically labels a data set, yielding a large amount of labeled data, improving the accuracy of model training, and greatly saving time and labor cost.
In order to achieve the above purpose, the present invention provides the following technical solutions:
a method for acquiring and automatically labeling a data set for detecting a globe area specifically comprises the following steps:
S1: converting GIS data in the WGS84 geographic coordinate system into the Patterson projection coordinate system, and generating a large number of planar world maps of the same specification but with different textures; these planar projection pictures are used for texture mapping of the globe in the 3D virtual engine;
S2: loading a globe model with a 3D virtual engine and mapping the corresponding texture map onto it by parameterized texture mapping; then selecting a target area on the textured sphere model and inserting a plurality of markers along the outline of the target area, the markers of each area outline forming one group;
S3: requesting a new texture map from the target storage location over the network and setting it on the globe model; because all texture maps share the same specification, the contour markers inserted in step S2 still match the corresponding region contour positions;
S4: setting the environmental parameters (including illumination and distance) and the rotation mode of the globe model, then setting the screenshot frequency and capturing pictures during rotation, obtaining globe pictures at different angles, under different illumination conditions, and at different distances;
S5: adding a spherical detection shell outside the globe model; after a picture is captured in step S4, casting rays from the front of the globe model toward all region contour markers inserted on it, determining the visible marker points, and counting the visible marker points of each group;
S6: judging whether the ratio of the number of visible marker points to the total number of marker points in the group reaches a preset threshold; if it does, computing the minimal detection-box coordinates of the corresponding region contour and storing the coordinates in one-to-one correspondence with the screenshots to obtain the data set; otherwise, not computing the detection-box coordinates of that region; when detection-box data are stored, the image coordinates of the top-left and bottom-right corners of the minimal detection box are computed from the two-dimensional image coordinates of all visible marker points of the region contour, yielding automatic annotation data that are saved together with the current screenshot.
S7: after pictures have been captured according to the rule of step S4 and the detection-box annotation data generated automatically, steps S3 to S6 are repeated until no new texture map remains.
Further, in step S1, the formula for converting the GIS data into the Patterson projection coordinate system is:
x = λ
y = c₁φ + c₂φ³ + c₃φ⁵ + c₄φ⁷
where x and y are the projection coordinates, λ and φ are the longitude and latitude (in radians), and c₁, c₂, c₃, c₄ are polynomial coefficients.
Further, in step S2, the parameterized texture mapping formulas are:
θ = atan2(-(z - c_z), x - c_x)
u = (θ + π)/(2π)
φ = arcsin((y - c_y)/r)
v = (φ + π/2)/π
where u and v are the texture coordinates of the plane picture, each in the range [0, 1]; c = (c_x, c_y, c_z) is the center point of the sphere; x, y and z are the x-axis, y-axis and z-axis coordinates of a point on the sphere surface; θ is the longitude angle; φ is the latitude angle; and r is the sphere radius.
Further, step S4 specifically includes: adjusting the direction of the light source programmatically to simulate different real-world illumination conditions, adjusting the distance of the globe model to simulate different real-world viewing distances, and meanwhile rotating the globe model around the y axis and the z axis at a certain frequency; pictures are captured during rotation to obtain globe image data at different angles, under different illumination conditions, and at different distances.
Further, in step S5, the visible marker points are determined and the number of visible marker points in each group is counted, specifically: whether a marker point lies on the front surface of the globe model, i.e., whether it is visible in the current image, is judged by checking whether the Euclidean distance from the point where the ray hits the detection shell to the corresponding marker point is smaller than a threshold value.
The distance formula is:
d = √((x₁ - x₂)² + (y₁ - y₂)² + (z₁ - z₂)²)
where (x₁, y₁, z₁) are the coordinates of the point where the ray hits the detection shell and (x₂, y₂, z₂) are the coordinates of the corresponding marker.
Further, in step S6, the image coordinate calculation formulas are:
x₁ = Min(p₁(x), p₂(x), …, pₙ(x))
y₁ = Min(p₁(y), p₂(y), …, pₙ(y))
x₂ = Max(p₁(x), p₂(x), …, pₙ(x))
y₂ = Max(p₁(y), p₂(y), …, pₙ(y))
where (x₁, y₁) are the coordinates of the top-left corner of the detection box, (x₂, y₂) are the coordinates of the bottom-right corner, n is the number of visible marker points inserted on the region contour, and pᵢ(x), pᵢ(y) are the image coordinates of the i-th visible marker.
The beneficial effects of the invention are as follows: the method uses a 3D-virtual-engine-based system that automatically acquires and automatically labels a data set, producing a large amount of labeled data; this improves the accuracy of model training while greatly saving time and labor cost.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objects and other advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out in the specification.
Drawings
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in detail below with reference to the accompanying drawings, in which:
FIG. 1 is a general flow chart of a data set acquisition and automatic labeling method of the present invention;
FIG. 2 is a schematic diagram of coordinates of a planar world map mapped onto a sphere model;
FIG. 3 is a schematic diagram of the globe model, the marker points, and the ray collision points on the spherical detection shell.
Detailed Description
The following describes embodiments of the present invention with reference to specific examples, from which other advantages and effects of the invention will be readily apparent to those skilled in the art. The invention may also be practiced or applied through other, different embodiments, and the details in this specification may be modified or varied from different viewpoints and applications without departing from the spirit of the invention. It should be noted that the illustrations provided in the following embodiments merely illustrate the basic idea of the invention, and the following embodiments and their features may be combined with each other provided there is no conflict.
Referring to fig. 1 to 3, the present embodiment designs a method for acquiring and automatically labeling a data set for detecting a globe area, which specifically includes the following steps:
step 1: GIS data comprising a WGS84 geographic coordinate system is converted into a Patterson projection coordinate system, and a plane projection picture of a world map with the same large number of specifications and different textures is generated and used for texture mapping of the globe in the 3D virtual engine.
The calculation formula of the Patterson projection coordinate system is as follows:
x = λ
y = c₁φ + c₂φ³ + c₃φ⁵ + c₄φ⁷
where x and y are the projection coordinates, λ and φ are the longitude and latitude (in radians), and the polynomial coefficients are c₁ = 1.0148, c₂ = 0.23185, c₃ = -0.14499, c₄ = 0.02406.
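For illustration, the conversion can be sketched in a few lines of Python; this is a minimal sketch assuming longitude and latitude are supplied in radians, and the function name is ours, not the patent's:

```python
import math

# Polynomial coefficients of the Patterson projection, as given above
C1, C2, C3, C4 = 1.0148, 0.23185, -0.14499, 0.02406

def patterson_project(lon_rad, lat_rad):
    """Convert WGS84 longitude/latitude (radians) to Patterson projection (x, y)."""
    x = lon_rad
    y = (C1 * lat_rad
         + C2 * lat_rad ** 3
         + C3 * lat_rad ** 5
         + C4 * lat_rad ** 7)
    return x, y

# Example: project the point 30 degrees N, 120 degrees E
x, y = patterson_project(math.radians(120.0), math.radians(30.0))
```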
Step 2: the 3D virtual engine is used to load the globe model, and a corresponding texture map is mapped onto the sphere model by parameterized texture mapping, as shown in fig. 2.
The parameterized texture mapping formulas are:
θ = atan2(-(z - c_z), x - c_x)
u = (θ + π)/(2π)
φ = arcsin((y - c_y)/r)
v = (φ + π/2)/π
where u and v are the texture coordinates of the plane picture, each in the range [0, 1]; c = (c_x, c_y, c_z) is the center point of the sphere; x, y and z are the x-axis, y-axis and z-axis coordinates of a point on the sphere surface; θ is the longitude angle; φ is the latitude angle; and r is the sphere radius.
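A small Python sketch of this mapping is given below; the arcsin form of the latitude angle is our reconstruction, consistent with v falling in [0, 1], and the function name is illustrative:

```python
import math

def sphere_point_to_uv(px, py, pz, cx, cy, cz, r):
    """Map a point (px, py, pz) on a sphere of radius r centered at
    (cx, cy, cz) to texture coordinates (u, v) in [0, 1]."""
    theta = math.atan2(-(pz - cz), px - cx)   # longitude angle
    u = (theta + math.pi) / (2.0 * math.pi)
    phi = math.asin((py - cy) / r)            # latitude angle
    v = (phi + math.pi / 2.0) / math.pi
    return u, v

# Example: the "north pole" of a unit sphere at the origin maps to v == 1.0
u, v = sphere_point_to_uv(0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0)
```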
Then selecting a target area on the globe model after texture mapping, and inserting a plurality of marks into the outline of the target area, wherein the marks of each area outline are the same group of marks.
Step 3: a new texture map is requested from the target storage location over the network and set on the globe model; because all texture maps share the same specification, the contour markers inserted in step 2 still match the corresponding region contour positions.
Step 4: the direction of the light source is adjusted programmatically to simulate different real-world illumination conditions, the distance of the model is adjusted to simulate different real-world viewing distances, and meanwhile the globe model rotates around the y axis and the z axis at a certain frequency; pictures are captured during rotation to obtain globe picture data at different angles, under different illumination conditions, and at different distances.
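The capture conditions of step 4 can be organized as a simple parameter grid before rendering; the concrete light directions, distances, and angle steps below are illustrative assumptions rather than values prescribed by the patent:

```python
import itertools

light_directions = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.5, 0.5, 0.7)]
camera_distances = [2.0, 3.5, 5.0]          # in multiples of the sphere radius
y_angles = range(0, 360, 15)                # rotation about the y axis, degrees
z_angles = range(-60, 61, 30)               # rotation about the z axis, degrees

# Each tuple describes one screenshot: the engine is configured with the
# light direction, camera distance, and rotation angles, then a frame is saved.
capture_schedule = list(itertools.product(
    light_directions, camera_distances, y_angles, z_angles))
```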
Step 5: a spherical detection shell is added outside the globe model. After a picture is captured in step 4, rays are cast from the front of the globe model toward all region contour markers inserted in step 2; whether a marker point lies on the front surface of the globe model, i.e., whether it is visible in the current image, is judged by checking whether the Euclidean distance from the point where the ray hits the detection shell to the corresponding marker is smaller than a threshold value, as shown in fig. 3.
The distance formula is:
d = √((x₁ - x₂)² + (y₁ - y₂)² + (z₁ - z₂)²)
where (x₁, y₁, z₁) are the coordinates of the point where the ray hits the detection shell and (x₂, y₂, z₂) are the coordinates of the corresponding marker.
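In code, the visibility test is a single distance comparison; a minimal sketch follows, where the tolerance eps is an assumed value standing in for the patent's "certain value":

```python
import math

def is_marker_visible(hit_point, marker_point, eps=1e-2):
    """Return True if the ray cast toward the marker hits the spherical
    detection shell within eps of the marker itself, i.e. the marker lies
    on the front (camera-facing) side of the globe model."""
    d = math.sqrt(sum((h - m) ** 2 for h, m in zip(hit_point, marker_point)))
    return d < eps
```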
Step 6: whether to compute the detection-box coordinates of a region contour is decided by the proportion of visible marker points of that contour (obtained in step 5) to its total marker points, i.e. whether the proportion exceeds 80%. When detection-box data are to be stored, the image coordinates of the top-left and bottom-right corners of the minimal detection box are computed from the two-dimensional image coordinates of all visible marker points of the region contour, yielding automatic annotation data that are saved together with the current screenshot.
The coordinate calculation formulas are:
x₁ = Min(p₁(x), p₂(x), …, pₙ(x))
y₁ = Min(p₁(y), p₂(y), …, pₙ(y))
x₂ = Max(p₁(x), p₂(x), …, pₙ(x))
y₂ = Max(p₁(y), p₂(y), …, pₙ(y))
where (x₁, y₁) are the coordinates of the top-left corner of the detection box, (x₂, y₂) are the coordinates of the bottom-right corner, n is the number of visible marker points inserted on the region contour, and pᵢ(x), pᵢ(y) are the image coordinates of the i-th visible marker.
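These Min/Max formulas translate directly into a short helper (names illustrative):

```python
def min_bounding_box(visible_points):
    """Given the 2D image coordinates of the visible marker points of one
    region contour, return the minimal axis-aligned detection box as
    (x1, y1, x2, y2): top-left and bottom-right corners."""
    xs = [px for px, _ in visible_points]
    ys = [py for _, py in visible_points]
    return min(xs), min(ys), max(xs), max(ys)

# Example
box = min_bounding_box([(120, 80), (160, 95), (140, 130)])  # (120, 80, 160, 130)
```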
Step 7: after pictures have been captured according to the rule of step 4 and the detection-box annotation data generated automatically, steps 3 to 6 are repeated until no new texture map remains.
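Pulling the pieces together, the overall loop of steps 3 to 7 might be sketched as follows, reusing capture_schedule, is_marker_visible, and min_bounding_box from above; globe and its attributes (set_texture, capture, regions, markers) are hypothetical stand-ins for the 3D-engine API rather than any real library:

```python
def build_dataset(texture_maps, globe, ratio_threshold=0.8):
    """Sketch of the acquisition loop: for every texture map, rotate the
    globe through the capture schedule, test marker visibility, and store
    each screenshot with its automatically computed detection boxes."""
    dataset = []
    for texture in texture_maps:                       # step 3: next texture map
        globe.set_texture(texture)
        for params in capture_schedule:                # step 4: rotate and capture
            image = globe.capture(params)
            labels = []
            for region in globe.regions:               # step 5: visibility test
                visible = [m.image_xy for m in region.markers
                           if is_marker_visible(m.ray_hit, m.position)]
                # step 6: label the region only if enough markers are visible
                if visible and len(visible) / len(region.markers) >= ratio_threshold:
                    labels.append((region.name, min_bounding_box(visible)))
            dataset.append((image, labels))            # image-label pair stored together
    return dataset                                     # step 7: ends when maps run out
```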
Finally, it is noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made thereto without departing from the spirit and scope of the present invention, which is intended to be covered by the claims of the present invention.

Claims (5)

1. A method for acquiring and automatically labeling a data set for detecting a globe area, the method comprising the steps of:
s1: converting GIS data into Patterson projection coordinates to generate planar world maps with the same specification and different textures;
s2: loading a globe model by using a 3D virtual engine, mapping a corresponding texture map to the globe model by parameterization texture mapping; then selecting a target area on the globe sphere model after texture mapping, and inserting a plurality of marks into the outline of the target area, wherein the marks of the outline of each area are the same group of marks;
the parameterized texture mapping formulas are:
θ = atan2(-(z - c_z), x - c_x)
u = (θ + π)/(2π)
φ = arcsin((y - c_y)/r)
v = (φ + π/2)/π
where u and v are the texture coordinates of the plane picture, c is the center point of the sphere, x, y and z are the x-axis, y-axis and z-axis coordinates of a point on the sphere surface, θ is the longitude angle, φ is the latitude angle, and r is the sphere radius;
s3: acquiring a new texture map and setting it on the globe model;
s4: setting the environmental parameters and rotation mode of the globe model, then setting the screenshot frequency and capturing pictures during rotation;
s5: adding a spherical detection shell outside the globe model, casting rays from the front of the globe model toward all region contour markers inserted on it, determining the visible marker points, and counting the visible marker points of each group;
s6: judging whether the ratio of the number of visible marker points to the total number of marker points in the group reaches a preset threshold; if it does, computing the minimal detection-box coordinates of the corresponding region contour and storing the coordinates in one-to-one correspondence with the screenshots to obtain the data set; otherwise, not computing the detection-box coordinates of that region; when detection-box data are stored, the image coordinates of the top-left and bottom-right corners of the minimal detection box are computed from the two-dimensional image coordinates of all visible marker points of the region contour, yielding automatic annotation data that are saved together with the current screenshot.
2. The method for acquiring and automatically labeling a data set according to claim 1, wherein in step S1, the formula for converting GIS data into the Patterson projection coordinate system is:
x = λ
y = c₁φ + c₂φ³ + c₃φ⁵ + c₄φ⁷
where x and y are the projection coordinates, λ and φ are the longitude and latitude, and c₁ to c₄ are polynomial coefficients.
3. The method for acquiring and automatically labeling a data set according to claim 1, wherein step S4 specifically comprises: adjusting the direction of the light source programmatically to simulate different real-world illumination conditions, adjusting the distance of the globe model to simulate different real-world viewing distances, and meanwhile rotating the globe model around the y axis and the z axis at a certain frequency; and capturing pictures during rotation to obtain globe image data at different angles, under different illumination conditions, and at different distances.
4. The method for acquiring and automatically labeling a data set according to claim 1, wherein in step S5, determining the visible marker points and counting the number of visible marker points in each group specifically comprises: judging whether a marker point lies on the front surface of the globe model, i.e., whether it is visible in the current image, by checking whether the Euclidean distance from the point where the ray hits the detection shell to the corresponding marker point is smaller than a threshold value.
5. The method for acquiring and automatically labeling a data set according to any of claims 1-4, further comprising step S7: after pictures have been captured according to the rule of step S4 and the detection-box annotation data generated automatically, repeating steps S3 to S6 until all texture maps of the planar world maps generated in step S1 have been processed.
Application: CN202110672555.0A | Priority date: 2021-06-17 | Filing date: 2021-06-17 | Title: Data set acquisition and automatic labeling method for detecting terrestrial globe area | Status: Active | Grant: CN113342914B

Priority Applications (1)

Application Number: CN202110672555.0A | Priority date: 2021-06-17 | Filing date: 2021-06-17 | Title: Data set acquisition and automatic labeling method for detecting terrestrial globe area

Publications (2)

Publication Number | Publication Date
CN113342914A | 2021-09-03
CN113342914B | 2023-04-25

Family

ID=77476245

Family Applications (1)

Application Number: CN202110672555.0A (Active, granted as CN113342914B) | Priority date: 2021-06-17 | Filing date: 2021-06-17 | Title: Data set acquisition and automatic labeling method for detecting terrestrial globe area

Country Status (1)

Country: CN | Publication: CN113342914B

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108280870A (en) * 2018-01-24 2018-07-13 郑州云海信息技术有限公司 A kind of point cloud model texture mapping method and system
CN111259950A (en) * 2020-01-13 2020-06-09 南京邮电大学 Method for training YOLO neural network based on 3D model
CN111967313A (en) * 2020-07-08 2020-11-20 北京航空航天大学 Unmanned aerial vehicle image annotation method assisted by deep learning target detection algorithm
CN112686872A (en) * 2020-12-31 2021-04-20 南京理工大学 Wood counting method based on deep learning

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9122843D0 (en) * 1991-10-28 1991-12-11 Imperial College Method and apparatus for image processing
EP2828834B1 (en) * 2012-03-19 2019-11-06 Fittingbox Model and method for producing photorealistic 3d models
CN203288164U (en) * 2013-04-24 2013-11-13 庞鸿宇 Transparent four-color latitude-longitude celestial globe
US9776320B2 (en) * 2014-08-28 2017-10-03 Kabushiki Kaisha Topcon Measurement and installation data indicating apparatus and measurement and installation data indicating method
WO2017158829A1 (en) * 2016-03-18 2017-09-21 三菱電機株式会社 Display control device and display control method
CN107451235B (en) * 2017-07-25 2020-08-14 广州视源电子科技股份有限公司 Display method and device of space dimension mark
CN108646922B (en) * 2018-05-24 2021-10-08 国家基础地理信息中心 Interactive digital globe and interaction method
JP6413042B1 (en) * 2018-05-30 2018-10-24 株式会社ほぼ日 Program, information processing apparatus and information processing method
CN110390258A (en) * 2019-06-05 2019-10-29 东南大学 Image object three-dimensional information mask method
CN110580723B (en) * 2019-07-05 2022-08-19 成都智明达电子股份有限公司 Method for carrying out accurate positioning by utilizing deep learning and computer vision
CN111199206A (en) * 2019-12-30 2020-05-26 上海眼控科技股份有限公司 Three-dimensional target detection method and device, computer equipment and storage medium
CN112597788B (en) * 2020-09-01 2021-09-21 禾多科技(北京)有限公司 Target measuring method, target measuring device, electronic apparatus, and computer-readable medium
CN112419233B (en) * 2020-10-20 2022-02-22 腾讯科技(深圳)有限公司 Data annotation method, device, equipment and computer readable storage medium
CN112700552A (en) * 2020-12-31 2021-04-23 华为技术有限公司 Three-dimensional object detection method, three-dimensional object detection device, electronic apparatus, and medium
CN112818990B (en) * 2021-01-29 2023-08-22 中国人民解放军军事科学院国防科技创新研究院 Method for generating target detection frame, method and system for automatically labeling image data
CN112767489A (en) * 2021-01-29 2021-05-07 北京达佳互联信息技术有限公司 Three-dimensional pose determination method and device, electronic equipment and storage medium
CN112766274B (en) * 2021-02-01 2023-07-07 长沙市盛唐科技有限公司 Water gauge image water level automatic reading method and system based on Mask RCNN algorithm

Also Published As

Publication Number | Publication Date
CN113342914A | 2021-09-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant