CN113342914A - Method for acquiring and automatically labeling a data set for globe region detection

Method for acquiring and automatically labeling a data set for globe region detection

Info

Publication number
CN113342914A
Authority
CN
China
Prior art keywords
globe
model
mark points
data set
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110672555.0A
Other languages
Chinese (zh)
Other versions
CN113342914B (en)
Inventor
Dong Shuang (董爽)
Chen Hengxin (陈恒鑫)
Chen Xinrun (陈鑫润)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University
Priority to CN202110672555.0A
Publication of CN113342914A
Application granted
Publication of CN113342914B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval of structured data, e.g. relational data
    • G06F 16/29: Geographical information databases
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05: Geographic models

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Remote Sensing (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Graphics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a method for acquiring and automatically labeling a data set for globe region detection, belonging to the technical field of computer vision. The method comprises the following steps. S1: convert coordinates to generate planar world maps of identical specification but different textures. S2: load a globe model in a 3D virtual engine and map the corresponding texture onto it by parametric texture mapping; then select target regions on the globe sphere model and mark them. S3: acquire a new texture map and apply it to the globe model. S4: set the environment parameters, rotation mode, and screenshot frequency of the globe model, and capture screenshots while it rotates. S5: determine which mark points are visible and count them. S6: judge whether the ratio of visible mark points to total mark points reaches a preset threshold. The method can automatically acquire large data sets and improves the accuracy of model training.

Description

Method for acquiring and automatically labeling data set for globe region detection
Technical Field
The invention belongs to the technical field of computer vision, and relates to a method for acquiring and automatically labeling a data set for globe region detection.
Background
Deep learning has greatly advanced computer vision, but it has the drawback of requiring large amounts of labeled data, and building a data set by manually photographing and manually annotating images consumes considerable time and manpower.
In particular, when a deep-learning-based object detection algorithm is used to recognize globe regions, existing methods require a large amount of manpower and time to collect the labeled data sets needed, which hinders the work.
Disclosure of Invention
In view of this, the present invention provides a method for acquiring and automatically labeling a data set for globe region detection, which uses a 3D-virtual-engine-based system that automatically acquires and automatically labels the data set to obtain a large amount of labeled data, thereby improving the accuracy of model training while greatly saving time and labor cost.
In order to achieve the purpose, the invention provides the following technical scheme:
a method for acquiring and automatically labeling a data set for globe region detection specifically comprises the following steps:
s1: converting GIS data containing a WGS84 geographic coordinate system into a Patterson projection coordinate system to generate a large number of planar world maps with the same specification and different textures, wherein the planar projection pictures are used for texture mapping of a globe in a 3D virtual engine;
s2: loading a globe model by using a 3D virtual engine, mapping a corresponding texture map on the globe model through parametric texture mapping; then selecting a target area on the globe sphere model subjected to texture mapping, inserting a plurality of marks into the outline of the target area, and marking the outline of each area as the same group of marks;
s3: requesting a new texture map from the target storage location through the network and setting the new texture map on the globe model, wherein the contour mark inserted in the step S2 still matches the corresponding region contour position because the specifications of the texture maps are the same;
s4: setting environmental parameters (including illumination and distance) and a rotation mode of the globe model, then setting screenshot frequency, and capturing pictures to obtain globe pictures at different angles, different illumination conditions and different distances in the rotation process;
s5: adding a layer of spherical detection frame outside the globe model, inserting all area contour markers to emit light rays from the front position of the globe model to the globe model after intercepting the picture in step S4, then judging visible marker points and calculating the number of the visible marker points of each group;
s6: judging whether the ratio of the number of the visible mark points to the number of the total mark points reaches a preset ratio or not; if so, calculating the coordinate of the minimum detection frame of the corresponding area outline, and then corresponding the coordinate to the screenshot one by one and storing to obtain a data set; otherwise, the coordinates of the detection frame of the corresponding area are not calculated; if the detection frame data is stored, the image coordinates of the upper left point and the lower right point of the minimum detection frame of the detection frame are calculated through the two-dimensional image coordinates of all visible mark points of the area outline, so that automatic mark data are obtained and stored corresponding to the current screenshot.
S7: after the picture is cut out according to the rule of step S4 and the check box flag data is automatically generated, steps S3 to S6 are repeatedly performed until no new texture map exists.
Further, in step S1, the GIS data are converted into the Patterson projection coordinate system by:
x = λ
y = c1·φ + c2·φ^5 + c3·φ^7 + c4·φ^9
where x and y are the projection coordinates, λ and φ are the longitude and latitude, and c1, c2, c3, c4 are polynomial coefficients.
Further, in step S2, the parametric texture mapping formulas are:
θ = atan2(-(z - cz), x - cx)
u = (θ + π)/2π
φ = arcsin((y - cy)/r)
v = φ/π + 1/2
where u and v are the texture coordinates of the planar image, in the range [0, 1]; c = (cx, cy, cz) is the coordinate of the sphere center; x, y and z are the x-axis, y-axis and z-axis coordinates of a point on the sphere surface; θ is the longitude angle; φ is the latitude angle; and r is the sphere radius.
Further, step S4 specifically comprises: adjust the light source direction programmatically to simulate different real illumination conditions and adjust the distance of the globe model to simulate different real viewing distances, while the globe model rotates about the y axis and the z axis at a set frequency; capture screenshots during rotation to obtain globe image data at different angles, under different illumination conditions, and at different distances.
Further, in step S5, the visible mark points are determined and counted per group as follows: a mark point lies on the front side of the globe model, that is, it is visible in the current image, if the Euclidean distance between the point where its ray hits the detection shell and the corresponding mark point is smaller than a threshold.
The distance formula is:
d = sqrt((x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2)
where x1, y1, z1 are the coordinates of the point where the ray hits the detection shell, and x2, y2, z2 are the coordinates of the corresponding mark point.
Further, in step S6, the image coordinates are calculated as:
x1 = Min(p1(x), p2(x), …, pn(x))
y1 = Min(p1(y), p2(y), …, pn(y))
x2 = Max(p1(x), p2(x), …, pn(x))
y2 = Max(p1(y), p2(y), …, pn(y))
where x1, y1 are the image coordinates of the upper-left corner of the detection box, x2, y2 are the image coordinates of its lower-right corner, n is the number of visible mark points inserted on the region contour, and pi(x), pi(y) are the image coordinates of the i-th visible mark point.
The invention has the following beneficial effects: a system based on a 3D virtual engine automatically acquires and automatically labels the data set, producing a large amount of labeled data; this improves the accuracy of model training while greatly saving time and labor cost.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is an overall flow chart of a data set acquisition and automatic labeling method of the present invention;
FIG. 2 is a schematic diagram of coordinates of a planar world map mapped onto a sphere model;
FIG. 3 is a schematic diagram of the mark points on the globe model, the spherical detection shell, and the ray collision points on the shell.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
Referring to fig. 1 to fig. 3, the present embodiment designs a method for acquiring and automatically labeling a data set for globe area detection, which specifically includes the following steps:
step 1: and converting GIS data containing the WGS84 geographic coordinate system into a Patterson projection coordinate system, and generating a large number of plane projection pictures of world maps with the same specification and different textures for texture mapping of the globe in the 3D virtual engine.
The Patterson projection coordinates are calculated as:
x = λ
y = c1·φ + c2·φ^5 + c3·φ^7 + c4·φ^9
where x and y are the projection coordinates, λ and φ are the longitude and latitude, and the polynomial coefficients are c1 = 1.0148, c2 = 0.23185, c3 = -0.14499, c4 = 0.02406.
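For illustration, the conversion of step 1 can be sketched in Python as follows; this is a minimal sketch assuming angles in radians, and the function and variable names are chosen for the example only:

```python
import numpy as np

# Patterson cylindrical projection coefficients, as given above.
C1, C2, C3, C4 = 1.0148, 0.23185, -0.14499, 0.02406

def patterson(lon_rad, lat_rad):
    """Project WGS84 longitude/latitude (radians) to Patterson x, y."""
    x = np.asarray(lon_rad)
    phi = np.asarray(lat_rad)
    y = C1 * phi + C2 * phi**5 + C3 * phi**7 + C4 * phi**9
    return x, y

# Example: project the parallels 0, 30, 60 and 90 degrees north.
lat = np.radians([0.0, 30.0, 60.0, 90.0])
print(patterson(np.zeros_like(lat), lat)[1])
```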
Step 2: load the globe model in the 3D virtual engine and map the corresponding texture onto the sphere model by parametric texture mapping, as shown in FIG. 2.
The parametric texture mapping formulas are:
θ = atan2(-(z - cz), x - cx)
u = (θ + π)/2π
φ = arcsin((y - cy)/r)
v = φ/π + 1/2
where u and v are the texture coordinates of the planar image, in the range [0, 1]; c = (cx, cy, cz) is the coordinate of the sphere center; x, y and z are the x-axis, y-axis and z-axis coordinates of a point on the sphere surface; θ is the longitude angle; φ is the latitude angle; and r is the sphere radius.
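A minimal Python sketch of this sphere-to-texture mapping follows; the φ and v lines are reconstructed from the variable definitions above (a standard spherical UV mapping) and should be read as an assumption rather than the exact engine code:

```python
import math

def sphere_uv(point, center, radius):
    """Map a point (x, y, z) on a sphere with the given center and radius
    to planar texture coordinates (u, v), each in [0, 1]."""
    x, y, z = point
    cx, cy, cz = center
    theta = math.atan2(-(z - cz), x - cx)  # longitude angle
    u = (theta + math.pi) / (2 * math.pi)
    phi = math.asin((y - cy) / radius)     # latitude angle
    v = phi / math.pi + 0.5
    return u, v

# Example: a point on the equator and the north pole of a unit sphere.
print(sphere_uv((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0))  # (0.5, 0.5)
print(sphere_uv((0.0, 1.0, 0.0), (0.0, 0.0, 0.0), 1.0))  # (0.5, 1.0)
```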
Then select target regions on the textured globe sphere model and insert several marks along each target region's contour; the marks of one region contour form one group.
Step 3: request a new texture map from the target storage location over the network and apply it to the globe model; because all texture maps share the same specification, the contour marks inserted in step 2 still match the corresponding region contour positions.
Step 4: adjust the light source direction programmatically to simulate different real illumination conditions and adjust the distance of the globe model to simulate different real viewing distances, while the globe model rotates about the y axis and the z axis at a set frequency; capture screenshots during rotation to obtain globe image data at different angles, under different illumination conditions, and at different distances.
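The capture schedule of step 4 can be expressed as a simple enumeration of poses; everything below (step counts, light azimuths, distances, dictionary keys) is an illustrative assumption, since the engine API is not specified here:

```python
import itertools
import math

def capture_schedule(y_steps=12, z_steps=6,
                     light_azimuths=(0.0, math.pi / 3, 2 * math.pi / 3),
                     distances=(2.0, 3.0, 4.0)):
    """Yield one pose per screenshot: rotations about the y and z axes
    combined with simulated illumination directions and camera distances."""
    for i, j, light, dist in itertools.product(
            range(y_steps), range(z_steps), light_azimuths, distances):
        yield {
            "y_rotation": 2 * math.pi * i / y_steps,
            "z_rotation": 2 * math.pi * j / z_steps,
            "light_azimuth": light,
            "camera_distance": dist,
        }

# The engine would apply each pose and save a screenshot; here we just count.
print(sum(1 for _ in capture_schedule()))  # 12 * 6 * 3 * 3 = 648 poses
```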
Step 5: add a spherical detection shell around the globe model; after capturing a picture in step 4, cast rays from a position in front of the globe model toward all region contour marks inserted in step 2, and judge whether each mark point lies on the front side of the globe model, that is, whether it is visible in the current image, by checking whether the Euclidean distance between the point where its ray hits the detection shell and the corresponding mark is smaller than a threshold, as shown in FIG. 3.
The distance formula is:
d = sqrt((x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2)
where x1, y1, z1 are the coordinates of the point where the ray hits the detection shell, and x2, y2, z2 are the coordinates of the corresponding mark point.
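A sketch of the visibility test in Python; the tolerance value is an assumption (it must absorb the gap between the detection shell and the globe surface, which the description leaves as "a certain value"):

```python
import math

def euclidean(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def is_visible(hit_point, mark_point, tolerance=0.05):
    """True if the point where the ray hits the detection shell is close
    enough to the mark point, i.e. the mark faces the camera."""
    return euclidean(hit_point, mark_point) < tolerance

# A front-side mark: the ray's hit point on the shell sits just above it.
print(is_visible((0.0, 0.0, 1.02), (0.0, 0.0, 1.0)))   # True
# A back-side mark: the ray hits the near side of the shell, far from it.
print(is_visible((0.0, 0.0, 1.02), (0.0, 0.0, -1.0)))  # False
```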
Step 6: compute the ratio of the visible mark points of a region contour obtained in step 5 to the total mark points of that contour, and use it to decide whether to calculate detection box data for the contour (for example, whether the ratio exceeds 80%). If the detection box data are to be stored, calculate the image coordinates of the upper-left and lower-right corners of the minimal detection box from the two-dimensional image coordinates of all visible mark points of the region contour, thereby obtaining automatic annotation data stored together with the current screenshot.
The coordinate calculation formulas are:
x1 = Min(p1(x), p2(x), …, pn(x))
y1 = Min(p1(y), p2(y), …, pn(y))
x2 = Max(p1(x), p2(x), …, pn(x))
y2 = Max(p1(y), p2(y), …, pn(y))
where x1, y1 are the image coordinates of the upper-left corner of the detection box, x2, y2 are the image coordinates of its lower-right corner, n is the number of visible mark points inserted on the region contour, and pi(x), pi(y) are the image coordinates of the i-th visible mark point.
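In code, the minimal box of step 6 is just a min/max over the visible marks' image coordinates; a small sketch:

```python
def min_bounding_box(points):
    """Minimal axis-aligned detection box of the visible mark points of one
    region contour. points: list of (x, y) image coordinates; returns the
    upper-left (x1, y1) and lower-right (x2, y2) corners."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

# Example with four visible mark points.
print(min_bounding_box([(120, 80), (160, 60), (140, 110), (130, 90)]))
# -> ((120, 60), (160, 110))
```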
Step 7: after the screenshots have been captured according to the rule of step 4 and the detection box annotation data have been generated automatically, repeat steps 3 to 6 until no new texture map exists.
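Putting steps 3 to 7 together, the control flow can be sketched as below; shots_per_texture and visible_marks stand in for engine operations the description leaves unspecified and are hypothetical parameters, while the 80% threshold follows step 6:

```python
def build_dataset(texture_maps, contour_marks, shots_per_texture,
                  visible_marks, ratio_threshold=0.8):
    """texture_maps: texture identifiers fetched one by one (step 3);
    contour_marks: {region label: total number of inserted marks};
    shots_per_texture: callable yielding screenshots for one texture (step 4);
    visible_marks: callable(shot, label) -> [(x, y), ...] image coordinates
    of the visible marks of that contour in that shot (step 5)."""
    dataset = []
    for texture in texture_maps:
        for shot in shots_per_texture(texture):
            for label, total in contour_marks.items():
                pts = visible_marks(shot, label)
                if total and len(pts) / total >= ratio_threshold:  # step 6
                    xs = [x for x, _ in pts]
                    ys = [y for _, y in pts]
                    dataset.append((shot, label,
                                    (min(xs), min(ys), max(xs), max(ys))))
    return dataset

# Toy run with fake stand-ins for the engine and the visibility test.
shots = lambda tex: [f"{tex}_shot{i}" for i in range(2)]
vis = lambda shot, label: [(10, 10), (30, 40), (20, 25), (15, 35)]
print(len(build_dataset(["tex_a", "tex_b"], {"Asia": 5}, shots, vis)))  # 4
```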
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (6)

1. A method for acquisition and automatic labeling of a data set for globe region detection is characterized by comprising the following steps:
s1: converting GIS data into Patterson projection coordinates to generate planar world maps of identical specification but different textures;
s2: loading a globe model with a 3D virtual engine and mapping a corresponding texture map onto the sphere model by parametric texture mapping; then selecting target regions on the textured globe sphere model and inserting several marks along each target region's contour, the marks of one region contour forming one group;
s3: additionally acquiring a new texture map and applying it to the globe model;
s4: setting environment parameters and a rotation mode of the globe model, then setting a screenshot frequency and capturing screenshots during rotation;
s5: adding a spherical detection shell around the globe model, casting rays from a position in front of the globe model toward all region contour marks, then judging which mark points are visible and counting the visible mark points of each group;
s6: judging whether the ratio of the number of visible mark points to the total number of mark points reaches a preset threshold; if so, calculating the minimal detection box of the corresponding region contour, namely the image coordinates of its upper-left and lower-right corners, from the two-dimensional image coordinates of all visible mark points of that contour, and storing these annotations in one-to-one correspondence with the screenshots to obtain the data set; otherwise, not calculating a detection box for that region.
2. The method for acquiring and automatically labeling the data set according to claim 1, wherein in step S1, the calculation formula for converting the GIS data into the Patterson projection coordinate system is:
x = λ
y = c1·φ + c2·φ^5 + c3·φ^7 + c4·φ^9
wherein x and y are projection coordinates, λ and φ are the longitude and latitude, and c1 to c4 are polynomial coefficients.
3. The method for data set acquisition and automatic labeling according to claim 1, wherein in step S2, the parametric texture mapping formulas are:
θ = atan2(-(z - cz), x - cx)
u = (θ + π)/2π
φ = arcsin((y - cy)/r)
v = φ/π + 1/2
wherein u and v are the texture coordinates of the planar image, c = (cx, cy, cz) is the coordinate of the sphere center, x, y and z are the x-axis, y-axis and z-axis coordinates of a point on the sphere surface, θ is the longitude angle, φ is the latitude angle, and r is the sphere radius.
4. The method for data set acquisition and automatic annotation according to claim 1, wherein step S4 specifically comprises: adjusting the light source direction programmatically to simulate different real illumination conditions and adjusting the distance of the globe model to simulate different real viewing distances, while the globe model rotates about the y axis and the z axis at a set frequency; and capturing screenshots during rotation to obtain globe image data at different angles, under different illumination conditions, and at different distances.
5. The method for acquiring and automatically labeling a data set according to claim 1, wherein in step S5, judging the visible mark points and counting the visible mark points of each group specifically comprises: judging whether a mark point lies on the front side of the globe model, that is, whether it is visible in the current image, by checking whether the Euclidean distance between the point where its ray hits the detection shell and the corresponding mark point is smaller than a threshold.
6. The method for data set acquisition and automatic labeling according to any of claims 1-5, further comprising step S7: after the screenshots are captured according to the rule of step S4 and the detection box annotation data are automatically generated, repeatedly performing steps S3 to S6 until no new texture map exists.
CN202110672555.0A; filed 2021-06-17; Data set acquisition and automatic labeling method for detecting terrestrial globe area; Active; granted as CN113342914B

Priority Applications (1)

Application Number: CN202110672555.0A (granted as CN113342914B); Priority Date: 2021-06-17; Filing Date: 2021-06-17; Title: Data set acquisition and automatic labeling method for detecting terrestrial globe area

Applications Claiming Priority (1)

Application Number: CN202110672555.0A (granted as CN113342914B); Priority Date: 2021-06-17; Filing Date: 2021-06-17; Title: Data set acquisition and automatic labeling method for detecting terrestrial globe area

Publications (2)

Publication Number Publication Date
CN113342914A (application publication): 2021-09-03
CN113342914B (granted publication): 2023-04-25

Family

ID=77476245

Family Applications (1)

Application Number: CN202110672555.0A (Active; granted as CN113342914B); Priority Date: 2021-06-17; Filing Date: 2021-06-17; Title: Data set acquisition and automatic labeling method for detecting terrestrial globe area

Country Status (1)

Country Link
CN (1) CN113342914B (en)

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5647018A (en) * 1991-10-28 1997-07-08 Imperial College Of Science, Technology And Medicine Method and apparatus for generating images
US20140055570A1 (en) * 2012-03-19 2014-02-27 Fittingbox Model and method for producing 3d photorealistic models
CN203288164U (en) * 2013-04-24 2013-11-13 庞鸿宇 Transparent four-color latitude-longitude celestial globe
US20170252918A1 (en) * 2014-08-28 2017-09-07 Kabushiki Kaisha Topcon Measurement and installation data indicating apparatus and measurement and installation data indicating method
WO2017158829A1 (en) * 2016-03-18 2017-09-21 三菱電機株式会社 Display control device and display control method
CN107451235A (en) * 2017-07-25 2017-12-08 广州视源电子科技股份有限公司 The methods of exhibiting and device of Spatial Dimension mark
CN108280870A (en) * 2018-01-24 2018-07-13 郑州云海信息技术有限公司 A kind of point cloud model texture mapping method and system
CN108646922A (en) * 2018-05-24 2018-10-12 国家基础地理信息中心 A kind of interactive digital tellurion and exchange method
WO2019230802A1 (en) * 2018-05-30 2019-12-05 株式会社ほぼ日 Program, information processing device, and information processing method
CN110390258A (en) * 2019-06-05 2019-10-29 东南大学 Image object three-dimensional information mask method
CN110580723A (en) * 2019-07-05 2019-12-17 成都智明达电子股份有限公司 method for carrying out accurate positioning by utilizing deep learning and computer vision
CN111199206A (en) * 2019-12-30 2020-05-26 上海眼控科技股份有限公司 Three-dimensional target detection method and device, computer equipment and storage medium
CN111259950A (en) * 2020-01-13 2020-06-09 南京邮电大学 Method for training YOLO neural network based on 3D model
CN111967313A (en) * 2020-07-08 2020-11-20 北京航空航天大学 Unmanned aerial vehicle image annotation method assisted by deep learning target detection algorithm
CN112597788A (en) * 2020-09-01 2021-04-02 禾多科技(北京)有限公司 Target measuring method, target measuring device, electronic apparatus, and computer-readable medium
CN112419233A (en) * 2020-10-20 2021-02-26 腾讯科技(深圳)有限公司 Data annotation method, device, equipment and computer readable storage medium
CN112686872A (en) * 2020-12-31 2021-04-20 南京理工大学 Wood counting method based on deep learning
CN112700552A (en) * 2020-12-31 2021-04-23 华为技术有限公司 Three-dimensional object detection method, three-dimensional object detection device, electronic apparatus, and medium
CN112767489A (en) * 2021-01-29 2021-05-07 北京达佳互联信息技术有限公司 Three-dimensional pose determination method and device, electronic equipment and storage medium
CN112818990A (en) * 2021-01-29 2021-05-18 中国人民解放军军事科学院国防科技创新研究院 Target detection frame generation method, image data automatic labeling method and system
CN112766274A (en) * 2021-02-01 2021-05-07 长沙市盛唐科技有限公司 Water gauge image water level automatic reading method and system based on Mask RCNN algorithm

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Urooj Mahatab et al.: "Spam tags detection and protection using tags relationship based anti-spam approach", 2018 IEEE 21st International Multi-Topic Conference *
Wang Zhixuan et al.: "Method for automatically measuring street lamp coordinates from spherical panoramic images", Journal of Image and Graphics (in Chinese) *
Hu Yuanzhi et al.: "Research on target ranging methods based on data fusion", Journal of Chongqing University of Technology (Natural Science) (in Chinese) *

Also Published As

Publication number Publication date
CN113342914B (en) 2023-04-25

Similar Documents

Publication Publication Date Title
AU2007355942B2 (en) Arrangement and method for providing a three dimensional map representation of an area
CN109816704A (en) The 3 D information obtaining method and device of object
CN109724603A (en) A kind of Indoor Robot air navigation aid based on environmental characteristic detection
CN106560835B (en) A kind of guideboard recognition methods and device
CN107702714A (en) Localization method, apparatus and system
CN110967014B (en) Machine room indoor navigation and equipment tracking method based on augmented reality technology
WO2020090428A1 (en) Geographic object detection device, geographic object detection method, and geographic object detection program
CN110969592B (en) Image fusion method, automatic driving control method, device and equipment
WO2020199565A1 (en) Street lamp pole-based vehicle posture correction method and device
CN110288612B (en) Nameplate positioning and correcting method and device
CN115937439B (en) Method and device for constructing three-dimensional model of urban building and electronic equipment
CN108332752A (en) The method and device of robot indoor positioning
CN114004977A (en) Aerial photography data target positioning method and system based on deep learning
CN114782824A (en) Wetland boundary defining method and device based on interpretation mark and readable storage medium
CN115239784A (en) Point cloud generation method and device, computer equipment and storage medium
CN113255578B (en) Traffic identification recognition method and device, electronic equipment and storage medium
CN113342914B (en) Data set acquisition and automatic labeling method for detecting terrestrial globe area
CN105631849B (en) The change detecting method and device of target polygon
WO2024088071A1 (en) Three-dimensional scene reconstruction method and apparatus, device and storage medium
CN115527000B (en) Method and device for batch monomalization of unmanned aerial vehicle oblique photography model
CN113139031B (en) Method and related device for generating traffic sign for automatic driving
CN115908729A (en) Three-dimensional live-action construction method, device and equipment and computer readable storage medium
CN116401326A (en) Road identification updating method and device
US20230334819A1 (en) Illuminant estimation method and apparatus for electronic device
CN113095112A (en) Point cloud data acquisition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant