CN114998425B - Target object geographic coordinate positioning method and device based on artificial intelligence - Google Patents

Target object geographic coordinate positioning method and device based on artificial intelligence

Info

Publication number
CN114998425B
CN114998425B
Authority
CN
China
Prior art keywords
coordinate
pixel
camera
coordinates
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210933498.1A
Other languages
Chinese (zh)
Other versions
CN114998425A (en)
Inventor
谢刚
姜孝兵
李倩
魏延峰
张煜辉
凌海锋
姚明
尚建勋
张呈峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Geospace Information Technology Co ltd
Original Assignee
Geospace Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Geospace Information Technology Co ltd filed Critical Geospace Information Technology Co ltd
Priority to CN202210933498.1A
Publication of CN114998425A
Application granted
Publication of CN114998425B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20101 Interactive definition of point of interest, landmark or seed
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention is applicable to the field of natural resource investigation and monitoring, and provides a target object geographic coordinate positioning method and device based on artificial intelligence. The method comprises the following steps: S1, field calibration; S2, sample collection; S3, model generation; and S4, model execution. By collecting samples of the camera's horizontal azimuth, vertical azimuth and magnification together with the pixel coordinates of the target object and the corresponding calibrated geographic coordinates, and by iteratively training a pixel-coordinate-to-geographic-coordinate conversion model, the invention can calculate the geographic coordinates of any pixel in the picture under the current field of view, thereby locating the geographic coordinates of the monitored target.

Description

Target object geographic coordinate positioning method and device based on artificial intelligence
Technical Field
The invention belongs to the field of natural resource investigation and monitoring, and particularly relates to a target object geographic coordinate positioning method and device based on artificial intelligence.
Background
With the development of the natural resource investigation and monitoring industry, monitoring requirements for natural resources keep rising. The daily video content captured by monitoring cameras is no longer enough on its own; users need to obtain directly the geographic coordinates of a target ground object in the real world (for example, coordinate values in the China Geodetic Coordinate System 2000, which is widely used in China).
In the existing technical scheme, tower-mounted cameras or video poles monitor illegal land use in real time, with monitoring cameras generally arranged on outdoor towers and video poles. This scheme achieves only video monitoring: it can neither intelligently identify a target object nor accurately geolocate it, and therefore cannot provide real-time early warning and protection. To address this, publication No. CN113223087B discloses a target object geographic coordinate positioning technology based on video monitoring. It describes a method for converting pixel coordinates into geodetic coordinates: the mapping relationship between the pixel coordinates of the monitored object and geodetic coordinates is analyzed from the camera's geodetic coordinates, elevation difference, vertical azimuth, horizontal azimuth, vertical viewing angle and horizontal viewing angle, and the geographic coordinates are then calculated from the azimuths and pixel coordinates.
That method has several problems: the logic for constructing the mapping relationship is complex, the camera's exact distance and height difference from the ground are difficult to measure, and a camera mounted high in the air is difficult to adjust to a perfectly horizontal state. In specific applications, the following problems arise:
(1) Many calculation parameters: the camera's X, Y and Z coordinates, horizontal azimuth, vertical azimuth, horizontal viewing angle and vertical viewing angle must all be considered, and when the camera is not installed level, the pitch angle, roll angle and so on must be considered as well;
(2) Complex logic: the coordinate mapping relationship is difficult for ordinary implementers to master;
(3) Difficult installation: in high-altitude work, it is difficult to ensure that the camera is installed horizontally;
(4) Difficult measurement: the camera is in the air, and its height difference from the ground is difficult to measure.
Disclosure of Invention
In view of the above problems, the present invention provides a method and an apparatus for locating geographic coordinates of a target object based on artificial intelligence, and aims to solve the above problems.
The invention adopts the following technical scheme:
in one aspect, the method for locating the geographic coordinates of the target object based on the artificial intelligence comprises the following steps:
s1, field calibration: arranging field calibration points in a camera monitoring area, wherein each point has a corresponding geographic coordinate;
s2, sample collection: previewing a camera picture through a sample acquisition tool, locking the geographic coordinates of a field calibration point, and then clicking the camera picture through a mouse to automatically capture the current horizontal azimuth angle, vertical azimuth angle, multiplying power, pixel x coordinate and pixel y coordinate of the camera, forming a sample with the locked geographic coordinates, and further completing sample acquisition;
s3, model generation: generating a model file through model training and multiple iterations according to the collected samples;
S4, model execution: acquiring the camera's current horizontal azimuth, vertical azimuth and magnification together with the pixel coordinates of the target object, inputting these parameters into the model, carrying out forward propagation, and calculating the real geographic coordinates of the target object.
Further, in step S2, the sample format of the collected sample is
$(P, T, Z, u, v, X, Y)$

where $P$, $T$ and $Z$ denote the horizontal azimuth, vertical azimuth and magnification respectively, $(u, v)$ the pixel x and y coordinates on the screen, and $(X, Y)$ the x and y geographic coordinates.
Further, the specific process of step S3 is as follows:
setting a neuron forward propagation operation expression and an activation function for model training;
obtaining a forward propagation result and an average loss function after multi-stage neuron operation;
optimizing the loss amount by using a least square method, and solving an optimal solution;
updating the weights using the chain rule of derivatives;
and performing multiple iterations with a gradient descent method, fitting to obtain a global optimal solution, and storing it as a model file.
In another aspect, the apparatus for locating geographic coordinates of a target object based on artificial intelligence comprises:
the field calibration unit is used for arranging field calibration points in a camera monitoring area, and each point has a corresponding geographic coordinate;
the sample acquisition unit is used for previewing a camera picture through a sample acquisition tool, locking the geographic coordinates of a field calibration point, and then clicking the camera picture with a mouse to automatically capture the camera's current horizontal azimuth, vertical azimuth, magnification, pixel x coordinate and pixel y coordinate, forming a sample with the locked geographic coordinates and completing sample collection;
the model generation unit is used for generating a model file through model training and multiple iterations according to the collected samples;
and the model execution unit is used for acquiring five parameters, namely the camera's current horizontal azimuth, vertical azimuth and magnification and the pixel coordinates of the target object, inputting the parameters into the model, carrying out forward propagation, and calculating the real geographic coordinates of the target object.
Further, the sample format of the collected sample is
$(P, T, Z, u, v, X, Y)$

where $P$, $T$ and $Z$ denote the horizontal azimuth, vertical azimuth and magnification respectively, $(u, v)$ the pixel x and y coordinates on the screen, and $(X, Y)$ the x and y geographic coordinates.
Further, the model generation unit includes:
the setting module is used for setting a neuron forward propagation operation expression and an activation function of model training;
the first calculation module is used for obtaining a forward propagation result and an average loss function after multi-stage neuron operation;
the second calculation module is used for optimizing the loss amount by using a least square method and solving an optimal solution;
the weight updating module is used for updating the weights using the chain rule of derivatives;
and the iterative fitting module is used for executing multiple iterations by using a gradient descent method, obtaining a global optimal solution after fitting, and storing the global optimal solution as a model file.
The beneficial effects of the invention are as follows: by adopting artificial intelligence, the invention requires neither the camera's X, Y and Z coordinates nor that the camera be level. Under video monitoring, samples of the horizontal azimuth, vertical azimuth, magnification and pixel coordinates of the target object, together with the corresponding calibrated geographic coordinates, are collected, and a pixel-coordinate-to-geographic-coordinate conversion model is built by iterative training; the geographic coordinates of any pixel in the picture under the current field of view can then be calculated, locating the geographic coordinates of the monitored target. The invention thus solves the problems of the prior art: it reduces the number of parameters, simplifies the operation flow, and lowers both installation and measurement difficulty.
Drawings
FIG. 1 is a flowchart of a method for locating geographic coordinates of a target object based on artificial intelligence according to an embodiment of the present invention;
FIG. 2 is a block diagram of an artificial intelligence based target object geographic coordinate locating device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Embodiment one:
as shown in fig. 1, the method for locating geographic coordinates of a target object based on artificial intelligence according to this embodiment includes the following steps:
s1, field calibration: and arranging field calibration points in the monitoring area of the camera, wherein each point has a corresponding geographic coordinate.
In this step, the field calibration points are arranged in the camera's monitoring area; for example, points can be arranged uniformly across the area and then measured with Real-Time Kinematic (RTK) positioning to obtain the geographic coordinates of each field calibration point.
RTK real-time differential positioning is a measuring method that obtains centimeter-level positioning accuracy in the field in real time and can greatly improve field operation efficiency. The purpose of this step is to obtain the actual geographic coordinates of each field calibration point.
S2, sample collection: the camera picture is previewed through a sample collection tool and the geographic coordinates of a field calibration point are locked; clicking the camera picture with the mouse then automatically captures the camera's current horizontal azimuth, vertical azimuth, magnification, pixel x coordinate and pixel y coordinate, which form a sample together with the locked geographic coordinates, completing sample collection.
The camera picture can be previewed in real time through the sample collection tool, and the field calibration points are visible in the picture. During operation, the camera's monitoring range can be adjusted with the remote controller, as can the magnification (zooming in and out); clicking the video picture returns the pixel coordinates of the clicked target. After a field calibration point is locked, its geographic coordinates are entered, and the camera's azimuth and magnification are controlled as needed. Once the camera is adjusted, clicking the camera picture with the mouse automatically captures the camera's current horizontal azimuth, vertical azimuth, magnification, pixel x coordinate and pixel y coordinate, which form a sample with the locked geographic coordinates. Each sample has the format

$(P, T, Z, u, v, X, Y)$

where $P$, $T$ and $Z$ denote the horizontal azimuth, vertical azimuth and magnification respectively, $(u, v)$ the pixel x and y coordinates on the screen, and $(X, Y)$ the x and y geographic coordinates.
For each field calibration point, the camera can be adjusted to different attitudes, and clicking the camera picture collects a sample containing the azimuth angles, the magnification, the pixel coordinates and the geographic coordinates of the calibration point. Samples across all dimensions are collected through repeated adjustments of azimuth and magnification.
This step should therefore collect as many samples as possible to ensure the accuracy of subsequent model training.
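To make the sample record concrete, here is a minimal Python sketch of how one sample from step S2 might be assembled; the class name, field names and example values are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    """One training sample as described in step S2 (hypothetical layout)."""
    pan: float    # P: horizontal azimuth of the camera
    tilt: float   # T: vertical azimuth of the camera
    zoom: float   # Z: magnification
    u: float      # pixel x coordinate of the clicked point
    v: float      # pixel y coordinate of the clicked point
    X: float      # geographic x coordinate of the locked calibration point
    Y: float      # geographic y coordinate of the locked calibration point

def make_sample(camera_state, click_px, locked_geo):
    """Combine the captured camera state (P, T, Z), the clicked pixel (u, v)
    and the locked geographic coordinates (X, Y) into one sample record."""
    (p, t, z), (u, v), (x, y) = camera_state, click_px, locked_geo
    return Sample(p, t, z, u, v, x, y)

# Example: one click on a locked calibration point while the camera is at
# pan 31.5, tilt -12.0, zoom 4.0 (all values hypothetical):
s = make_sample((31.5, -12.0, 4.0), (812, 447), (538204.12, 3380571.88))
```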
S3, model generation: generating a model file through model training and multiple iterations according to the collected samples.
The invention quickly establishes a calculation model for converting pixel coordinates into geographic coordinates through calibration, sample production and iterative training. The specific process is as follows:
and S31, setting a neuron forward propagation operation formula and an activation function trained by the model.
The neuron forward propagation operation formula of model training is as follows:
Figure DEST_PATH_IMAGE005
the formula is simplified as follows:
Figure 171843DEST_PATH_IMAGE006
in the formula (I), the compound is shown in the specification,
Figure 347610DEST_PATH_IMAGE002
respectively representing horizontal azimuth, vertical azimuth and magnification,
Figure 335157DEST_PATH_IMAGE003
pixel x-coordinate and pixel y-coordinate respectively representing screen coordinates,
Figure DEST_PATH_IMAGE007
the weight is represented by a weight that is,
Figure 377108DEST_PATH_IMAGE008
indicating the amount of offset.
The activation function uses the sigmoid function:
Figure DEST_PATH_IMAGE009
wherein t represents a matrix after neuron operation,
Figure 828818DEST_PATH_IMAGE010
representing the activated matrix.
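As an illustration of this step, a minimal Python/NumPy sketch of one neuron stage with the sigmoid activation follows; the stage width of 8 neurons and the random initialization are assumptions, since the patent does not specify a network architecture.

```python
import numpy as np

def sigmoid(t: np.ndarray) -> np.ndarray:
    # a = 1 / (1 + e^(-t)), applied element-wise to the neuron outputs
    return 1.0 / (1.0 + np.exp(-t))

def neuron_stage(x: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """One forward-propagation stage: t = W x + b, then the sigmoid activation."""
    return sigmoid(W @ x + b)

x = np.array([31.5, -12.0, 4.0, 812.0, 447.0])  # (P, T, Z, u, v), values hypothetical
W = np.random.randn(8, 5) * 0.1                 # 8 neurons in this stage (assumed width)
b = np.zeros(8)
a = neuron_stage(x, W, b)                       # activated output fed to the next stage
```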
And S32, obtaining a forward propagation result and an average loss function after multi-stage neuron operation.
After the multi-stage neuron operation, the forward-propagation result $\hat{y}$ is obtained. Let the actual geographic coordinates of the sample be $y = (X, Y)$. The average loss function is then:

$L = \dfrac{1}{2m} \sum_{i=1}^{m} \left( \hat{y}^{(i)} - y^{(i)} \right)^2$

where $m$ denotes the number of samples, $i$ the sample number, and $\hat{y}^{(i)}$ and $y^{(i)}$ the predicted and true values respectively.
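A short NumPy sketch of this average loss follows; the 1/(2m) scaling mirrors the formula above as reconstructed here, and the numeric values are hypothetical.

```python
import numpy as np

def average_loss(pred: np.ndarray, true: np.ndarray) -> float:
    """Average loss over m samples: L = 1/(2m) * sum((pred_i - true_i)^2)."""
    m = len(true)
    return float(np.sum((pred - true) ** 2) / (2 * m))

# Two predicted vs. true geographic coordinate pairs:
pred = np.array([[538204.0, 3380571.0], [538210.5, 3380580.2]])
true = np.array([[538204.1, 3380571.9], [538210.0, 3380579.8]])
print(average_loss(pred, true))
```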
And S33, optimizing the loss amount by using a least square method, and solving an optimal solution.
The formula for solving the optimal solution is:

$\min\, E = \dfrac{1}{2} \sum_{i} \left( y_i - \hat{y}_i \right)^2$

where $E$ denotes the training loss, $y_i$ the real coordinate value of the sample, and $\hat{y}_i$ the training prediction.
And S34, updating the weights using the chain rule of derivatives.
The weights are updated with the chain rule as follows:

$\dfrac{\partial E}{\partial w_{ij}} = \dfrac{\partial E}{\partial a_i} \cdot \dfrac{\partial a_i}{\partial t_i} \cdot \dfrac{\partial t_i}{\partial w_{ij}} = \dfrac{\partial E}{\partial a_i} \cdot \dfrac{\partial a_i}{\partial t_i} \cdot x_{ij}$

where $E$ denotes the training loss, $w_{ij}$ the $j$th weight of the $i$th neuron, and $x_{ij}$ the $j$th sample value in the $i$th neuron operation.
And S35, executing multiple iterations using gradient descent, fitting to obtain a global optimal solution, and saving it as a model file.
Gradient descent is a first-order optimization algorithm: to find a local minimum of a function, it iteratively steps from the current point by a specified step length in the direction opposite to the gradient (or approximate gradient). A detailed description of this step is omitted.
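Putting steps S31 to S35 together, the following Python/NumPy sketch trains a small network mapping (P, T, Z, u, v) to (X, Y) with the sigmoid activation, the average loss above, chain-rule gradients and plain gradient descent. The architecture (one hidden layer with a linear output), the hyperparameters and the input normalization note are assumptions; the patent does not fix them.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def train(X_in, Y_out, hidden=16, lr=0.05, iters=20000, seed=0):
    """Fit samples (P, T, Z, u, v) -> (X, Y); steps S31-S35 in one loop."""
    rng = np.random.default_rng(seed)
    m, n_in = X_in.shape
    n_out = Y_out.shape[1]
    W1 = rng.normal(0.0, 0.1, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, n_out)); b2 = np.zeros(n_out)
    for _ in range(iters):
        a1 = sigmoid(X_in @ W1 + b1)        # S31: forward propagation
        pred = a1 @ W2 + b2                 # linear output layer (assumed)
        g_pred = (pred - Y_out) / m         # S32/S33: dL/dpred for the average loss
        gW2 = a1.T @ g_pred;  gb2 = g_pred.sum(axis=0)   # S34: chain rule
        g_a1 = (g_pred @ W2.T) * a1 * (1.0 - a1)         # sigmoid derivative
        gW1 = X_in.T @ g_a1;  gb1 = g_a1.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1      # S35: gradient descent step
        W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2

# In practice the raw inputs should be normalized (e.g. to zero mean, unit
# variance) before training, since azimuths, zoom and pixel values differ in scale.
```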
S4, model execution: acquiring the camera's current horizontal azimuth, vertical azimuth and magnification together with the pixel coordinates of the target object, inputting these parameters into the model, carrying out forward propagation, and calculating the real geographic coordinates of the target object.
Steps S1 to S3 establish, through training, the calculation model for converting pixel coordinates into geographic coordinates. In actual application, the real geographic coordinates of every target object in the picture can be obtained from this model.
Specifically, the camera's SDK (Software Development Kit) is first called to obtain the camera's current horizontal azimuth, vertical azimuth and magnification $(P, T, Z)$. The pixel coordinates $(u, v)$ of the target object are then read directly from the camera picture. Finally, the five parameters, the current horizontal azimuth, vertical azimuth and magnification plus the pixel coordinates of the target object, are input into the model, and forward propagation calculates the target object's real geographic coordinates. The forward-propagation computation and activation function are the same as in the model generation step S3.
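For illustration, inference with the parameters returned by the training sketch above might look as follows; the numeric inputs are hypothetical.

```python
import numpy as np

def predict(params, p, t, z, u, v):
    """Forward-propagate the five captured parameters through the trained
    model to estimate the geographic coordinates (step S4)."""
    W1, b1, W2, b2 = params
    x = np.array([p, t, z, u, v])
    a1 = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
    return a1 @ W2 + b2  # estimated (X, Y)

# P, T, Z come from the camera SDK; (u, v) from the detected target pixel:
# X_geo, Y_geo = predict(params, 31.5, -12.0, 4.0, 812.0, 447.0)
```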
Compared with the prior art, the method of this embodiment has the following advantages:
(1) Fewer parameters. The implementation does not need the camera's specific coordinates or elevation, removing the corresponding parameters.
(2) A simplified process. Samples can be collected simply by calibrating the coordinates of some characteristic ground objects in the camera's monitoring area and clicking the camera picture in the field; implementers can work without mastering complex mathematics.
(3) Lower installation difficulty. The calculation does not depend on the camera's pitch, roll or rotation attitude; the only consideration is whether the camera covers the monitoring range.
(4) Lower measurement difficulty. Existing methods must determine the camera's elevation, which is hard to measure for cameras generally installed at heights of around 40 meters; this method does not need the camera's elevation parameters, so coordinate conversion is completed without elevation measurement.
Embodiment two:
as shown in fig. 2, the target object geographic coordinate locating device based on artificial intelligence provided in this embodiment includes:
the field calibration unit 1 is used for arranging field calibration points in a camera monitoring area, and each point has a corresponding geographic coordinate;
the sample acquisition unit 2 is used for previewing the camera picture through a sample acquisition tool and locking the geographic coordinates of a field calibration point; clicking the camera picture with the mouse then automatically captures the camera's current horizontal azimuth, vertical azimuth, magnification, pixel x coordinate and pixel y coordinate, which form a sample together with the locked geographic coordinates, completing sample collection;
the model generation unit 3 is used for generating a model file through model training and multiple iterations according to the collected samples;
and the model execution unit 4 is used for acquiring five parameters, namely the camera's current horizontal azimuth, vertical azimuth and magnification and the pixel coordinates of the target object, inputting the parameters into the model, carrying out forward propagation, and calculating the real geographic coordinates of the target object.
The sample format of the collected sample is
$(P, T, Z, u, v, X, Y)$

where $P$, $T$ and $Z$ denote the horizontal azimuth, vertical azimuth and magnification respectively, $(u, v)$ the pixel x and y coordinates on the screen, and $(X, Y)$ the x and y geographic coordinates.
The functional units of this embodiment implement steps S1 to S4 of embodiment one: coordinate calibration is performed by the field calibration unit, samples are then collected by the sample acquisition unit, the model file is generated by the model generation unit, and the model execution unit finally calculates the real geographic coordinates of the target object. The specific process is not repeated in this embodiment.
As a preferred configuration, the model generation unit 3 includes:
the setting module is used for setting a neuron forward propagation operation expression and an activation function of model training;
the first calculation module is used for obtaining a forward propagation result and an average loss function after multi-stage neuron operation;
the second calculation module is used for optimizing the loss amount by using a least square method and solving an optimal solution;
the weight updating module is used for updating the weights using the chain rule of derivatives;
and the iterative fitting module is used for executing multiple iterations by using a gradient descent method, obtaining a global optimal solution after fitting, and storing the global optimal solution as a model file.
In conclusion, the invention is simple to implement: implementers do not need to master complex mathematical methods, samples for training are collected automatically by clicking the video picture, and the pixel-coordinate-to-geographic-coordinate conversion relationship can be constructed simply and quickly. In addition, the artificial intelligence model adopted by the invention does not depend on the camera's position or elevation, avoiding complex elevation measurement and thereby reducing measurement difficulty. The invention therefore simplifies the implementation, measurement and installation processes, making it easier to apply industrially.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (2)

1. A target object geographic coordinate positioning method based on artificial intelligence is characterized by comprising the following steps:
s1, field calibration: arranging field calibration points in a camera monitoring area, wherein each point has a corresponding geographic coordinate;
s2, sample collection: previewing a camera picture through a sample acquisition tool, locking the geographic coordinates of a field calibration point, and then clicking the camera picture through a mouse to automatically capture the current horizontal azimuth angle, vertical azimuth angle, multiplying power, pixel x coordinate and pixel y coordinate of the camera, forming a sample with the locked geographic coordinates, and further completing sample acquisition;
s3, model generation: generating a model file through model training and multiple iterations according to the acquired sample;
s4, model execution: acquiring current horizontal azimuth angle, vertical azimuth angle, multiplying power and pixel coordinates of a target object of a camera, inputting the parameters into a model, carrying out forward propagation, and calculating to obtain real geographic coordinates of the target object;
in step S2, the sample format of the collected sample is
$(P, T, Z, u, v, X, Y)$

where $P$, $T$ and $Z$ denote the horizontal azimuth, vertical azimuth and magnification respectively, $(u, v)$ the pixel x and y coordinates on the screen, and $(X, Y)$ the x and y geographic coordinates;
the specific process of the step S3 is as follows:
setting a neuron forward propagation operation expression and an activation function for model training;
obtaining a forward propagation result and an average loss function after multi-stage neuron operation;
optimizing the loss amount by using a least square method, and solving an optimal solution;
updating the weights using the chain rule of derivatives;
and performing multiple iterations with a gradient descent method, fitting to obtain a global optimal solution, and storing it as a model file.
2. An artificial intelligence based target object geographic coordinate locating device, the device comprising:
the field calibration unit is used for arranging field calibration points in a camera monitoring area, and each point has a corresponding geographic coordinate;
the sample acquisition unit is used for previewing a camera picture through a sample acquisition tool, locking the geographic coordinates of a field calibration point, and then clicking the camera picture with a mouse to automatically capture the camera's current horizontal azimuth, vertical azimuth, magnification, pixel x coordinate and pixel y coordinate, forming a sample with the locked geographic coordinates to finish sample collection;
the model generation unit is used for generating a model file through model training and multiple iterations according to the collected samples;
the model execution unit is used for acquiring five parameters, namely the camera's current horizontal azimuth, vertical azimuth and magnification and the pixel coordinates of the target object, inputting the parameters into the model, carrying out forward propagation, and calculating the real geographic coordinates of the target object;
the sample format of the collected sample is
$(P, T, Z, u, v, X, Y)$

where $P$, $T$ and $Z$ denote the horizontal azimuth, vertical azimuth and magnification respectively, $(u, v)$ the pixel x and y coordinates on the screen, and $(X, Y)$ the x and y geographic coordinates;
the model generation unit includes:
the setting module is used for setting a neuron forward propagation operation expression and an activation function of model training;
the first calculation module is used for obtaining a forward propagation result and an average loss function after multi-stage neuron operation;
the second calculation module is used for optimizing the loss amount by using a least square method and solving an optimal solution;
the weight updating module is used for updating the weights using the chain rule of derivatives;
and the iterative fitting module is used for executing multiple iterations by using a gradient descent method, obtaining a global optimal solution after fitting, and storing the global optimal solution as a model file.
CN202210933498.1A 2022-08-04 2022-08-04 Target object geographic coordinate positioning method and device based on artificial intelligence Active CN114998425B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210933498.1A CN114998425B (en) 2022-08-04 2022-08-04 Target object geographic coordinate positioning method and device based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210933498.1A CN114998425B (en) 2022-08-04 2022-08-04 Target object geographic coordinate positioning method and device based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN114998425A CN114998425A (en) 2022-09-02
CN114998425B (en) 2022-10-25

Family

ID=83023239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210933498.1A Active CN114998425B (en) 2022-08-04 2022-08-04 Target object geographic coordinate positioning method and device based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN114998425B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116805277B (en) * 2023-08-18 2024-01-26 吉奥时空信息技术股份有限公司 Video monitoring target node pixel coordinate conversion method and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104501779A (en) * 2015-01-09 2015-04-08 中国人民解放军63961部队 High-accuracy target positioning method of unmanned plane on basis of multi-station measurement
CN110246175A (en) * 2019-05-24 2019-09-17 国网安徽省电力有限公司检修分公司 Intelligent Mobile Robot image detecting system and method for the panorama camera in conjunction with holder camera
CN110660101A (en) * 2019-08-19 2020-01-07 浙江理工大学 Object 6D posture prediction method based on RGB image and coordinate system transformation
CN111754552A (en) * 2020-06-29 2020-10-09 华东师范大学 Multi-camera cooperative target tracking method based on deep learning
CN111879313A (en) * 2020-07-31 2020-11-03 中国人民解放军国防科技大学 Multi-target continuous positioning method and system based on unmanned aerial vehicle image recognition
CN113011310A (en) * 2021-03-15 2021-06-22 中国地质大学(武汉) Method and device for collecting shooting exercise amount based on image recognition and induction backboard, computer equipment and storage medium
CN113223087A (en) * 2021-07-08 2021-08-06 武大吉奥信息技术有限公司 Target object geographic coordinate positioning method and device based on video monitoring
CN114519742A (en) * 2022-02-10 2022-05-20 首都医科大学 Three-dimensional target automatic positioning and attitude determination method based on monocular optical photography and application thereof

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9324190B2 (en) * 2012-02-24 2016-04-26 Matterport, Inc. Capturing and aligning three-dimensional scenes
CN108663677A (en) * 2018-03-29 2018-10-16 上海智瞳通科技有限公司 A kind of method that multisensor depth integration improves target detection capabilities
CN110763203A (en) * 2019-11-05 2020-02-07 西安航天天绘数据技术有限公司 Positioning method and device of urban component and vehicle-mounted mobile measurement system
CN113959439B (en) * 2021-10-21 2024-03-19 重庆邮电大学 Indoor high-precision visible light positioning method and system under sparse light source


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
High precision passive target localization based on airborne electro-optical payload; Xu Cheng et al.; 2015 14th International Conference on Optical Communications and Networks (ICOCN); 2015-08-17; full text *
Research on visual motion tracking and servo control of a precision positioning platform; Li Hai; China Doctoral Dissertations Full-text Database (Information Science and Technology); 2018-12-31 (No. 12); full text *

Also Published As

Publication number Publication date
CN114998425A (en) 2022-09-02

Similar Documents

Publication Publication Date Title
CN112766274B (en) Water gauge image water level automatic reading method and system based on Mask RCNN algorithm
CN109443307B (en) System and method for measuring settlement and inclination angle of transmission tower based on optical measurement
CN104713491B (en) The method that the slope monitoring system of slope deforming three-dimensional data can be obtained and its obtain slope deforming three-dimensional data
CN104976960B (en) A kind of raindrop physical characteristic observation procedure
CN108919367B (en) Alternating current transmission line inversion method based on current magnetic field
CN109084690A (en) Crop plant height calculation method based on unmanned plane visual remote sensing
CN110400315A (en) A kind of defect inspection method, apparatus and system
CN114283203B (en) Calibration method and system of multi-camera system
CN110490932A (en) The infrared coplanar cursor iteration optimization measurement driving boom spatial pose method of monocular
CN114998425B (en) Target object geographic coordinate positioning method and device based on artificial intelligence
CN112146761A (en) Human body temperature measurement compensation method based on machine learning
CN104410839A (en) Mobile online monitoring system for mountain fire and landslide hazards in power transmission line pole and tower region and monitoring method
Hou et al. Automatic recognition system of pointer meters based on lightweight CNN and WSNs with on-sensor image processing
CN116844147A (en) Pointer instrument identification and abnormal alarm method based on deep learning
CN115326026A (en) Method and device for acquiring hydraulic characteristics based on non-contact measurement-hydrodynamic fusion assimilation
CN113902698A (en) Unmanned aerial vehicle holder progressive target focusing method based on intelligent visual control
CN112581386A (en) Full-automatic lightning arrester detection and tracking method
CN111767826A (en) Timing fixed-point scene abnormity detection method
WO2022126339A1 (en) Method for monitoring deformation of civil structure, and related device
CN106845360A (en) High-resolution crop surface model construction method based on unmanned aerial vehicle remote sensing
CN113076808A (en) Method for accurately acquiring bidirectional pedestrian flow through image algorithm
CN115184563B (en) Chemical workshop field data acquisition method based on digital twinning
CN115761265A (en) Method and device for extracting substation equipment in laser radar point cloud
CN114119670A (en) Flow velocity measuring method for acquiring river video based on smart phone
CN114419259A (en) Visual positioning method and system based on physical model imaging simulation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant