CN113570649B - Gravity direction determination method and device based on three-dimensional model and computer equipment


Info

Publication number
CN113570649B (application CN202111132160.8A)
Authority
CN
China
Prior art keywords
dimensional, target, point, target object, gravity direction
Prior art date
Legal status
Active
Application number
CN202111132160.8A
Other languages
Chinese (zh)
Other versions
CN113570649A
Inventor
李鹏
黄文琦
曾群生
吴洋
周锐烨
陈佳捷
Current Assignee
Southern Power Grid Digital Grid Research Institute Co Ltd
Original Assignee
Southern Power Grid Digital Grid Research Institute Co Ltd
Application filed by Southern Power Grid Digital Grid Research Institute Co Ltd
Priority to CN202111132160.8A
Publication of CN113570649A
Application granted
Publication of CN113570649B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 1/00 Measuring angles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01P MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P 13/00 Indicating or recording presence, absence, or direction, of movement
    • G01P 13/02 Indicating direction only, e.g. by weather vane
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system, the system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/42 Determining position
    • G01S 19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Abstract

The application relates to a gravity direction determination method and device based on a three-dimensional model, and to computer equipment. The method comprises the following steps: a terminal obtains multi-frame monitoring images and RTK data for a monitoring area of a power grid system, generates a three-dimensional point cloud of the monitoring area from the multi-frame monitoring images, semantically segments the three-dimensional point cloud to obtain a three-dimensional model of a target object in the monitoring area, obtains the magnetic declination of each target point in the three-dimensional model of the target object from the RTK data, and adjusts the gravity direction of the target object according to the magnetic declination of each target point. Because RTK data is highly accurate, using it to calculate the declination of the target points, and hence to determine the gravity direction of the target object, effectively improves the precision of the gravity-direction adjustment, reduces error, and avoids the influence of subjective factors. The whole method runs on computer equipment without manual operation, which greatly improves the computational efficiency of gravity direction determination based on a three-dimensional model.

Description

Gravity direction determination method and device based on three-dimensional model and computer equipment
Technical Field
The application relates to the technical field of power grid systems, in particular to a gravity direction determining method and device based on a three-dimensional model and computer equipment.
Background
With the rapid economic development of China, demand for electric energy from all industries keeps increasing and the scale of the power grid keeps expanding, which places higher requirements on the inspection and maintenance of power grid lines. The geographic environment of transmission lines is complex and varied and strongly tied to spatial position; operation and maintenance management based on two-dimensional views has inherent limitations, as the information conveyed is neither complete nor rich. Three-dimensional models are therefore widely applied to the planning of towers and transmission lines. However, affected by the acquisition angle and other factors, the gravity direction of an object output by a single-object model is not necessarily perpendicular to the ground, which makes the object model difficult to operate on and measure.
In the prior art, the gravity direction of an object output by an object model is usually modified manually: for example, the generated object model is imported into the Unity engine, and a user modifies the coordinate values of the object's gravity direction in the engine to control the global gravity direction.
However, this existing method of modifying the gravity direction is inefficient and inaccurate.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a gravity direction determination method, apparatus, and computer device based on a three-dimensional model that can improve calculation efficiency and accuracy.
A method for determining a direction of gravity based on a three-dimensional model, the method comprising:
acquiring a multi-frame monitoring image and RTK data of a monitoring area of a power grid system;
generating three-dimensional point cloud of the monitoring area according to the multi-frame monitoring image;
performing semantic segmentation on the three-dimensional point cloud to obtain a three-dimensional model of a target object in the monitoring area;
acquiring declination of each target point in the three-dimensional model of the target object according to the RTK data;
and adjusting the gravity direction of the target object according to the declination angle of each target point.
In one embodiment, the obtaining the declination of each target point in the three-dimensional model of the target object according to the RTK data includes:
determining a preset number of target points in a three-dimensional coordinate system corresponding to the three-dimensional model of the target object;
calculating an initial declination of each target point according to the RTK data;
and rotating the initial magnetic declination angle of each target point by a preset angle in a preset direction to obtain the magnetic declination angle of each target point.
In one embodiment, the adjusting the gravity direction of the target object according to the declination angle of each target point includes:
calculating the average declination of the declination of each target point;
and adjusting the gravity direction of the target object according to the average declination.
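The two averaging steps above admit a one-line sketch (illustrative only and not part of the claimed method; the function name is an assumption, declinations in degrees):

```python
def average_declination(declinations):
    """Arithmetic mean of the per-target-point declinations (degrees);
    the object's gravity direction is then adjusted by this mean value."""
    return sum(declinations) / len(declinations)
```

Averaging over several target points smooths out per-point measurement noise before a single rotation is applied to the model.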
In one embodiment, the generating a three-dimensional point cloud of the monitoring area according to the plurality of monitoring images includes:
generating sparse three-dimensional point cloud according to the multi-frame monitoring image;
acquiring a depth map corresponding to the multiple frames of monitoring images by using the camera pose of the camera shooting equipment for shooting the monitoring images and the multiple frames of monitoring images;
generating the three-dimensional point cloud based on the camera pose, the depth map, and the sparse three-dimensional point cloud.
In one embodiment, the generating a sparse three-dimensional point cloud from the plurality of monitoring images includes:
extracting feature points of each frame of the monitoring image;
carrying out feature matching on feature points of every two adjacent frames of monitoring images to obtain matched feature point pairs of every two adjacent frames of monitoring images;
acquiring position coordinates of the matched characteristic point pairs in the same coordinate system corresponding to the multi-frame monitoring images;
and constructing the sparse three-dimensional point cloud according to the position coordinates of the matched characteristic point pairs in the same coordinate system.
In one embodiment, the obtaining of the position coordinates of the matching feature point pairs in the same coordinate system corresponding to the multiple frames of monitoring images includes:
determining the relative position and orientation of each two adjacent frames of monitoring images according to the matching feature point pairs;
determining the camera position and orientation of the multiple frames of monitoring images in the same coordinate system according to the relative position and orientation of each two adjacent frames of monitoring images;
and calculating the position coordinates of each matched feature point pair in the same coordinate system by a triangulation method according to the position and the orientation of the camera in the same coordinate system.
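The triangulation step above can be sketched with the standard direct linear transform (DLT); this is a minimal illustration under assumed 3x4 projection matrices in a common coordinate system, not the claimed implementation:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Recover a 3D point from its projections x1, x2 (normalized image
    coordinates) in two views with 3x4 projection matrices P1, P2 (DLT).
    Each matched feature point pair yields one such 3D point."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A, i.e. the right
    # singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

For example, with a reference camera P1 = [I | 0] and a second camera translated along x, a point at depth 5 is recovered from its two projections.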
In one embodiment, the method further comprises:
acquiring a dense three-dimensional grid model according to the three-dimensional point cloud by adopting a surface grid extraction technology;
the semantic segmentation is carried out on the three-dimensional point cloud to obtain a three-dimensional model of the target object in the monitoring area, and the semantic segmentation comprises the following steps:
and performing semantic segmentation on the dense three-dimensional grid model to obtain a three-dimensional model of the target object in the monitoring area.
A gravity direction determination apparatus based on a three-dimensional model, the apparatus comprising:
the first acquisition module is used for acquiring multi-frame monitoring images and RTK data of a monitoring area of the power grid system;
the first generation module is used for generating three-dimensional point cloud of the monitoring area according to the multi-frame monitoring image;
the segmentation module is used for performing semantic segmentation on the three-dimensional point cloud to obtain a three-dimensional model of a target object in the monitoring area;
the second acquisition module is used for acquiring the magnetic declination of each target point in the three-dimensional model of the target object according to the RTK data;
and the adjusting module is used for adjusting the gravity direction of the target object according to the declination of each target point.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a multi-frame monitoring image and RTK data of a monitoring area of a power grid system;
generating three-dimensional point cloud of the monitoring area according to the multi-frame monitoring image;
performing semantic segmentation on the three-dimensional point cloud to obtain a three-dimensional model of a target object in the monitoring area;
acquiring declination of each target point in the three-dimensional model of the target object according to the RTK data;
and adjusting the gravity direction of the target object according to the declination angle of each target point.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a multi-frame monitoring image and RTK data of a monitoring area of a power grid system;
generating three-dimensional point cloud of the monitoring area according to the multi-frame monitoring image;
performing semantic segmentation on the three-dimensional point cloud to obtain a three-dimensional model of a target object in the monitoring area;
acquiring declination of each target point in the three-dimensional model of the target object according to the RTK data;
and adjusting the gravity direction of the target object according to the declination angle of each target point.
According to the gravity direction determination method, device and computer equipment based on the three-dimensional model, the terminal acquires multi-frame monitoring images and RTK data for a monitoring area of the power grid system, generates a three-dimensional point cloud of the monitoring area from the multi-frame monitoring images, semantically segments the point cloud to obtain a three-dimensional model of a target object in the monitoring area, obtains the declination of each target point in that model from the RTK data, and adjusts the gravity direction of the target object according to the declination of each target point. Because RTK data is highly accurate, using it to calculate the declination of the target points, and hence to determine the gravity direction of the target object, effectively improves the precision of the gravity-direction adjustment, reduces error, and avoids the influence of subjective factors. The whole method runs on computer equipment without manual operation, which greatly improves the computational efficiency of gravity direction determination based on a three-dimensional model.
Drawings
FIG. 1 is a diagram of an application environment of a gravity direction determination method based on a three-dimensional model according to an embodiment;
FIG. 2 is a schematic flow chart of a method for determining a gravity direction based on a three-dimensional model according to an embodiment;
FIG. 3 is a schematic flow chart illustrating the step of calculating declination in one embodiment;
FIG. 4 is a schematic diagram illustrating a flow of adjusting the gravity direction according to an embodiment;
FIG. 5 is a schematic flow chart illustrating the generation of a three-dimensional point cloud in one embodiment;
FIG. 6 is a diagram illustrating the effect of recovering a three-dimensional point cloud in one embodiment;
FIG. 7 is a schematic flow chart of generating a sparse three-dimensional point cloud in one embodiment;
FIG. 8 is a schematic illustration of feature matching in one embodiment;
FIG. 9 is a schematic diagram illustrating a detailed process for calculating coordinate locations in one embodiment;
FIG. 10 is a flow diagram that illustrates semantic segmentation of a dense three-dimensional mesh model, according to one embodiment;
FIG. 11 is a schematic flow chart illustrating a method for determining a gravity direction based on a three-dimensional model according to another embodiment;
FIG. 12 is a first block diagram of an apparatus for determining a direction of gravity based on a three-dimensional model according to an embodiment;
FIG. 13 is a second block diagram illustrating an exemplary three-dimensional model-based gravity direction determining apparatus;
FIG. 14 is a third block diagram of an apparatus for determining a direction of gravity based on a three-dimensional model according to an embodiment;
FIG. 15 is a fourth block diagram illustrating an exemplary three-dimensional model-based gravity direction determining apparatus;
FIG. 16 is a fifth block diagram illustrating an exemplary three-dimensional model-based gravity direction determining apparatus;
FIG. 17 is a block diagram illustrating a sixth configuration of a three-dimensional model-based gravity direction determining apparatus according to an embodiment;
FIG. 18 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The gravity direction determination method based on the three-dimensional model can be applied in the application environment shown in fig. 1. In this environment, a terminal acquires multi-frame monitoring images and Real-Time Kinematic (RTK) positioning data of a monitoring area of a power grid system; generates a three-dimensional point cloud of the monitoring area from the multi-frame monitoring images; performs semantic segmentation on the point cloud to obtain a three-dimensional model of a target object in the monitoring area; obtains the declination of each target point in the three-dimensional model of the target object from the RTK data; and adjusts the gravity direction of the target object according to the declination of each target point. The terminal may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, or portable wearable device.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 1. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a three-dimensional model based gravity direction determination method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
In one embodiment, as shown in fig. 2, a gravity direction determining method based on a three-dimensional model is provided, which is described by taking the method as an example applied to the terminal in fig. 1, and includes the following steps:
s201, obtaining a multi-frame monitoring image and RTK data of a monitoring area of the power grid system.
The monitoring area comprises high-voltage towers, high-voltage wires, electromagnetic parameter information, the surrounding environment and the like in the power grid system. The monitoring images and RTK data are images and RTK data of the monitoring area acquired by the camera equipment of an unmanned aerial vehicle; the monitoring images include RGB images, infrared images, laser point clouds and other images. The RTK data includes fields such as point name, X coordinate, Y coordinate, elevation, attribute code, solution state (fixed or floating), horizontal residual (HRMS), vertical residual (VRMS), satellite count, position dilution of precision (PDOP), observation date and time, default value, antenna, and the like.
In this embodiment, the unmanned aerial vehicle can carry camera equipment such as a light optical camera, an infrared scanner, a laser radar and an RTK measuring instrument. Surveying personnel control the unmanned aerial vehicle through radio remote-control equipment or an on-board computer program control system to acquire the monitoring images and RTK data of the monitoring area in real time.
In this embodiment, the terminal may acquire the multi-frame monitoring images and RTK data of the monitoring area of the power grid system in real time, periodically, or after receiving a user instruction.
And S202, generating three-dimensional point cloud of the monitoring area according to the multi-frame monitoring image.
In this embodiment, the camera equipment of the unmanned aerial vehicle is used to collect monitoring images of the monitoring area, and a three-dimensional point cloud of the monitoring area is generated from the multiple monitoring images; this may be a sparse or a dense three-dimensional point cloud of the monitoring area.
In this embodiment, generating the sparse three-dimensional point cloud of the monitoring area proceeds as follows: multiple frames of monitoring images with different viewing angles are input; key points that can represent the object model are extracted from the monitoring images as feature points; the feature points of each pair of adjacent frames are matched, correctly matched feature points corresponding to the same point in the actual scene, which yields the matched feature point pairs of the multi-frame monitoring images; and the sparse three-dimensional point cloud is generated from the coordinate positions of the matched feature point pairs and the camera pose of the camera equipment. Generating the dense three-dimensional point cloud of the monitoring area proceeds as follows: using the camera pose of the camera equipment that shot the monitoring images and the multiple frames of monitoring images with different viewing angles, relative depth information is obtained from the camera pose and the visual disparity, giving a depth map for each monitoring image; the three-dimensional point corresponding to each pixel is then calculated pixel by pixel, yielding a dense three-dimensional point cloud of the scene object surfaces.
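The pixel-by-pixel calculation of three-dimensional points from a depth map can be sketched as follows, assuming a simple pinhole camera with intrinsics fx, fy, cx, cy (an assumption for illustration; the patent does not specify the camera model):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, metres) into an (H*W, 3) array of
    camera-frame 3D points, one point per pixel (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grids
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Merging such per-frame point sets via the camera poses gives the dense cloud of the scene surfaces.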
In this embodiment, although the three-dimensional point cloud can restore the physical appearance, it is still only a large collection of isolated three-dimensional points, and the point cloud data is irregularly distributed. To achieve true physical three-dimensionality and better represent the properties of the object model, the three-dimensional point cloud can be further gridded, reconstructing from it a dense three-dimensional mesh model that is easy to express and operate on.
S203, performing semantic segmentation on the three-dimensional point cloud to obtain a three-dimensional model of the target object in the monitoring area.
Wherein, three-dimensional point cloud segmentation requires knowledge of the global geometry and the fine-grained details of each point. Depending on granularity, three-dimensional point cloud segmentation methods include semantic segmentation (scene level), instance segmentation (object level), part segmentation (part level) and the like. Three-dimensional point cloud semantic segmentation means assigning each point in the point cloud a specific semantic label: given a point cloud, the goal of semantic segmentation is to divide it into several subsets according to its semantics, or to segment out each object and give it a specific meaning, so that the types of objects in the space can be accurately described.
In this embodiment, semantic segmentation is performed on the generated three-dimensional point cloud to obtain a three-dimensional model of the target object in the monitoring area. For example, if the monitoring area contains high-voltage towers, high-voltage lines, buildings and trees, then during semantic segmentation the high-voltage tower is given label 0 and visualized in blue, the high-voltage line label 1 in red, the building label 2 in black, and the tree label 3 in green; the four different objects are thereby effectively separated, giving the three-dimensional model of the required target object in the monitoring area.
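The label-and-colour assignment in the example above can be expressed as a small lookup (illustrative only; class names and colours follow the example, the table and function names are assumptions):

```python
# Label table matching the example: each class in the monitoring area
# gets an integer semantic label and a display colour.
LABELS = {
    0: ("high-voltage tower", "blue"),
    1: ("high-voltage line", "red"),
    2: ("building", "black"),
    3: ("tree", "green"),
}

def colour_points(point_labels):
    """Map per-point semantic labels to display colours for visualization."""
    return [LABELS[label][1] for label in point_labels]
```

Selecting only the points whose label matches the target class yields the target object's three-dimensional model.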
And S204, acquiring the declination of each target point in the three-dimensional model of the target object according to the RTK data.
Magnetic declination refers to the angle between the magnetic meridian and the geographic meridian at any point on the earth's surface, i.e. the angle between magnetic north and true north when the compass needle is at rest.
In this embodiment, the preset number, preset direction and preset angle can be set directly by the user according to the monitoring images acquired by the camera equipment and the acquisition conditions of the RTK data. The user can select target points on the target object model according to the importance of features, favoring points that represent the contour shape of the target object, such as edges and corner points; the preset number of target points may be selected randomly or according to some regularity. Target points at different positions have different declinations. For the selected target points, the coordinate position, longitude, latitude and other data of each target point can be obtained from the RTK data, and the initial declination is calculated from RTK data such as longitude, latitude and geomagnetic intensity. Because the direction of the magnetic field is close to parallel to the ground, after the initial declination of each target point is calculated it is corrected: it is rotated by the user-set preset angle in the preset direction to obtain the declination of each target point.
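As an illustrative sketch of the declination calculation and the preset-angle correction (assumptions: the initial declination is obtained from the horizontal geomagnetic field components via the standard relation declination = atan2(B_east, B_north); the preset angle and direction are user-chosen values, as stated above):

```python
import math

def initial_declination(b_north, b_east):
    """Declination in degrees east of true north, from the horizontal
    geomagnetic field components at a target point."""
    return math.degrees(math.atan2(b_east, b_north))

def corrected_declination(initial, preset_angle, direction=+1):
    """Rotate the initial declination by the user's preset angle;
    direction is +1 (east) or -1 (west)."""
    return initial + direction * preset_angle
```

In practice the field components would come from a geomagnetic model evaluated at the RTK longitude, latitude and elevation; that lookup is omitted here.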
And S205, adjusting the gravity direction of the target object according to the declination of each target point.
In this embodiment, the gravity direction of the target object is not necessarily output perpendicular to the ground, so it needs to be adjusted according to the declination of each target point. By convention, the declination is positive when the needle's north pole points east of true north and negative when it points west.
Illustratively, the relative position relationship is determined from the difference between the declination of each target point and the declination of the target object's gravity direction, and a corresponding adjustment is made. If the declination of the target point minus the declination of the target object is positive, the target point lies east of the target object, and the gravity direction of the target object is adjusted eastward by that declination difference; if the difference is negative, the target point lies west of the target object, and the gravity direction is adjusted westward accordingly. For example, if the difference between the declination of a target point and the declination of the target object's gravity direction is 0.5 degrees, the gravity direction is adjusted 0.5 degrees to the east; if the difference is -0.2 degrees, it is adjusted 0.2 degrees to the west.
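The sign convention above (positive difference: adjust east; negative: adjust west) can be sketched as follows (function name and return shape are assumptions for illustration):

```python
def adjust_gravity(object_decl, point_decls):
    """For each target point, compute the signed adjustment (degrees) of
    the object's gravity-direction declination and the compass direction:
    positive difference -> rotate east, negative -> rotate west."""
    out = []
    for d in point_decls:
        diff = d - object_decl
        direction = "east" if diff > 0 else "west" if diff < 0 else "none"
        out.append((diff, direction))
    return out
```

With the worked numbers from the text, a difference of 0.5 degrees yields an eastward adjustment and -0.2 degrees a westward one.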
In the gravity direction determination method based on the three-dimensional model, the terminal acquires multi-frame monitoring images and RTK data for a monitoring area of the power grid system, generates a three-dimensional point cloud of the monitoring area from the multi-frame monitoring images, semantically segments the point cloud to obtain a three-dimensional model of a target object in the monitoring area, obtains the declination of each target point in that model from the RTK data, and adjusts the gravity direction of the target object according to the declination of each target point. Because RTK data is highly accurate, using it to calculate the declination of the target points, and hence to determine the gravity direction of the target object, effectively improves the precision of the gravity-direction adjustment, reduces error, and avoids the influence of subjective factors. The whole method runs on computer equipment without manual operation, which greatly improves the computational efficiency of gravity direction determination based on a three-dimensional model.
The embodiment shown in fig. 2 describes the method for determining the gravity direction; the following mainly describes the process of calculating the declination of each target point. As shown in fig. 3, the method includes the following steps:
S301, determining a preset number of target points in a three-dimensional coordinate system corresponding to the three-dimensional model of the target object.
In this embodiment, the preset number of target points is determined in the three-dimensional coordinate system corresponding to the three-dimensional model of the target object; the target points may be selected randomly or according to a certain rule. For example, if the preset number is 10, name the 10 target points A1, A2, A3, A4, A5, A6, A7, A8, A9, and A10. If the 10 target points are selected randomly, there is no regularity among them. If they are selected according to a rule, the ordinate value may be set to increase by 2 cm between every two target points, e.g. target point A1 (2, 5, 7), target point A2 (2, 7, 7), A3 (2, 9, 7), and so on; alternatively, the elevation between every two target points may be set to increase by 5 m, e.g. the elevation of target point A1 is 25 m, the elevation of target point A2 is 30 m, and so on.
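The two rule-based selection schemes above can be sketched as follows. The base coordinate (2, 5, 7) and the step sizes follow the example values in the text; the function names are illustrative assumptions.

```python
import random

def make_target_points(n, base=(2, 5, 7), step=2):
    """Rule-based selection: the ordinate increases by a fixed step
    (2 cm in the example) between consecutive target points."""
    x, y, z = base
    return [(x, y + i * step, z) for i in range(n)]

def make_elevations(n, start=25, step=5):
    """Elevation-based selection: elevation increases by 5 m per point."""
    return [start + i * step for i in range(n)]

def make_random_points(n, bounds=((0, 10), (0, 10), (0, 50)), seed=0):
    """Random selection inside an assumed axis-aligned bounding box."""
    rng = random.Random(seed)
    return [tuple(rng.uniform(lo, hi) for lo, hi in bounds) for _ in range(n)]
```

For instance, `make_target_points(3)` reproduces the A1, A2, A3 coordinates from the example.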
S302, calculating the initial declination of each target point according to the RTK data.
In this embodiment, target points at different positions have different declination angles, and the coordinate position, longitude, latitude, and other data of each target point can be obtained from the RTK data. The initial declination is calculated from RTK data such as longitude, latitude, and geomagnetic intensity. For example, the initial declinations of the 10 target points are calculated respectively and denoted B1, B2, B3, B4, B5, B6, B7, B8, B9, and B10.
S303, rotating the initial declination of each target point by a preset angle in a preset direction to obtain the declination of each target point.
The preset direction can be clockwise rotation or anticlockwise rotation, and the preset angle can be 90 degrees or any angle close to 90 degrees.
In this embodiment, the initial declination of each target point is rotated by the preset angle in the preset direction to obtain the declination of each target point. For example, the 10 initial declinations B1, B2, …, B10 corresponding to the 10 target points are rotated 90 degrees clockwise and denoted as declinations C1, C2, C3, C4, C5, C6, C7, C8, C9, and C10.
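Treating the rotation as planar angle arithmetic, the step can be sketched as below. This interpretation (subtracting the preset angle for a clockwise rotation and normalizing the result) is an assumption; the patent does not specify the angle convention.

```python
def rotate_declination(initial_deg, preset_deg=90.0, clockwise=True):
    """Rotate an initial declination angle by a preset angle (default
    90 degrees clockwise, per the text) and normalize to [-180, 180)."""
    rotated = initial_deg - preset_deg if clockwise else initial_deg + preset_deg
    return (rotated + 180.0) % 360.0 - 180.0
```

For example, an initial declination B1 of 0.2 degrees rotated 90 degrees clockwise gives a declination C1 of -89.8 degrees under this convention.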
It should be noted that the process of selecting the preset number of target points on the target object and the choice of the preset direction and preset angle are not limited; the user may decide according to the actual situation.
In this embodiment, a preset number of target points are selected on the target object, and their initial declinations are calculated from the RTK data. Because the magnetic field direction is nearly parallel to the ground, a certain deviation exists between the direction represented by the initial declination and the gravity direction; the initial declination is therefore rotated by the preset angle in the preset direction to obtain the declination of each target point, which makes the calculated gravity direction more accurate and improves the flexibility of the gravity direction determination method based on the three-dimensional model. The whole process is performed by computer, so computational efficiency is markedly improved.
The embodiment of fig. 3 describes the process of calculating the declination of each target point; the following mainly describes the specific process of adjusting the gravity direction of the target object according to these declinations. As shown in fig. 4, the method includes the following steps:
S401, calculating the average declination of the declinations of the target points.
In this embodiment, the average declination is calculated from the declination of each target point. If the preset number is n and the declinations are C1, C2, …, Cn, the average declination is (C1 + C2 + … + Cn)/n. For example, if the declinations C1, C2, …, C10 of the 10 target points are 0.2, 0.25, 0.28, 0.21, 0.18, 0.22, 0.15, 0.17, 0.23, and 0.22 degrees respectively, the average declination is 0.211 degrees.
In this embodiment, a declination near the middle of the range may also be selected as a fixed value; the difference from the fixed value is negative for declinations smaller than it and positive for declinations larger than it. All the differences are averaged, and the average declination is obtained by adding the fixed value to this average difference. For example, if 0.2 degrees is selected as the fixed value, the differences of the 10 target point declinations from the fixed value are 0, 0.05, 0.08, 0.01, -0.02, 0.02, -0.05, -0.03, 0.03, and 0.02 degrees respectively; the average of the 10 differences is 0.011 degrees, so the average declination equals the fixed value plus this average difference, i.e. 0.211 degrees.
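Both ways of computing the average declination can be sketched as below; algebraically they give the same result, which the worked example (0.211 degrees) confirms. Function names are illustrative assumptions.

```python
def average_declination(decls):
    """Direct mean of the target point declinations."""
    return sum(decls) / len(decls)

def average_declination_offset(decls, fixed):
    """Fixed-value variant: average the signed differences from a chosen
    middle value, then add the fixed value back."""
    diffs = [d - fixed for d in decls]
    return fixed + sum(diffs) / len(diffs)

# Declinations from the example, in degrees.
decls = [0.2, 0.25, 0.28, 0.21, 0.18, 0.22, 0.15, 0.17, 0.23, 0.22]
```

Evaluating either function on `decls` (with a fixed value of 0.2 for the second) yields 0.211 degrees, matching the text.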
S402, adjusting the gravity direction of the target object according to the average declination.
In this embodiment, the gravity direction of the target object is adjusted according to the average declination. For example, with the average declination of 0.211 degrees calculated in step S401: if the declination corresponding to the gravity direction of the target object is -0.03 degrees, the target object lies west of the average declination, and the target object is adjusted 0.241 degrees to the east to obtain the finally determined gravity direction; if the declination corresponding to the gravity direction of the target object is 0.214 degrees, the target object lies east of the average declination, and the target object is adjusted 0.003 degrees to the west to obtain the finally determined gravity direction.
In this embodiment, each declination corresponds to the gravity direction at its target point, and the gravity direction of the target object is adjusted by computing the average of the target point declinations obtained from the RTK data. RTK is a measurement method that achieves centimetre-level positioning accuracy in real time; its higher positioning accuracy yields more precise position information, so the gravity direction of the target object adjusted from RTK data is more accurate and free of the influence of subjective factors.
The embodiment of fig. 4 describes the process of adjusting the gravity direction of the target object according to the declination of each target point; the following mainly describes the specific process of generating the three-dimensional point cloud of the monitoring area. As shown in fig. 5, the method includes the following steps:
S501, generating a sparse three-dimensional point cloud according to the multi-frame monitoring images.
In this embodiment, generating the sparse three-dimensional point cloud from the multi-frame monitoring images includes: extracting features at key points in the multi-frame monitoring images; performing feature matching on every two adjacent monitoring images after the features are extracted, to obtain the matched feature point pairs of every two adjacent monitoring images; obtaining the position coordinates of the matched feature point pairs in the same coordinate system corresponding to the multi-frame monitoring images; and constructing the sparse three-dimensional point cloud from those position coordinates.
Optionally, methods for generating the sparse three-dimensional point cloud include structure from motion (SfM), simultaneous localization and mapping (SLAM), and the like.
S502, acquiring a depth map corresponding to the multi-frame monitoring images by using the camera poses of the imaging device that captured the monitoring images and the multi-frame monitoring images.
The depth map reflects the distance from objects in the scene to the camera; a stereo matching method is usually adopted to recover it.
In this embodiment, the depth map corresponding to the multi-frame monitoring images is acquired by using the camera poses of the imaging device that captured the monitoring images together with the images themselves. Suppose two monitoring images M1 and M2 of the same scene in the monitoring area are captured. Disparity maps N1 and N2 of the two images are obtained with a stereo matching algorithm from the camera poses and the monitoring images. Disparity is expressed in pixels while depth is usually expressed in millimetres, so depth maps Z1 and Z2 corresponding to the two monitoring images are then obtained through the conversion relationship between the two.
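The standard pixel-to-millimetre conversion for a rectified stereo pair is depth = focal_length × baseline / disparity. The sketch below applies it element-wise to a disparity map; the focal length and baseline values in the usage note are assumptions for illustration only, not parameters from the patent.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_mm):
    """Convert a disparity map (pixels) to a depth map (millimetres)
    via depth = focal * baseline / disparity.  Pixels with zero or
    negative disparity (no valid match) are mapped to depth 0."""
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_mm / disparity[valid]
    return depth
```

For example, with an assumed focal length of 700 px and baseline of 120 mm, a 10-pixel disparity corresponds to a depth of 8400 mm.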
S503, generating a three-dimensional point cloud based on the camera poses, the depth map, and the sparse three-dimensional point cloud.
In this embodiment, because the sparse three-dimensional point cloud is reconstructed only from the matched feature point pairs, it cannot clearly and vividly represent the object model of the monitoring area. To characterize the object model further, with the camera poses known, the three-dimensional point corresponding to each pixel in the image is computed pixel by pixel to obtain a dense three-dimensional point cloud of the scene object surfaces, whose density approaches the definition displayed by the image. In this embodiment, as shown in fig. 6, a dense three-dimensional point cloud 63 is generated from the image sequence 61 and the camera poses 62 by a densification method based on depth map fusion.
In this embodiment, the sparse three-dimensional point cloud is generated from the multi-frame monitoring images, the corresponding depth map is obtained using the camera poses of the imaging device and the images, and the dense point cloud is then generated in combination with the sparse three-dimensional point cloud. The sparse three-dimensional point cloud recovers the three-dimensional model mainly from the matched feature point pairs and cannot fully express the original appearance of the object. The dense three-dimensional model computes, pixel by pixel, the three-dimensional point corresponding to each pixel in the image, yielding a dense point cloud of the scene object surfaces that restores and expresses the object model more clearly and facilitates the user's understanding.
The embodiment of fig. 5 describes the process of generating the three-dimensional point cloud of the monitoring area from the multi-frame monitoring images; the following mainly describes the specific implementation of generating the sparse three-dimensional point cloud. As shown in fig. 7, the method includes the following steps:
S701, extracting the feature points of each frame of monitoring image.
Extracting the feature points of each frame of monitoring image includes obtaining the corresponding feature point detections and feature point descriptors. Feature point detection extracts key points (also called feature points or corner points) from the image, and a feature point descriptor describes a feature point with a set of mathematical vectors encoding the orientation of the key point and the surrounding pixel information.
In this embodiment, a feature extraction algorithm is used to extract points in each frame of monitoring image that are rich in texture and easy to identify. Optionally, the feature extraction algorithm includes Harris corner detection, Scale-Invariant Feature Transform (SIFT) detection, Speeded-Up Robust Features (SURF) detection, FAST corner detection, and other feature extraction algorithms. Feature points may also be extracted with neural networks such as VGG, ResNet, and DenseNet. For example, with the SIFT method, the feature points of each frame of monitoring image are first detected and then their attributes are described with feature descriptors, so that each detected feature point is described by a multi-dimensional feature vector. An image processed by the SIFT algorithm is therefore represented as a set of multi-dimensional feature vectors, and this set is invariant to scaling, translation, and rotation of the image.
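Of the detectors listed above, Harris corner detection is the simplest to sketch. The toy implementation below computes the Harris response R = det(M) - k·trace(M)² from a 3x3 box-filtered structure tensor; it omits Gaussian weighting and non-maximum suppression, so it illustrates the idea only and is not a production detector.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Minimal Harris corner response map for a grayscale image.
    Corners score high (both gradient directions present); edges score
    near or below zero (one dominant gradient direction)."""
    img = np.asarray(img, dtype=np.float64)
    # Image gradients via central differences (borders left at zero).
    Ix = np.zeros_like(img)
    Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0

    def box3(a):
        # 3x3 box-filter sum via shifted zero-padded copies.
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    # Structure-tensor entries summed over the 3x3 window.
    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    return Sxx * Syy - Sxy * Sxy - k * (Sxx + Syy) ** 2
```

On a synthetic image containing a bright square, the response is positive at the square's corner and negative along its straight edge, which is the property Harris detection exploits.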
S702, performing feature matching on the feature points of every two adjacent frames of monitoring images to obtain the matched feature point pairs of every two adjacent frames.
In this embodiment, after the feature points are extracted, they are matched. During feature matching, feature points are extracted from every two adjacent monitoring images to obtain their feature vectors, and the features are then matched to obtain the one-to-one correspondence of pixels between the images, mainly by brute-force matching or a nearest-neighbour algorithm.
In this embodiment, brute-force matching computes the distances between a feature point descriptor and all other feature point descriptors, sorts the resulting distances, and takes the feature point at the closest distance as the matching point pair; common distances include the Euclidean distance, the Hamming distance, the cosine distance, and the like. As shown in fig. 8, the feature points of monitoring image 1 are a1, b1, c1, d1, e1, f1, and g1, and the feature points of monitoring image 2 are a2, b2, c2, d2, e2, f2, and g2. If the distances between the descriptor of feature point a1 of monitoring image 1 and the descriptors of feature points a2, b2, c2, d2, e2, f2, and g2 of monitoring image 2 are 0.5, 2, 1, 4, 5, 7, and 3 respectively, then a1 and a2 form the matched feature point pair of the two adjacent frames.
In this embodiment, matching may also be performed with a nearest-neighbour search. For feature point a1 in monitoring image 1, find the two feature descriptors in monitoring image 2 closest to its descriptor, say a2 and b2, and compute the descriptor distances between a1 and a2 and between a1 and b2. Let the nearest distance be m1 and the second-nearest distance be m2; if the ratio of m1 to m2 is less than a threshold, the match is accepted. For example, if the nearest distance m1 is 5, the second-nearest distance m2 is 10, and the threshold is 0.6, the ratio of m1 to m2 is 0.5, which is less than the threshold 0.6, so feature points a1 and a2 are a matched feature point pair of monitoring images 1 and 2; if m1 is 5, m2 is 8, and the threshold is 0.6, the ratio is 0.625, which is greater than the threshold 0.6, so feature point a1 has no corresponding matched feature point in monitoring image 2.
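The nearest/second-nearest ratio test above can be sketched as a brute-force matcher over descriptor arrays. The 0.6 threshold follows the example in the text; Euclidean distance and the array layout (one descriptor per row) are assumptions.

```python
import numpy as np

def match_with_ratio_test(desc1, desc2, ratio=0.6):
    """Brute-force matching with the nearest/second-nearest ratio test.
    desc1, desc2: (N, D) arrays of descriptors.  Returns accepted
    (index_in_desc1, index_in_desc2) pairs."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # Euclidean distances
        order = np.argsort(dists)
        m1, m2 = dists[order[0]], dists[order[1]]
        if m2 > 0 and m1 / m2 < ratio:              # accept unambiguous matches only
            matches.append((i, int(order[0])))
    return matches
```

Reproducing the worked example: distances 5 and 10 give a ratio of 0.5 and an accepted match, while distances 5 and 8 give 0.625 and a rejection.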
In this embodiment, if a feature point is matched to multiple feature points simultaneously during matching, the match may simply be abandoned, the parameters of the feature point extraction algorithm may be adjusted, or another feature point extraction algorithm may be substituted. As shown in fig. 8, feature points e1 and f1 in monitoring image 1 are both matched with feature point e2 in monitoring image 2; the matches of e1 and f1 may be abandoned, or the parameters of the feature extraction algorithm may be modified, so as to improve the matching accuracy.
S703, acquiring the position coordinates of the matched feature point pairs in the same coordinate system corresponding to the multi-frame monitoring images.
In this embodiment, the position coordinates of the matched feature point pairs in the same coordinate system corresponding to the multi-frame monitoring images are acquired. Assume, as shown in fig. 8, that the coordinates of feature point a1 in the coordinate system of monitoring image 1 are (1, 2, 3) and the coordinates of feature point a2 in the coordinate system of monitoring image 2 are (4, 5, 6). Feature points a1 and a2 form a matched feature point pair and are converted into the same coordinate system to obtain new coordinate points: a1 becomes (2, 2, 2) and a2 becomes (3, 2, 5).
S704, constructing the sparse three-dimensional point cloud according to the position coordinates of the matched feature point pairs in the same coordinate system.
In this embodiment, once all matched image pairs are determined, the common feature matching points that appear in multiple images can be connected to form a track. For example, as shown in fig. 8, the feature matching points of monitoring image 1 are a1, b1, c1, d1, e1, f1, and g1, and those of monitoring image 2 are a2, b2, c2, d2, e2, f2, and g2; a1 and a2, d1 and d2, and e1 and e2 are matched feature point pairs. According to the position coordinates of the matched feature point pairs in the same coordinate system, the pairs a1 and a2, d1 and d2, and e1 and e2 are connected. The feature matching points of monitoring image 3 corresponding to a2, d2, and e2 of monitoring image 2 are then found in turn and connected in the same way, and so on, until the complete track of each feature point across all image pairs is found. Once the corresponding tracks are found, an image connection graph is constructed and the sparse three-dimensional point cloud is built.
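Chaining pairwise matches (a1-a2, a2-a3, …) into per-point tracks is naturally expressed with a union-find structure. The sketch below is one common way to realize the track-building step described above; the key format `(image_id, feature_id)` is an assumption.

```python
class DisjointSet:
    """Union-find over hashable keys, with path halving."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def build_tracks(pairwise_matches):
    """Group pairwise matches ((img_a, feat_a), (img_b, feat_b)) into
    tracks: lists of observations of the same physical point."""
    ds = DisjointSet()
    for (img_a, feat_a), (img_b, feat_b) in pairwise_matches:
        ds.union((img_a, feat_a), (img_b, feat_b))
    tracks = {}
    for key in list(ds.parent):
        tracks.setdefault(ds.find(key), []).append(key)
    return [sorted(t) for t in tracks.values()]
```

With the matches from the example (a1-a2 plus a2-a3 in a third image, and d1-d2), the point "a" yields one track of three observations and "d" a track of two.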
In this embodiment, the sparse three-dimensional point cloud is constructed using algorithms such as feature extraction and feature matching, converting two-dimensional images into a three-dimensional model. Compared with a two-dimensional drawing, the three-dimensional model is more intuitive, contains more complete and abundant information, and allows the object model to be observed from multiple angles, which facilitates the user's understanding.
The embodiment of fig. 7 describes the process of generating the sparse three-dimensional point cloud of the monitoring area from the multi-frame monitoring images; the following mainly describes the specific implementation of acquiring the position coordinates of the matched feature point pairs in the same coordinate system. As shown in fig. 9, the method includes the following steps:
S901, determining the relative position and orientation of every two adjacent frames of monitoring images according to the matched feature point pairs.
In this embodiment, the relative position and orientation of every two adjacent frames of monitoring images are determined from the matched feature point pairs. For example, as shown in fig. 8, the feature matching points of monitoring image 1 are a1, b1, c1, d1, e1, f1, and g1, and those of monitoring image 2 are a2, b2, c2, d2, e2, f2, and g2. The first monitoring image is selected as the reference frame, and the camera position corresponding to monitoring image 1 is set to (0, 0, 0). Feature matching is performed on monitoring images 1 and 2 to obtain their matched feature point pairs, and the relative position and orientation of monitoring image 2 are solved on the basis of the camera position of monitoring image 1. Feature matching is then performed on monitoring images 2 and 3 to obtain their matched feature point pairs, and the relative position and orientation of monitoring image 3 are solved from the relative position and orientation of monitoring image 2; this is repeated in the same way, thereby determining the relative position and orientation of every two adjacent frames of monitoring images.
S902, determining the camera positions and orientations of the multi-frame monitoring images in the same coordinate system according to the relative position and orientation of every two adjacent frames of monitoring images.
In this embodiment, the relative position and orientation of every two adjacent monitoring images are determined from the matched feature point pairs, and the camera positions and orientations of the multi-frame monitoring images in the same coordinate system are then determined. As described in step S901, the camera position and orientation of monitoring image 2 are relative to those of monitoring image 1, and the camera position and orientation of each subsequent monitoring image are relative to those of the previous one. Through the conversion relationships between the coordinate systems, these relative camera poses are expressed in the same coordinate system, yielding the camera positions and orientations of the multi-frame monitoring images in that common coordinate system.
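The chaining of pairwise relative poses into one common frame can be sketched as follows. The convention assumed here is that each relative pose (R_rel, t_rel) maps camera-i coordinates to camera-(i+1) coordinates, with frame 0 fixed at the origin, matching the reference-frame choice in step S901; other pose conventions would change the composition formulas.

```python
import numpy as np

def chain_poses(relative_poses):
    """Accumulate pairwise relative poses into world-to-camera poses.
    relative_poses: list of (R_rel, t_rel), each mapping camera-i
    coordinates to camera-(i+1) coordinates.  Frame 0 is the origin.
    Composition: R_{i+1} = R_rel @ R_i,  t_{i+1} = R_rel @ t_i + t_rel."""
    R, t = np.eye(3), np.zeros(3)
    poses = [(R, t)]
    for R_rel, t_rel in relative_poses:
        R = R_rel @ R
        t = R_rel @ t + t_rel
        poses.append((R, t))
    return poses
```

For instance, two pure translations of one unit along x compose to a camera two units from the reference frame, as expected.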
S903, calculating the position coordinates of each matched feature point pair in the same coordinate system by triangulation, according to the camera positions and orientations in that coordinate system.
Triangulation observes the same three-dimensional point from different positions and recovers its depth information from the resulting triangular relationship.
In this embodiment, suppose the three-dimensional point is P and the two observation positions are Q1 and Q2. Owing to noise, the two viewing rays along PQ1 and PQ2 generally do not intersect exactly; the point is therefore solved so that PQ1, PQ2, and the camera positions and orientations best satisfy the triangular relationship, and the position coordinates of each matched feature point pair in the same coordinate system are thus calculated.
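One standard way to handle the non-intersecting rays mentioned above is midpoint triangulation: find the closest points on the two rays in the least-squares sense and return their midpoint. This is a sketch of that idea, not necessarily the exact method of the patent.

```python
import numpy as np

def triangulate_midpoint(q1, d1, q2, d2):
    """Midpoint triangulation of two viewing rays q1 + s*d1 and
    q2 + t*d2.  Under noise the rays do not intersect, so solve the
    2x2 normal equations of |(q1 + s*d1) - (q2 + t*d2)|^2 for the
    closest points and return their midpoint."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(q2 - q1) @ d1, (q2 - q1) @ d2])
    s, t = np.linalg.solve(A, b)        # ray parameters of closest points
    p1 = q1 + s * d1
    p2 = q2 + t * d2
    return (p1 + p2) / 2.0
```

When the rays do intersect exactly (noise-free observations of a point P from Q1 and Q2), the midpoint reduces to P itself.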
In this embodiment, the relative position and orientation of every two adjacent frames of monitoring images are determined from the matched feature point pairs, the camera positions and orientations of the multi-frame monitoring images in the same coordinate system are determined from them, and finally the position coordinates of each matched feature point pair in that coordinate system are calculated by triangulation. The matched feature point pairs correspond to the same points in the actual scene and reflect the similarity of every two adjacent monitoring images; using them yields more accurate relative positions and orientations, so the position coordinates of the matched feature point pairs determined in the same coordinate system are also more accurate.
When performing semantic segmentation, in addition to segmenting the three-dimensional point cloud to obtain the three-dimensional model of the target object, a dense three-dimensional mesh model may also be segmented to obtain it. The following mainly describes the specific implementation of performing semantic segmentation on the dense three-dimensional mesh model. As shown in fig. 10, the method includes the following steps:
S1001, obtaining a dense three-dimensional mesh model from the three-dimensional point cloud using a surface mesh extraction technique.
In this embodiment, as shown in fig. 6, although the three-dimensional point cloud can restore the object's appearance fairly vividly, it is still a collection of many isolated points in three-dimensional space, and point cloud data suffer from irregular distribution. To achieve a truly solid three-dimensional representation and better convey the properties of the object model, the point cloud data are structured with a standard mesh, and a dense three-dimensional mesh model 64 that is easy to express and operate on is reconstructed from the three-dimensional point cloud by a surface mesh extraction technique. The specific steps are as follows: randomly select a point in the three-dimensional point cloud; search for a second point in its neighbourhood, forming an edge between the two points; search for a third point in the neighbourhood of that edge, forming a triangle; fill a face on the basis of the triangle; then perform a neighbourhood search from the midpoint of one edge, and when a suitable point is found, form a new triangular face with that edge, deriving two new edges. This continues until no edge in the queue satisfies the condition for the midpoint search.
S1002, performing semantic segmentation on the dense three-dimensional mesh model to obtain the three-dimensional model of the target object in the monitoring area.
In this embodiment, semantic segmentation is performed on the dense three-dimensional mesh model to obtain the three-dimensional model of the target object in the monitoring area. For example, a fully convolutional network in deep learning can perform semantic segmentation on input images of high-voltage towers and high-voltage lines of arbitrary size in an encode-then-decode fashion; the multi-scale information fusion in such a network structure makes it more robust to scale changes of the same object in a high-voltage tower scene. Semantic segmentation models based on the U-Net family, such as Attention U-Net, may also be used.
In this embodiment, the dense three-dimensional mesh model is obtained from the three-dimensional point cloud by the surface mesh extraction technique, and semantic segmentation is then performed. The surface mesh preserves the surface details of the three-dimensional point cloud while removing redundancy, visualizes the objects in the monitoring area more clearly, facilitates the subsequent semantic segmentation of the objects, and is easy for users to understand and read.
Further, as shown in fig. 11, the gravity direction determination method based on the three-dimensional model includes the following steps:
S1101, acquiring multi-frame monitoring images and RTK data of a monitoring area of the power grid system;
S1102, extracting feature points of each frame of monitoring image;
S1103, performing feature matching on the feature points of every two adjacent frames of monitoring images to obtain matched feature point pairs of every two adjacent frames;
S1104, acquiring position coordinates of the matched feature point pairs in the same coordinate system corresponding to the multi-frame monitoring images;
S1105, constructing a sparse three-dimensional point cloud according to the position coordinates of the matched feature point pairs in the same coordinate system;
S1106, acquiring a depth map corresponding to the multi-frame monitoring images by using the camera poses of the imaging device that captured the monitoring images and the multi-frame monitoring images;
S1107, generating a dense three-dimensional point cloud based on the camera poses, the depth map, and the sparse three-dimensional point cloud;
S1108, performing semantic segmentation on the dense three-dimensional point cloud to obtain a three-dimensional model of a target object in the monitoring area;
S1109, determining a preset number of target points in a three-dimensional coordinate system corresponding to the three-dimensional model of the target object;
S1110, calculating the initial declination of each target point according to the RTK data;
S1111, rotating the initial declination of each target point by a preset angle in a preset direction to obtain the declination of each target point;
S1112, calculating the average declination of the declinations of the target points;
S1113, adjusting the gravity direction of the target object according to the average declination.
According to the gravity direction determination method based on the three-dimensional model, the terminal acquires multi-frame monitoring images and RTK data of a monitoring area of the power grid system, generates a three-dimensional point cloud of the monitoring area from the multi-frame monitoring images, performs semantic segmentation on the three-dimensional point cloud to obtain a three-dimensional model of a target object in the monitoring area, obtains the declination angle of each target point in the three-dimensional model of the target object from the RTK data, and adjusts the gravity direction of the target object according to the declination angles of the target points. Because RTK data are highly accurate, calculating the declination of each target point from the RTK data to determine the gravity direction of the target object effectively improves the precision of the gravity direction adjustment, reduces error, and avoids the influence of subjective factors. The entire method is executed by computer equipment without manual operation, which greatly improves the computational efficiency of gravity direction determination based on the three-dimensional model.
It should be understood that although the steps in the flowcharts of figs. 2-11 are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, these steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in figs. 2-11 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 12, there is provided a gravity direction determination apparatus based on a three-dimensional model, including: a first acquisition module 11, a first generation module 12, a segmentation module 13, a second obtaining module 14 and an adjusting module 15, wherein:
the first acquisition module 11 is used for acquiring multiple frames of monitoring images and RTK data of a monitoring area of a power grid system;
the first generation module 12 is configured to generate a three-dimensional point cloud of the monitoring area according to the multiple frames of monitoring images;
the segmentation module 13 is configured to perform semantic segmentation on the three-dimensional point cloud to obtain a three-dimensional model of a target object in the monitoring area;
a second obtaining module 14, configured to obtain a declination of each target point in the three-dimensional model of the target object according to the RTK data;
and the adjusting module 15 is configured to adjust the gravity direction of the target object according to the declination of each target point.
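The segmentation module's output can be illustrated with a minimal sketch: given per-point semantic labels (produced by any point-cloud segmentation model — the patent does not name one), the three-dimensional model of the target object is simply the subset of points carrying the target label. The point values and class ids below are hypothetical.

```python
import numpy as np

def extract_target_model(points, labels, target_label):
    """Keep only the points of an (N, 3) cloud whose per-point semantic
    label equals target_label.  How the labels are produced (e.g. a
    PointNet-style network) is outside this sketch."""
    mask = labels == target_label
    return points[mask]

# Toy cloud: four points, two of which are labelled as the target class.
cloud = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 5], [2, 2, 9]], dtype=float)
labels = np.array([0, 1, 1, 0])          # 1 = e.g. "transmission tower"
tower = extract_target_model(cloud, labels, target_label=1)
```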
In one embodiment, as shown in fig. 13, the second obtaining module 14 includes:
a determining unit 141, configured to determine a preset number of target points in a three-dimensional coordinate system corresponding to the three-dimensional model of the target object;
a first calculating unit 142, configured to calculate an initial declination of each target point according to the RTK data;
and a rotating unit 143, configured to rotate the initial declination of each target point by a preset angle in a preset direction, so as to obtain the declination of each target point.
In one embodiment, as shown in fig. 14, the adjusting module 15 includes:
a second calculation unit 151 for calculating an average declination of the declinations of the target points;
and an adjusting unit 152, configured to adjust a gravity direction of the target object according to the average declination.
In one embodiment, as shown in fig. 15, the first generation module 12 includes:
a first generating unit 121, configured to generate a sparse three-dimensional point cloud from the multiple frames of monitoring images;
the acquiring unit 122 is configured to acquire depth maps corresponding to the multiple frames of monitoring images by using the multiple frames of monitoring images and the camera poses of the imaging device that captured the monitoring images;
a second generating unit 123 configured to generate the three-dimensional point cloud based on the camera pose, the depth map, and the sparse three-dimensional point cloud.
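A minimal sketch of how a depth map and a camera pose yield three-dimensional points, as in the second generating unit 123. The pinhole intrinsics `K`, the camera-to-world pose convention, and all numeric values are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def backproject(depth, K, R, t):
    """Unproject every valid depth pixel into world coordinates.
    depth: (H, W) metres; K: 3x3 pinhole intrinsics; R, t: assumed
    camera-to-world rotation and translation."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = depth > 0
    pix = np.stack([u[valid], v[valid], np.ones(valid.sum())])  # homogeneous pixels
    rays = np.linalg.inv(K) @ pix                 # camera-frame rays
    cam_pts = rays * depth[valid]                 # scale each ray by its depth
    return (R @ cam_pts).T + t                    # transform to world frame

# One pixel at the principal point, 2 m away, identity pose.
K = np.array([[500.0, 0, 32], [0, 500.0, 24], [0, 0, 1]])
depth = np.zeros((48, 64))
depth[24, 32] = 2.0
pts = backproject(depth, K, np.eye(3), np.zeros(3))
```

Fusing such back-projected points from many frames, seeded by the sparse cloud, is one standard way to densify — the patent does not fix a particular fusion scheme.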
In one embodiment, the first generating unit 121 is specifically configured to: extract feature points from each frame of the monitoring images; perform feature matching on the feature points of every two adjacent frames of monitoring images to obtain matched feature point pairs of every two adjacent frames of monitoring images; acquire position coordinates of the matched feature point pairs in the same coordinate system corresponding to the multiple frames of monitoring images; and construct the sparse three-dimensional point cloud according to the position coordinates of the matched feature point pairs in the same coordinate system.
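The feature-matching step can be sketched as nearest-neighbour descriptor matching with a ratio test. The patent names no particular detector or matcher; SIFT-style descriptor vectors and Lowe's ratio threshold are assumptions for illustration, and the toy descriptors below are hypothetical.

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with a ratio test.
    desc_a, desc_b: (Na, D) and (Nb, D) descriptor arrays; a pair is
    kept only when the best match is clearly closer than the runner-up."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]              # best and second-best
        if dists[j] < ratio * dists[k]:           # unambiguous match only
            matches.append((i, j))
    return matches

# Toy descriptors from two adjacent frames.
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.9, 0.1], [5.0, 5.0], [0.1, 0.9]])
pairs = match_features(a, b)
```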
In one embodiment, the first generating unit 121 is specifically configured to: determine the relative position and orientation of every two adjacent frames of monitoring images according to the matched feature point pairs; determine the camera positions and orientations of the multiple frames of monitoring images in the same coordinate system according to the relative position and orientation of every two adjacent frames of monitoring images; and calculate the position coordinates of each matched feature point pair in the same coordinate system by triangulation according to the camera positions and orientations in the same coordinate system.
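The triangulation step can be sketched with the textbook linear (DLT) method. The patent says only "triangulation", so the algorithm choice, the camera matrices, and the test point below are all assumptions.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched feature point pair from
    two 3x4 camera projection matrices P1, P2 and pixel observations
    x1, x2 = (u, v).  The 3D point is the null vector of A."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                 # dehomogenise

# Assumed setup: one camera at the origin, one shifted 1 m along x.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])
X_true = np.array([0.2, -0.1, 4.0])
project = lambda P, X: (P @ np.append(X, 1))[:2] / (P @ np.append(X, 1))[2]
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free observations the DLT recovers the point exactly; with real matches a robust variant (RANSAC plus reprojection-error filtering) would normally wrap this.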
In one embodiment, as shown in fig. 16, there is provided a gravity direction determination apparatus based on a three-dimensional model, the apparatus further comprising:
A third obtaining module 16, configured to obtain a dense three-dimensional mesh model from the three-dimensional point cloud by using a surface mesh extraction technique;
and the segmentation module 13 is further configured to perform semantic segmentation on the three-dimensional point cloud to obtain a three-dimensional model of the target object in the monitoring area.
In one embodiment, as shown in fig. 17, the segmentation module 13 includes:
and the segmentation unit 131 is configured to perform semantic segmentation on the dense three-dimensional mesh model to obtain the three-dimensional model of the target object in the monitoring area.
For specific definition of the three-dimensional model-based gravity direction determination apparatus, reference may be made to the above definition of the three-dimensional model-based gravity direction determination method, which is not described herein again. The modules in the above-mentioned three-dimensional model-based gravity direction determining apparatus may be implemented in whole or in part by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and a diagram of its internal structure may be as shown in fig. 18. The computer device includes a processor, a memory, and a network interface connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store the monitoring images and the RTK data. The network interface of the computer device is used for communicating with an external terminal through a network connection. When executed by the processor, the computer program implements a gravity direction determination method based on a three-dimensional model.
Those skilled in the art will appreciate that the architecture shown in fig. 18 is merely a block diagram of part of the structure related to the present disclosure and does not limit the computer devices to which the present disclosure applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring multiple frames of monitoring images and RTK data of a monitoring area of a power grid system;
generating a three-dimensional point cloud of the monitoring area according to the multiple frames of monitoring images;
performing semantic segmentation on the three-dimensional point cloud to obtain a three-dimensional model of a target object in the monitoring area;
acquiring the declination of each target point in the three-dimensional model of the target object according to the RTK data;
and adjusting the gravity direction of the target object according to the declination of each target point.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining a preset number of target points in a three-dimensional coordinate system corresponding to the three-dimensional model of the target object;
calculating an initial declination of each target point according to the RTK data;
and rotating the initial declination of each target point by a preset angle in a preset direction to obtain the declination of each target point.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
calculating the average declination of the declination of each target point;
and adjusting the gravity direction of the target object according to the average declination.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
generating a sparse three-dimensional point cloud according to the multiple frames of monitoring images;
acquiring depth maps corresponding to the multiple frames of monitoring images by using the multiple frames of monitoring images and the camera poses of the imaging device that captured the monitoring images;
generating the three-dimensional point cloud based on the camera poses, the depth maps, and the sparse three-dimensional point cloud.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
extracting feature points from each frame of the monitoring images;
performing feature matching on the feature points of every two adjacent frames of monitoring images to obtain matched feature point pairs of every two adjacent frames of monitoring images;
acquiring position coordinates of the matched feature point pairs in the same coordinate system corresponding to the multiple frames of monitoring images;
and constructing the sparse three-dimensional point cloud according to the position coordinates of the matched feature point pairs in the same coordinate system.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining the relative position and orientation of every two adjacent frames of monitoring images according to the matched feature point pairs;
determining the camera positions and orientations of the multiple frames of monitoring images in the same coordinate system according to the relative position and orientation of every two adjacent frames of monitoring images;
and calculating the position coordinates of each matched feature point pair in the same coordinate system by triangulation according to the camera positions and orientations in the same coordinate system.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring a dense three-dimensional mesh model from the three-dimensional point cloud by using a surface mesh extraction technique;
wherein the performing semantic segmentation on the three-dimensional point cloud to obtain a three-dimensional model of the target object in the monitoring area comprises:
performing semantic segmentation on the dense three-dimensional mesh model to obtain the three-dimensional model of the target object in the monitoring area.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring multiple frames of monitoring images and RTK data of a monitoring area of a power grid system;
generating a three-dimensional point cloud of the monitoring area according to the multiple frames of monitoring images;
performing semantic segmentation on the three-dimensional point cloud to obtain a three-dimensional model of a target object in the monitoring area;
acquiring the declination of each target point in the three-dimensional model of the target object according to the RTK data;
and adjusting the gravity direction of the target object according to the declination of each target point.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining a preset number of target points in a three-dimensional coordinate system corresponding to the three-dimensional model of the target object;
calculating an initial declination of each target point according to the RTK data;
and rotating the initial declination of each target point by a preset angle in a preset direction to obtain the declination of each target point.
In one embodiment, the computer program when executed by the processor further performs the steps of:
calculating the average declination of the declination of each target point;
and adjusting the gravity direction of the target object according to the average declination.
In one embodiment, the computer program when executed by the processor further performs the steps of:
generating a sparse three-dimensional point cloud according to the multiple frames of monitoring images;
acquiring depth maps corresponding to the multiple frames of monitoring images by using the multiple frames of monitoring images and the camera poses of the imaging device that captured the monitoring images;
and generating the three-dimensional point cloud based on the camera poses, the depth maps, and the sparse three-dimensional point cloud.
In one embodiment, the computer program when executed by the processor further performs the steps of:
extracting feature points from each frame of the monitoring images;
performing feature matching on the feature points of every two adjacent frames of monitoring images to obtain matched feature point pairs of every two adjacent frames of monitoring images;
acquiring position coordinates of the matched feature point pairs in the same coordinate system corresponding to the multiple frames of monitoring images;
and constructing the sparse three-dimensional point cloud according to the position coordinates of the matched feature point pairs in the same coordinate system.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining the relative position and orientation of every two adjacent frames of monitoring images according to the matched feature point pairs;
determining the camera positions and orientations of the multiple frames of monitoring images in the same coordinate system according to the relative position and orientation of every two adjacent frames of monitoring images;
and calculating the position coordinates of each matched feature point pair in the same coordinate system by triangulation according to the camera positions and orientations in the same coordinate system.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a dense three-dimensional mesh model from the three-dimensional point cloud by using a surface mesh extraction technique;
wherein the performing semantic segmentation on the three-dimensional point cloud to obtain a three-dimensional model of the target object in the monitoring area comprises:
performing semantic segmentation on the dense three-dimensional mesh model to obtain the three-dimensional model of the target object in the monitoring area.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, and the like. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A gravity direction determination method based on a three-dimensional model, the method comprising:
acquiring multiple frames of monitoring images and RTK data of a monitoring area of a power grid system;
generating a sparse three-dimensional point cloud according to the multiple frames of monitoring images; acquiring depth maps corresponding to the multiple frames of monitoring images by using the multiple frames of monitoring images and the camera poses of the imaging device that captured the monitoring images; and generating the three-dimensional point cloud based on the camera poses, the depth maps, and the sparse three-dimensional point cloud;
performing semantic segmentation on the three-dimensional point cloud to obtain a three-dimensional model of a target object in the monitoring area;
determining a preset number of target points in a three-dimensional coordinate system corresponding to the three-dimensional model of the target object; calculating an initial declination of each target point according to the RTK data; and rotating the initial declination of each target point by a preset angle in a preset direction to obtain the declination of each target point;
and adjusting the gravity direction of the target object according to the declination of each target point.
2. The method of claim 1, wherein the adjusting the gravity direction of the target object according to the declination of each target point comprises:
determining the relative position relationship from the difference between the declination of each target point and the declination of the gravity direction of the target object, and adjusting accordingly; if the difference obtained by subtracting the declination of the gravity direction of the target object from the declination of each target point is positive, adjusting the gravity direction of the target object eastward to the declination of each target point; and if the difference obtained by subtracting the declination of the gravity direction of the target object from the declination of each target point is negative, adjusting the gravity direction of the target object westward to the declination of each target point.
3. The method according to claim 1 or 2, wherein the adjusting the gravity direction of the target object according to the declination angle of each target point comprises:
calculating the average declination of the declination of each target point;
and adjusting the gravity direction of the target object according to the average declination.
4. The method of claim 1 or 2, wherein said calculating an initial declination for each of said target points from said RTK data comprises:
acquiring the longitude, latitude and geomagnetic intensity of each target point according to the RTK data, and calculating the initial declination of each target point according to the longitude, latitude and geomagnetic intensity of each target point; wherein different target points have different declinations.
5. The method of claim 4, wherein the generating a sparse three-dimensional point cloud according to the multiple frames of monitoring images comprises:
extracting feature points from each frame of the monitoring images;
performing feature matching on the feature points of every two adjacent frames of monitoring images to obtain matched feature point pairs of every two adjacent frames of monitoring images;
acquiring position coordinates of the matched feature point pairs in the same coordinate system corresponding to the multiple frames of monitoring images;
and constructing the sparse three-dimensional point cloud according to the position coordinates of the matched feature point pairs in the same coordinate system.
6. The method according to claim 5, wherein the acquiring position coordinates of the matched feature point pairs in the same coordinate system corresponding to the multiple frames of monitoring images comprises:
determining the relative position and orientation of every two adjacent frames of monitoring images according to the matched feature point pairs;
determining the camera positions and orientations of the multiple frames of monitoring images in the same coordinate system according to the relative position and orientation of every two adjacent frames of monitoring images;
and calculating the position coordinates of each matched feature point pair in the same coordinate system by triangulation according to the camera positions and orientations in the same coordinate system.
7. The method of claim 1, further comprising:
acquiring a dense three-dimensional mesh model from the three-dimensional point cloud by using a surface mesh extraction technique;
wherein the performing semantic segmentation on the three-dimensional point cloud to obtain a three-dimensional model of the target object in the monitoring area comprises:
performing semantic segmentation on the dense three-dimensional mesh model to obtain the three-dimensional model of the target object in the monitoring area.
8. A gravity direction determination apparatus based on a three-dimensional model, the apparatus comprising:
the first acquisition module is used for acquiring multiple frames of monitoring images and RTK data of a monitoring area of a power grid system;
the first generation module is used for generating a sparse three-dimensional point cloud according to the multiple frames of monitoring images; acquiring depth maps corresponding to the multiple frames of monitoring images by using the multiple frames of monitoring images and the camera poses of the imaging device that captured the monitoring images; and generating the three-dimensional point cloud based on the camera poses, the depth maps, and the sparse three-dimensional point cloud;
the segmentation module is used for performing semantic segmentation on the three-dimensional point cloud to obtain a three-dimensional model of a target object in the monitoring area;
the second acquisition module is used for determining a preset number of target points in a three-dimensional coordinate system corresponding to the three-dimensional model of the target object; calculating an initial declination of each target point according to the RTK data; and rotating the initial declination of each target point by a preset angle in a preset direction to obtain the declination of each target point;
and the adjusting module is used for adjusting the gravity direction of the target object according to the declination of each target point.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202111132160.8A 2021-09-26 2021-09-26 Gravity direction determination method and device based on three-dimensional model and computer equipment Active CN113570649B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111132160.8A CN113570649B (en) 2021-09-26 2021-09-26 Gravity direction determination method and device based on three-dimensional model and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111132160.8A CN113570649B (en) 2021-09-26 2021-09-26 Gravity direction determination method and device based on three-dimensional model and computer equipment

Publications (2)

Publication Number Publication Date
CN113570649A CN113570649A (en) 2021-10-29
CN113570649B true CN113570649B (en) 2022-03-08

Family

ID=78174677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111132160.8A Active CN113570649B (en) 2021-09-26 2021-09-26 Gravity direction determination method and device based on three-dimensional model and computer equipment

Country Status (1)

Country Link
CN (1) CN113570649B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116592861B (en) * 2023-07-18 2023-09-29 天津云圣智能科技有限责任公司 Magnetic compass calibration model construction method, magnetic compass calibration method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583108A (en) * 2018-12-06 2019-04-05 中国电力工程顾问集团西南电力设计院有限公司 A kind of UHV transmission line scenario building method based on GIS
CN111795673A (en) * 2020-07-09 2020-10-20 杭州海康微影传感科技有限公司 Azimuth angle display method and device
CN112634370A (en) * 2020-12-31 2021-04-09 广州极飞科技有限公司 Unmanned aerial vehicle dotting method, device, equipment and storage medium
CN112991534A (en) * 2021-03-26 2021-06-18 中国科学技术大学 Indoor semantic map construction method and system based on multi-granularity object model

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3151017B1 (en) * 2015-09-29 2018-12-12 Honeywell International Inc. Amr speed and direction sensor for use with magnetic targets
CN110930503B (en) * 2019-12-05 2023-04-25 武汉纺织大学 Clothing three-dimensional model building method, system, storage medium and electronic equipment
CN111678536B (en) * 2020-05-08 2021-12-10 中国人民解放军空军工程大学 Calibration method for calibrating magnetic declination of ground observation whistle and angle measurement system error of observation and aiming equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583108A (en) * 2018-12-06 2019-04-05 中国电力工程顾问集团西南电力设计院有限公司 A kind of UHV transmission line scenario building method based on GIS
CN111795673A (en) * 2020-07-09 2020-10-20 杭州海康微影传感科技有限公司 Azimuth angle display method and device
CN112634370A (en) * 2020-12-31 2021-04-09 广州极飞科技有限公司 Unmanned aerial vehicle dotting method, device, equipment and storage medium
CN112991534A (en) * 2021-03-26 2021-06-18 中国科学技术大学 Indoor semantic map construction method and system based on multi-granularity object model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Correction of the interference of DC power transmission lines with observations of the geomagnetic field Z component; Tang Bo et al.; Proceedings of the CSEE; 2012-10-25; Vol. 32, No. 30, pp. 147-153 *

Also Published As

Publication number Publication date
CN113570649A (en) 2021-10-29

Similar Documents

Publication Publication Date Title
US11209837B2 (en) Method and device for generating a model of a to-be reconstructed area and an unmanned aerial vehicle flight trajectory
CN109683699B (en) Method and device for realizing augmented reality based on deep learning and mobile terminal
Liang et al. Image based localization in indoor environments
US11521311B1 (en) Collaborative disparity decomposition
US9613388B2 (en) Methods, apparatuses and computer program products for three dimensional segmentation and textured modeling of photogrammetry surface meshes
Zhang et al. A UAV-based panoramic oblique photogrammetry (POP) approach using spherical projection
US20160232420A1 (en) Method and apparatus for processing signal data
CN114202632A (en) Grid linear structure recovery method and device, electronic equipment and storage medium
Kim et al. Interactive 3D building modeling method using panoramic image sequences and digital map
CN113570649B (en) Gravity direction determination method and device based on three-dimensional model and computer equipment
CN112270709A (en) Map construction method and device, computer readable storage medium and electronic device
CN116563493A (en) Model training method based on three-dimensional reconstruction, three-dimensional reconstruction method and device
CN112733641A (en) Object size measuring method, device, equipment and storage medium
CN114782646A (en) House model modeling method and device, electronic equipment and readable storage medium
CN116109684B (en) Online video monitoring two-dimensional and three-dimensional data mapping method and device for variable electric field station
US11223815B2 (en) Method and device for processing video
Liang et al. Efficient match pair selection for matching large-scale oblique UAV images using spatial priors
Li et al. BDLoc: Global localization from 2.5 D building map
Yang et al. Three-dimensional panoramic terrain reconstruction from aerial imagery
Wei et al. Indoor and outdoor multi-source 3D data fusion method for ancient buildings
CN116824068B (en) Real-time reconstruction method, device and equipment for point cloud stream in complex dynamic scene
He Research on outdoor garden scene reconstruction based on PMVS Algorithm
Yoo Rapid three-dimensional urban model production using bilayered displacement mapping
Liu et al. Real-scene 3D measurement algorithm and program implementation based on Mobile terminals
Zhou et al. Digital surface model generation from aerial imagery using bridge probability relaxation matching

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant