CN113192000A - Occlusion detection method based on elevation and angle constraints - Google Patents


Info

Publication number
CN113192000A
CN113192000A (application CN202110230021.2A)
Authority
CN
China
Prior art keywords
point
visible
elevation
feature point
ground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110230021.2A
Other languages
Chinese (zh)
Other versions
CN113192000B (en)
Inventor
刘宇
李德军
白新伟
吴迪
孙文邦
尤金凤
于光
顾子侣
Current Assignee
PLA AIR FORCE AVIATION UNIVERSITY
Original Assignee
PLA AIR FORCE AVIATION UNIVERSITY
Priority date
Filing date
Publication date
Application filed by PLA AIR FORCE AVIATION UNIVERSITY
Priority to CN202110230021.2A
Publication of CN113192000A
Application granted
Publication of CN113192000B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/15Correlation function computation including computation of convolution operations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

An occlusion detection method based on elevation and angle constraints, belonging to the technical field of image processing. The aim of the invention is to improve the elevation-constrained occlusion detection algorithm on the basis of the angle-based occlusion detection method, so as to raise the accuracy and efficiency of occluded-area detection, and to provide an occlusion detection method based on elevation and angle constraints. The method comprises the following steps: retain the result information of the visibility analysis of the current point and record the five corresponding parameters; calculate the elevation difference produced by a single-pixel change in distance; and judge whether the intersection point with the previous layer on the search path is visible. The invention has the advantages of high precision and high visibility-analysis speed.

Description

Occlusion detection method based on elevation and angle constraints
Technical Field
The invention belongs to the technical field of image processing.
Background
During the imaging of large-tilt aerial images, terrain relief and buildings readily cause occlusion; in areas with large terrain relief in particular, large occluded regions exist. When geometric correction is performed on a large-tilt aerial image, if large occluded areas are not eliminated, severe ghosting occurs, seriously affecting the use of the image.
Occlusion detection algorithms can be divided, according to the form of the surface data used in processing, into raster-data-based and vector-data-based algorithms; according to their basic principle, they can be divided into the Z-Buffer algorithm, the angle-based occlusion detection algorithm, the elevation-based occlusion detection algorithm, the ray-tracing occlusion detection algorithm, the PBI algorithm, the triangulation-based occlusion detection algorithm, and derivative algorithms built on these algorithms and on data characteristics.
(1) Z-Buffer algorithm
The basis of the Z-Buffer algorithm is as follows: among all object points lying on the same projection ray, the point closest to the projection center occludes those farther from it. For occlusion detection, a Z-Buffer matrix corresponding to each image point of the original image and a visibility matrix of the DSM grid must be recorded, denoted P_buffer and P_visible respectively, where P_buffer records the distance D between the corresponding object point P and the image point p, together with the coordinates of the object point P. The occlusion detection process with the Z-Buffer algorithm is: first set the depth of each P_buffer to infinity and the visibility of each DSM grid cell to visible; when correcting a point P on the DSM grid, compute its image point p and the distance D according to the collinearity equations, and fetch the corresponding P_buffer from the coordinates of p; if D is less than P_buffer_D, mark the object point previously recorded in that P_buffer as invisible and set P_buffer_D = D and P_buffer_P = P; otherwise mark the current object point P as invisible. Looping in this way completes the visibility analysis of the whole image.
The Z-Buffer algorithm has the advantages of a simple idea, little computation, and high efficiency, but it has fatal defects: an ideal result is obtained only when one object point corresponds to one image point. When one object point covers several image points, false visibility and the M-portion problem arise; when one image point covers several adjacent object points, false occlusion arises. The Z-Buffer algorithm is therefore ill-suited to images with a large imaging tilt angle and large terrain relief.
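The Z-Buffer bookkeeping described above can be sketched as follows. This is a minimal sketch, not the patent's implementation: the dictionary-based DSM and the `project` callable (mapping a DSM cell to its image pixel and its distance to the projection center) are illustrative assumptions.

```python
import math

def z_buffer_visibility(dsm, project):
    """Minimal Z-Buffer occlusion sketch (interface names are assumptions).

    dsm     : dict mapping DSM grid cell (r, c) -> elevation
    project : callable (r, c, z) -> (row, col, dist), giving the image pixel
              hit by the cell's projection ray and the cell's distance to the
              projection center
    Returns a dict mapping each grid cell to its visibility.
    """
    visible = {cell: True for cell in dsm}   # every DSM cell starts visible
    buf_d = {}                               # image pixel -> smallest distance seen so far
    buf_p = {}                               # image pixel -> grid cell at that distance
    for (r, c), z in dsm.items():
        row, col, d = project(r, c, z)
        key = (row, col)
        if d < buf_d.get(key, math.inf):
            if key in buf_p:
                visible[buf_p[key]] = False  # the farther point stored earlier is occluded
            buf_d[key], buf_p[key] = d, (r, c)
        else:
            visible[(r, c)] = False          # current point is the farther one: occluded
    return visible
```

Note how the false-visibility defect follows directly from this structure: the test only compares points that land on exactly the same pixel key.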
(2) Angle-based occlusion detection algorithm
The basic principle of the angle-based occlusion detection algorithm is as follows: along the horizontal line connecting a feature point and the nadir point, the projection ray of each feature point forms an included angle with the horizontal plane, and the visibility of the point under test is analysed from the change of this angle. Moving along this line away from the nadir point, if the included angle between the projection ray and the horizontal plane decreases monotonically, there is no occlusion; if at some position the angle suddenly becomes larger and then falls back to, or below, the earlier value, that region is occluded.
The algorithm is simple, clear, and theoretically rigorous, can be applied in various complex environments, and does not suffer the false-occlusion, false-visibility, and M-portion problems of the Z-Buffer algorithm. However, since an angle comparison is needed for every feature point, frequent angle computation makes processing inefficient and time-consuming. As with the Z-Buffer algorithm, if a suitable scanning scheme is not determined from prior information, severe redundant computation occurs, so a fast and efficient scanning scheme is also a key problem for this algorithm; Habib adopted a spiral scanning scheme to improve efficiency, and likewise the ideas used to improve the Z-Buffer algorithm also apply here.
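Along one radial scan line the angle test above reduces to tracking a running minimum of the elevation angle. A minimal 1-D sketch, where the profile layout, parameter names, and the use of `atan2` are assumptions for illustration (the frequent trigonometric calls are exactly the cost the source criticizes):

```python
import math

def angle_visibility(profile, cell, zs):
    """Angle-based visibility along one radial scan line from the nadir.

    profile : DSM elevations at distances cell, 2*cell, ... from the nadir
    cell    : ground spacing of one sample
    zs      : height of the projection center above the nadir plane
    A point is visible iff the elevation angle of the projection center,
    seen from the point, has not increased relative to any inner point.
    """
    visible = []
    min_angle = math.inf                  # running minimum of the elevation angle
    for i, z in enumerate(profile, start=1):
        d = i * cell                      # horizontal distance to the nadir
        angle = math.atan2(zs - z, d)     # elevation angle of P_S from this point
        if angle <= min_angle:
            visible.append(True)
            min_angle = angle
        else:
            visible.append(False)         # angle jumped up: point is occluded
    return visible
```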
(3) Elevation-based occlusion detection algorithm
The basic principle of the elevation-based occlusion detection algorithm is as follows: when analysing the visibility of a feature point, the point is visible if the line connecting it to the projection center lies above the terrain data. As shown in FIG. 1, to decide whether a point P_0 is visible, a step of fixed size is taken along the line from P_0 to P_S; by proportion, the elevation H_it (i = 1, 2, 3, ..., n) of each sample point on the line can be computed, together with the actual DSM elevation H_i (i = 1, 2, 3, ..., n) at the same horizontal coordinates. If any sample satisfies H_it < H_i, point P_0 is occluded; otherwise the next point is taken according to the step until the nadir point O is reached, and if H_it > H_i (i = 1, 2, 3, ..., n) holds throughout, point P_0 is visible.
This algorithm is similar to the angle-based one: its theory is intuitive and rigorous, and it likewise avoids the false-occlusion, false-visibility, and M-portion problems of the Z-Buffer algorithm; it needs no angle computation and executes efficiently in regions close to the nadir point. In regions far from the nadir point, however, a point's visibility can be decided only once it is found to be occluded or the search has reached the nadir point, and the long search distance lowers efficiency.
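The elevation test of FIG. 1 can be sketched as stepping along the sight line and comparing the interpolated line elevation with the DSM elevation at each sample. The `dsm_at` callable, the fixed `n_samples`, and the tuple interface are illustrative assumptions, not the patent's interface:

```python
def elevation_visibility(dsm_at, p0, ps, n_samples=64):
    """Elevation-based visibility test sketch (cf. FIG. 1).

    dsm_at : callable (x, y) -> DSM elevation at those horizontal coordinates
    p0, ps : (x, y, z) tuples for the candidate point and the projection center
    """
    x0, y0, z0 = p0
    xs, ys, zs = ps
    for i in range(1, n_samples):
        t = i / n_samples                 # fraction of the way from P_0 to P_S
        x = x0 + t * (xs - x0)
        y = y0 + t * (ys - y0)
        h_line = z0 + t * (zs - z0)       # elevation of the sight line at (x, y)
        if h_line < dsm_at(x, y):         # terrain pokes above the sight line
            return False                  # P_0 is occluded
    return True                           # searched to P_S without obstruction
```

The long loop for points far from the nadir illustrates the efficiency problem the passage describes.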
Disclosure of Invention
The aim of the invention is to improve the elevation-constrained occlusion detection algorithm on the basis of the angle-based occlusion detection method, so as to raise the accuracy and efficiency of occluded-area detection; it therefore provides an occlusion detection method based on elevation and angle constraints.
The method comprises the following steps:
s1, each column represents a pixel point, and the projection center PSIn three-dimensional space, the position relation between the pixel point and the projection center can be defined by four parameters
Figure BDA0002958766810000021
Indicating that (X, Y, Z) is the position of the pixel,
Figure BDA0002958766810000022
for the elevation angle of the projection center relative to the pixel point, the result information of the visibility analysis of the current point is reserved, and corresponding five parameters are recorded
Figure BDA0002958766810000023
Visible is the visibility of the point;
s2, placing the highest point at the farthest position of the image, and calculating the elevation difference H generated by changing the distance of a single pixel when the elevation angle of the projection center relative to the pixel point in the whole image is the minimum valueV(ii) a When the difference between the elevation of the inner layer point and the elevation of the outer layer point is less than HVWhen the inner layer points are visible, the outer layer points are necessarily visible, and such an elevation H will existdVSo that when the elevation of the inner layer point is larger than that of the outer layer point by HdVIf the inner layer point is invisible, the outer layer point is inevitably invisible;
s3, when detecting whether a feature point 3 is visible, firstly judging whether an intersection point with a previous layer on a search path is visible, wherein the intersection point is defined as a feature point 2, and the judging step is as follows:
(1) if feature point 2 is visible and Z_3 + H_V > Z_2, then feature point 3 is visible;
(2) if feature point 2 is visible and Z_3 + H_V < Z_2, the elevation-angle characterization values tanα_2 and tanα_3 of feature points 2 and 3 are computed; if tanα_3 ≤ tanα_2, feature point 3 is visible; otherwise, tanα_3 > tanα_2, feature point 3 is invisible, and tanα_2 is substituted for tanα_3 and written into the record parameters of feature point 3;
(3) if feature point 2 is invisible, the elevation-angle characterization value tanα_3 of feature point 3 is computed; if tanα_3 ≤ tanα_2, feature point 3 is visible; otherwise, tanα_3 > tanα_2, feature point 3 is invisible, and tanα_2 is substituted for tanα_3 and written into the record parameters of feature point 3.
The visibility analysis process of the invention comprises the following steps:
s1, assuming that four ground feature points near the ground point are visible, and recording five parameters of the four ground feature points
Figure BDA00029587668100000312
The starting point of each layer is a ground object point marked in a circular manner, the current layer is judged point by point according to the anticlockwise sequence, recording parameters corresponding to each point, when an edge is searched on one side, the ground object points entering other sides are directly skipped, and the ground objects are sequentially subjected to visibility analysis layer by layer outwards until the visibility analysis of all the ground object points is completed;
s2, detecting a feature point P (X)P,YP,ZP) If the projection light is visible, the intersection point Q (X, Y) of the projection light and the inward layer is calculated, and at the moment, two ground object points Q closest to the layer where Q is located1And Q2Given the visibility of point Q1And Q2Respectively is
Figure BDA00029587668100000313
And
Figure BDA00029587668100000314
the visibility analysis of the feature point P can be classified into the following two cases:
(1) feature points Q_1 and Q_2 are both visible, i.e. V_1 & V_2 = true; if Z_P + H_V > Z_1 and Z_P + H_V > Z_2, feature point P is visible; if not, its visibility cannot be judged directly, and the elevation Z_Q of the intersection Q is interpolated: if Z_P + H_V > Z_Q, feature point P is visible; otherwise the tanα_Q value of point Q and the tanα_P value of feature point P are computed, and if tanα_P ≤ tanα_Q, feature point P is visible, otherwise it is invisible;
(2) at least one of feature points Q_1 and Q_2 is invisible, i.e. V_1 & V_2 = false; the tanα_P value of feature point P is computed; if tanα_P > tanα_1 and tanα_P > tanα_2, feature point P is invisible; otherwise tanα_Q = tanα_1 + k(tanα_2 - tanα_1) is computed, where k = (X - X_1)/(X_2 - X_1); if tanα_P ≤ tanα_Q, feature point P is visible, otherwise it is invisible.
The invention has the advantages of high precision and high visibility-analysis speed.
Drawings
FIG. 1 is a schematic diagram of the elevation-based occlusion detection algorithm;
FIG. 2 is a diagram of the positional relation between the projection center and a pixel point;
FIG. 3 is a flow diagram of the visibility analysis of point 3;
FIG. 4 is a schematic diagram of the spiral scanning process;
FIG. 5 is a flow diagram of the visibility analysis;
FIG. 6 is a comparison of occlusion detection results, where the times taken for graphs (a), (b), and (c) are 261 seconds, 551 seconds, and 481 seconds respectively, and graphs (d), (e), and (f) are partial enlargements of the boxed regions in graphs (a), (b), and (c).
Detailed Description
The occlusion detection method based on elevation and angle constraints of the invention is described in detail below.
as shown in FIG. 2, each column represents a pixel point, the projection center PSIn three-dimensional space, the position relation between the pixel point and the projection center can use four parameters
Figure BDA0002958766810000041
Indicating that (X, Y, Z) is the position of the pixel,
Figure BDA0002958766810000042
the elevation angle of the projection center relative to the pixel point (i.e. the included angle between the connection line of the pixel point and the projection center and the horizontal plane). It is obvious that
Figure BDA0002958766810000043
Thus, it is possible to provide
Figure BDA0002958766810000044
Can be characterized by
Figure BDA0002958766810000045
The size of (2). Since the solution of the inverse trigonometric function requires iteration and is inefficient, it can be adopted
Figure BDA0002958766810000046
The value is used as the basis for judging whether the image is shielded or not so as to achieve the purpose of improving the operation efficiency. To keep pointing forward to the current pointThe result information of line visibility analysis records the five parameters corresponding to the line visibility analysis
Figure BDA0002958766810000047
Visible is the visibility of this point, and the other parameters are the same as the aforementioned parameters.
If the elevation of the highest point is placed at the farthest position of the image, the elevation angle of the projection center relative to a pixel point is at its minimum over the whole image; at this angle, the elevation difference produced by a single-pixel change in distance can be computed and is denoted H_V. Obviously, when an inner-layer point's elevation exceeds an outer-layer point's by less than H_V, the outer-layer point is necessarily visible whenever the inner-layer point is visible. Likewise, there exists an elevation H_dV such that, when the inner-layer point's elevation exceeds the outer-layer point's by more than H_dV, the outer-layer point is necessarily invisible whenever the inner-layer point is invisible.
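Under the reading above, a projection ray whose elevation angle is at least the image-wide minimum α_min drops by at least one pixel's ground distance times tan(α_min) as it moves outward one pixel, which gives H_V. A one-line sketch; the parameter names (`gsd` for the ground distance of one pixel) are assumptions:

```python
import math

def elevation_step_threshold(gsd, min_elevation_angle):
    """H_V sketch: the minimum drop of a projection ray over one pixel's
    ground distance `gsd` when the ray's elevation angle is at least the
    image-wide minimum (angle in radians)."""
    return gsd * math.tan(min_elevation_angle)
```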
As shown in FIG. 2, when detecting whether a feature point 3 is visible, first judge whether the intersection point with the previous layer on the search path (feature point 2) is visible; the judgment flow is shown in FIG. 3, with the following steps:
(1) If feature point 2 is visible and Z_3 + H_V > Z_2, point 3 is visible.
(2) If feature point 2 is visible and Z_3 + H_V < Z_2, the elevation-angle characterization values tanα_2 and tanα_3 of points 2 and 3 are computed; if tanα_3 ≤ tanα_2, point 3 is visible; otherwise, tanα_3 > tanα_2, point 3 is invisible, and tanα_2 replaces tanα_3 in the record parameters of point 3.
(3) If feature point 2 is invisible, the elevation-angle characterization value tanα_3 of point 3 is computed; if tanα_3 ≤ tanα_2, point 3 is visible; otherwise, tanα_3 > tanα_2, point 3 is invisible, and tanα_2 replaces tanα_3 in the record parameters of point 3.
This method retains the advantages of the angle-discrimination method while avoiding inverse trigonometric computation; moreover, when analysing the visibility of a feature point, the visibility can be decided solely from the record parameters of the previous point on the projection ray, with no inward search, which effectively improves occlusion detection efficiency. Because the information of inner-layer points is reused when analysing outer-layer points, the advantage of the method is most evident when the point under test lies at the far boundary of an occluded area or at the image edge.
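The three-case judgment above (cf. FIG. 3) can be sketched as a single decision function. This is a minimal sketch under an assumed simplified record layout (z, tanα, visible) per point, not the patent's five-parameter storage format:

```python
def judge_outer_point(inner, z3, tan_a3, hv):
    """Sketch of the three-case judgment of feature point 3.

    inner  : record (z2, tan_a2, visible2) of the intersection point on the
             previous layer; for an invisible inner point, tan_a2 is the
             propagated value written there earlier
    z3     : elevation of the point under test
    tan_a3 : its elevation-angle characterization value tan(alpha_3)
    hv     : single-pixel elevation difference H_V
    Returns (visible3, value written into point 3's record).
    """
    z2, tan_a2, visible2 = inner
    if visible2 and z3 + hv > z2:    # case (1): cheap elevation-only test
        return True, tan_a3
    if tan_a3 <= tan_a2:             # cases (2)/(3): angle-value comparison
        return True, tan_a3
    return False, tan_a2             # occluded: propagate tan(alpha_2) outward
```

Cases (2) and (3) collapse into the same comparison here because an invisible inner point already stores the propagated tanα of its occluder.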
Visibility analysis flow of the invention
When performing visibility analysis of the feature points, the spiral scanning method is borrowed, and the visibility of the features is judged layer by layer around the four points surrounding the nadir point, as shown in FIG. 4. First, assume the four feature points nearest the nadir point are visible and record the five parameters (X, Y, Z, tanα, Visible) of each. When an edge is reached on one side, the feature points belonging to other sides are skipped directly, and visibility analysis proceeds outward layer by layer until the visibility analysis of all feature points is finished.
When detecting whether a feature point P(X_P, Y_P, Z_P) is visible, compute the intersection point Q(X, Y) of its projection ray with the next layer inward (the position marked with a triangle in FIG. 4); the two feature points of that layer nearest Q are Q_1 and Q_2, with record parameters (X_1, Y_1, Z_1, tanα_1, V_1) and (X_2, Y_2, Z_2, tanα_2, V_2). The visibility analysis of feature point P falls into the following two cases:
(1) Feature points Q_1 and Q_2 are both visible, i.e. V_1 & V_2 = true. If Z_P + H_V > Z_1 and Z_P + H_V > Z_2, feature point P is visible; if not, its visibility cannot be judged directly, and the elevation Z_Q of the intersection Q is interpolated: if Z_P + H_V > Z_Q, feature point P is visible; otherwise the tanα_Q value of point Q and the tanα_P value of feature point P are computed, and if tanα_P ≤ tanα_Q, feature point P is visible, otherwise it is invisible.
(2) At least one of feature points Q_1 and Q_2 is invisible, i.e. V_1 & V_2 = false. The tanα_P value of feature point P is computed; if tanα_P > tanα_1 and tanα_P > tanα_2, feature point P is invisible; otherwise tanα_Q = tanα_1 + k(tanα_2 - tanα_1) is computed, where k = (X - X_1)/(X_2 - X_1); if tanα_P ≤ tanα_Q, feature point P is visible, otherwise it is invisible. The flow of the visibility analysis of point P is shown in FIG. 5.
The algorithm makes full use of the visibility-analysis results and computed values of the inner-layer points, takes the transitivity of occlusion into consideration, needs no inward search whatsoever, completes the occlusion judgment from the inner layers to the outer layers in a single pass, and can decide with certainty whether to assign a gray value to a pixel point according to the occlusion judgment.
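The two cases above can likewise be sketched as one decision function. This is a minimal sketch under the same assumed (z, tanα, visible) record layout; the linear interpolation of Z_Q with the weight k is a simplifying assumption (the source mentions bilinear interpolation of the elevation):

```python
def judge_with_interpolation(q1, q2, k, zp, tan_ap, hv):
    """Sketch of the two-case visibility analysis of point P (cf. FIG. 5).

    q1, q2 : records (z, tan_a, visible) of the two inner-layer points
             bracketing the intersection Q
    k      : interpolation weight (X - X1)/(X2 - X1)
    zp     : elevation of the point P under test; tan_ap its angle value
    hv     : single-pixel elevation difference H_V
    Returns True if P is judged visible.
    """
    z1, tan_a1, v1 = q1
    z2, tan_a2, v2 = q2
    if v1 and v2:                                # case (1): both inner points visible
        if zp + hv > z1 and zp + hv > z2:
            return True                          # cheap test, no interpolation needed
        zq = z1 + k * (z2 - z1)                  # interpolate the elevation at Q
        if zp + hv > zq:
            return True
    elif tan_ap > tan_a1 and tan_ap > tan_a2:    # case (2): at least one invisible
        return False
    tan_aq = tan_a1 + k * (tan_a2 - tan_a1)      # interpolate the angle value at Q
    return tan_ap <= tan_aq
```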
Analysis of Experimental results
FIG. 6(a) is the result of geometric correction without occlusion detection; FIG. 6(b) is the result of occlusion detection and geometric correction using the elevation-constrained occlusion detection algorithm; FIG. 6(c) is the result of occlusion detection and geometric correction by the algorithm of this patent; and FIGS. 6(d), (e), and (f) are enlargements of the red boxes in FIGS. 6(a), (b), and (c) respectively.
As can be seen from FIG. 6, the elevation-constrained occlusion detection algorithm shows fairly obvious missed detections: to improve detection efficiency, it directly takes the nearest-neighbor pixel of the inner-layer point and cannot take the information of other adjacent points into account, so a ray may pass through the gap between two pixels and the occlusion is missed. The present method interpolates the elevation by bilinear interpolation, sacrificing some efficiency to improve occlusion detection accuracy. In a development environment of win2008, an AMD Athlon II X4 640 CPU, 4 GB of memory, and C#, the geometric corrections of FIG. 6 took 261 seconds, 551 seconds, and 481 seconds respectively, with 25% CPU occupancy and memory occupancy of 327 MB, 331 MB, and 353 MB respectively. Compared with the elevation-based occlusion detection algorithm, the proposed algorithm is more efficient and occupies fewer resources during processing.

Claims (2)

1. An occlusion detection method based on elevation and angle constraints, characterized in that the method comprises the following steps:
s1, each column represents a pixel point, and the projection center PSIn three-dimensional space, the position relation between the pixel point and the projection center can use four parameters
Figure FDA0002958766800000011
Indicating that (X, Y, Z) is the position of the pixel,
Figure FDA0002958766800000012
for the elevation angle of the projection center relative to the pixel point, the result information of the visibility analysis of the current point is reserved, and corresponding five parameters are recorded
Figure FDA0002958766800000013
Visible is the visibility of the point;
s2, placing the highest point at the farthest position of the image, minimizing the elevation angle of the projection center relative to the pixel point in the whole image, and calculating the height difference generated by changing the distance of a single pixel as H when the elevation angle is the valueV(ii) a When the difference between the elevation of the inner layer point and the elevation of the outer layer point is less than HVWhen the inner layer points are visible, the outer layer points are necessarily visible, and such an elevation H will existdVSo that when the elevation of the inner layer point is larger than that of the outer layer point by HdVIf the inner layer point is invisible, the outer layer point is inevitably invisible;
s3, when detecting whether a feature point 3 is visible, firstly judging whether an intersection point with a previous layer on a search path is visible, wherein the intersection point is defined as a feature point 2, and the judging step is as follows:
(1) if feature point 2 is visible and Z_3 + H_V > Z_2, then feature point 3 is visible;
(2) if feature point 2 is visible and Z_3 + H_V < Z_2, the elevation-angle characterization values tanα_2 and tanα_3 of feature points 2 and 3 are computed; if tanα_3 ≤ tanα_2, feature point 3 is visible; otherwise, tanα_3 > tanα_2, feature point 3 is invisible, and tanα_2 is substituted for tanα_3 and written into the record parameters of feature point 3;
(3) if feature point 2 is invisible, the elevation-angle characterization value tanα_3 of feature point 3 is computed; if tanα_3 ≤ tanα_2, feature point 3 is visible; otherwise, tanα_3 > tanα_2, feature point 3 is invisible, and tanα_2 is substituted for tanα_3 and written into the record parameters of feature point 3.
2. The occlusion detection method based on elevation and angle constraints according to claim 1, characterized in that the visibility analysis flow is as follows:
s1, assuming that four ground feature points near the ground point are visible, and recording five parameters of the four ground feature points
Figure FDA00029587668000000115
The starting point of each layer is a ground object point marked in a circular manner, the current layer is judged point by point according to the anticlockwise sequence, recording parameters corresponding to each point, when an edge is searched on one side, the ground object points entering other sides are directly skipped, and the ground objects are subjected to visibility analysis outwards layer by layer in sequence until the visibility analysis of all the ground object points is completed;
s2, detecting a feature point P (X)P,YP,ZP) If the projection light is visible, the intersection point Q (X, Y) of the projection light and the inward layer is calculated, and at the moment, two ground object points Q closest to the layer where Q is located1And Q2Given the visibility of point Q1And Q2Respectively is
Figure FDA00029587668000000116
And
Figure FDA00029587668000000117
the visibility analysis of the feature point P can be classified into the following two cases:
(1) feature points Q_1 and Q_2 are both visible, i.e. V_1 & V_2 = true; if Z_P + H_V > Z_1 and Z_P + H_V > Z_2, feature point P is visible; if not, its visibility cannot be judged directly, and the elevation Z_Q of the intersection Q is interpolated: if Z_P + H_V > Z_Q, feature point P is visible; otherwise the tanα_Q value of point Q and the tanα_P value of feature point P are computed, and if tanα_P ≤ tanα_Q, feature point P is visible, otherwise it is invisible;
(2) at least one of feature points Q_1 and Q_2 is invisible, i.e. V_1 & V_2 = false; the tanα_P value of feature point P is computed; if tanα_P > tanα_1 and tanα_P > tanα_2, feature point P is invisible; otherwise tanα_Q = tanα_1 + k(tanα_2 - tanα_1) is computed, where k = (X - X_1)/(X_2 - X_1); if tanα_P ≤ tanα_Q, feature point P is visible, otherwise it is invisible.
CN202110230021.2A 2021-03-02 2021-03-02 Occlusion detection method based on elevation and angle constraints Active CN113192000B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110230021.2A CN113192000B (en) 2021-03-02 2021-03-02 Occlusion detection method based on elevation and angle constraints

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110230021.2A CN113192000B (en) 2021-03-02 2021-03-02 Occlusion detection method based on elevation and angle constraints

Publications (2)

Publication Number Publication Date
CN113192000A true CN113192000A (en) 2021-07-30
CN113192000B CN113192000B (en) 2022-07-22

Family

ID=76973069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110230021.2A Active CN113192000B (en) 2021-03-02 2021-03-02 Occlusion detection method based on elevation and angle constraints

Country Status (1)

Country Link
CN (1) CN113192000B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6489962B1 (en) * 1998-07-08 2002-12-03 Russell Andrew Ambroziak Analglyphic representations of image and elevation data
US20110292208A1 (en) * 2009-11-24 2011-12-01 Old Dominion University Research Foundation High-resolution urban true orthoimage creation
CN104200527A (en) * 2014-09-02 2014-12-10 西安煤航信息产业有限公司 Method for generating true orthophoto
CN106251326A (en) * 2016-07-02 2016-12-21 桂林理工大学 A kind of building occlusion detection utilizing ghost picture and occlusion area compensation method
EP3249558A1 (en) * 2016-03-31 2017-11-29 Southeast University Form optimization control method applied to peripheral buildings of open space and using evaluation of visible sky area
US10553020B1 (en) * 2018-03-20 2020-02-04 Ratheon Company Shadow mask generation using elevation data

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HENRIQUE C.O, ET AL: "Surface Gradient Approach for Occlusion Detection Based on Triangulated Irregular Network for True Orthophoto Generation", IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 2, 28 February 2018 (2018-02-28), pages 443-457 *
ZHANG Ke, et al.: "Research on automatic detection and marking of ground-object occlusion areas in true orthorectification", Science of Surveying and Mapping (《测绘科学》), vol. 37, no. 3, 20 May 2012 (2012-05-20), pages 10-12 *
WANG Tao, et al.: "A collinearity-equation-based method for locating occluded areas in orthoimages", Standardization of Surveying and Mapping (《测绘标准化》), vol. 26, no. 2, 15 June 2010 (2010-06-15), pages 144-146 *
WANG Yufeng, et al.: "Comparative analysis of occlusion detection algorithms", Science & Technology Vision (《科技视界》), vol. 26, no. 2, 15 September 2015 (2015-09-15), pages 152-153 *

Also Published As

Publication number Publication date
CN113192000B (en) 2022-07-22

Similar Documents

Publication Publication Date Title
US11354851B2 (en) Damage detection from multi-view visual data
CN109685732B (en) High-precision depth image restoration method based on boundary capture
CN113706713A (en) Live-action three-dimensional model cutting method and device and computer equipment
CN108919954B (en) Dynamic change scene virtual and real object collision interaction method
EP3293700B1 (en) 3d reconstruction for vehicle
WO2016029555A1 (en) Image interpolation method and device
CN111047698B (en) Real projection image acquisition method
CN112991420A (en) Stereo matching feature extraction and post-processing method for disparity map
CN115546027B (en) Image suture line determination method, device and storage medium
CN111105452A (en) High-low resolution fusion stereo matching method based on binocular vision
TWI417808B (en) Reconstructable geometry shadow mapping method
Krauss et al. Deterministic guided lidar depth map completion
CN113192000B (en) Occlusion detection method based on elevation and angle constraints
CN114266801A (en) Ground segmentation method for mobile robot in cross-country environment based on three-dimensional laser radar
US20210224973A1 (en) Mobile multi-camera multi-view capture
CN110335209B (en) Phase type three-dimensional laser point cloud noise filtering method
US20230290061A1 (en) Efficient texture mapping of a 3-d mesh
CN115294235B (en) Bitmap-based graphic filling method, terminal and storage medium
CN108986212B (en) Three-dimensional virtual terrain LOD model generation method based on crack elimination
Zhong et al. A vector-based backward projection method for robust detection of occlusions when generating true ortho photos
Hu et al. True ortho generation of urban area using high resolution aerial photos
CN112084854B (en) Obstacle detection method, obstacle detection device and robot
CN110189403B (en) Underwater target three-dimensional reconstruction method based on single-beam forward-looking sonar
CN109118565B (en) Electric power corridor three-dimensional model texture mapping method considering shielding of pole tower power line
Li et al. Depth image restoration method based on improved FMM algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant