CN117576333A - Method and device for determining visible region, electronic equipment and storage medium

Method and device for determining visible region, electronic equipment and storage medium

Info

Publication number
CN117576333A
CN117576333A (application CN202410050531.5A)
Authority
CN
China
Prior art keywords
point
determining
observed
elevation
closed loop
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410050531.5A
Other languages
Chinese (zh)
Inventor
徐常炜 (Xu Changwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kq Geo Technologies Co ltd
Original Assignee
Kq Geo Technologies Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kq Geo Technologies Co ltd
Priority to CN202410050531.5A
Publication of CN117576333A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 - Geographic models
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/54 - Browsing; Visualisation therefor
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The application provides a method, an apparatus, an electronic device and a storage medium for determining a visual field. The method may include: determining an observation point and an observed point in a three-dimensional terrain model; determining a circumferential elevation parameter between the observed point and the observation point, where the circumferential elevation parameter indicates the distance, azimuth and elevation angle between the observed point and the observation point, and the elevation angle refers to the angle between the line-of-sight segment connecting the two points and the horizontal plane of the three-dimensional terrain model; and determining the visual field based on the circumferential elevation parameter. Because the observed point can be any position other than the observation point, determining the circumferential elevation parameter between the observed point and the observation point makes it possible to determine the visual field of any area in the three-dimensional terrain model.

Description

Method and device for determining visible region, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of geographic data processing technologies, and in particular, to a method and apparatus for determining a visual field, an electronic device, and a storage medium.
Background
A three-dimensional terrain model records the elevation of each discrete pixel point in a geographic area. Through visual rendering, it intuitively reflects the relief of the terrain, and on this basis the visual field can be determined. For example, a certain position in the geographic area is taken as an observation point, and the visual field of the area is then determined with the observation point as the reference position. In the related art, the process of determining the visual field suffers from large errors; how to reduce these errors and improve the accuracy of visual field determination is an open problem in the industry.
Disclosure of Invention
The embodiment of the application provides a method and device for determining a visual field, electronic equipment and a storage medium.
In a first aspect, an embodiment of the present application provides a method for determining a visual field, where the method may include:
determining an observation point and an observed point in the three-dimensional terrain model;
determining a circumferential elevation parameter between the observed point and the observation point; the circumferential elevation parameter is used to indicate the distance, azimuth and elevation angle between the observed point and the observation point; the elevation angle between the observed point and the observation point refers to the angle between the line-of-sight segment formed between the observed point and the observation point and the horizontal plane of the three-dimensional terrain model;
determining the visual field based on the circumferential elevation parameter.
In a second aspect, embodiments of the present application provide a device for determining a visual field, where the device may include:
the observation information determining module is used for determining an observation point and an observed point in the three-dimensional terrain model;
the circumferential elevation parameter determining module is used for determining the circumferential elevation parameter between the observed point and the observation point; the circumferential elevation parameter is used to indicate the distance, azimuth and elevation angle between the observed point and the observation point; the elevation angle between the observed point and the observation point refers to the angle between the line-of-sight segment formed between the observed point and the observation point and the horizontal plane of the three-dimensional terrain model;
and the visual field determining module is used for determining the visual field according to the circumferential elevation angle parameter.
In a third aspect, embodiments of the present application provide an electronic device comprising a memory, a processor and a computer program stored on the memory, where the processor implements any of the methods above when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having a computer program stored therein, which when executed by a processor, implements a method as in any of the above.
Compared with the prior art, the application has the following advantages:
according to the embodiment of the application, the three-dimensional terrain model can be preloaded, so that the dynamic reading and the dynamic writing of the observation points and all observed points can be realized in the three-dimensional terrain model. Since the observed point can be any position other than the observed point, the determination of the visual field of any region in the three-dimensional map model can be realized by determining the circumferential elevation angle parameter between the observed point and the observed point.
The foregoing is merely an overview of the technical solutions of the present application. To make the technical means of the present application easier to understand, and to make the above and other objects, features and advantages clearer, the detailed description of the present application is given below.
Drawings
In the drawings, the same reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily drawn to scale. It is appreciated that these drawings depict only some embodiments according to the application and are not to be considered limiting of its scope.
FIG. 1 is a flow chart of a method of visual field determination provided herein;
FIG. 2 is a schematic top view of a closed loop curve according to one embodiment of the present application;
FIG. 3 is a schematic diagram of a visual field determination principle according to an embodiment of the present application;
FIG. 4 is a block diagram of the structure of a visual field determining apparatus according to an embodiment of the present application; and
FIG. 5 is a block diagram of an electronic device used to implement an embodiment of the present application.
Detailed Description
Hereinafter, only certain exemplary embodiments are briefly described. As will be recognized by those of skill in the pertinent art, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
In order to facilitate understanding of the technical solutions of the embodiments of the present application, the following describes related technologies of the embodiments of the present application. The following related technologies may be optionally combined with the technical solutions of the embodiments of the present application, which all belong to the protection scope of the embodiments of the present application.
An embodiment of the present application provides a method for determining a visual field. As shown in FIG. 1, which is a flowchart of the method according to an embodiment of the present application, the method may include:
step S101: determining an observation point and an observed point in the three-dimensional terrain model;
step S102: determining a circumferential elevation parameter between the observed point and the observation point; the circumferential elevation parameter is used to indicate the distance, azimuth and elevation angle between the observed point and the observation point; the elevation angle between the observed point and the observation point refers to the angle between the line-of-sight segment formed between the observed point and the observation point and the horizontal plane of the three-dimensional terrain model;
step S103: determining the visual field based on the circumferential elevation parameter.
The execution subject of the present application may be a computer, a server or the like that performs the visual field determination on the three-dimensional terrain model. In the related art, the visual field is determined with the R3 algorithm, the R2 algorithm, the XDraw algorithm, the PDERL algorithm, the XPDERL algorithm and the like, but each of these algorithms has notable drawbacks. The R3 algorithm, for example, produces very accurate results but is also the most expensive in computation time. The R2 algorithm sacrifices some accuracy compared with the R3 algorithm but improves computational efficiency. However, both the R2 and R3 algorithms need to pre-compute the pixel points through which each line of sight passes before the visual field along that line of sight can be determined, so the computation time is relatively long and the efficiency is low. The XDraw algorithm has the same computational complexity as the R2 algorithm but improved accuracy. The PDERL algorithm improves accuracy again relative to the XDraw algorithm and approaches the accuracy of the R3 algorithm; however, with the same number of threads and a similar degree of code optimization, its running time is about 4 times that of the XDraw algorithm, and it consumes considerable computing resources. The XPDERL algorithm slightly improves the running time compared with the PDERL algorithm, but fundamentally both algorithms must compute 4 reference directions separately, and in each direction a three-dimensional terrain reference system must be constructed for every layer of terrain line before the visible region can be determined layer by layer, so the consumption of computing resources remains large. In summary, the related art has clear shortcomings in visual field determination, and the present application therefore adopts a new scheme to overcome them.
In this application, it is first necessary to determine the observation point and the observed point in the three-dimensional terrain model. In one example, the three-dimensional terrain model may be a Digital Elevation Model (DEM), a type of raster data that digitally simulates the ground terrain with a finite set of terrain elevation data. The observation point may be given as longitude and latitude coordinates in the real world; through coordinate system conversion, these real-world coordinates can be converted into pixel coordinates in the three-dimensional terrain model. The coordinates of the observation point in the three-dimensional terrain model can be expressed as (R1, C1), where R1 is the row number and C1 is the column number of the pixel corresponding to the observation point. The observed point may refer to a specific point, or to all pixels in the three-dimensional terrain model other than the observation point. Similarly, the coordinates of the observed point in the three-dimensional terrain model can be obtained and expressed as (R2, C2). In this way, the three-dimensional terrain model can be preloaded, and the observation point and all observed points can be read and written dynamically within it; no warm-up computation is needed, resource overhead other than data computation is saved, and the utilization of hardware computing resources is improved.
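For illustration, the following is a minimal sketch of the coordinate conversion step, assuming a GDAL-style six-element affine geotransform; the function and variable names are assumptions introduced here, not taken from the patent.

```python
# Hypothetical sketch: converting a real-world coordinate (x, y) to DEM pixel
# coordinates (row R, column C). The geotransform layout follows the GDAL
# convention; all names here are illustrative assumptions.
def world_to_pixel(geotransform, x, y):
    x0, px_w, _, y0, _, px_h = geotransform  # px_h is typically negative
    col = int((x - x0) / px_w)
    row = int((y - y0) / px_h)
    return row, col

# Example: a DEM whose top-left corner is (116.0, 40.0) with 0.001-degree cells.
gt = (116.0, 0.001, 0.0, 40.0, 0.0, -0.001)
print(world_to_pixel(gt, 116.25, 39.75))  # -> (250, 250), i.e. (R1, C1)
```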
After the observed point and the observation point are determined, the circumferential elevation parameter between them can be determined. The circumferential elevation parameter may be used to indicate the distance, azimuth, elevation angle and the like between the observed point and the observation point. Illustratively, the observed points may be a plurality of points on a terrain line corresponding to one mountain in the three-dimensional terrain model, and each observed point differs from the observation point in at least one of distance, azimuth and elevation angle. The elevation angle is the angle between the line-of-sight segment connecting the observed point and the observation point and the horizontal plane of the three-dimensional terrain model; it may be determined based on the difference in elevation between the observation point and the observed point. That is, the present application uses circumferential elevation parameters to express the relevant information in the three-dimensional terrain model. Because a circumferential elevation parameter contains information in several dimensions, such as distance, azimuth and elevation angle, information can conveniently be queried or computed along any of these dimensions. In addition, the data layout of the circumferential elevation parameters can be optimized according to the cache characteristics of the computer or server performing the visual field determination, which improves data read/write efficiency and shortens the time consumed by the whole visual field determination process.
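As a minimal sketch, the three components of a circumferential elevation parameter (distance, azimuth, elevation angle) could be computed between two DEM cells as follows; the function name, the grid-north azimuth convention and the 30 m cell size are assumptions for illustration.

```python
import math

def circumferential_elevation(obs_rc, obs_z, tgt_rc, tgt_z, cell_size=30.0):
    """Distance (m), azimuth (deg) and elevation angle (deg) from the
    observation point to the observed point on a DEM grid."""
    dr = tgt_rc[0] - obs_rc[0]               # row offset (grid south positive)
    dc = tgt_rc[1] - obs_rc[1]               # column offset (grid east positive)
    ground_dist = math.hypot(dr, dc) * cell_size
    azimuth = math.degrees(math.atan2(dc, -dr)) % 360.0  # 0 deg = grid north
    elevation = math.degrees(math.atan2(tgt_z - obs_z, ground_dist))
    return ground_dist, azimuth, elevation

# An observed point 100 cells due east of the observer and 40 m higher:
print(circumferential_elevation((500, 500), 820.0, (500, 600), 860.0))
# -> (3000.0, 90.0, ~0.76): 3 km away, due east, slightly above the horizontal
```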
On this basis, the present application constructs a circumferential elevation parameter curve for each observed point. It can then be determined whether other observed points lie between the observed point and the observation point, that is, whether other observed points lie on their line-of-sight segment. If so, the circumferential elevation parameter curve of the observed point can be compared with those of the other observed points to decide whether the observed point is occluded by them. If it is occluded, the observed point is invisible; conversely, if the comparison shows that the observed point is not occluded, it is visible to the observation point. All visible observed points together form a set that corresponds to the visible region, and the complement of the visible region is the invisible region. The method requires few computing resources, can be ported to different computing platforms without being constrained by particular computing interfaces, and completes the visual field determination simply by reading the three-dimensional terrain model, thereby decoupling it from the hardware architecture. Based on the determined visual field, different applications can be completed, such as visibility analysis, total-visibility analysis, maximum observation range analysis and concealed maneuver analysis.
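The occlusion comparison described above amounts to a standard line-of-sight test: an observed point is visible only if no intermediate point on the sight line subtends a larger elevation angle at the observer. The sketch below is a generic illustration under that assumption, not the patent's literal procedure.

```python
def is_visible(angles_along_ray):
    """angles_along_ray: elevation angles (deg) of the points on the sight
    line, ordered by increasing distance; the last entry is the observed
    point itself."""
    *between, target = angles_along_ray
    return all(a < target for a in between)

print(is_visible([0.5, 1.2, 3.1, 2.0]))  # False: the 3.1-deg point occludes
print(is_visible([0.5, 1.2, 1.8, 2.0]))  # True: nothing blocks the sight line
```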
In the present application, the observed point can be any position other than the observation point, so the visual field of any area in the three-dimensional terrain model can be determined by determining the circumferential elevation parameter between the observed point and the observation point.
In one embodiment, the determining of the circumferential elevation parameter between the observed point and the observation point involved in step S102 may include:
step S1021: acquiring terrain lines in the three-dimensional terrain model;
step S1022: processing the terrain lines according to the position of the observation point to obtain a plurality of closed loop curves; the plurality of closed loop curves are formed with the position of the observation point as the reference position, and the area enclosed by the closed loop curves increases layer by layer;
step S1023: a plurality of closed loop curves are utilized to determine a circumferential elevation parameter between the observed point and the observation point.
In the present embodiment, a terrain line may be a line segment corresponding to an element such as a ridge, a river or a road, or a line segment composed of pixels sharing the same row number or column number. Taking the latter case as an example, the present application treats each such row or column of pixels as a terrain line, so that once the observation point is determined, the terrain lines can be processed based on its position to obtain a plurality of closed loop curves. Illustratively, if the coordinates of the observation point in the three-dimensional terrain model are (R1, C1), then the pixels of row R1+1, row R1+2, row R1-1, row R1-2, column C1+1, column C1+2, column C1-1, column C1-2 and so on can be taken as terrain lines in the three-dimensional terrain model. Only a limited number of rows and columns are listed above; in practice the terrain lines are determined from the whole three-dimensional terrain model.
Referring to FIG. 2, FIG. 2 shows a plurality of closed loop curves in a top view, where the black dot enclosed by the curves indicates the observation point. The closed loop curves shown in FIG. 2 are standard rectangles, but in an actual three-dimensional terrain model each closed loop curve is essentially a curve rather than a standard rectangle, because the actual geographic location corresponding to each pixel point may be a mountain, a depression, flat land or other relief.
The terrain lines are processed with the observation point as the reference position to obtain a plurality of closed loop curves. Each closed loop curve encloses the observation point within a closed ring, and with the observation point as the reference position, the areas of the closed loop curves increase layer by layer.
To improve the accuracy of visual field confirmation, the spacing between adjacent closed loop curves may be, for example, 1 pixel unit. If the coordinates of the observation point in the three-dimensional terrain model are (R1, C1), then the layer-1 closed loop curve consists of the 8 pixels (R1+1, C1), (R1+1, C1+1), (R1, C1+1), (R1-1, C1+1), (R1-1, C1), (R1-1, C1-1), (R1, C1-1), (R1+1, C1-1). The layer-1 closed loop curve is then expanded outward to obtain the layer-2 closed loop curve consisting of 16 pixels; that is, the spacing between the layer-2 and layer-1 closed loop curves is 1 pixel unit, and so on, until a multi-layer set of closed loop curves covering the whole three-dimensional terrain model is obtained. Alternatively, to speed up visual field confirmation or improve its efficiency, the partitioning of the closed loop curves may be non-equidistant. For example, the layer-1 closed loop curve still consists of the 8 pixels adjacent to the observation point, while the layer-2 curve may be 2 pixel units away from the layer-1 curve, the layer-3 curve 3 pixel units away from the layer-2 curve, and the layer-4 curve 4 pixel units away from the layer-3 curve; that is, the spacing between successive layers increases. In this way, fewer closed loop curves suffice to cover the three-dimensional terrain model. A sketch of generating such rings is given below.
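A minimal sketch of the equidistant case: the layer-k closed loop curve is the set of pixels at Chebyshev distance exactly k from the observation point, which matches the 8 pixels of layer 1 and the 16 pixels of layer 2 described above. The function name is an illustrative assumption.

```python
def ring_pixels(r0, c0, k):
    """All pixels whose Chebyshev distance to the observation point (r0, c0)
    is exactly k, i.e. the layer-k closed loop curve with 1-pixel spacing."""
    ring = []
    for r in range(r0 - k, r0 + k + 1):
        for c in range(c0 - k, c0 + k + 1):
            if max(abs(r - r0), abs(c - c0)) == k:
                ring.append((r, c))
    return ring

print(len(ring_pixels(100, 100, 1)))  # 8 pixels on layer 1
print(len(ring_pixels(100, 100, 2)))  # 16 pixels on layer 2
```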
In addition, the width of the closed loop curve may be set. For example, the width of each of the closed loop curves shown in FIG. 2 is the same. For example, the widths are each 1 pixel unit. Alternatively, the width of each closed loop curve may be different, for example, the width of the 1 st layer closed loop curve is 1 pixel unit, the width of the 2 nd layer closed loop curve is 2 pixel units, the width of the 3 rd layer closed loop curve is 3 pixel units, and so on. Specific examples are not described in detail.
Each layer of closed loop curve may be assigned a serial number, from which the distance between the observed point and the observation point can be determined. Further, since the closed loop curves are ring-shaped, all of them can be divided into regions at once by taking the observation point as the reference position and a specified angle as the interval, so that the azimuth can be determined. Finally, the elevation angle of each object can be determined, taking either a region or a pixel point as the object, thereby completing the determination of the circumferential elevation parameter.
Through this process, the closed loop curves serve as an aid: because they connect seamlessly and flexibly, the visual field determination is not constrained by fixed azimuth partitions and can be steplessly subdivided over any angular range.
In one embodiment, determining the circumferential elevation parameter between the observed point and the observation point using the plurality of closed loop curves in step S1023 may include:
step S10231: determining the distance between the observed point and the observation point by using the serial number of the target closed loop curve where the observed point is located.
As mentioned above, each closed loop curve may be given a serial number. If the closed loop curves have the same spacing and the same width, the distance between the observed point and the observation point can be determined directly from the serial number of the target closed loop curve where the observed point is located.
If the spacings or widths of the closed loop curves differ and the observed point falls on one of the curves, the distance between the observed point and the observation point can be determined from the serial number of the target closed loop curve where the observed point is located, together with the spacing and width of each closed loop curve. If the spacings or widths differ and the observed point does not fall on any curve, the closed loop curve closest to the observed point can be taken as the target closed loop curve and the observed point translated onto it; the distance between the observed point and the observation point is then determined from the serial number of the target closed loop curve, the spacings between the curves and the width of each curve, as sketched below.
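A minimal sketch of recovering the distance from the serial number of the target closed loop curve: with equal 1-pixel spacing the distance is simply proportional to the serial number, while with the incremental spacing from the example above (1, 2, 3, ... pixel units between successive layers) it is the cumulative sum of the gaps. Both spacing schemes and the names are assumptions for illustration.

```python
def ring_distance(k, cell_size=30.0, incremental=False):
    """Ground distance implied by the serial number k of the target ring."""
    if not incremental:
        return k * cell_size              # layers spaced 1 pixel apart
    pixels = sum(range(1, k + 1))         # layer i lies i pixels past layer i-1
    return pixels * cell_size

print(ring_distance(4))                   # 120.0 m (equidistant layers)
print(ring_distance(4, incremental=True)) # 300.0 m (1+2+3+4 = 10 pixels)
```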
In one embodiment, determining the circumferential elevation parameter between the observed point and the observation point using the plurality of closed loop curves in step S1023 may include:
step S10232: dividing the plurality of closed loop curves by azimuth, taking the observation point as the reference position and a specified angle as the interval;
step S10233: and determining the azimuth angle between the observed point and the observation point according to the azimuth angle dividing result of the target closed loop curve where the observed point is located.
In the present embodiment, the observation point is taken as the reference position. Illustratively, the plurality of closed loop curves may be divided by azimuth at a specified angle of 1°; that is, with 1° as the specified angle, the closed loop curves are divided into 360 regions. The azimuth can then be determined from the region of the target closed loop curve in which the observed point is located. For example, if the observed point lies in the region from 10° to 11°, the azimuth may be determined to be 11°. A sketch of this partitioning follows.
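A minimal sketch of the azimuth partitioning, assuming a 1° specified angle so the rings are cut into 360 regions, with an observed point's azimuth snapped to the upper boundary of its region (10.4° falls in the 10° to 11° region and is reported as 11°, matching the example above). Names are illustrative assumptions.

```python
def azimuth_region(raw_azimuth_deg, step_deg=1.0):
    """Return the region index and the snapped azimuth for an observed point."""
    regions = int(360 / step_deg)
    index = int(raw_azimuth_deg // step_deg) % regions
    snapped = (index + 1) * step_deg
    return index, snapped

print(azimuth_region(10.4))  # (10, 11.0): the 10-11 degree region
```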
If the spacings or widths of the closed loop curves differ and the observed point falls exactly on one of the curves, the azimuth between the observed point and the observation point can be determined from the region of the target closed loop curve where the observed point is located. If the observed point does not fall on any curve, the closed loop curve closest to it is taken as the target closed loop curve, the observed point is moved onto that curve, and the azimuth between the observed point and the observation point is determined from the region of the target closed loop curve.
In one embodiment, determining the circumferential elevation parameter between the observed point and the observation point using the plurality of closed loop curves in step S1023 may include:
step S10234: for the target closed loop curve where the observed point is located, acquiring the elevation angle of each specified pixel point on the target closed loop curve; the elevation angle includes at least one of a lateral elevation angle and a longitudinal elevation angle;
step S10235: determining the elevation angle between the observed point and the observation point using the elevation angles of the specified pixel points.
The specified pixel points on the target closed loop curve may be the pixels sampled starting from 0° at equal angular intervals, for example every 1° or every 5°. The elevation angle is then determined for each such pixel point. The elevation angle may include at least one of a lateral elevation angle and a longitudinal elevation angle; preferably, both are included.
Determining the elevation angle between the observed point and the observation point from the elevation angles of the specified pixel points may mean: selecting the specified pixel point nearest to the observed point according to the distance between the observed point and each specified pixel point, and taking the elevation angle of that nearest specified pixel point as the elevation angle between the observed point and the observation point. Alternatively, it may mean: selecting a plurality of specified pixel points whose distance to the observed point is within a set distance threshold, combining their elevation angles through a mean or weighted mean calculation, and taking the result as the elevation angle between the observed point and the observation point.
Where both the lateral and the longitudinal elevation angles are present, two calculations are required: one computes the lateral elevation angle between the observed point and the observation point, and the other computes the longitudinal elevation angle between them. For example, after a plurality of specified pixel points are selected, a weighted mean is taken over their lateral elevation angles and the result is used as the lateral elevation angle between the observed point and the observation point; likewise, a weighted mean is taken over their longitudinal elevation angles and the result is used as the longitudinal elevation angle between the observed point and the observation point, as sketched below.
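A minimal sketch of the weighted-mean variant, computing the lateral and longitudinal elevation angles separately as described. The inverse-distance weighting is an assumption chosen for illustration; the text above only specifies a mean or weighted mean.

```python
def weighted_elevation(neighbors):
    """neighbors: (distance, lateral_deg, longitudinal_deg) for each specified
    pixel within the distance threshold; closer pixels weigh more."""
    weights = [1.0 / max(d, 1e-9) for d, _, _ in neighbors]
    total = sum(weights)
    lateral = sum(w * lat for w, (_, lat, _) in zip(weights, neighbors)) / total
    longitudinal = sum(w * lon for w, (_, _, lon) in zip(weights, neighbors)) / total
    return lateral, longitudinal

# Two specified pixels; the closer one (distance 1.0) dominates both angles.
print(weighted_elevation([(1.0, 2.0, 3.0), (3.0, 4.0, 5.0)]))  # (2.5, 3.5)
```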
In one embodiment, determining the elevation angle between the observed point and the observation point using the elevation angles of the specified pixel points in step S10235 may include:
step S102351: determining, by interpolation, the elevation angle of each pixel point on the target closed loop curve according to the elevation angles of the specified pixel points;
step S102352: and determining the elevation angle between the observed point and the observation point according to the pixel point corresponding to the observed point.
After the elevation angles of the specified pixel points are determined, a closed elevation-angle curve representing the elevation angle of each pixel point can be obtained. If both the lateral and the longitudinal elevation angles are present, the number of layers of closed elevation-angle curves equals the number of closed loop curves, but each layer further comprises a closed lateral elevation-angle curve and a closed longitudinal elevation-angle curve, which represent the lateral and longitudinal elevation angles between the corresponding pixel points and the observation point.
Since the closed elevation-angle curve is formed from the elevation angle of each individual pixel point, it may be uneven, so a smooth curve can be constructed with an interpolation algorithm. In this way, the corresponding closed lateral and longitudinal elevation-angle curves can be determined for the pixel points on every layer of closed loop curve, achieving full coverage of all pixel points in the three-dimensional terrain model and providing data support for the visual field determination.
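A minimal sketch of smoothing a closed elevation-angle curve by interpolation: angles sampled at the specified pixels are interpolated over azimuth for every pixel on the ring. Periodic linear interpolation keeps the curve closed; the use of numpy and the 5° sampling interval are illustrative assumptions.

```python
import numpy as np

# Elevation angles sampled at specified pixels every 5 degrees (assumed):
sample_az = np.arange(0.0, 360.0, 5.0)
sample_elev = 3.0 * np.sin(np.radians(sample_az))   # toy elevation-angle data

# Interpolate at every pixel's azimuth; period=360 makes the curve wrap around.
pixel_az = np.linspace(0.0, 360.0, 1440, endpoint=False)
pixel_elev = np.interp(pixel_az, sample_az, sample_elev, period=360.0)
print(pixel_elev[:3])  # a smooth, closed elevation-angle curve
```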
In one embodiment, the determining of the visual field according to the circumferential elevation parameter in step S103 may include:
step S1031: determining the position of the observed point on the target closed loop curve according to the circumferential elevation parameter between the observed point and the observation point;
step S1032: detecting the occlusion condition of the observed point at its position on the target closed loop curve, and determining the visual field according to the detection result; the occlusion condition is determined based on the elevation angles of the target closed loop curve and the other closed loop curves at the same azimuth.
After the observation point and the observed point are determined in the three-dimensional terrain model, multi-layer closed loop curves can be constructed around the observation point. The number of layers of the target closed loop curve on which an observed point is located, and hence the distance between the observation point and that observed point, can then be determined first. Referring to FIG. 3, for example, if an observed point is on the layer-m closed loop curve, where m is a positive integer, the distance between the observed point and the observation point can be determined from the value m.
Further, the azimuth region in which the observed point is located is checked. In the example shown in FIG. 3, the observed point lies in the azimuth region corresponding to 44° to 45°, so the azimuth between the observed point and the observation point can be determined to be 45°.
Finally, the occlusion condition at the 45° azimuth can be compared between the layer-m closed loop curve and the closed loop curves of the other layers inside it. FIG. 3 exemplarily shows a layer-q and a layer-n closed loop curve inside the layer-m curve, where n and m are positive integers and q < n < m. The occlusion condition may be obtained by comparing the lateral and longitudinal elevation angles of the regions corresponding to the 45° azimuth from the layer-1 closed loop curve up to the layer-m closed loop curve. It will be appreciated that during the comparison the lateral elevation angle is one comparison dimension and the longitudinal elevation angle is the other. If the layer-m closed loop curve has the maximum value in both the lateral and the longitudinal elevation angles of the region corresponding to the 45° azimuth, the observed point can be determined to belong to the visible region; conversely, if either its lateral or its longitudinal elevation angle in that region is not the maximum, the observed point can be determined to belong to the invisible region.
Alternatively, once the multi-layer closed elevation-angle curves are determined, it can be checked for any layer whether it is occluded at a given position by any inner layer; if it is occluded, it is invisible there. In this way, the visibility of every observed point on any one layer of closed elevation-angle curve can be determined at once. FIG. 3 takes only one observed point as an example, but in an actual scene there may be multiple observed points at the same time, or a region composed of multiple observed points; that is, it may be determined whether a certain observed point lies in the visual field, or whether a plurality of observed points together constitute the visual field. In addition, because the closed loop curves are established layer by layer, when the observed points on the layer-m curve are computed, the circumferential elevation parameters of all pixel points on the inner layers up to layer m-1 have already been computed completely. This computation order avoids repeated reads and repeated calculations and improves computational efficiency at the algorithmic level, as the sketch below illustrates.
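Putting the pieces together, the following is a compact sketch of the layer-by-layer sweep: rings are processed from the innermost outwards while a per-region running maximum of the elevation angle is kept, so each pixel is read and computed once. It reuses the assumed ring_pixels helper from the earlier sketch and illustrates the computation order, not the patent's literal implementation.

```python
import math

def ring_pixels(r0, c0, k):
    """Layer-k closed loop curve (repeated from the earlier sketch)."""
    return [(r, c)
            for r in range(r0 - k, r0 + k + 1)
            for c in range(c0 - k, c0 + k + 1)
            if max(abs(r - r0), abs(c - c0)) == k]

def viewshed(dem, obs, max_layer, cell_size=30.0, regions=360):
    r0, c0 = obs
    obs_z = dem[r0][c0]
    max_angle = [-math.inf] * regions       # running max elevation per region
    visible = set()
    for k in range(1, max_layer + 1):       # inner layers are finished first
        for r, c in ring_pixels(r0, c0, k):
            dist = math.hypot(r - r0, c - c0) * cell_size
            az = math.degrees(math.atan2(c - c0, -(r - r0))) % 360.0
            angle = math.degrees(math.atan2(dem[r][c] - obs_z, dist))
            s = int(az) % regions
            if angle >= max_angle[s]:       # no inner ring blocks this region
                visible.add((r, c))
                max_angle[s] = angle
    return visible

# A 7x7 flat DEM with a single 100 m bump two cells east of the observer:
dem = [[0.0] * 7 for _ in range(7)]
dem[3][5] = 100.0
vis = viewshed(dem, (3, 3), 3)
print((3, 5) in vis, (3, 6) in vis)  # True False: the bump hides what lies behind
```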
Corresponding to the application scenarios and methods provided by the embodiments of the present application, the embodiments of the present application also provide a device for determining the visual field. FIG. 4 is a block diagram of a visual field determining apparatus according to an embodiment of the present application, and the apparatus may include:
an observation information determining module 401 for determining an observation point and an observed point in the three-dimensional terrain model;
a circumferential elevation parameter determining module 402 for determining the circumferential elevation parameter between the observed point and the observation point; the circumferential elevation parameter is used to indicate the distance, azimuth and elevation angle between the observed point and the observation point; the elevation angle between the observed point and the observation point refers to the angle between the line-of-sight segment formed between the observed point and the observation point and the horizontal plane of the three-dimensional terrain model;
the visual field determining module 403 is configured to determine the visual field according to the circumferential elevation parameter.
In one embodiment, the circumferential elevation parameter determining module 402 may specifically include:
the topographic line obtaining submodule is used for obtaining a topographic line in the three-dimensional topographic model;
the closed loop curve determining submodule is used for processing the topographic line according to the position of the observation point to obtain a plurality of closed loop curves; the plurality of closed loop curves are formed by taking the positions of the observation points as reference positions, and the area wrapped by the closed loop curves is increased one by one;
a circumferential elevation parameter determination execution sub-module for determining a circumferential elevation parameter between the observed point and the observation point using the plurality of closed loop curves.
In one embodiment, the circumferential elevation parameter determination execution submodule may be specifically configured to:
determine the distance between the observed point and the observation point by using the serial number of the target closed loop curve where the observed point is located.
In one embodiment, the circumferential elevation parameter determination execution submodule may include:
the azimuth dividing unit is used for dividing azimuth of the closed circular curves by taking the observation point as a reference position and the designated angle as an interval;
and the azimuth angle determining unit is used for determining the azimuth angle between the observed point and the observation point according to the azimuth angle division result of the target closed loop curve where the observed point is located.
In one embodiment, the circumferential elevation parameter determination execution submodule may include:
the elevation angle degree acquisition unit is used for acquiring the elevation angle degree of the appointed pixel point on the target closed loop curve for the target closed loop curve where the observed point is located; the elevation angle degrees include at least one of a lateral elevation angle degree and a longitudinal elevation angle degree;
and an elevation angle determining unit for determining the elevation angle between the observed point and the observation point using the elevation angles of the specified pixel points.
In one embodiment, the elevation angle determining unit may include:
the difference value calculating subunit is used for determining the elevation angle degree of each pixel point on the target closed loop curve by interpolation according to the elevation angle degree of the appointed pixel point;
and the elevation angle determination executing subunit, used for determining the elevation angle between the observed point and the observation point according to the pixel point corresponding to the observed point.
In one embodiment, the visual field determination module 403 may include:
the position determining submodule is used for determining the position of the observed point on the target closed circular curve according to the circumferential elevation angle parameter between the observed point and the observed point;
the visual field determination execution sub-module is used for detecting the shielding condition of the observed point at the position of the target closed circular curve and determining the visual field according to the detection result; the shadowing is determined based on the elevation angle degrees of the target closed loop curve and the other closed loop curves at the same azimuth angle.
For the functions of each module in each device of the embodiments of the present application, reference may be made to the corresponding descriptions in the methods above; they have the corresponding beneficial effects and are not repeated here. It should be noted that the user information (including but not limited to user equipment information and personal information) and the data (including but not limited to data for analysis, stored data and presented data) involved in the present application are information and data authorized by the user or fully authorized by all parties; the collection, use and processing of the related data must comply with the relevant laws, regulations and standards of the relevant countries and regions, and corresponding operation entries are provided for the user to choose authorization or refusal.
Fig. 5 is a block diagram of an electronic device used to implement an embodiment of the present application. As shown in fig. 5, the electronic device includes: memory 510 and processor 520, memory 510 stores a computer program executable on processor 520. The processor 520, when executing the computer program, implements the methods of the above-described embodiments. The number of memories 510 and processors 520 may be one or more.
The electronic device further includes:
and the communication interface 530 is used for communicating with external equipment and carrying out data interaction transmission.
If the memory 510, the processor 520 and the communication interface 530 are implemented independently, they may be connected to one another and communicate through a bus. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. Buses may be classified into address buses, data buses, control buses, etc. For ease of illustration, only one thick line is shown in FIG. 5, but this does not mean there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the memory 510, the processor 520, and the communication interface 530 are integrated on a chip, the memory 510, the processor 520, and the communication interface 530 may communicate with each other through internal interfaces.
The present embodiments provide a computer-readable storage medium storing a computer program that, when executed by a processor, implements the methods provided in the embodiments of the present application.
The embodiments of the present application also provide a chip comprising a processor, configured to call from a memory and run the instructions stored in the memory, so that a communication device equipped with the chip executes the method provided in the embodiments of the present application.
The embodiment of the application also provides a chip, which comprises: the input interface, the output interface, the processor and the memory are connected through an internal connection path, the processor is used for executing codes in the memory, and when the codes are executed, the processor is used for executing the method provided by the application embodiment.
It should be appreciated that the processor may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or any conventional processor or the like. It is noted that the processor may be a processor supporting an advanced reduced instruction set machine (Advanced RISC Machines, ARM) architecture.
Further, the memory may include a read-only memory and a random access memory; it may be a volatile memory or a nonvolatile memory, or include both. The nonvolatile memory may include Read-Only Memory (ROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM) or flash memory. The volatile memory may include Random Access Memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate Synchronous DRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM) and Direct Rambus RAM (DR RAM).
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions in accordance with the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Any process or method described in the flowcharts or otherwise described herein may be understood as representing modules, segments or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending upon the functionality involved.
Logic and/or steps described in the flowcharts or otherwise described herein, e.g., may be considered a ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. All or part of the steps of the methods of the embodiments described above may be performed by a program that, when executed, comprises one or a combination of the steps of the method embodiments, instructs the associated hardware to perform the method.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules described above, if implemented in the form of software functional modules and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The foregoing is merely exemplary embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of various changes or substitutions within the technical scope of the present application, which should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of determining a visual field, comprising:
determining an observation point and an observed point in the three-dimensional terrain model;
determining a circumferential elevation parameter between the observed point and the observation point; the circumferential elevation parameter is used to indicate a distance, an azimuth and an elevation angle between the observed point and the observation point; the elevation angle between the observed point and the observation point refers to the angle between a line-of-sight segment formed between the observed point and the observation point and the horizontal plane of the three-dimensional terrain model;
and determining the visual field according to the circumferential elevation angle parameter.
2. The method of claim 1, wherein said determining a circumferential elevation parameter between said observed point and said observation point comprises:
acquiring terrain lines in the three-dimensional terrain model;
processing the terrain lines according to the position of the observation point to obtain a plurality of closed loop curves; the plurality of closed loop curves are formed with the position of the observation point as a reference position, and the area enclosed by the closed loop curves increases layer by layer;
determining a circumferential elevation parameter between the observed point and the observation point using a plurality of the closed loop curves.
3. The method of claim 2, wherein said determining a circumferential elevation parameter between said observed point and said observation point using a plurality of said closed loop curves comprises:
and determining the distance between the observed point and the observation point by using the serial number of the target closed loop curve where the observed point is located.
4. The method of claim 2, wherein said determining a circumferential elevation parameter between said observed point and said observation point using a plurality of said closed loop curves comprises:
dividing the plurality of closed loop curves by azimuth, taking the observation point as a reference position and a specified angle as an interval;
and determining the azimuth angle between the observed point and the observation point according to the azimuth angle dividing result of the target closed loop curve where the observed point is located.
5. The method of claim 2, wherein said determining a circumferential elevation parameter between said observed point and said observation point using a plurality of said closed loop curves comprises:
for a target closed loop curve where the observed point is located, acquiring the elevation angle of each specified pixel point on the target closed loop curve; the elevation angle includes at least one of a lateral elevation angle and a longitudinal elevation angle;
and determining the elevation angle between the observed point and the observation point using the elevation angles of the specified pixel points.
6. The method of claim 5, wherein determining the elevation angle between the observed point and the observation point using the elevation angles of the specified pixel points comprises:
determining, by interpolation, the elevation angle of each pixel point on the target closed loop curve according to the elevation angles of the specified pixel points;
and determining the elevation angle between the observed point and the observation point according to the pixel point corresponding to the observed point.
7. The method of claim 2, wherein said determining said visual field from said circumferential elevation parameter comprises:
determining the position of the observed point on a target closed loop curve according to the circumferential elevation parameter between the observed point and the observation point;
detecting the occlusion condition of the observed point at its position on the target closed loop curve, and determining the visual field according to the detection result; the occlusion condition is determined based on the elevation angles of the target closed loop curve and the other closed loop curves at the same azimuth.
8. A visual field determining apparatus, comprising:
the observation information determining module is used for determining an observation point and an observed point in the three-dimensional terrain model;
a circumferential elevation parameter determining module for determining the circumferential elevation parameters between the observed point and the observation point; the circumferential elevation parameters are used to indicate the distance, azimuth and elevation angle degrees between the observed point and the observation point; the elevation angle degrees between the observed point and the observation point refer to the elevation angle degrees between the line-of-sight segment formed between the observed point and the observation point and the ground in the three-dimensional terrain model;
and the visual field determining module is used for determining the visual field according to the circumferential elevation parameter.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory, wherein the processor implements the method of any one of claims 1-7 when executing the computer program.
10. A computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the method of any of claims 1-7.
CN202410050531.5A 2024-01-15 2024-01-15 Method and device for determining visible region, electronic equipment and storage medium Pending CN117576333A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410050531.5A CN117576333A (en) 2024-01-15 2024-01-15 Method and device for determining visible region, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117576333A 2024-02-20

Family

ID=89864565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410050531.5A Pending CN117576333A (en) 2024-01-15 2024-01-15 Method and device for determining visible region, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117576333A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB677913A (en) * 1949-10-11 1952-08-27 Marconi Wireless Telegraph Co Improvements in or relating to radar systems
WO2010001402A1 (en) * 2008-07-03 2010-01-07 Elta Systems Ltd. Sensing/emitting apparatus, system and method
WO2010101540A1 (en) * 2009-03-02 2010-09-10 Panchenko Borys Evgenijovich Method for the fully modifiable framework distribution of data in a data warehouse taking account of the preliminary etymological separation of said data
CN105389375A (en) * 2015-11-18 2016-03-09 福建师范大学 Viewshed based image index setting method and system, and retrieving method
CN105869211A (en) * 2016-06-16 2016-08-17 成都中科合迅科技有限公司 Analytical method and device for visible range
CN110362923A (en) * 2019-07-16 2019-10-22 成都酷博空间科技有限公司 3 D monitoring coverage rate algorithm and monitoring installation method and monitoring system based on three-dimensional visible domain analysis
CN110704914A (en) * 2019-09-20 2020-01-17 同济大学建筑设计研究院(集团)有限公司 Sight line analysis method and device, computer equipment and storage medium
CN114937131A (en) * 2022-06-23 2022-08-23 南京师范大学 Single-viewpoint topographic visual field space topological feature extraction and representation method
CN115794414A (en) * 2023-01-28 2023-03-14 中国人民解放军国防科技大学 Satellite-to-ground full-view analysis method, device and equipment based on parallel computing
CN117095150A (en) * 2023-09-13 2023-11-21 北京五一视界数字孪生科技股份有限公司 Visual field degree of freedom analysis method and device

Similar Documents

Publication Publication Date Title
US11226431B2 (en) Method and device for filling invalid regions of terrain elevation model data
WO2013106856A1 (en) Place heat geometries
US7991240B2 (en) Methods, systems and apparatuses for modeling optical images
CN111090716A (en) Vector tile data processing method, device, equipment and storage medium
CN113657252B (en) Efficient SAR image ship target detection method based on encoding and decoding device
CN114677494B (en) Method, device and equipment for calculating radar detection capability based on subdivision grids
CN116030180B (en) Irradiance cache illumination calculation method and device, storage medium and computer equipment
CN116910290B (en) Method, device, equipment and medium for loading slice-free remote sensing image
CN110458954B (en) Contour line generation method, device and equipment
CN111899323A (en) Three-dimensional earth drawing method and device
CN111652931A (en) Geographic positioning method, device, equipment and computer readable storage medium
CN117576333A (en) Method and device for determining visible region, electronic equipment and storage medium
CN103809937B (en) A kind of intervisibility method for parallel processing based on GPU
CN112465886A (en) Model generation method, device, equipment and readable storage medium
WO2023131236A1 (en) Image processing method and apparatus, and electronic device
CN114022518B (en) Method, device, equipment and medium for acquiring optical flow information of image
US9275481B2 (en) Viewport-based contrast adjustment for map features
CN115209349A (en) Geofence detection method and device and electronic equipment
CN114820396A (en) Image processing method, device, equipment and storage medium
CN109241207A (en) A kind of method and device showing data on map
CN114663615A (en) Electronic map display method and device and electronic equipment
CN112465932A (en) Image filling method, device, equipment and storage medium
CN112667761A (en) Geographic information data generation method and device, map presentation method and device, storage medium and computing equipment
CN112348021A (en) Text detection method, device, equipment and storage medium
TWI786874B (en) Method of the digital grid model and system thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination