CN116559123A - Method and device for measuring target visibility - Google Patents

Method and device for measuring target visibility


Publication number
CN116559123A
Authority
CN
China
Prior art keywords
visibility
target area
transmittance
relation
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310714120.7A
Other languages
Chinese (zh)
Inventor
邱赛 (Qiu Sai)
张泽 (Zhang Ze)
张绍 (Zhang Shao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu Aerospace Information Research Institute
Aerospace Information Research Institute of CAS
Original Assignee
Qilu Aerospace Information Research Institute
Aerospace Information Research Institute of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu Aerospace Information Research Institute and Aerospace Information Research Institute of CAS
Priority to CN202310714120.7A
Publication of CN116559123A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/01 Arrangements or apparatus for facilitating the optical investigation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N 21/59 Transmissivity
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/01 Arrangements or apparatus for facilitating the optical investigation
    • G01N 2021/0106 General arrangement of respective parts
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N 2021/1765 Method using an image detector and processing of image signal
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N 2021/1765 Method using an image detector and processing of image signal
    • G01N 2021/177 Detector of the video camera type
    • G01N 2021/1772 Array detector
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention provides a method and a device for measuring target visibility, relating to the field of target measurement. The method comprises the following steps: segmenting a target image by material to obtain a plurality of segmented regions of different materials; acquiring a first relation between transmittance and visibility for segmented regions of the same material at different distances, and a second relation between transmittance and visibility for segmented regions of different materials at the same distance; generating a transmittance-visibility relation library based on the first relation and the second relation; acquiring attribute information of a target area, the attribute information comprising the transmittance corresponding to the material of the target area; acquiring distance information of the target area, the distance information being the distance parameter from the target area to the observation point; acquiring the transmittance-visibility relation corresponding to the target area from the relation library based on the attribute information and the distance information; and calculating the visibility value of the target area from that transmittance-visibility relation.

Description

Method and device for measuring target visibility
Technical Field
The invention relates to the field of target measurement, in particular to a method and a device for measuring target visibility.
Background
At present, visibility plays an important role in fields such as transportation and aircraft take-off and landing. The common visibility detection device is the infrared scattering visibility detector, but this device is costly, can only measure the visibility near the instrument, and cannot meet the need to measure visibility over a large area.
By comparison, measuring visibility with a camera has the advantages that the camera's large field of view allows visibility values to be measured over a large area, and cameras are inexpensive, making the approach suitable for wide deployment. Camera-based visibility algorithms mainly comprise the contrast method, the brightness method, and neural network methods. The contrast and brightness methods are easily affected by various factors and give unstable measurement results, while with neural network methods different observation targets have a great influence on the visibility measurement result. Traditional visibility measurement methods therefore have low stability and are easily affected by the surrounding environment.
Disclosure of Invention
First, the technical problem to be solved
In view of the above problems, the present invention provides a method and an apparatus for measuring target visibility. The visibility measurement parameters of observed targets of different materials and at different distances are analysed, and the transmittance-visibility relation is estimated from the material of the target and its distance, so that an accurate visibility calculation relation is obtained and used to calculate the visibility value of the target.
(II) technical scheme
One aspect of an embodiment of the present invention provides a method for measuring target visibility, comprising: segmenting a target image by material to obtain a plurality of segmented regions of different materials; acquiring a first relation between transmittance and visibility for segmented regions of the same material at different distances, and a second relation between transmittance and visibility for segmented regions of different materials at the same distance; generating a transmittance-visibility relation library based on the first relation and the second relation; acquiring attribute information of a target area, the attribute information comprising the transmittance corresponding to the material of the target area; acquiring distance information of the target area, the distance information being the distance parameter from the target area to the observation point; acquiring the transmittance-visibility relation corresponding to the target area from the relation library based on the attribute information and the distance information; and calculating the visibility value of the target area from the transmittance-visibility relation.
In an embodiment of the present invention, obtaining distance information of the target area comprises acquiring it with an energy-rich sensor, which comprises the following steps: scanning the target area with an unmanned aerial vehicle to obtain a digital model; performing virtual-real fusion processing on the digital model; obtaining the distance parameters of all pixels in the target area from the digital model after virtual-real fusion processing; and averaging the distance parameters of all pixels in the target area to obtain the distance information of the target area.
In an embodiment of the present invention, segmenting the target image by material to obtain a plurality of segmented regions of different materials comprises: segmenting the target image by material using a neural network, wherein the neural network comprises a Mask R-CNN neural network, a SegNet neural network, or an object-detection neural network.
In an embodiment of the present invention, obtaining attribute information of the target area comprises: acquiring the attribute information of the target area using a convolutional neural network, wherein the structure of the convolutional neural network comprises an input layer, a hidden layer and an output layer; the input layer is used for normalizing the target area; the hidden layer is used for classifying the normalized target area to obtain a classification and identification result; and the output layer is used for outputting the attribute information corresponding to the material of the target area according to the classification and identification result.
In an embodiment of the present invention, the hidden layers comprise a first hidden layer and a second hidden layer, each of which comprises a convolution layer, a pooling layer and a fully connected layer; the convolution layer is used for extracting features of the target area; the pooling layer is used for applying average pooling to the extracted features; and the fully connected layer is used for classifying the average-pooled features to obtain the classification and identification result.
In an embodiment of the present invention, obtaining the first relation between transmittance and visibility for segmented regions of the same material at different distances, and the second relation between transmittance and visibility for segmented regions of different materials at the same distance, comprises: obtaining the first relation and the second relation by manual observation, wherein the manual observation period is at least half a year.
In an embodiment of the present invention, obtaining the transmittance-visibility relation of the target area from the relation library based on the attribute information and the distance information comprises: selecting, based on the attribute information and the distance information, the closest transmittance-visibility relation corresponding to the target area from the relation library as the transmittance-visibility relation of the target area; or selecting several of the closest transmittance-visibility relations corresponding to the target area and averaging them to obtain the transmittance-visibility relation of the target area.
In an embodiment of the present invention, the visibility value of the target area is calculated based on a dark channel algorithm from the transmittance-visibility relation.
In an embodiment of the present invention, before the target image is segmented by material to obtain a plurality of segmented regions of different materials, the method further comprises: acquiring the target image with a visible light camera, an infrared camera, or a camera array.
Another aspect of an embodiment of the present invention provides a device for measuring target visibility, comprising: a segmentation module, used for segmenting the target image by material to obtain a plurality of segmented regions of different materials; a first acquisition module, used for acquiring a first relation between transmittance and visibility for segmented regions of the same material at different distances and a second relation between transmittance and visibility for segmented regions of different materials at the same distance; a generating module, used for generating a transmittance-visibility relation library based on the first relation and the second relation; a second acquisition module, used for acquiring attribute information of the target area, the attribute information comprising the transmittance corresponding to the material of the target area; a third acquisition module, used for acquiring distance information of the target area, the distance information being the distance parameter from the target area to the observation point; a fourth acquisition module, used for acquiring the transmittance-visibility relation corresponding to the target area from the relation library based on the attribute information and the distance information; and a calculation module, used for calculating the visibility value of the target area from the transmittance-visibility relation.
(III) beneficial effects
The method and the device for measuring the target visibility provided by the embodiment of the invention have at least the following beneficial effects:
(1) The method and device for measuring target visibility provided by the embodiments of the invention analyse the visibility measurement parameters of observed targets of different materials and at different distances, classify and identify the transmittance of the target material with a convolutional neural network, acquire the distance parameter of the target area with an energy-rich sensor, and estimate the transmittance-visibility relation from the material transmittance and the target distance, so that the visibility value of the target is calculated from an accurate visibility calculation relation.
(2) Because the material information and distance information of the target can be identified, the method and device for measuring target visibility provided by the embodiments of the invention are not limited to particular scenes; they can be applied in any scene, which greatly expands the application range.
(3) The method and device for measuring target visibility provided by the embodiments of the invention rely only on an optical camera, which is inexpensive and easy to deploy over a large area.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 schematically shows a flowchart of a method for measuring visibility of a target according to an embodiment of the present invention.
Fig. 2 schematically illustrates a segmentation processing diagram of a target image in the method for measuring target visibility according to the embodiment of the present invention.
Fig. 3 schematically illustrates a structure diagram of a convolutional neural network in the method for measuring target visibility according to the embodiment of the present invention.
Fig. 4 schematically shows a block diagram of a measurement apparatus for target visibility provided by an embodiment of the present invention.
Detailed Description
The present invention will be further described in detail below with reference to specific embodiments and with reference to the accompanying drawings, in order to make the objects, technical solutions and advantages of the present invention more apparent. It will be apparent that the described embodiments are some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
In the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured," and the like are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally formed; may be mechanically connected, may be electrically connected or may communicate with each other; can be directly connected or indirectly connected through an intermediate medium, and can be communicated with the inside of two elements or the interaction relationship of the two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In the description of the present invention, it should be understood that the terms "longitudinal," "length," "circumferential," "front," "rear," "left," "right," "top," "bottom," "inner," "outer," and the like indicate an orientation or a positional relationship based on that shown in the drawings, merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the subsystem or element in question must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention.
Like elements are denoted by like or similar reference numerals throughout the drawings. Conventional structures or constructions will be omitted when they may cause confusion in the understanding of the invention. And the shape, size and position relation of each component in the figure do not reflect the actual size, proportion and actual position relation. In addition, in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim.
Similarly, in the description of exemplary embodiments of the invention above, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. The description of the terms "one embodiment," "some embodiments," "example," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
Fig. 1 schematically shows a flowchart of a method for measuring visibility of a target according to an embodiment of the present invention.
As shown in fig. 1, the method for measuring target visibility provided by the embodiment of the present invention may include:
S1, segmenting the target image by material to obtain a plurality of segmented regions of different materials.
Before the target image is segmented by material to obtain a plurality of segmented regions of different materials, the method further comprises: acquiring the target image with a visible light camera, an infrared camera, or a camera array.
S2, acquiring a first relation between transmittance and visibility for segmented regions of the same material at different distances, and a second relation between transmittance and visibility for segmented regions of different materials at the same distance.
Obtaining the first relation and the second relation comprises the following step:
obtaining, by manual observation, the first relation between transmittance and visibility for segmented regions of the same material at different distances and the second relation between transmittance and visibility for segmented regions of different materials at the same distance, wherein the manual observation period is at least half a year.
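The patent only states that these relations come from long-term manual observation; it does not give their functional form. As a hedged illustration, the sketch below fits the parameter of an assumed exponential model t = exp(-k·d/V) (a Koschmieder-style law) to observed (transmittance, visibility) pairs for one material at one fixed distance; the model choice and all values are assumptions, not the patented relation.

```python
import math

def fit_relation(samples, distance_m):
    """Least-squares fit of k in t = exp(-k * d / V) over observed pairs.

    samples: list of (transmittance, visibility_m) manual observations.
    Returns k, so visibility can later be recovered as V = -k * d / ln(t).
    """
    # Linearize: ln(t) = -k * (d / V), i.e. y = -k * x with x = d/V, y = ln(t)
    num = 0.0
    den = 0.0
    for t, vis in samples:
        x = distance_m / vis
        y = math.log(t)
        num += x * y
        den += x * x
    return -num / den

# Synthetic noise-free observations generated with k = 3.912, so the fit
# should recover k almost exactly.
d = 500.0
obs = [(math.exp(-3.912 * d / v), v) for v in (1000.0, 2000.0, 5000.0, 10000.0)]
k = fit_relation(obs, d)
print(round(k, 3))  # close to 3.912
```

In practice one such fitted parameter would be stored per (material, distance) pair, which is what the relation library of step S3 aggregates.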
S3, generating a transmittance-visibility relation library based on the first relation and the second relation.
S4, acquiring attribute information of the target area, the attribute information comprising the transmittance corresponding to the material of the target area.
Obtaining the attribute information of the target area comprises: acquiring the attribute information of the target area using a convolutional neural network. The structure of the convolutional neural network comprises an input layer, a hidden layer and an output layer; the input layer is used for normalizing the target area; the hidden layer is used for classifying the normalized target area to obtain a classification and identification result; and the output layer is used for outputting the attribute information corresponding to the material of the target area according to the classification and identification result.
S5, acquiring distance information of the target area, the distance information being the distance parameter from the target area to the observation point.
Obtaining the distance information of the target area comprises: acquiring the distance information of the target area using an energy-rich sensor.
Acquiring the distance information of the target area using the energy-rich sensor comprises the following steps:
s50, all digital models of the target area are obtained through unmanned aerial vehicle scanning.
S502, carrying out virtual-real fusion processing on the digital model, and ensuring that the image shot by the camera system is consistent with the image shot by the virtual camera system.
S503, obtaining the distance parameters of all pixels in the target area from the digital model after virtual-real fusion processing.
S504, averaging the distance parameters of all pixels in the target area to obtain the distance information of the target area.
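Steps S503 and S504 reduce to averaging a per-pixel distance map over the target region. A minimal sketch, with a hypothetical depth map standing in for the one derived from the virtual-real fused digital model:

```python
def region_distance(depth_map, region_mask):
    """Mean distance over the pixels where region_mask is True (step S504)."""
    total, count = 0.0, 0
    for row_d, row_m in zip(depth_map, region_mask):
        for d, inside in zip(row_d, row_m):
            if inside:
                total += d
                count += 1
    if count == 0:
        raise ValueError("empty target region")
    return total / count

# Hypothetical 2x2 example: per-pixel distances in metres from the 3D model,
# and a mask marking which pixels belong to the segmented target region.
depth = [[100.0, 102.0], [98.0, 500.0]]
mask = [[True, True], [True, False]]
print(region_distance(depth, mask))  # (100 + 102 + 98) / 3 = 100.0
```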
S6, acquiring the transmittance-visibility relation corresponding to the target area from the transmittance-visibility relation library based on the attribute information and the distance information.
Obtaining the transmittance-visibility relation of the target area from the relation library based on the attribute information and the distance information comprises:
selecting, based on the attribute information and the distance information, the closest transmittance-visibility relation corresponding to the target area from the relation library as the transmittance-visibility relation of the target area; or selecting several of the closest transmittance-visibility relations corresponding to the target area and averaging them to obtain the transmittance-visibility relation of the target area.
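The patent does not specify how the library is stored or how "closest" is measured. The sketch below assumes one entry per (material, calibration distance) holding a single relation parameter k (as in t = exp(-k·d/V)), and matches by material and nearest calibration distance; the entries and metric are hypothetical.

```python
def lookup(library, material, distance, n=1):
    """Return the relation parameter for the target area (step S6).

    Among entries of the same material, take the n entries whose calibration
    distance is closest to the target distance; return the average of their
    parameters (n = 1 reproduces the "single closest relation" variant).
    """
    candidates = [e for e in library if e[0] == material]
    candidates.sort(key=lambda e: abs(e[1] - distance))
    chosen = candidates[:n]
    return sum(e[2] for e in chosen) / len(chosen)

# Hypothetical library entries: (material, calibration distance in m, k)
lib = [
    ("concrete", 300.0, 3.8),
    ("concrete", 800.0, 4.0),
    ("foliage", 300.0, 3.5),
]
print(lookup(lib, "concrete", 400.0))       # single nearest entry
print(lookup(lib, "concrete", 400.0, n=2))  # average of the two nearest
```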
S7, calculating the visibility value of the target area from the transmittance-visibility relation.
Calculating the visibility value of the target area from the transmittance-visibility relation comprises:
calculating the visibility value of the target area based on a dark channel algorithm and the transmittance-visibility relation.
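A hedged sketch of this step: the transmittance is estimated with a dark-channel-style minimum over color channels, and then converted to visibility. The conversion used below, V = k·d / (-ln t), follows from t = exp(-β·d) together with the standard Koschmieder relation V = k/β (k ≈ 3.912 for a 5% contrast threshold); the patent instead takes the relation from its transmittance-visibility relation library, so this formula, the airlight value, and the pixel data are all assumptions for illustration.

```python
import math

def dark_channel(pixels):
    """Minimum over R, G, B of every pixel in the region (window = region)."""
    return min(min(p) for p in pixels)

def transmittance_from_dark_channel(pixels, airlight=255.0):
    # t = 1 - dark_channel / A, the usual dark channel prior estimate
    return 1.0 - dark_channel(pixels) / airlight

def visibility(t, distance_m, k=3.912):
    beta = -math.log(t) / distance_m  # from t = exp(-beta * d)
    return k / beta                   # Koschmieder relation V = k / beta

region = [(120, 130, 110), (90, 100, 95)]    # hypothetical hazy RGB pixels
t = transmittance_from_dark_channel(region)  # 1 - 90/255
v = visibility(t, distance_m=1000.0)
print(round(t, 3))  # 0.647
```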
According to the method for measuring target visibility provided by the embodiment of the invention, the visibility measurement parameters of observed targets of different materials and at different distances are analysed, the transmittance of the target material is classified and identified with a convolutional neural network, the distance parameter of the target area is acquired with an energy-rich sensor, and the transmittance-visibility relation is estimated from the material transmittance and the target distance, so that the visibility value of the target is calculated from an accurate visibility calculation relation.
Because the material information and distance information of the target can be identified, the method for measuring target visibility provided by the embodiment of the invention is not limited to particular scenes; it can be applied in any scene, which greatly expands the application range.
The method for measuring target visibility provided by the embodiment of the invention relies only on an optical camera, which is inexpensive and easy to deploy over a large area.
Fig. 2 schematically illustrates a segmentation processing diagram of a target image in the method for measuring target visibility according to the embodiment of the present invention.
As shown in fig. 2, in the method for measuring target visibility provided by the embodiment of the present invention, segmenting the target image comprises:
segmenting the target image by material using a neural network to obtain a plurality of segmented regions of different materials, wherein the neural network comprises a Mask R-CNN neural network, a SegNet neural network, or an FPN (object detection) neural network.
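The networks named above require trained weights, so only the interface such a segmenter exposes is sketched here: image in, per-pixel material label map out. The color-threshold rule below is a deliberately trivial stand-in for illustration, not the patented segmentation method, and the label names are hypothetical.

```python
def segment_by_material(image):
    """Return a per-pixel label map the same shape as `image`.

    A real implementation would run a trained Mask R-CNN / SegNet / FPN model;
    this placeholder labels blue-dominant pixels 'sky' and the rest 'ground'.
    """
    labels = []
    for row in image:
        labels.append(["sky" if (b > r and b > g) else "ground"
                       for (r, g, b) in row])
    return labels

# Hypothetical 2x2 RGB image
img = [[(50, 80, 200), (120, 100, 90)],
       [(40, 70, 180), (110, 105, 95)]]
print(segment_by_material(img))
# [['sky', 'ground'], ['sky', 'ground']]
```

Downstream steps only require that each segmented region carry a material label, which this interface provides regardless of the segmenter used.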
Fig. 3 schematically illustrates a structure diagram of a convolutional neural network in the method for measuring target visibility according to the embodiment of the present invention.
As shown in fig. 3, the structure of the convolutional neural network comprises an input layer, a hidden layer, and an output layer.
The input layer is used for normalizing the target area.
The hidden layer is used for classifying the normalized target area to obtain a classification and identification result.
The output layer is used for outputting the attribute information corresponding to the material of the target area according to the classification and identification result.
Wherein the hidden layers include a first hidden layer and a second hidden layer.
The first hidden layer and the second hidden layer respectively comprise a convolution layer, a pooling layer and a full connection layer.
The convolution layer is used for extracting the characteristics of the target area.
The pooling layer is used for performing average pooling on the extracted features.
The full-connection layer is used for carrying out classification recognition processing on the characteristics after the average pooling processing to obtain classification recognition results.
The input of the convolutional neural network is a segmented material-region image, and its output is the attribute information, namely the material parameter, which reflects the strength of the material's reflective capability.
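The two hidden layers described above each chain convolution, average pooling, and a fully connected classification step. A minimal numpy sketch of one such stage, including the input layer's normalization (the min-max scheme, the ReLU activation, the layer sizes, the random weights, and the five material classes are all assumptions; a real network would be trained on labelled material images):

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(region):
    """Input layer: min-max normalize the target area to [0, 1]
    (one common choice; the patent does not fix the scheme)."""
    region = region.astype(np.float64)
    lo, hi = region.min(), region.max()
    return np.zeros_like(region) if hi == lo else (region - lo) / (hi - lo)

def conv2d(x, kernels):
    """Valid 2-D convolution of a single-channel image x (H, W)
    with a bank of kernels (K, kh, kw) -> feature maps (K, H', W')."""
    K, kh, kw = kernels.shape
    H, W = x.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * kernels[k])
    return out

def avg_pool(x, s=2):
    """Average pooling with window and stride s over (K, H, W) maps."""
    K, H, W = x.shape
    return x[:, :H // s * s, :W // s * s].reshape(
        K, H // s, s, W // s, s).mean(axis=(2, 4))

# Toy forward pass: an 8x8 material patch through normalize ->
# conv -> ReLU -> average-pool -> fully-connected, giving scores
# for 5 hypothetical material classes.
patch = normalize(rng.integers(0, 256, (8, 8)))
kernels = rng.standard_normal((4, 3, 3))               # 4 untrained 3x3 filters
h1 = avg_pool(np.maximum(conv2d(patch, kernels), 0))   # (4, 3, 3)
W_fc = rng.standard_normal((5, h1.size))               # fully connected weights
scores = W_fc @ h1.ravel()
material_class = int(np.argmax(scores))                # predicted material index
```

The second hidden layer would repeat the same conv/pool/fully-connected pattern on `h1` before the output layer emits the material's attribute information.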
Fig. 4 schematically shows a block diagram of a measurement apparatus for target visibility provided by an embodiment of the present invention.
As shown in fig. 4, a measurement device for target visibility provided by an embodiment of the present invention may include: a segmentation module 401, a first acquisition module 402, a generation module 403, a second acquisition module 404, a third acquisition module 405, a fourth acquisition module 406 and a calculation module 407.
The segmentation module 401 is configured to perform segmentation processing on the target image according to the material, so as to obtain a plurality of segmented regions with different materials.
The first obtaining module 402 is configured to obtain a first relationship between transmittance and visibility of a segmented region of the same material at different distances, and a second relationship between transmittance and visibility of a segmented region of different materials at the same distance.
A generating module 403, configured to generate a transmittance versus visibility relationship library based on the first relationship and the second relationship.
The second obtaining module 404 is configured to obtain attribute information of the target area, where the attribute information includes a transmittance corresponding to a material of the target area.
And a third obtaining module 405, configured to obtain distance information of the target area, where the distance information is a distance parameter of the target area from the observation point.
A fourth obtaining module 406, configured to obtain, from the transmittance-visibility relation library, a transmittance-visibility relation corresponding to the target area based on the attribute information and the distance information.
The calculating module 407 is configured to calculate a visibility value of the target area according to the relationship between the transmittance and the visibility.
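Taken together, modules 405 to 407 form a short pipeline: average the per-pixel distances over the region (as claim 2 spells out), pick the nearest relation from the library (the nearest-entry rule of claim 7), and evaluate it. A hedged sketch of that pipeline, where the library key layout, the nearest-key metric, and the Koschmieder-style closed form (an assumption consistent with the dark-channel approach of claim 8: t = exp(-σd), and a 5% contrast threshold gives V = -ln(0.05)/σ) are not specified by the patent:

```python
import math
import numpy as np

def region_distance(distance_map, mask):
    """Module 405: mean of the per-pixel distance parameters over the
    target region; distance_map stands in for the drone-scanned digital
    model after virtual-real fusion."""
    return float(distance_map[mask].mean())

def nearest_relation(library, transmittance, distance):
    """Module 406: pick the library entry whose (transmittance, distance)
    key is closest to the target area's attribute and distance
    information.  Key layout and metric are assumptions."""
    def gap(key):
        t, d = key
        return abs(t - transmittance) + abs(d - distance) / 1000.0
    return library[min(library, key=gap)]

def koschmieder_visibility(t, d, threshold=0.05):
    """Module 407: one plausible closed form.  With t = exp(-sigma * d)
    and the standard 5% contrast threshold, V = -ln(threshold) / sigma."""
    sigma = -math.log(t) / d          # extinction coefficient (1/m)
    return -math.log(threshold) / sigma

# Hypothetical relation library: (transmittance, distance in m) -> relation.
library = {
    (0.9, 100.0):  lambda d: koschmieder_visibility(0.9, d),
    (0.5, 1000.0): lambda d: koschmieder_visibility(0.5, d),
}

dist_map = np.array([[900., 900., 2000.],
                     [1100., 1100., 2000.]])
mask = dist_map < 1500                      # the segmented target region
d = region_distance(dist_map, mask)         # 1000.0 m
relation = nearest_relation(library, 0.55, d)
visibility = relation(d)                    # roughly 4322 m for t = 0.5
```

The claim 7 alternative, averaging over several nearest entries, would replace `min` with a sort and a mean over the first few results.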
According to an embodiment of the present invention, any of the segmentation module 401, the first acquisition module 402, the generation module 403, the second acquisition module 404, the third acquisition module 405, the fourth acquisition module 406, and the calculation module 407 may be combined into one module for implementation, or any one of them may be split into a plurality of modules, or at least part of the functions of one or more of these modules may be combined with at least part of the functions of other modules and implemented in one module. According to embodiments of the present invention, at least one of these modules may be implemented at least in part as hardware circuitry, such as a field-programmable gate array (FPGA), a programmable logic array (PLA), a system-on-chip, a system-on-substrate, a system-in-package, or an application-specific integrated circuit (ASIC), or in hardware or firmware in any other reasonable manner of integrating or packaging circuitry, or in any one of, or a suitable combination of, software, hardware, and firmware. Alternatively, at least one of these modules may be at least partially implemented as a computer program module which, when executed, performs the corresponding functions.
It should be noted that the device for measuring target visibility in the embodiment of the present invention corresponds to the method for measuring target visibility in the embodiment of the present invention; the specific implementation details, and the technical effects they bring, are the same and are not repeated here.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive.
Those skilled in the art will appreciate that the features recited in the various embodiments of the invention and/or in the claims can be combined in a wide variety of ways, even if such combinations are not explicitly recited in the present invention. In particular, the features recited in the various embodiments and/or claims can be combined without departing from the spirit and teachings of the invention, and all such combinations fall within the scope of the invention.
While the present invention has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the appended claims and their equivalents. The scope of the invention should, therefore, be determined not with reference to the above-described embodiments, but instead should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (10)

1. A method for measuring visibility of a target, comprising:
dividing the target image according to the material to obtain a plurality of divided areas with different materials;
acquiring a first relation between the transmittance and the visibility of the segmented regions of the same material at different distances and a second relation between the transmittance and the visibility of the segmented regions of different materials at the same distance;
generating a transmittance-to-visibility relationship library based on the first relationship and the second relationship;
acquiring attribute information of a target area, wherein the attribute information comprises transmissivity corresponding to a material of the target area;
obtaining distance information of the target area, wherein the distance information is a distance parameter of the target area from an observation point;
acquiring a transmittance and visibility relation corresponding to the target area from the transmittance and visibility relation library based on the attribute information and the distance information;
and calculating the visibility value of the target area according to the relation between the transmissivity and the visibility.
2. The method for measuring visibility of a target according to claim 1, wherein the acquiring distance information of the target area includes:
acquiring the distance information of the target area by using a rich energy sensor, wherein the acquiring the distance information of the target area by using the rich energy sensor comprises the following steps:
scanning a digital model of the target area by adopting an unmanned aerial vehicle;
carrying out virtual-real fusion processing on the digital model;
acquiring distance parameters of all pixels in the target area according to the digital model after virtual-real fusion processing;
and calculating the average value of the distance parameters of all pixels in the target area to obtain the distance information of the target area.
3. The method for measuring the visibility of a target according to claim 1, wherein the dividing the target image according to the material to obtain a plurality of divided areas of different materials includes:
and dividing the target image according to the material by using a neural network to obtain a plurality of divided areas with different materials, wherein the neural network comprises a Mask R-CNN neural network, a SegNet neural network or a target detection neural network.
4. The method for measuring the visibility of a target according to claim 1, wherein the acquiring attribute information of the target area includes:
acquiring attribute information of a target area by using a convolutional neural network, wherein the structure of the convolutional neural network comprises an input layer, a hidden layer and an output layer;
the input layer is used for carrying out normalization processing on the target area;
the hidden layer is used for carrying out classification recognition processing on the target area after normalization processing to obtain a classification recognition result;
and the output layer is used for outputting attribute information corresponding to the material of the target area according to the classification and identification result.
5. The method of claim 4, wherein the hidden layers comprise a first hidden layer and a second hidden layer, wherein the first hidden layer and the second hidden layer comprise a convolution layer, a pooling layer, and a full connection layer, respectively;
the convolution layer is used for extracting characteristics of the target area;
the pooling layer is used for carrying out average pooling processing on the extracted features;
the full connection layer is used for carrying out classification recognition processing on the characteristics after the average pooling processing to obtain classification recognition results.
6. The method according to claim 1, wherein the obtaining the first relation between the transmittance and the visibility of the segmented regions of the same material at different distances and the second relation between the transmittance and the visibility of the segmented regions of different materials at the same distance comprises:
and obtaining a first relation between the transmittance and the visibility of the segmented areas of the same material at different distances and a second relation between the transmittance and the visibility of the segmented areas of different materials at the same distance by adopting manual observation, wherein the manual observation time is at least half a year.
7. The method according to claim 1, wherein the acquiring the transmittance-to-visibility relationship of the target area from the transmittance-to-visibility relationship library based on the attribute information and the distance information includes:
selecting the closest transmittance and visibility relation corresponding to the target area from the transmittance and visibility relation library as the transmittance and visibility relation of the target area based on the attribute information and the distance information; or selecting a plurality of closest transmittance and visibility relations corresponding to the target area to obtain an average value as the transmittance and visibility relation of the target area.
8. The method according to claim 1, wherein calculating the visibility value of the target area according to the relation between the transmittance and the visibility comprises:
and calculating the visibility value of the target area based on a dark channel algorithm according to the relation between the transmissivity and the visibility.
9. The method according to claim 1, further comprising, before the segmenting the target image according to the material to obtain a plurality of segmented regions of different materials:
the target image is acquired using a visible light camera, an infrared camera, or a camera array.
10. A measurement device for target visibility, comprising:
the segmentation module is used for carrying out segmentation processing on the target image according to the materials to obtain a plurality of segmentation areas with different materials;
the first acquisition module is used for acquiring a first relation between the transmittance and the visibility of the segmented areas of the same material at different distances and a second relation between the transmittance and the visibility of the segmented areas of different materials at the same distance;
the generation module is used for generating a transmissivity and visibility relation library based on the first relation and the second relation;
the second acquisition module is used for acquiring attribute information of a target area, wherein the attribute information comprises transmissivity corresponding to the material of the target area;
the third acquisition module is used for acquiring the distance information of the target area, wherein the distance information is a distance parameter of the target area from an observation point;
a fourth obtaining module, configured to obtain a transmittance and visibility relationship corresponding to the target area from the transmittance and visibility relationship library based on the attribute information and the distance information;
and the calculation module is used for calculating the visibility value of the target area according to the relation between the transmissivity and the visibility.
CN202310714120.7A 2023-06-16 2023-06-16 Method and device for measuring target visibility Pending CN116559123A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310714120.7A CN116559123A (en) 2023-06-16 2023-06-16 Method and device for measuring target visibility


Publications (1)

Publication Number Publication Date
CN116559123A true CN116559123A (en) 2023-08-08

Family

ID=87486251



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination