WO2021194018A1 - Method for calculating volume information of an object included in a satellite image signal

Method for calculating volume information of an object included in a satellite image signal

Info

Publication number
WO2021194018A1
WO2021194018A1 (PCT/KR2020/008830)
Authority
WO
WIPO (PCT)
Prior art keywords
point
computer
image patch
calculating
image
Application number
PCT/KR2020/008830
Other languages
English (en)
Korean (ko)
Inventor
백민영
Original Assignee
주식회사 에스아이에이
Application filed by 주식회사 에스아이에이
Publication of WO2021194018A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing

Definitions

  • the present disclosure relates to a method of measuring the volume of an object included in an image signal and, more particularly, to a method of estimating and calculating the volume of an object, in particular the amount stored in an oil storage having a floating roof, using a satellite image.
  • a SAR image is formed in the order in which signals transmitted from the satellite are reflected back from objects and received. Accordingly, distortion such as layover, in which the height or shape of the object appears different from the actual one, may occur.
  • because of this, the upper part of the storage is observed closer to the satellite than its real position, while the lower part is observed at its actual location. Therefore, the height of the object can be estimated from the distance difference between the two points.
  • the present disclosure has been devised in response to the above-mentioned background art, and specifically aims to provide a method of measuring the volume of an oil storage tank having a floating roof using a layover of a SAR image.
  • a computer program stored in a computer-readable storage medium includes instructions for causing a processor to perform the following steps, the steps comprising: generating an image patch including an object to be measured from input data; extracting an azimuth direction center line based on an average signal intensity of each of a plurality of pixel columns or each of a plurality of pixel rows of the image patch; extracting one or more analysis points based on signal intensities of a plurality of pixels located on the extracted center line; and calculating a volume of the measurement target object based on the extracted one or more analysis points.
  • the generating of the image patch including the measurement target object may include: determining a reference point of the measurement target object based on the signal strengths of individual points representing the measurement target object on the image patch; and determining an area in which the image patch is to be generated based on the reference point.
  • the reference point may be a leftmost point of the measurement target object.
  • the generating of the image patch may include performing segmentation on the measurement target object.
  • the generating of the image patch may include performing pre-processing on the image patch.
  • the preprocessing of the image patch may be at least one of oversampling, Lee filtering, and morphological erosion of the image patch.
  • the extracting of the center line may include: recognizing a plurality of pixel columns or a plurality of pixel rows parallel to the azimuth direction; calculating an average signal intensity of each of the plurality of pixel columns or each of the plurality of pixel rows; and determining the pixel column or pixel row having the highest average signal intensity as the center line.
  • the analysis points may include a first floor point, a height point, and a floating roof point.
  • the analysis points may correspond to a plurality of signal intensity peak points located on the center line.
  • the calculating of the volume of the measurement target object may include calculating the area and the floating roof height of the measurement target object.
  • the area of the measurement target object may be determined using the azimuth direction point.
  • the calculating of the floating roof height of the measurement target object may include calculating a layover length for the floating roof.
  • the calculating of the layover length may include: calculating a second floor point based on the first floor point and the radius; and calculating the layover length based on the second floor point and the floating roof point.
  • a computing device for calculating volume information of an object included in an image signal.
  • the computing device may include a processor and a memory, wherein the processor generates an image patch including a measurement target object, extracts an azimuth direction center line based on an average signal intensity of each of a plurality of pixel columns or each of a plurality of pixel rows of the image patch, extracts one or more analysis points based on the signal intensities of a plurality of pixels located on the extracted center line, and calculates the volume of the measurement target object based on the extracted one or more analysis points.
  • a method for calculating volume information of an object included in an image signal includes: generating an image patch including a measurement target object; extracting an azimuth direction center line based on the average signal intensity of individual pixel columns or individual pixel rows of the image patch; extracting one or more analysis points based on signal intensities of a plurality of pixels located on the extracted center line; and calculating a volume of the measurement target object based on the extracted one or more analysis points.
  • the present disclosure makes it possible to measure the cross-sectional area and height of an object using a layover, so that the volume of a storage with a floating roof can easily be measured with only a satellite image.
  • FIG. 1 is a block diagram illustrating a configuration of a computing device according to some embodiments of the present disclosure.
  • FIGS. 2A and 2B are diagrams for explaining an example of an image according to some embodiments of the present disclosure.
  • FIG. 3 is a diagram illustrating that a plurality of points for calculating a volume of an object are determined from a satellite image according to some embodiments of the present disclosure.
  • FIG. 4 is a flowchart illustrating a process in which a processor calculates a volume of an object according to some embodiments of the present disclosure.
  • FIG. 5 is a flowchart illustrating a process in which a processor generates an image patch according to some embodiments of the present disclosure.
  • FIG. 6 is a flowchart illustrating a process in which a processor extracts an azimuth direction centerline according to some embodiments of the present disclosure.
  • FIG. 7 is a flowchart illustrating an example in which a processor calculates a volume of an object according to some embodiments of the present disclosure.
  • FIG. 8 is a flowchart illustrating an example in which a processor calculates a volume of an object according to some embodiments of the present disclosure.
  • FIG. 9 shows a simplified, general schematic diagram of an example computing environment in which some embodiments of the present disclosure may be implemented.
  • FIG. 1 is a block diagram of a computing device according to some embodiments of the present disclosure.
  • the computing device 100 may include a processor 110, a memory 120, and a communication unit (not shown).
  • the processor 110 may include one or more cores and may be any type of processor of the computing device, such as a central processing unit (CPU), a general-purpose graphics processing unit (GPGPU), or a tensor processing unit (TPU), that calculates the volume information of the object included in the image signal by executing instructions stored in the memory.
  • the processor 110 may read the computer program stored in the memory and calculate the volume information of the object included in the image signal according to some embodiments of the present disclosure.
  • the processor 110 may control overall operations of components of the computing device 100 to perform a method of calculating volume information of an object included in an image signal.
  • the communication unit (not shown) may include a wired/wireless Internet module for network connection.
  • the communication unit (not shown) may communicate with an external device such as a satellite.
  • as wireless Internet technologies, wireless LAN (WLAN, Wi-Fi), wireless broadband (WiBro), world interoperability for microwave access (WiMAX), high speed downlink packet access (HSDPA), and the like may be used.
  • as wired Internet technologies, digital subscriber line (XDSL), fiber to the home (FTTH), power line communication (PLC), and the like may be used.
  • the computing device 100 may further include a memory 120 .
  • the memory may store a program for the operation of the processor 110 , and may temporarily or permanently store input/output data.
  • the memory may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., SD or XD memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), a magnetic memory, a magnetic disk, and an optical disk.
  • Such a memory may be operated under the control of the processor.
  • here, SAR stands for synthetic aperture radar, and UAV stands for unmanned aerial vehicle.
  • representative distortions occurring in the SAR image include foreshortening, layover, and radar shadow.
  • An example of a method for calculating volume information of an object included in an image signal according to the present disclosure may be to measure a floating roof height of an object having a floating roof by using a layover phenomenon among distortions of a SAR image.
  • the layover in the SAR image refers to a phenomenon in which the image of the object to be observed is turned upside down due to the geometrical features of the surface of the object to be observed.
  • FIGS. 2A and 2B are diagrams for explaining an example of an image according to some embodiments of the present disclosure.
  • the processor 110 may obtain a SAR image as shown in FIG. 2B corresponding to the optical image of FIG. 2A.
  • FIG. 2A shows an optical image of an observation point, and FIG. 2B shows a SAR image of the same observation point.
  • the processor 110 may calculate the height of the circular storage and the floating roof height from the layover length of the circular storage using the features of the SAR image.
  • FIG. 3 is a diagram illustrating that a plurality of points for calculating a volume of an object are determined from a satellite image according to some embodiments of the present disclosure.
  • the processor 110 may generate an image similar to the image patch of FIG. 3 by performing image preprocessing from the image of FIG. 2B .
  • the processor 110 may perform at least one of oversampling, Lee filtering, and morphological erosion for image preprocessing.
  • Oversampling may be a process of sampling a signal at a frequency of at least twice its bandwidth or highest frequency component in signal processing.
  • Lee filtering is a process for reducing speckle noise, the random noise that appears in SAR images, and is commonly included in most studies using SAR images.
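  • a minimal Lee-filter sketch is shown below, assuming a numpy intensity image; the window size and the global noise-variance estimate are illustrative choices, not values taken from the patent:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def lee_filter(img: np.ndarray, size: int = 7) -> np.ndarray:
        """Despeckle a SAR intensity image with the classic Lee filter."""
        mean = uniform_filter(img, size)              # local mean
        sq_mean = uniform_filter(img * img, size)     # local mean of squares
        var = np.maximum(sq_mean - mean * mean, 0.0)  # local variance
        noise_var = var.mean()                        # crude global noise estimate
        weight = var / (var + noise_var + 1e-12)      # adaptive gain in [0, 1]
        return mean + weight * (img - mean)           # smooth flat areas, keep edges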
  • morphology processing may be a technique of reducing or enlarging a bright area or a dark area of an image.
  • after separation using an optimal threshold, incorrectly separated regions can be corrected through post-processing; this is called morphological processing.
  • the morphological erosion operation may be an operation in which the target area is narrowed.
  • in the erosion operation, a mask is placed over each pixel of an input binary image, and the result value becomes 255 only if the input image also has the value 255 at every position where the mask has the value 255. If even one pixel at those positions has the value 0, the result becomes 0, so the overall area having the value 255 shrinks.
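  • a minimal sketch of the 255/0 erosion rule described above, using numpy; the 3x3 square mask is an illustrative choice:

    import numpy as np

    def erode_binary(img: np.ndarray, mask_size: int = 3) -> np.ndarray:
        """Erode a binary image whose foreground pixels have the value 255."""
        pad = mask_size // 2
        padded = np.pad(img, pad, mode="constant", constant_values=0)
        out = np.zeros_like(img)
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                window = padded[y:y + mask_size, x:x + mask_size]
                # Output is 255 only if every pixel under the mask is 255.
                out[y, x] = 255 if np.all(window == 255) else 0
        return out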
  • Such pre-processing may be performed automatically without user intervention, and by performing such pre-processing, object volume measurement according to the present disclosure may be more accurately performed.
  • the processor 110 may generate an image patch including one or more objects by performing image pre-processing from the image of FIG. 2B .
  • the image patch may be generated by a detection or segmentation technique using a neural network model.
  • the detection technique may be a technique of using a neural network to determine where an object corresponding to a specific class is included in a given image.
  • the segmentation technique may be a technique of using a neural network to segment, pixel by pixel, an object corresponding to a specific class in a given image.
  • after a reference point for generating the image patch is set, the image patch may be generated with a preset size based on that reference point.
  • the processor 110 may set the leftmost point of the object as a reference point.
  • the processor 110 may set an image patch area including all pixel points constituting the recognized object from the set reference point.
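  • a hedged sketch of this step is shown below: the leftmost foreground pixel of a detection or segmentation mask is taken as the reference point, and a patch area covering all pixel points of the object is cut out; the margin and the return convention are illustrative choices, not the patent's:

    import numpy as np

    def crop_patch(image: np.ndarray, mask: np.ndarray, margin: int = 8):
        """Crop a patch around the object in `mask`, using its leftmost
        pixel as the reference point."""
        ys, xs = np.nonzero(mask)                             # object pixels
        ref_y, ref_x = int(ys[np.argmin(xs)]), int(xs.min())  # leftmost point
        y0 = max(int(ys.min()) - margin, 0)
        y1 = int(ys.max()) + margin + 1
        x0 = max(ref_x - margin, 0)
        x1 = int(xs.max()) + margin + 1
        return image[y0:y1, x0:x1], (ref_y, ref_x)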
  • the corner point of the building appears brighter than the layover because strong double-bounce reflection occurs between the vertical structure of the building and the ground, and it carries information about the building outline.
  • the height of the object can be obtained using the length of the layover of the building and the angle of incidence of the satellite.
  • Equation 1 represents an equation for estimating the height of an object using the length of the layover.
  • the length of the layover can be obtained as the product of the number of pixels in the layover and the pixel spacing.
  • in Equation 1, the symbols denote the height of the object, the length of the layover, and the angle of incidence, respectively.
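  • as an illustration only (Equation 1 itself appears as an image in the original; assuming slant-range pixel spacing, an assumption of this sketch rather than a statement of the patent, standard SAR layover geometry gives h = L / cos(theta)):

    import math

    def height_from_layover(n_pixels: int, pixel_spacing_m: float,
                            incidence_deg: float) -> float:
        # Layover length L = number of layover pixels x pixel spacing.
        layover_len = n_pixels * pixel_spacing_m
        # Assumed slant-range relation: the top of the object is imaged closer
        # to the radar by h*cos(theta), so h = L / cos(theta).
        return layover_len / math.cos(math.radians(incidence_deg))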
  • the processor 110 may determine a reference point.
  • the processor 110 may determine the leftmost point of the object in the image patch as the reference point.
  • the reference point may correspond to the height point 210 on the image patch.
  • the position of the reference point and the meaning of the reference point are not limited thereto.
  • the processor 110 may determine a center line from the image patch.
  • the center line may be determined based on signal intensities of each of a plurality of pixel columns and a plurality of pixel rows included in the image patch.
  • the image patch of FIG. 3 will be described as an example.
  • the processor 110 may calculate an average signal intensity of a plurality of pixel rows included in the image patch along an azimuth direction.
  • the radar (or antenna) is attached parallel to the body of the satellite, and in this case the azimuth direction may mean the straight direction in which the satellite carrying the radar flies.
  • the processor 110 may recognize a pixel row having the highest signal strength among the plurality of pixel rows.
  • the processor 110 may determine a corresponding pixel row as a center line.
  • the center line may be a pixel row in which the height point 210, the first floor point 220, and the floating roof point 230 are located.
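  • a minimal sketch of this center-line selection, assuming the pixel rows of a numpy image patch are parallel to the azimuth direction:

    import numpy as np

    def extract_center_line(patch: np.ndarray) -> int:
        """Return the index of the pixel row with the highest mean intensity."""
        row_means = patch.mean(axis=1)    # average signal intensity per row
        return int(np.argmax(row_means))  # brightest row is the center line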
  • in SAR, characteristic scattering occurs for buildings because of the side-looking observation geometry.
  • the layover is one of its representative features. As described above, the layover refers to a phenomenon in which the image of the object to be observed appears flipped over due to the geometrical characteristics of the surface of the object: the signal reflected from the top of a building arrives at the radar before the signal reflected from the lower part of the building. As a result, an image of the upper part of the target is generated closer to the radar than its actual position on the ground.
  • the top point of the wall closest to the radar is imaged closest to the radar. Accordingly, the reference point may coincide with the height point of the object.
  • the bottom point of the wall close to the radar is imaged at its original position. Accordingly, the first floor point 220 corresponds to the floor point of the wall close to the radar.
  • the floating roof point 230 may be the point at which the height of the roof of the object is measured when the object has a floating roof, that is, a roof that moves dynamically with the level of the contents stored in the object, independently of the object's overall height.
  • the floating roof point may be one of the points located on the centerline in the floating roof section of the object.
  • a point having a high signal strength located on the rightmost side of the object may correspond to the floating roof point 230.
  • the processor 110 may calculate an average signal intensity for each of a plurality of pixel rows and pixel columns present in the image patch.
  • the processor 110 may generate the average signal strength graph 300a in the azimuth direction by selecting either the signal strength for each pixel column or the signal strength for each pixel row.
  • the azimuth direction average signal strength graph 300a may be formed along the azimuth direction. Accordingly, the processor 110 may generate a graph by using the signal strength information in a direction coincident with the azimuth direction among the signal strength information on the pixel column or the signal strength information on the pixel row.
  • the processor 110 may select an azimuth direction point having the highest average signal strength in the average signal strength graph 300a as the azimuth direction center point 360 .
  • a pixel column or a pixel row corresponding to the azimuth direction center point may be a center line.
  • the processor 110 may generate a centerline signal strength graph 300b by recognizing signal strengths of a plurality of pixels located on the centerline.
  • the processor 110 may recognize three peak points located on the center line.
  • the processor 110 may determine a point on the centerline signal strength graph 300b at which at least one of a signal strength, a change rate of a signal strength, and an interval between the peak points is greater than or equal to a preset value as the peak point.
  • the processor 110 may increase the preset signal strength value. This process can be repeated until only three peak points are derived.
  • the processor 110 may determine the three peak points located on the center line as the first peak point 310, the second peak point 320, and the third peak point 330 based on a preset rule.
  • the first peak point 310 represents the signal strength of the height point 210.
  • the second peak point 320 represents the signal strength of the first floor point 220.
  • the third peak point 330 represents the signal strength of the floating roof point 230.
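  • a hedged sketch of the iterative selection described above: the intensity threshold is raised until exactly three peak points remain on the center line; the local-maximum test and the step size are illustrative choices:

    import numpy as np

    def three_peak_points(center_line: np.ndarray, step: float = 0.01) -> np.ndarray:
        """Raise a threshold until only three peaks survive on the center line."""
        # Indices of all local maxima of the center-line signal.
        local_max = np.flatnonzero(
            (center_line[1:-1] > center_line[:-2]) &
            (center_line[1:-1] >= center_line[2:])) + 1
        thresh = float(center_line.min())
        peaks = local_max
        while len(peaks) > 3:             # tighten until three peaks remain
            thresh += step
            peaks = local_max[center_line[local_max] >= thresh]
        # Left to right: height point, first floor point, floating roof point.
        return peaks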
  • the processor 110 may obtain the diameter of the object based on the azimuth brightness graph 300a.
  • in the azimuth direction brightness graph 300a, the first azimuth direction point 340, the second azimuth direction point 350, and the azimuth direction center point 360 may be found.
  • the first azimuth direction point 340 and the second azimuth direction point 350 are points forming a plateau in the azimuth direction brightness graph 300a, and the azimuth direction center point 360 may be the point with the strongest brightness within the azimuth direction brightness graph 300a.
  • the processor 110 may recognize points where the signal strength is equal to or greater than a preset value and where the rate of change of the signal strength with respect to the azimuth direction is the largest and the smallest. Subsequently, the processor 110 may determine the point with the smallest rate of change as the first azimuth direction point 340 and the point with the largest rate of change as the second azimuth direction point 350.
  • the processor 110 may repeatedly perform the above process until three azimuth points satisfying the criterion remain.
  • the first azimuth point 340 corresponds to a first radial point 240 on the image patch
  • the second azimuth point 350 corresponds to a second radial point 250 on the image patch.
  • the processor 110 may calculate the diameter of the object included in the image patch based on the pixel distance in the azimuth direction between the first azimuth direction point 340 and the second azimuth direction point 350.
  • alternatively, the processor 110 may calculate the diameter of the object included in the image patch based on the pixel distance between the azimuth direction center point 360 and whichever of the first azimuth direction point 340 and the second azimuth direction point 350 has the stronger brightness.
  • the processor 110 may calculate the radius R of the object included in the image patch based on the method shown in Equation 2.
  • the processor 110 may determine twice the calculated R as the diameter of the object.
  • the method of calculating the diameter of the object using the signal strength of the SAR image is not limited thereto.
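  • a hedged sketch of one such estimate (Equation 2 appears as an image in the original; the reading below, radius = center-to-edge pixel distance multiplied by the azimuth pixel spacing and diameter = 2R, is inferred from the surrounding description):

    import numpy as np

    def object_diameter(azimuth_graph: np.ndarray, pixel_spacing_m: float) -> float:
        """Estimate the object diameter from the azimuth mean-intensity graph."""
        grad = np.gradient(azimuth_graph)
        p_first = int(np.argmin(grad))            # smallest rate of change (point 340)
        p_second = int(np.argmax(grad))           # largest rate of change (point 350)
        p_center = int(np.argmax(azimuth_graph))  # brightest point (center point 360)
        # Use whichever plateau edge is brighter, per the description above.
        edge = p_first if azimuth_graph[p_first] >= azimuth_graph[p_second] else p_second
        radius_m = abs(edge - p_center) * pixel_spacing_m
        return 2.0 * radius_m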
  • FIG. 4 is a flowchart illustrating a process in which a processor calculates a volume of an object according to some embodiments of the present disclosure.
  • the processor 110 may generate the image patch 200 including the measurement target object from the input data (S100).
  • the processor 110 may generate an image patch including one or more objects by performing image pre-processing from the image of FIG. 2B .
  • the image patch may be generated by a detection or segmentation technique using a neural network model.
  • after a reference point for generating the image patch is set, the image patch may be generated with a preset size based on that reference point.
  • the processor 110 may set the leftmost point of the object as a reference point.
  • the processor 110 may set an image patch area including all pixel points constituting the recognized object from the set reference point.
  • the processor 110 may extract an azimuth centerline based on an average signal intensity of each of a plurality of pixel columns or a plurality of pixel rows of the image patch (S200).
  • the processor 110 may determine a center line from the image patch.
  • the center line may be determined based on signal intensities of each of a plurality of pixel columns and a plurality of pixel rows included in the image patch.
  • the processor 110 may calculate an average signal intensity of a plurality of pixel rows included in the image patch along an azimuth direction.
  • the processor 110 may recognize a pixel row having the highest signal strength among the plurality of pixel rows.
  • the processor 110 may determine a corresponding pixel row as a center line.
  • the center line may be a pixel row in which the height point 210, the first floor point 220, and the floating roof point 230 are located.
  • the processor 110 may extract one or more analysis points based on signal strengths of a plurality of pixels located on the extracted center line (S300).
  • the analysis point may include a height point 210, a first floor point 220, a second floor point (not shown), and a floating roof point 230.
  • the second floor point is a point predicted to be opposite the first floor point.
  • the second floor point is not revealed on the satellite image shown in FIG. 2 or the image patch shown in FIG. 3 because of the shadow phenomenon occurring in the SAR image.
  • the shadow phenomenon occurs when the radar beam emitted from the radar cannot illuminate the ground surface. Since the second floor point located opposite the first floor point lies in an area that the radar beam capturing the SAR image does not reach, the processor 110 may infer the pixel position of the second floor point using Equation 3.
  • in Equation 3, the symbols denote the pixel position of the second floor point, the pixel position of the first floor point, half the diameter of the cross-section of the object, the angle of incidence, and the range direction pixel spacing, respectively.
  • the processor 110 may calculate the volume of the measurement target object based on the extracted one or more analysis points (S400).
  • the actual position of the second floor point coincides with the actual position of the floating roof point 230, but the pixel position of the second floor point and the pixel position of the floating roof point 230 are different from each other due to the layover phenomenon of the SAR image.
  • the processor 110 may calculate the floating roof height as shown in Equation 4, based on the pixel position of the second floor point and the pixel position of the floating roof point.
  • in Equation 4, the symbols denote the height of the floating roof, the pixel position of the floating roof point 230, the pixel position of the second floor point, the angle of incidence, and the range direction pixel spacing, respectively.
  • the processor 110 may calculate the floor area (cross-sectional area) of the object included in the image patch based on the calculated object diameter.
  • the processor 110 may calculate the volume of the object, that is, the volume of the fluid contained in the object, by using the calculated cross-sectional area of the object and the height of the floating roof.
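  • an illustrative end-to-end sketch of this calculation (Equations 3 and 4 appear as images in the original; the forms below follow standard SAR layover geometry under a slant-range pixel assumption that is this sketch's, not the patent's):

    import math

    def second_floor_pixel(p_first: float, radius_m: float,
                           incidence_deg: float, dr_m: float) -> float:
        # Assumed Equation 3: the ground distance 2R between the two floor
        # points maps to 2R*sin(theta) of slant range, i.e. that many pixels.
        return p_first + 2.0 * radius_m * math.sin(math.radians(incidence_deg)) / dr_m

    def floating_roof_height(p_second: float, p_roof: float,
                             incidence_deg: float, dr_m: float) -> float:
        # Assumed Equation 4: the layover shifts the roof point toward the
        # radar by h*cos(theta) of slant range, so h = (p2 - pf)*dr / cos(theta).
        return (p_second - p_roof) * dr_m / math.cos(math.radians(incidence_deg))

    def stored_volume(radius_m: float, roof_height_m: float) -> float:
        # Fluid volume = cross-sectional area x floating roof height.
        return math.pi * radius_m ** 2 * roof_height_m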
  • in other words, image preprocessing, image patch extraction, analysis point setting, and volume measurement of a floating roof storage are all possible with only the SAR image.
  • FIG. 5 is a flowchart illustrating a process in which a processor generates an image patch according to some embodiments of the present disclosure.
  • the processor 110 may determine a reference point of the measurement target object (S110).
  • the processor 110 may determine a reference point.
  • the processor 110 may determine the leftmost point of the object in the image patch as the reference point.
  • the reference point may correspond to the height point 210 on the image patch.
  • the position of the reference point and the meaning of the reference point are not limited thereto.
  • the processor 110 may determine a region in which an image patch is to be generated based on the reference point (S120).
  • the processor 110 may generate an image patch including one or more objects by performing image pre-processing from the image of FIG. 2B .
  • the image patch may be generated by a detection or segmentation technique using a neural network model.
  • the detection technique may be a technique of using a neural network to determine where an object corresponding to a specific class is included in a given image.
  • the segmentation technique may be a technique of using a neural network to segment, pixel by pixel, an object corresponding to a specific class in a given image.
  • after a reference point for generating the image patch is set, the image patch may be generated with a preset size based on that reference point.
  • FIG. 6 is a flowchart illustrating a process in which a processor extracts an azimuth direction centerline according to some embodiments of the present disclosure.
  • the processor 110 may recognize a plurality of pixel columns or a plurality of pixel rows parallel to the azimuth direction (S210).
  • the radar (or antenna) is attached parallel to the body of the satellite, and in this case the azimuth direction may mean the straight direction in which the satellite carrying the radar flies.
  • the processor 110 may calculate an average signal intensity of each of the plurality of pixel columns or each of the plurality of pixel rows (S220).
  • the processor 110 may recognize a pixel row having the highest signal strength among the plurality of pixel rows.
  • the processor 110 may determine a corresponding pixel row as a center line.
  • the center line may be a row of pixels in which the height point 210, the first floor point 220, and the floating roof point 230 are located.
  • the processor 110 may determine a pixel column or pixel row having the highest average signal strength as the center line (S230).
  • the processor 110 may obtain the diameter of the object based on the azimuth brightness graph 300a.
  • in the azimuth direction brightness graph 300a, the first azimuth direction point 340, the second azimuth direction point 350, and the azimuth direction center point 360 may be found.
  • the first azimuth direction point 340 and the second azimuth direction point 350 are points forming a plateau in the azimuth direction brightness graph 300a, and the azimuth direction center point 360 may be the point with the strongest brightness within the azimuth direction brightness graph 300a.
  • the processor 110 may recognize points where the signal strength is equal to or greater than a preset value and where the rate of change of the signal strength with respect to the azimuth direction is the largest and the smallest. Subsequently, the processor 110 may determine the point with the smallest rate of change as the first azimuth direction point 340 and the point with the largest rate of change as the second azimuth direction point 350.
  • the first azimuth point 340 corresponds to a first radial point 240 on the image patch
  • the second azimuth point 350 corresponds to a second radial point 250 on the image patch.
  • the processor 110 may calculate the diameter of the object included in the image patch based on the pixel distance in the azimuth direction between the first azimuth direction point 340 and the second azimuth direction point 350.
  • alternatively, the processor 110 may calculate the diameter of the object included in the image patch based on the pixel distance between the azimuth direction center point 360 and whichever of the first azimuth direction point 340 and the second azimuth direction point 350 has the stronger brightness.
  • the processor 110 may calculate the radius of the object included in the image patch based on the method presented in Equation 2 above.
  • the method of calculating the diameter of the object using the signal strength of the SAR image is not limited thereto.
  • FIG. 7 is a flowchart illustrating an example in which a processor calculates a volume of an object according to some embodiments of the present disclosure.
  • the processor 110 may determine the area based on the azimuth direction point (S410).
  • the azimuth direction points include a first azimuth direction point 340 and a second azimuth direction point 350.
  • in the azimuth direction brightness graph 300a, the first azimuth direction point 340 and the second azimuth direction point 350 may be found.
  • the first azimuth direction point 340 and the second azimuth direction point 350 are points forming a plateau in the azimuth direction brightness graph 300a, and the azimuth direction center point 360 may be the point with the strongest brightness within the azimuth direction brightness graph 300a.
  • the processor 110 may recognize points where the signal strength is equal to or greater than a preset value and where the rate of change of the signal strength with respect to the azimuth direction is the largest and the smallest. Then, the processor 110 may determine the point with the smallest rate of change as the first azimuth direction point 340 and the point with the largest rate of change as the second azimuth direction point 350.
  • the first azimuth point 340 corresponds to a first radial point 240 on the image patch
  • the second azimuth point 350 corresponds to a second radial point 250 on the image patch.
  • the processor 110 may calculate the diameter of the object included in the image patch based on the pixel distance in the azimuth direction between the first azimuth direction point 340 and the second azimuth direction point 350.
  • the processor 110 may calculate the area of the object based on the calculated diameter of the object (and the radius derived from it).
  • the height of the object can be obtained using the length of the layover of the building and the angle of incidence of the satellite.
  • Equation 1 represents an equation for estimating the height of an object using the length of the layover.
  • the length of the layover can be obtained as the product of the number of pixels in the layover and the pixel spacing.
  • the same can be applied to floating roofs.
  • the floating roof point 230 may be the point at which the height of the roof of the object is measured when the object has a floating roof, that is, a roof that moves dynamically with the level of the contents stored in the object, independently of the object's overall height.
  • the floating roof point may be one of the points located on the centerline in the floating roof section of the object.
  • the processor 110 may calculate the floating roof height based on the layover for the floating roof (S420).
  • the height of the floating roof is as presented in Equation 4 above.
  • FIG. 8 is a flowchart illustrating an example in which a processor calculates a volume of an object according to some embodiments of the present disclosure.
  • the processor 110 may calculate a second floor point based on the first floor point (S421).
  • the first floor point may be a floor point of an outer wall of an object located at a point close to the radar.
  • the second floor point may be a point opposite to the first floor point on the object or a floor point vertically connected to the floating roof.
  • the processor 110 may calculate a layover length based on the second floor point and the floating roof point (S422).
  • the layover may correspond to the calculated distance between the second floor point and the floating roof point 230. This is because the second floor point and the floating roof point 230 are located at substantially the same position on the ground surface but appear at different pixels on the two-dimensional image patch due to the characteristics of the layover described above.
  • the processor 110 may calculate the floating roof height based on the corresponding layover length and the pixel spacing.
  • the accuracy of the method for calculating the volume information of an object according to the present disclosure may be evaluated through comparison with volume measurement results using existing optical images. For objects photographed at the same time, there was an average difference of 0.83 m between the height of the object estimated using a panchromatic (PAN) image and the height of the object measured using the SAR image according to the present disclosure, which corresponds to a relative error of 7.43%.
  • the above results may mean that the new measurement method using the SAR image has a sufficient degree of accuracy to track the trends of the height and volume of an object having a floating roof.
  • the object volume measurement method according to the present disclosure using the SAR image is free from the influence of weather conditions. Furthermore, it can be seen that the method sufficiently overcomes the image distortion that is characteristic of SAR images.
  • FIG. 9 shows a simplified, general schematic diagram of an example computing environment in which some embodiments of the present disclosure may be implemented.
  • the computer 1102 illustrated in FIG. 9 may correspond to the computing device 100 .
  • modules herein include routines, procedures, programs, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • the described embodiments of the present disclosure may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in both local and remote memory storage devices.
  • Computers typically include a variety of computer-readable media.
  • media accessible by a computer include volatile and nonvolatile media, transitory and non-transitory media, and removable and non-removable media.
  • computer-readable media may include computer-readable storage media and computer-readable transmission media.
  • computer-readable storage media include volatile and non-volatile media, transitory and non-transitory media, and removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data.
  • a computer-readable storage medium may be RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital video disk (DVD) or other optical disk storage device, a magnetic cassette, magnetic tape, a magnetic disk storage device or other magnetic storage device, or any other medium that can be accessed by a computer and used to store the desired information.
  • a computer-readable transmission medium typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery medium.
  • a modulated data signal means a signal in which one or more of its characteristics are set or changed so as to encode information in the signal.
  • computer-readable transmission media include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above are also intended to be included within the scope of computer-readable transmission media.
  • An example environment 1100 implementing various aspects of the disclosure is shown to include a computer 1102, which includes a processing unit 1104, a system memory 1106, and a system bus 1108.
  • the system bus 1108 couples system components, including but not limited to system memory 1106 , to the processing device 1104 .
  • the processing device 1104 may be any of a variety of commercially available processors. Dual processor and other multiprocessor architectures may also be used as processing unit 1104 .
  • the system bus 1108 may be any of several types of bus structures that may further be interconnected to a memory bus, a peripheral bus, and a local bus using any of a variety of commercial bus architectures.
  • System memory 1106 includes read-only memory (ROM) 1110 and random access memory (RAM) 1112.
  • a basic input/output system (BIOS) is stored in non-volatile memory 1110 such as ROM, EPROM, or EEPROM; the BIOS contains routines that help transfer information between components within the computer 1102, such as during startup.
  • RAM 1112 may also include high-speed RAM, such as static RAM, for caching data.
  • the computer 1102 may also include an internal hard disk drive (HDD) 1114 (e.g., EIDE, SATA), which may also be configured for external use in a suitable chassis (not shown), a magnetic floppy disk drive (FDD) 1116, and an optical disk drive 1120 (e.g., for reading a CD-ROM disk).
  • the hard disk drive 1114, the magnetic disk drive 1116, and the optical disk drive 1120 are connected to the system bus 1108 by the hard disk drive interface 1124, the magnetic disk drive interface 1126, and the optical drive interface 1128, respectively.
  • the interface 1124 for external drive implementation includes, for example, at least one or both of Universal Serial Bus (USB) and IEEE 1394 interface technologies.
  • drives and their associated computer-readable media provide non-volatile storage of data, data structures, computer-executable instructions, and the like.
  • the drives and media accommodate the storage of any data in a suitable digital format.
  • although the description of computer-readable storage media above refers to HDDs, removable magnetic disks, and removable optical media such as CDs or DVDs, those skilled in the art will appreciate that other types of tangible computer-readable storage media, such as zip drives, magnetic cassettes, flash memory cards, and cartridges, may also be used in the exemplary operating environment, and that any such media may include computer-executable instructions for performing the methods of the present disclosure.
  • a number of program modules may be stored in the drives and RAM 1112, including an operating system 1130, one or more application programs 1132, other program modules 1134, and program data 1136. All or portions of the operating system, applications, modules, and/or data may also be cached in RAM 1112. It will be appreciated that the present disclosure may be implemented in various commercially available operating systems or combinations of operating systems.
  • a user may enter commands and information into the computer 1102 via one or more wired/wireless input devices, for example, a keyboard 1138 and a pointing device such as a mouse 1140.
  • Other input devices may include a microphone, IR remote control, joystick, game pad, stylus pen, touch screen, and the like.
  • these and other input devices are often connected to the processing unit 1104 through an input device interface 1142 that is coupled to the system bus 1108, but may be connected by other interfaces, such as a parallel port, an IEEE 1394 serial port, a game port, a USB port, or an IR interface.
  • a monitor 1144 or other type of display device is also coupled to the system bus 1108 via an interface, such as a video adapter 1146 .
  • the computer typically includes other peripheral output devices (not shown), such as speakers, printers, and the like.
  • Computer 1102 may operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 1148 via wired and/or wireless communications.
  • Remote computer(s) 1148 may be workstations, server computers, routers, personal computers, portable computers, microprocessor-based entertainment devices, peer devices, or other common network nodes, and typically include many or all of the components described with respect to the computer 1102, although only a memory storage device 1150 is shown for simplicity.
  • the logical connections shown include wired/wireless connections to a local area network (LAN) 1152 and/or a larger network, eg, a wide area network (WAN) 1154 .
  • LAN and WAN networking environments are common in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which can be connected to a worldwide computer network, for example, the Internet.
  • when used in a LAN networking environment, the computer 1102 is connected to the local network 1152 through a wired and/or wireless communication network interface or adapter 1156.
  • Adapter 1156 may facilitate wired or wireless communication to the LAN 1152, which may also include a wireless access point for communicating with the wireless adapter 1156.
  • when used in a WAN networking environment, the computer 1102 may include a modem 1158, be connected to a communication server on the WAN 1154, or have other means of establishing communications over the WAN 1154, such as via the Internet.
  • the modem 1158, which may be internal or external and a wired or wireless device, is coupled to the system bus 1108 via the serial port interface 1142.
  • program modules described for computer 1102 may be stored in remote memory/storage device 1150 . It will be appreciated that the network connections shown are exemplary and other means of establishing a communication link between the computers may be used.
  • Computer 1102 is operable to communicate with any wireless devices or entities deployed in wireless communication, for example, a printer, a scanner, a desktop and/or portable computer, a portable data assistant (PDA), a communications satellite, any piece of equipment or location associated with a wirelessly detectable tag, and a telephone. This includes at least Wi-Fi and Bluetooth wireless technologies. Accordingly, the communication may have a predefined structure as in a conventional network, or may simply be an ad hoc communication between at least two devices.
  • Wi-Fi (Wireless Fidelity) is a wireless technology, like that of a cell phone, that allows devices such as computers to transmit and receive data indoors and outdoors, that is, anywhere within range of a base station.
  • Wi-Fi networks use a radio technology called IEEE 802.11 (a, b, g, etc.) to provide secure, reliable, and high-speed wireless connections.
  • Wi-Fi can be used to connect computers to each other, to the Internet, and to wired networks (using IEEE 802.3 or Ethernet).
  • Wi-Fi networks may operate in the unlicensed 2.4 and 5 GHz radio bands, for example, at an 11 Mbps (802.11b) or 54 Mbps (802.11a) data rate, or in products that include both bands (dual band).
  • the various embodiments presented herein may be implemented as methods, apparatus, or articles of manufacture using standard programming and/or engineering techniques.
  • the term "article of manufacture" includes a computer program, a carrier, or media accessible from any computer-readable device.
  • computer-readable storage media include magnetic storage devices (e.g., hard disks, floppy disks, magnetic strips), optical disks (e.g., CDs, DVDs), smart cards, and flash memory devices (e.g., EEPROMs, cards, sticks, key drives).
  • machine-readable medium includes, but is not limited to, wireless channels and various other media capable of storing, retaining, and/or carrying instruction(s) and/or data.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Image Processing (AREA)

Abstract

According to one of its embodiments, the disclosure relates to a computer program stored on a computer-readable storage medium. The computer program comprises instructions for causing a processor to execute the steps below, which may comprise: generating, from input data, an image patch comprising an object to be measured; extracting a center line of an azimuth direction based on an average signal intensity of each of a plurality of pixel columns or each of a plurality of pixel rows of the image patch; extracting one or more analysis points based on the signal intensities of a plurality of pixels located on the extracted center line; and calculating, based on the extracted analysis point(s), the volume of the object to be measured.
PCT/KR2020/008830 2020-03-26 2020-07-07 Method for calculating volume information of an object included in a satellite image signal WO2021194018A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2020-0036893 2020-03-26
KR1020200036893A KR102309251B1 (ko) 2020-03-26 Method for calculating volume information of an object included in a satellite image signal

Publications (1)

Publication Number Publication Date
WO2021194018A1 (fr)

Family

ID=77892691

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/008830 WO2021194018A1 (fr) 2020-03-26 2020-07-07 Method for calculating volume information of an object included in a satellite image signal

Country Status (2)

Country Link
KR (2) KR102309251B1 (fr)
WO (1) WO2021194018A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102598888B1 (ko) * 2023-03-15 2023-11-06 한화시스템 주식회사 원유 저장량 예측장치 및 예측방법


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5632173B2 (ja) * 2010-03-10 2014-11-26 一般財団法人宇宙システム開発利用推進機構 Sarデータ処理方法及びsarデータ処理システム
KR101622748B1 (ko) * 2014-03-31 2016-05-20 한국과학기술원 입력 이미지로부터 객체를 검출하기 위한 방법, 장치 및 컴퓨터 판독가능 기록매체
WO2019215819A1 (fr) * 2018-05-08 2019-11-14 日本電気株式会社 Système d'analyse d'image radar à ouverture synthétique, procédé d'analyse d'image radar à ouverture synthétique et programme d'analyse d'image radar à ouverture synthétique
KR102067242B1 (ko) * 2018-05-23 2020-01-16 한국해양과학기술원 인공위성 sar 영상기반 인공신경망을 이용한 오일 유출 탐지 방법

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101489984B1 (ko) * 2007-10-23 2015-02-04 이스라엘 에어로스페이스 인더스트리즈 리미티드 스테레오-영상 정합 및 변화 검출 시스템 및 방법
US10222178B1 (en) * 2011-04-13 2019-03-05 Litel Instruments Precision geographic location system and method utilizing an image product
US9417323B2 (en) * 2012-11-07 2016-08-16 Neva Ridge Technologies SAR point cloud generation system
US10388006B2 (en) * 2017-05-31 2019-08-20 Institut National D'optique Synthetic aperture imaging assisted by three-dimensional scanning imaging for height reconstruction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KANG AH-REUM, LEE SEUNG-KUK, KIM SANG-WAN: "Urban Area Building Reconstruction Using High Resolution SAR Image", KOREAN JOURNAL OF REMOTE SENSING, vol. 29, no. 4, 30 August 2013 (2013-08-30), pages 361 - 373, XP055852766, ISSN: 1225-6161, DOI: 10.7780/kjrs.2013.29.4.2 *

Also Published As

Publication number Publication date
KR102309251B1 (ko) 2021-10-07
KR20210124120A (ko) 2021-10-14
KR102583627B1 (ko) 2023-09-27

Similar Documents

Publication Publication Date Title
WO2020175786A1 Methods and apparatuses for object presence detection and distance estimation
WO2016122042A9 System and method for automatic river detection using a combination of satellite images and a random forest classifier
WO2020138745A1 Image processing method and apparatus, electronic device, and computer-readable storage medium
WO2019066470A1 Method and apparatus for analyzing a communication environment in a wireless communication system
WO2018038441A1 Electronic device and operating method therefor
WO2020054973A1 Electronic device for detecting an external object using an antenna array and operating method therefor
EP3669181A1 Visual inspection management method and visual inspection system
WO2022025706A1 Angle of arrival capability in electronic devices
EP3649460A1 Apparatus for optimizing the inspection of the exterior of a target object and method therefor
WO2021194018A1 Method for calculating volume information of an object included in a satellite image signal
WO2019231042A1 Biometric authentication device
WO2022039560A1 Angle of arrival capability in electronic devices with motion sensor fusion
WO2019135475A1 Electronic apparatus and control method therefor
WO2022139461A1 Three-dimensional angle of arrival capability in electronic devices
WO2016017906A1 Display device, display correction device, display correction system, and display correction method
WO2017082607A1 Electronic device and operating method thereof
WO2023013928A1 Radar-based radio frequency exposure estimation for mobile devices
WO2011087249A2 Object recognition system and object recognition method using same
WO2020256458A1 Electronic device for determining location information of an external device
EP3335155A1 Electronic device and operating method thereof
EP3949363A1 Electronic device and method for acquiring biometric information using display light
WO2022103195A1 Robot system
WO2020009335A1 Method, storage medium, and electronic device for wireless network design
WO2020091253A1 Electronic device and method for controlling an electronic device
WO2011068315A2 Apparatus for selecting an optimal database using a maximum conceptual strength recognition technique, and method therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20927597

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20927597

Country of ref document: EP

Kind code of ref document: A1