WO2018198212A1 - Information processing device, information processing method, and computer-readable storage medium - Google Patents

Information processing device, information processing method, and computer-readable storage medium

Info

Publication number
WO2018198212A1
Authority
WO
WIPO (PCT)
Prior art keywords
display mode
point
candidate point
candidate
image
Prior art date
Application number
PCT/JP2017/016451
Other languages
French (fr)
Japanese (ja)
Inventor
Kenta Senzaki (先崎 健太)
Original Assignee
NEC Corporation (日本電気株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corporation (日本電気株式会社)
Priority to PCT/JP2017/016451
Publication of WO2018198212A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S13/9021SAR image post-processing techniques
    • G01S13/9027Pattern recognition for feature extraction

Definitions

  • This disclosure relates to processing of data acquired by a radar.
  • Synthetic Aperture Radar (SAR) is one of the technologies that observe the state of the earth's surface by irradiating it with electromagnetic waves from above and acquiring the intensity of the reflected electromagnetic waves (hereinafter also referred to as "reflected waves"). Based on the data acquired by the SAR, a two-dimensional map of the intensity of the reflected waves (hereinafter, "SAR image") can be generated.
  • that is, the SAR image is a map in which each reflected wave is regarded as a reflected wave from a defined reference plane (for example, the ground surface) and the intensity of the reflected wave is represented on a plane representing that reference plane.
  • the position at which the intensity of a reflected wave is represented in the SAR image is based on the distance between the position where the reflected wave was generated and the position of the antenna that received it. Therefore, the intensity of a reflected wave from a position away from the reference plane is represented in the SAR image at a position shifted toward the radar, relative to the actual position, according to the height from the reference plane.
  • consequently, the image formed in the SAR image by reflected waves from an object whose shape is not flat is an image in which the shape of the actual object is distorted.
  • the phenomenon in which such a distorted image is generated is called foreshortening.
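As a numerical illustration of the shift described above (all values below are assumed for illustration and are not part of the disclosure), the following Python sketch places a return from an elevated point at the ground range of the reference-plane point that has the same slant range:

```python
import math

# Hypothetical geometry: a side-looking radar at altitude H observes a point
# at ground range x. The SAR image places each return at the ground range of
# the reference-plane point with the same slant range.
H = 5000.0   # radar altitude above the reference plane (m), assumed
x = 3000.0   # true ground range of the reflecting point (m), assumed
h = 100.0    # height of the reflecting point above the reference plane (m)

# Slant range of the elevated point (radar-to-point distance).
slant = math.sqrt((H - h) ** 2 + x ** 2)

# Ground range at which the SAR image represents this return: the point on
# the reference plane having the same slant range.
x_apparent = math.sqrt(slant ** 2 - H ** 2)

shift = x - x_apparent  # displacement toward the radar (m)
print(f"true ground range: {x:.1f} m, imaged at: {x_apparent:.1f} m")
print(f"shift toward radar: {shift:.1f} m")
```

The elevated point is imaged closer to the radar than its true ground range, which is exactly the foreshortening displacement described above.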
  • Patent Documents 1 and 2 disclose apparatuses that perform correction processing called ortho correction in order to correct foreshortening.
  • Patent Document 3 discloses a technique for correcting not only foreshortening but also a phenomenon called layover.
  • layover is a phenomenon in which a reflected-wave signal from a position at a certain height and a reflected-wave signal from a different position overlap at the same point in the SAR image.
  • Patent Document 4, a document related to the present disclosure, contains a description of occlusion areas in images photographed by a camera.
  • the ortho correction as disclosed in Patent Documents 1 and 2 is not assumed to be performed on a SAR image in which a layover has occurred.
  • the ortho correction is a correction in which the position of a point where distortion occurs in the SAR image is shifted to a position estimated as a true position where a signal (reflected wave) represented at the point is generated.
  • the ortho correction is a correction that is performed on the assumption that there is one position candidate that is estimated as the true position where the reflected wave is emitted at the point to be corrected.
  • Patent Document 3 discloses a method for correcting layover, but this method requires a plurality of SAR images with different patterns of distortion. In the absence of such supplemental information, it is in principle impossible to distinguish, in a single SAR image, the reflected waves from two or more points that contribute to the signal at a point in a region where layover occurs.
  • in other words, if the candidate points that contribute to the signal at a certain point in the SAR image are not narrowed down, a person customarily estimates the contributing points, based on experience and various information, while viewing the SAR image and an optical image.
  • the images used in the present invention are not limited to SAR images; they may be images obtained by other methods that estimate the state of an object by observing the reflection of electromagnetic waves, such as images based on RAR (Real Aperture Radar).
  • An information processing apparatus includes: candidate point extraction means for extracting candidate points, which are points that contribute to the signal at a target point specified in an intensity map of signals from an observed object acquired by a radar, based on the position of the target point in a three-dimensional space and the shape of the observed object; display mode determination means for determining a display mode of a display indicating the positions of the candidate points in a spatial image in which the observed object is captured, based on the positions of the candidate points in the three-dimensional space and the imaging conditions of the spatial image; and image generation means for generating an image in which the positions of the candidate points in the spatial image are displayed according to the determined display mode.
  • An information processing method includes: extracting candidate points, which are points that contribute to the signal at a target point specified in an intensity map of signals from an observed object acquired by a radar, based on the position of the target point in a three-dimensional space and the shape of the observed object; determining a display mode of a display indicating the positions of the candidate points in a spatial image in which the observed object is captured, based on the positions of the candidate points in the three-dimensional space and the imaging conditions of the spatial image; and generating an image in which the positions of the candidate points in the spatial image are displayed according to the determined display mode.
  • a program causes a computer to function as: candidate point extraction means for extracting candidate points, which are points that contribute to the signal at a target point specified in an intensity map of signals from an observed object acquired by a radar, based on the position of the target point in a three-dimensional space and the shape of the observed object; display mode determination means for determining a display mode of a display indicating the positions of the candidate points in a spatial image in which the observed object is captured, based on the positions of the candidate points in the three-dimensional space and the imaging conditions of the spatial image; and image generation means for generating an image in which the positions of the candidate points are displayed according to the determined display mode.
  • the program is stored in, for example, a computer-readable non-volatile storage medium.
  • according to the present invention, in an intensity map of signals from an observed object acquired by a radar, it becomes easy to understand which points on the observed object contribute to the signal at a point in a region where layover occurs.
  • FIG. 1 is a diagram for explaining layover.
  • in FIG. 1, an observation device S0 that performs observation by SAR and a structure M that exists in the observed range are shown.
  • the observation device S0 is, for example, a satellite or an aircraft equipped with the radar.
  • the observation device S0 transmits electromagnetic waves with the radar and receives the reflected waves while moving through the sky.
  • the arrows indicate the traveling direction of the observation device S0, that is, the traveling direction of the radar (also referred to as the azimuth direction).
  • electromagnetic waves emitted from the observation device S0 are reflected and back-scattered by the ground surface and the structure M on the ground, and part of the reflected waves returns to the radar and is received.
  • from this observation, the distance between the position of the observation device S0 and the electromagnetic wave reflection point on the structure M is specified.
  • a point Qa is a point on the ground surface, and a point Qb is a point on the surface of the structure M away from the ground surface.
  • the distance between the observation device S0 and the point Qa is equal to the distance between the observation device S0 and the point Qb.
  • the straight line connecting the point Qb and the point Qa is perpendicular to the traveling direction of the radar.
  • in this case, the reflected wave from the point Qa and the reflected wave from the point Qb cannot be distinguished by the observation device S0. That is, the intensity of the reflected wave from the point Qa and the intensity of the reflected wave from the point Qb are observed intermingled.
  • FIG. 2 shows an example of a SAR image (an image representing the intensity distribution of the reflected waves) generated in such a case.
  • the arrow indicates the traveling direction of the radar.
  • the SAR image is generated based on the intensity of the reflected wave received by the radar and the distance between the point where the reflected wave is emitted and the radar.
  • reflected waves from two or more points that are on the plane perpendicular to the traveling direction of the radar including the position of the radar and are equal in distance from the radar are not distinguished.
  • Point P is a point reflecting the intensity of the reflected wave from the point Qa; the intensity indicated at this point P also reflects the intensity of the reflected wave from the point Qb.
  • a white area including the point P is an area where a layover has occurred.
  • an area painted black represents an area that is shaded from the radar by the structure M. This region is also called a radar shadow.
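The layover condition of FIG. 1 and FIG. 2 can be sketched numerically (the cross-section geometry below is assumed for illustration): a ground point Qa and an elevated point Qb with equal slant range cannot be separated by the radar, so their intensities are summed at the same image position.

```python
import math

# Hypothetical cross-section (the plane perpendicular to the azimuth
# direction): the radar S0, a ground point Qa, and an elevated point Qb on a
# structure. Coordinates are (ground range, height) in metres, all assumed.
S0 = (0.0, 5000.0)
Qa = (2830.2, 0.0)     # point on the reference plane

def slant_range(radar, point):
    """Distance between the radar and a reflecting point."""
    return math.hypot(point[0] - radar[0], point[1] - radar[1])

# Choose Qb at ground range 3000 m with the height that makes its slant
# range equal to Qa's -- the layover condition.
R = slant_range(S0, Qa)
xb = 3000.0
hb = S0[1] - math.sqrt(R ** 2 - xb ** 2)
Qb = (xb, hb)

# Both returns have the same slant range, so the SAR image cannot separate
# them: their intensities are observed intermingled at the same point P.
print(f"slant range to Qa: {slant_range(S0, Qa):.1f} m")
print(f"slant range to Qb: {slant_range(S0, Qb):.1f} m")
```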
  • a reference three-dimensional space is defined in the processing performed by the information processing apparatus 11.
  • a three-dimensional coordinate system is defined for the reference three-dimensional space.
  • this three-dimensional coordinate system is referred to as a reference three-dimensional coordinate system or a reference coordinate system.
  • the reference coordinate system may be, for example, a geodetic system or a coordinate system of model data 1113 that is three-dimensional data described later.
  • in the following description, when an arbitrary point described under a first coordinate system can also be described under a second coordinate system, it is written that the first coordinate system is related to the second coordinate system.
  • FIG. 3 is a block diagram showing a configuration of the information processing apparatus 11 according to the first embodiment.
  • the information processing apparatus 11 includes a storage unit 111, a feature point extraction unit 112, a geocoding unit 113, a candidate point extraction unit 114, a display mode determination unit 115, an image generation unit 116, and a display control unit 117.
  • the storage unit 111, the feature point extraction unit 112, the geocoding unit 113, the candidate point extraction unit 114, the display mode determination unit 115, and the image generation unit 116 are connected so as to be able to transmit / receive data to / from each other.
  • data transmission between each unit of the information processing apparatus 11 may be performed directly via a signal line, or may be performed by reading and writing to a shared storage area (for example, the storage unit 111).
  • in the following description, the movement of data is described by the words "send data" and "receive data", but the method of transmitting data is not limited to direct transmission.
  • the information processing apparatus 11 is connected to the display device 21 so as to be communicable.
  • the storage unit 111 stores SAR data 1111, SAR data parameters 1112, model data 1113, an aerial image 1114, and shooting condition information 1115.
  • SAR data 1111 is data obtained by observation using SAR.
  • Targets observed by the SAR (hereinafter also referred to as “observed object”) are, for example, the ground surface and buildings.
  • the SAR data 1111 is data from which at least a SAR image represented under a coordinate system related to the reference coordinate system can be generated.
  • the SAR data 1111 includes an observation value and information associated with the observation value.
  • the observed value is, for example, the intensity of the observed reflected wave.
  • the information associated with an observation value is, for example, information such as the position and traveling direction of the radar that observed the reflected wave, and the distance between the reflection point and the radar derived from the observation of the reflected wave.
  • the SAR data 1111 may include information on the depression angle of the radar (the elevation angle of the radar viewed from the reflection point) with respect to the object to be observed.
  • the information regarding the position is described by, for example, a combination of longitude, latitude, and altitude in the geodetic system.
  • the SAR data 1111 may be a SAR image itself.
  • in the present embodiment, observation data acquired by SAR is assumed as the data to be used, but data of observation results by, for example, RAR (Real Aperture Radar) may be used instead.
  • the SAR data parameter 1112 is a parameter indicating the relationship between the data included in the SAR data 1111 and the reference coordinate system.
  • the SAR data parameter 1112 is a parameter for assigning a position in the reference coordinate system to the observation value included in the SAR data 1111.
  • in other words, the SAR data parameter 1112 is a parameter for converting the information included in the SAR data 1111 into information described under the reference coordinate system.
  • the coordinate system of the SAR image is related to the reference coordinate system by the SAR data parameter 1112. That is, an arbitrary point in the SAR image is associated with one point in the reference coordinate system.
  • the model data 1113 is data representing the shape of an object in three dimensions, such as topography and building structure.
  • the model data 1113 is, for example, DEM (Digital Elevation Model; digital elevation model).
  • the model data 1113 may be DSM (Digital Surface Model) that is data of the earth surface including the structure, or DTM (Digital Terrain Model) that is data of the shape of the ground surface.
  • the model data 1113 may have DTM and three-dimensional data of a structure separately.
  • the coordinate system used for the model data 1113 is related to the reference coordinate system. That is, an arbitrary point in the model data 1113 can be described by coordinates in the reference coordinate system.
  • the spatial image 1114 is an image in which a space including the object observed by the SAR is captured.
  • the spatial image 1114 may be, for example, any of optical images such as satellite photographs and aerial photographs, maps, topographic maps, and CG (Computer Graphics) images representing the topography.
  • the spatial image 1114 may be a projection view of the model data 1113.
  • that is, the spatial image 1114 is an image in which the geographical shape and arrangement of objects in the space represented by the spatial image 1114 are easy for the user of the information processing apparatus 11 (that is, a person who views the image output by the information processing apparatus 11) to understand intuitively.
  • the spatial image 1114 may be captured from outside the information processing apparatus 11 or may be generated by projecting the model data 1113 by the image generation unit 116 described later.
  • Shooting condition information 1115 is information related to shooting conditions (capturing conditions) of the spatial image 1114.
  • the imaging conditions of the spatial image 1114 describe how the spatial image 1114 was acquired.
  • the shooting condition information 1115 is information that can uniquely specify the shooting range of the spatial image 1114.
  • the shooting condition information 1115 is represented by a plurality of parameter values related to the shooting range of the spatial image 1114, for example.
  • in the following description, it is assumed that the spatial image is an image captured from a specific position, and the subject that performed the capturing (for example, an imaging device such as a camera) is referred to as the photographing body.
  • when the spatial image 1114 is an image obtained without an actual photographing process by a device, such as when it is generated by projecting the model data 1113, the photographing body may be assumed virtually.
  • the photographing condition information 1115 is described by, for example, the position of the photographing body and information indicating its photographing range.
  • for example, the imaging condition information 1115 may be described by the coordinates, in the reference coordinate system, of the photographing body and the four coordinates, in the reference coordinate system, of the points captured at the four corners of the spatial image 1114.
  • in this case, the shooting range is the area surrounded by the four half-lines extending from the position of the photographing body through those four coordinates.
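One possible way to test whether a point falls inside such a shooting range is to check that it lies on the inner side of the four faces of the pyramid spanned by the corner rays. The following sketch uses an assumed camera position and corner coordinates; it is an illustration under those assumptions, not the disclosed implementation:

```python
import numpy as np

# Assumed photographing-body position S and the four corner points C0..C3
# (reference-coordinate values, ordered around the image).
S = np.array([0.0, 0.0, 500.0])
corners = [np.array(c, dtype=float) for c in
           [(-100, -100, 0), (100, -100, 0), (100, 100, 0), (-100, 100, 0)]]

def in_shooting_range(p):
    """True if point p lies inside the pyramid bounded by the four
    half-lines from S through the corner points."""
    p = np.asarray(p, dtype=float)
    for i in range(4):
        a = corners[i] - S                 # ray to one corner
        b = corners[(i + 1) % 4] - S       # ray to the adjacent corner
        n = np.cross(b, a)                 # inward normal for this ordering
        if np.dot(n, p - S) < 0:           # outside this face
            return False
    return True

print(in_shooting_range((0, 0, 100)))    # on the camera axis
print(in_shooting_range((500, 0, 100)))  # far outside the range
```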
  • the position of the photographing body is, strictly speaking, the position of the viewpoint of the photographing body with respect to the spatial image 1114, but in practice the information on the position of the photographing body used by the display mode determination unit 115 need not be exact.
  • for example, as the information indicating the position of the photographing body, the display mode determination unit 115 may use position information acquired by a device having a GPS (Global Positioning System) function mounted on the vehicle (aircraft, artificial satellite, etc.) carrying the photographing body.
  • the information indicating the position in the shooting condition information 1115 is given by, for example, a set of values of parameters (for example, longitude, latitude, and altitude) in the reference coordinate system.
  • the position in the reference three-dimensional space of any point included in the range of the space included in the spatial image 1114 can be uniquely specified by the shooting condition information 1115.
  • conversely, the position in the spatial image 1114 of such a point can be uniquely identified based on the shooting condition information 1115.
  • Each parameter of the imaging condition information 1115 may be a parameter of a coordinate system different from the reference coordinate system. In that case, the imaging condition information 1115 only needs to include a conversion parameter for converting the parameter value in the coordinate system to the parameter value in the reference coordinate system.
  • the photographing condition information 1115 may be described by, for example, the position, posture, and angle of view of the photographing body.
  • the posture of the photographing object can be described by a photographing direction, that is, an optical axis direction of the photographing object at the time of photographing, and a parameter indicating a relationship between a vertical direction of the spatial image 1114 and a reference coordinate system.
  • the angle of view can be described by parameters indicating a vertical viewing angle and a horizontal viewing angle.
  • alternatively, the information indicating the position of the photographing body may be described by the values of parameters indicating the direction of the photographing body as viewed from the subject. For example, it may be a set of an azimuth and an elevation angle.
  • the shooting condition information 1115 may include shooting time information.
  • the storage unit 111 does not always need to hold data in the information processing apparatus 11.
  • the storage unit 111 may record data on a device or a recording medium outside the information processing apparatus 11 and acquire the data as necessary. That is, the storage unit 111 only needs to be configured to acquire data requested by each unit in the processing of each unit of the information processing apparatus 11 described below.
  • a feature point is a point extracted by a predetermined method from the points in the SAR data 1111 whose signal intensity is not zero. That is, the feature point extraction unit 112 extracts one or more points from the SAR data 1111 by a predetermined point extraction method.
  • the points extracted from the SAR data 1111 are a data group related to one point in the SAR image (for example, a set of an observation value and information associated with the observation value).
  • the feature point extraction unit 112 extracts feature points by, for example, a method of extracting points that may give useful information in the analysis of the SAR data 1111.
  • the feature point extraction unit 112 may extract points by a technique called PS-InSAR (Permanent Scatterers Interferometric SAR).
  • PS-InSAR is a technique for extracting a point where a change in signal intensity is observed based on a phase shift from a plurality of SAR images.
  • the feature point extraction unit 112 may extract a point that satisfies a predetermined condition (for example, the signal intensity exceeds a predetermined threshold) as the feature point.
  • This predetermined condition may be set by a user or a designer of the information processing apparatus 11, for example.
  • the feature point extraction unit 112 may extract points selected by human judgment as feature points.
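A minimal sketch of the threshold-based extraction mentioned above (the array values and the threshold are assumed for illustration):

```python
import numpy as np

# Treat the SAR data as a 2-D intensity map and extract every pixel whose
# intensity exceeds a predetermined threshold. Values are assumed.
sar_intensity = np.array([
    [0.1, 0.2, 0.9, 0.1],
    [0.0, 0.8, 0.3, 0.1],
    [0.1, 0.1, 0.2, 0.7],
])
threshold = 0.5

# Feature points as (row, column) indices in the SAR image.
rows, cols = np.nonzero(sar_intensity > threshold)
feature_points = list(zip(rows.tolist(), cols.tolist()))
print(feature_points)  # → [(0, 2), (1, 1), (2, 3)]
```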
  • the feature point extraction unit 112 sends the extracted feature point information to the geocoding unit 113.
  • the feature point information includes at least information capable of specifying coordinates in the reference coordinate system.
  • the feature point information is represented by, for example, the position and traveling direction of the observation device that acquired the SAR data in the range including the feature point, and the distance between the observation device and the signal reflection point of the feature point.
  • the geocoding unit 113 converts this information, based on the SAR data parameter 1112, into information represented by the position, traveling direction, and distance of the observation device in the reference coordinate system. Then, the geocoding unit 113 identifies the point (coordinates) in the reference coordinate system that satisfies both of the following conditions: the distance between the point and the position of the observation device is the distance indicated by the feature point information; and the point is included in a plane perpendicular to the traveling direction of the observation device.
  • the coordinates of the identified point are the coordinates in the reference coordinate system of the feature point indicated by the feature point information.
  • the geocoding unit 113 assigns the coordinates of the points specified in this way to the feature points indicated by the feature point information.
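Under simplifying assumptions (a horizontal travel direction, the reference plane z = 0, and a feature point lying on the reference plane), the two conditions above combined with the reference-plane constraint determine the coordinates up to the look side. A Python sketch, with all numerical values assumed:

```python
import numpy as np

# S: radar position, d: travel (azimuth) direction, R: slant range of the
# feature point. The point must be at distance R from S, lie in the plane
# through S perpendicular to d, and lie on the reference plane z = 0.
# 'side' selects the look direction of the radar (assumes R >= altitude).
def geocode(S, d, R, side=+1):
    S = np.asarray(S, float)
    d = np.asarray(d, float) / np.linalg.norm(d)
    up = np.array([0.0, 0.0, 1.0])
    u = np.cross(d, up)              # horizontal, perpendicular to the track
    u /= np.linalg.norm(u)
    # X = S + a*u + b*up with a^2 + b^2 = R^2; X_z = 0 forces b = -S_z.
    b = -S[2]
    a = side * np.sqrt(R ** 2 - b ** 2)
    return S + a * u + b * up

S = np.array([0.0, 0.0, 5000.0])   # radar position (assumed)
d = np.array([0.0, 1.0, 0.0])      # azimuth direction (assumed)
X = geocode(S, d, 5745.4)
print(X)  # reference-plane coordinates assigned to the feature point
```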
  • the candidate point extraction unit 114 extracts points related to the feature point (hereinafter, "candidate points") for the feature point given coordinates in the reference coordinate system.
  • the candidate points related to the feature points will be described below.
  • the signal intensity indicated at the feature point (referred to as point P) in the region where the layover occurs may be the sum of the intensity of the reflected waves from a plurality of points.
  • a point in the three-dimensional space that may contribute to the signal intensity indicated at the point P is referred to as a candidate point related to the point P in this embodiment.
  • FIG. 4 is a diagram for explaining an example of candidate points.
  • FIG. 4 is a cross-sectional view of the reference three-dimensional space cut out by a plane passing through the point P and perpendicular to the radar traveling direction (azimuth direction).
  • a line GL is a cross-sectional line of a reference plane in a reference three-dimensional space, that is, a plane on which a feature point is located.
  • a line ML is a cross-sectional line of the three-dimensional structure represented by the model data 1113.
  • Point S1 is a point indicating the position of the radar.
  • the position of the point P is a coordinate position given by the geocoding unit 113.
  • the distance between the point P and the point S1 is assumed to be "R".
  • reflected in the signal intensity indicated at the point P are reflected waves from points whose distance from the point S1 is "R" in this cross-sectional view. That is, the points involved in the point P are the points at which the circular arc of radius "R" centered on the point S1 intersects the line ML.
  • points Q1, Q2, Q3, and Q4 are the points, other than the point P, at which the circular arc of radius "R" centered on the point S1 intersects the line ML. Therefore, these points Q1, Q2, Q3, and Q4 are the candidate points related to the point P.
  • that is, the candidate point extraction unit 114 only needs to extract, as candidate points, points that are on the plane including the point P and perpendicular to the traveling direction of the radar and whose distance from the radar is equal to the distance between the radar and the point P.
  • the candidate points extracted by the candidate point extraction unit 114 may be the points Q1, Q2, and Q4, excluding the point Q3.
  • that is, the candidate point extraction unit 114 may exclude the point Q3 from the candidate points based on the fact that the line segment connecting the point Q3 and the point S1 intersects the line ML at a point other than the point Q3.
  • this determination can be made using the cross-sectional line of the model data 1113 in the plane that includes the point P and is perpendicular to the azimuth direction, the positions of the point S1 and the point P, and the distance "R" between the point S1 and the point P.
  • alternatively, the candidate point extraction unit 114 may exclude the point Q3 from the candidate points based on the fact that a straight line passing through the point Q3 and parallel to the incident line of the electromagnetic wave from the radar intersects the line ML (that is, the point Q3 is in the radar shadow).
  • the candidate point extraction unit 114 may extract candidate points under the approximation that the incident directions of the electromagnetic waves from the observation device to the object to be observed are all parallel to each other.
  • in this case, the position of a candidate point can be calculated using the azimuth and the depression angle of the point S1 instead of the coordinates of the point S1 and the distance "R".
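The extraction in the cross-section of FIG. 4 can be sketched as follows, with an assumed terrain profile standing in for the model data. Crossings of the circle of radius "R" centered on the radar with the sampled profile are returned; note that the crossing on the flat ground corresponds to the point P itself and would be excluded in the apparatus, while the crossings on the structure are the candidate points:

```python
import numpy as np

# Cross-section of the reference space perpendicular to the azimuth
# direction. The profile (line ML) is sampled as (ground range, height)
# pairs; S1 is the radar position; R is the slant range of the feature
# point P. All values are assumed for illustration.
def candidate_points(xs, zs, S1, R):
    d = np.hypot(xs - S1[0], zs - S1[1]) - R   # signed distance to circle
    points = []
    for i in range(len(d) - 1):
        if d[i] == 0.0:                         # vertex exactly on circle
            points.append((xs[i], zs[i]))
        elif d[i] * d[i + 1] < 0:               # sign change: crossing
            t = d[i] / (d[i] - d[i + 1])        # linear interpolation
            points.append((xs[i] + t * (xs[i + 1] - xs[i]),
                           zs[i] + t * (zs[i + 1] - zs[i])))
    return points

# Flat ground with a 100 m high vertical-walled structure (assumed profile).
xs = np.array([0.0, 2900.0, 2900.0, 3100.0, 3100.0, 6000.0])
zs = np.array([0.0,    0.0,  100.0,  100.0,    0.0,    0.0])
S1 = (0.0, 5000.0)
R = 5745.4
for qx, qz in candidate_points(xs, zs, S1, R):
    print(f"crossing at ground range {qx:.1f} m, height {qz:.1f} m")
```

The linear interpolation over coarse segments is approximate; a finer sampling of the model data would tighten the positions.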
  • the candidate point extraction unit 114 sends candidate points related to the feature points to the display mode determination unit 115 and the image generation unit 116.
  • the display mode is a state of display determined by, for example, the shape, size, color, brightness, transparency, movement, and change with time of a figure to be displayed.
  • the “candidate point display mode” is a display mode for displaying the position of the candidate point.
  • Display candidate points means to display a display indicating the positions of candidate points.
  • the shooting condition is indicated by the shooting condition information 1115 as described above.
  • the display mode determination unit 115 receives the coordinates of the candidate points from the candidate point extraction unit 114 when determining the display mode of the candidate points. Further, the display mode determination unit 115 reads out the model data 1113 and the imaging condition information 1115 of the spatial image used for the image generated by the image generation unit 116 from the storage unit 111.
  • for example, the display mode determination unit 115 determines different display modes depending on whether or not a candidate point is located, in the three-dimensional space, in a region that is a blind spot in the spatial image.
  • An area that is a blind spot in the spatial image is an area that is included in the imaging range of the spatial image but is blocked by an object appearing in the spatial image and is not visible from the position of the photographing body that captured the spatial image.
  • an area that becomes a blind spot in a spatial image is also referred to as an occlusion area.
  • FIG. 6 is a diagram for explaining the occlusion area.
  • in FIG. 6, the point S2 represents the position of the photographing body.
  • the solid M is a rectangular parallelepiped structure.
  • the range of the three-dimensional area indicated by the dotted line is the shooting range.
  • straight lines Lc, Ld, and Le are straight lines extending from the position of the point S2 through points Qc, Qd, and Qe on the solid M, respectively.
  • a three-dimensional area indicated by diagonal lines in FIG. 6 is an occlusion area.
  • the display mode determination unit 115 first determines whether each candidate point is located in the occlusion area.
  • FIG. 7 is a diagram for explaining a method for determining whether a candidate point is located in the occlusion area.
  • FIG. 7 is a cross-sectional view of a plane including the point S2, which represents the position of the photographing body of the spatial image, and the point Q2.
  • a line GL is a cross-sectional line of the reference plane of the reference three-dimensional space
  • a line ML is a cross-sectional line of the three-dimensional structure represented by the model data 1113.
  • The display mode determination unit 115 first calculates the line segment connecting the candidate point Q2 and the point S2. The display mode determination unit 115 then determines whether the segment has an intersection with the line ML other than at the point Q2 (i.e., whether it intersects the model data 1113). When the line segment has such an intersection with the line ML, the display mode determination unit 115 determines that the candidate point is located in the occlusion area. When the line segment has no such intersection with the line ML, the display mode determination unit 115 determines that the candidate point is not located in the occlusion area. In the example of FIG. 7, the line segment connecting the point Q2 and the point S2 has an intersection Qf with the line ML. Thus, the display mode determination unit 115 determines that the candidate point Q2 is located in the occlusion area.
  • the display mode determination unit 115 determines whether or not the candidate points included in the imaging range of the spatial image are located in the occlusion area, for example, as described above.
  • the line segment connecting the candidate point and the point S2 may be replaced by a half line extending from the candidate point in the direction of the imaging body.
  • the directions from the candidate points to the imaging body may be regarded as identical for all candidate points. That is, by regarding the point S2 as sufficiently far away, the display mode determination unit 115 can perform the determination for all candidate points in the spatial image using half lines in the same direction. In this case, the calculation cost of the determination can be reduced.
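The occlusion determination described above can be illustrated in two dimensions. The following is a minimal sketch, not the patented implementation: the model cross-section ML is assumed to be given as a polyline, and a candidate point is judged occluded when its sight line to the sensor position S2 properly crosses any edge of that polyline.

```python
def _ccw(a, b, c):
    # Cross-product sign: positive if a -> b -> c turns counter-clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, p3, p4):
    # True if segment p1-p2 properly crosses segment p3-p4.
    d1 = _ccw(p3, p4, p1)
    d2 = _ccw(p3, p4, p2)
    d3 = _ccw(p1, p2, p3)
    d4 = _ccw(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def in_occlusion(candidate, sensor, polyline):
    # The candidate is occluded if the sight line to the sensor crosses
    # any edge of the model cross-section ML (given as a vertex list).
    edges = zip(polyline[:-1], polyline[1:])
    return any(segments_intersect(candidate, sensor, a, b) for a, b in edges)
```

A wall segment between a candidate point and the sensor makes the point occluded; moving the sensor to the same side of the wall makes it visible again.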
  • the display mode determination unit 115 determines the display mode of each candidate point so that the display mode of candidate points located in the occlusion area is different from the display mode of candidate points not positioned in the occlusion area.
  • the display mode determination unit 115 associates information indicating the determined display mode with candidate points. For example, the display mode determination unit 115 may set a property value related to the display mode of each candidate point. Or the display mode determination part 115 should just associate the identification information of the set of the property value relevant to the display mode already prepared with a candidate point. To associate the second information with the first information is to generate data indicating that the first information and the second information are associated with each other.
  • the display mode determination unit 115 sets the transmittance value of the candidate points not located in the occlusion area to a value different from the transmittance of the candidate points located in the occlusion area.
  • the transmittance is a parameter indicating the degree of contribution to the pixel value at the superimposed position when the displayed graphic is superimposed on the image. For example, when a graphic with a transmittance of 0% is superimposed, the pixel value at the superimposed position of the graphic depends only on the color of the graphic. When a graphic whose transmittance is not 0% is superimposed, the pixel value at the superimposed position of the graphic also depends on the pixel value before superimposition. In other words, a graphic whose transmittance is not 0% is displayed as a semitransparent graphic.
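The role of the transmittance can be illustrated with standard alpha blending; this is a generic sketch, not the patented implementation:

```python
def superimpose(graphic_rgb, base_rgb, transmittance):
    """Blend one graphic pixel over one base-image pixel.

    transmittance: 0.0 (opaque graphic) .. 1.0 (fully transparent).
    At 0.0 the result depends only on the graphic's color; at any
    higher value the underlying pixel also contributes.
    """
    t = float(transmittance)
    return tuple(g * (1.0 - t) + b * t for g, b in zip(graphic_rgb, base_rgb))
```

With transmittance 0 the graphic's color fully replaces the pixel; with transmittance 0.5 the result is the average of the graphic and the underlying pixel.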
  • FIG. 8 is a table showing examples of properties relating to the display mode and example values of each property in two display modes ("first display mode" and "second display mode").
  • the display mode determination unit 115 holds data representing the contents of a table such as that shown in FIG. 8 in advance, or reads such data from the storage unit 111, and associates either the "first display mode" or the "second display mode" with each candidate point. Note that the properties related to the display mode may include a property designating whether or not the graphic is displayed at all.
  • For example, the display mode determination unit 115 may associate the first display mode with candidate points that are not located in the occlusion area, and the second display mode with candidate points that are located in the occlusion area. By doing so, the candidate points located in the occlusion area are displayed on the display device 21 in a semitransparent color.
  • the transmittance may be set for each portion of the graphic to be displayed (such as an outline and an internal region).
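One way to hold prepared property sets like those of FIG. 8 and associate their identification names with candidate points is a simple lookup table. The property names and values below are hypothetical placeholders, not the actual contents of FIG. 8:

```python
# Hypothetical property sets standing in for the two display modes of FIG. 8.
DISPLAY_MODES = {
    "first":  {"shape": "circle", "color": "red", "transmittance": 0.0},
    "second": {"shape": "circle", "color": "red", "transmittance": 0.5},
}

def assign_display_mode(candidate, occluded):
    # Associate the identification name of a prepared property set with the
    # candidate point, as described for the display mode determination unit.
    candidate["display_mode"] = "second" if occluded else "first"
    return candidate
```

A renderer would then look up `DISPLAY_MODES[candidate["display_mode"]]` when drawing each point.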
  • The display mode determination unit 115 may also determine, for each candidate point, a different display mode according to the number of times the half line extending from the candidate point toward the imaging body intersects the line ML.
  • For example, the display mode determination unit 115 may set the transmittance to 50% for a candidate point whose half line intersects the line ML once, and to 80% for a candidate point whose half line intersects the line ML two or more times.
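Using the example values above (50% for one crossing, 80% for two or more), the mapping from crossing count to transmittance can be sketched as:

```python
def transmittance_for(crossings):
    """Transmittance of a candidate point's marker, chosen by the number of
    times its half line toward the imaging body crosses the model surface."""
    if crossings == 0:
        return 0.0   # not occluded: opaque marker
    return 0.5 if crossings == 1 else 0.8
```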
  • the display mode determination unit 115 may also determine a display mode that changes according to the distance between the candidate point and the imaging body.
  • the display mode determination unit 115 may determine the display mode of each candidate point so that, for example, the graphic indicating a candidate point becomes smaller as the candidate point is farther from the imaging body. Alternatively, the display mode determination unit 115 may determine the display mode of each candidate point so that, for example, the brightness of the graphic indicating a candidate point decreases as the candidate point is farther from the imaging body. Such a configuration makes the positional relationship between candidate points easier to understand.
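A distance-dependent display mode of the kind described could, for instance, interpolate marker radius and brightness between a near and a far range. The ranges and constants below are illustrative assumptions, not values from the disclosure:

```python
def marker_style(distance, near=100.0, far=2000.0):
    """Shrink and darken the marker as the candidate point gets farther
    from the imaging body. The ranges and constants are illustrative."""
    frac = min(max((distance - near) / (far - near), 0.0), 1.0)
    radius_px = 12.0 - 8.0 * frac     # 12 px nearby down to 4 px far away
    brightness = 1.0 - 0.6 * frac     # 100% brightness down to 40%
    return radius_px, brightness
```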
  • the display mode determination unit 115 sends the display mode of each candidate point to the image generation unit 116.
  • the format of data generated by the image generation unit 116 is not limited to the image format. The image generated by the image generation unit 116 may be data having information necessary for the display device 21 to display.
  • the image generation unit 116 reads a spatial image used for the generated image from the spatial image 1114 stored in the storage unit 111.
  • the image generation unit 116 may determine the image to be read based on an instruction from the user, for example.
  • the image generation unit 116 may receive information specifying one of the plurality of spatial images 1114 from the user.
  • the image generation unit 116 may receive information specifying a range in the three-dimensional space and read a spatial image including the specified range.
  • the image generation unit 116 may accept information designating feature points or candidate points that the user desires to display. Then, the image generation unit 116 may specify a range in the reference three-dimensional space including the designated feature point or candidate point, and read a spatial image including the specified range. Note that the information that specifies the feature points or candidate points that the user desires to display may be information that specifies the SAR data 1111.
  • the image generation unit 116 may cut out a part of the spatial image 1114 stored in the storage unit 111 and read it as the spatial image to be used. For example, when the image generation unit 116 reads a spatial image based on candidate points that the user desires to display, the image generation unit 116 may cut out a range including all the candidate points from the spatial image 1114 and read the cut-out image as the spatial image.
  • the image generation unit 116 receives the coordinates of the candidate points extracted by the candidate point extraction unit 114 from the candidate point extraction unit 114. Further, the image generation unit 116 acquires information indicating the display mode of each candidate point from the display mode determination unit 115.
  • the image generation unit 116 superimposes a display indicating the position of the candidate point extracted by the candidate point extraction unit 114 on the read spatial image according to the display mode determined by the display mode determination unit 115. Thereby, a spatial image in which candidate points are shown is generated.
  • the spatial image generated by the image generation unit 116 and indicating the candidate points is also referred to as a “point display image”.
  • the image generation unit 116 may specify the position of the candidate point in the spatial image by calculation based on the shooting condition information 1115.
  • the image generation unit 116 specifies the shooting range and shooting direction of the spatial image based on the shooting condition information 1115. Then, the image generation unit 116 obtains a cut surface of the shooting range by a plane that includes the candidate point and is perpendicular to the shooting direction. The positional relationship between the cut surface and the candidate point corresponds to the positional relationship between the spatial image and the candidate point.
  • the image generation unit 116 may specify the coordinates of the candidate point by relating the coordinates of the cut surface to the coordinates of the spatial image.
  • the identified coordinates are the coordinates of candidate points in the spatial image.
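The cut-surface construction in the preceding steps amounts to an orthographic projection along the shooting direction. The following is a minimal sketch under that reading; the axis vectors `right` and `up` are assumptions of this illustration, not elements of the disclosure:

```python
import numpy as np

def image_plane_coords(candidate, sensor_pos, shoot_dir, right, up):
    """Orthographic projection of a candidate point onto the cut plane.

    right/up are assumed to be orthonormal in-plane axes tied to the
    image axes (an assumption of this sketch).
    """
    shoot_dir = np.asarray(shoot_dir, float)
    shoot_dir = shoot_dir / np.linalg.norm(shoot_dir)
    d = np.asarray(candidate, float) - np.asarray(sensor_pos, float)
    # Drop the component along the shooting direction; keep the in-plane part.
    d_in_plane = d - (d @ shoot_dir) * shoot_dir
    return (float(d_in_plane @ np.asarray(right, float)),
            float(d_in_plane @ np.asarray(up, float)))
```

For a sensor looking straight down the z axis, a candidate point's (x, y) offsets carry over directly into plane coordinates.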
  • the optical satellite image may be corrected by ortho correction or the like.
  • In that case, the position at which the candidate point is indicated is also corrected.
  • the position of the candidate point may be corrected using the correction parameter used in the correction for the optical satellite image.
  • the above-described method for specifying the position of the candidate point in the spatial image is an example.
  • the image generation unit 116 may specify the position of the candidate point in the spatial image based on the position of the candidate point in the reference coordinate system and the relationship between the spatial image and the reference coordinate system.
  • the image generation unit 116 sends the generated point display image to the display control unit 117.
  • the display control unit 117 causes the display device 21 to display the point display image by, for example, outputting the point display image to the display device 21.
  • the display device 21 is a display such as a liquid crystal monitor or a projector.
  • the display device 21 may have a function as an input unit like a touch panel.
  • the display device 21 is connected to the information processing apparatus 11 as an external device of the information processing apparatus 11, but the display device 21 may instead be included in the information processing apparatus 11 as a display unit.
  • the viewer who sees the display on the display device 21 knows the result of the processing by the information processing device 11. Specifically, the viewer can observe the point display image generated by the image generation unit 116.
  • the feature point extraction unit 112 of the information processing apparatus 11 acquires the SAR data 1111 from the storage unit 111 (S111).
  • the acquired SAR data 1111 includes at least SAR data in a range included in the spatial image used in step S121 described later.
  • the feature point extraction unit 112 extracts feature points from the acquired SAR data 1111 (step S112).
  • the geocoding unit 113 assigns coordinates indicating the position of the feature point in the reference coordinate system to the extracted feature point (step S113).
  • the geocoding unit 113 sends the coordinates assigned to the extracted feature points to the candidate point extraction unit 114.
  • the candidate point extraction unit 114 extracts candidate points related to the feature point based on the coordinates of the feature point and the model data 1113 (step S114). That is, the candidate point extraction unit 114 specifies the coordinates of candidate points related to the feature points. Then, the candidate point extraction unit 114 sends the coordinates of the candidate points to the display mode determination unit 115 and the image generation unit 116.
  • the candidate point extraction unit 114 may store the coordinates of the candidate points in the storage unit 111 in a format in which the feature points and the candidate points are associated with each other.
  • the display mode determination unit 115 reads shooting condition information of a spatial image used in the process of step S124 described later (step S121). In addition, the display mode determination unit 115 acquires the coordinates of the candidate points extracted by the candidate point extraction unit 114 from the candidate point extraction unit 114. When the coordinates of the candidate points are stored in the storage unit 111, the display mode determination unit 115 may read the coordinates of the candidate points from the storage unit 111. And the display mode determination part 115 specifies the candidate point contained in the range of a spatial image based on imaging condition information (step S122).
  • the display mode determination unit 115 determines the display mode of each candidate point included in the range of the spatial image based on the position of the candidate point included in the range of the spatial image, the model data 1113, and the shooting condition information. (Step S123).
  • the display mode determination unit 115 sends information on the display mode of the determined candidate points to the image generation unit 116.
  • the image generation unit 116 generates a point display image that is a spatial image indicating the position of the candidate point (step S124). Specifically, the image generation unit 116 reads a spatial image from the storage unit 111 and superimposes a display indicating the position of the candidate point on the spatial image according to the display mode determined by the display mode determination unit 115.
  • the timing at which the spatial image used by the image generation unit 116 is determined may be before or after the timing at which the process for acquiring the SAR data is performed. That is, in one example, after the spatial image to be used is determined, the information processing apparatus 11 may specify the SAR data 1111 that is the data in the range included in the determined spatial image, and execute the processing from step S111 to step S114 on the specified SAR data. In another example, the information processing apparatus 11 may perform the processing from step S111 to step S114 in advance, before the spatial image to be used is determined, on the SAR data 1111 in a range that can be included in the spatial image 1114.
  • the image generation unit 116 sends the generated image to the display control unit 117.
  • the display control unit 117 causes the display device 21 to display the point display image generated by the image generation unit 116. Thereby, the display device 21 displays the point display image generated by the image generation unit 116 (step S125). By viewing this display, the viewer can easily understand the candidate points related to the feature points extracted in the SAR data 1111.
  • a viewer can easily understand a point that contributes to a signal at a point in a region where a layover occurs in the SAR image.
  • the reason is that the candidate point extraction unit 114 extracts candidate points that may have contributed to the signal at the feature point based on the model data 1113, and the image generation unit 116 generates a point display image, which is a spatial image displaying the positions of the candidate points.
  • Further, the display mode determination unit 115 determines, based on the position of each candidate point and the shooting conditions, a display mode that makes the position of the candidate point displayed in the point display image easier to understand.
  • FIG. 11 is a diagram illustrating an example of a dot display image displayed by the display device 21.
  • the letters "Q5" and "Q6" and the curves indicating the points are shown for convenience of explanation and are not included in the actually displayed image.
  • a circle representing a candidate point is superimposed on an optical satellite image showing a building.
  • no candidate point is located in the occlusion area. Therefore, the display mode of each candidate point is uniform.
  • FIG. 12 is a diagram illustrating an example of a point display image that uses, as the spatial image, an optical image of the building and the candidate points shown in FIG. 11 taken from another position.
  • the candidate points Q5 and Q6, which are located in the occlusion area, are displayed in a mode different from that of the other candidate points.
  • FIG. 13 is a diagram showing a point display image with the same angle of view as the point display image shown in FIG. 12 in the case where the display mode determination unit 115 determines the display mode so that the size of the circle indicating each candidate point differs according to its distance.
  • Since the circles indicating the candidate points Q5 and Q6 are displayed smaller than the circles indicating the other candidate points, which are located on the front of the building, it can be seen that the candidate points Q5 and Q6 are not points located on the front of the building.
  • FIG. 14 is a diagram showing a point display image having the same angle of view as the point display image shown in FIG. 13 when the display mode determination unit 115 does not function.
  • the display modes of the candidate points Q5 and Q6 are the same as the display modes of the other candidate points.
  • A viewer who sees only the point display image shown in FIG. 14 cannot identify which surface of the building the candidate points Q5 and Q6 are located on.
  • Even if the viewer obtains useful information from the information represented by the point Q5, the viewer cannot determine which position of the building that information relates to.
  • For example, with PS-InSAR, fluctuation at a feature point can be observed using two sets of SAR data 1111 having different acquisition times.
  • Even if the viewer knows that the feature point related to the candidate point Q5 has fluctuated, the viewer cannot determine specifically which location has fluctuated. Alternatively, the viewer may mistakenly recognize that the front of the building has changed.
  • If the viewer attempts to determine the degree to which each candidate point contributes to the signal at the feature point based on the positions of the candidate points, the degrees to which the points Q5 and Q6 contribute to the signal at the feature point cannot be determined, because the positions of the points Q5 and Q6 are not fixed.
  • The inconveniences described above can be resolved by the display mode determination unit 115. That is, the information processing apparatus 11 provides the viewer with more easily understandable information regarding the feature points.
  • [Modification 1] In the operation example of the information processing apparatus 11 described above, the order of the process of step S111 and the process of step S112 may be reversed. That is, the feature point extraction unit 112 may extract feature points from the points given coordinates by the geocoding unit 113.
  • [Modification 2] The display mode determination unit 115 may determine the display mode so that the display indicating the position of the candidate point further indicates the direction of the radar that received the signal from the candidate point, or the incident direction of the electromagnetic wave from the radar.
  • FIG. 15 is a diagram illustrating an example of a point display image when the display mode determination unit 115 determines a display mode that indicates the incident direction of the electromagnetic wave from the radar with respect to each candidate point.
  • the incident direction of the electromagnetic wave from the radar is indicated by an arrow that overlaps the circle indicating the candidate point.
  • the viewer can know the direction of the radar.
  • the figure showing the incident direction may be a figure showing not only a direction parallel to the image but also a three-dimensional direction.
  • FIG. 16 is a diagram illustrating an example of a figure indicating a three-dimensional direction.
  • the arrow shown in FIG. 16 points to the lower right and toward the back side of the page.
  • the incident direction is a direction indicated by an arrow.
  • the direction opposite to the direction indicated by the arrow is the direction of the radar viewed from the candidate point. According to such a display, the viewer can more specifically know the incident direction of the electromagnetic wave from the radar.
  • Since the candidate point is a point on which the electromagnetic wave from the radar is incident, the candidate point is not included in the occlusion area at least when viewed from the indicated direction. Therefore, the display showing the incident direction of the electromagnetic wave from the radar allows the viewer to know a shooting direction in which the candidate point is not hidden. Further, when there are a plurality of candidate positions of a displayed candidate point in the three-dimensional space, points that cannot be observed from the indicated direction can be excluded from the candidates. Therefore, the viewer may be able to deepen understanding of the position of the candidate point in the three-dimensional space.
  • [Modification 3] The display mode determination unit 115 may further be configured to determine the display mode so that the display mode of candidate points related to a specific feature point differs from the display mode of the other candidate points.
  • the display mode determination unit 115 may determine the display mode so that candidate points related to the feature points designated by the user are displayed in white and other candidate points are displayed in black.
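The white/black example above can be expressed as a small color-selection helper. The `feature_ids` field is a hypothetical association between a candidate point and its related feature points, introduced only for this sketch:

```python
WHITE, BLACK = (255, 255, 255), (0, 0, 0)

def color_for(candidate, designated_feature_id):
    # White for candidate points related to the designated feature point,
    # black for all other candidate points (example values from the text).
    related = designated_feature_id in candidate["feature_ids"]
    return WHITE if related else BLACK
```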
  • FIG. 17 is a block diagram illustrating a configuration of the information processing apparatus 12 including the designation receiving unit 118.
  • the designation accepting unit 118 accepts designation of feature points from the user of the information processing apparatus 12, for example.
  • the information processing apparatus 12 may cause the display device 21 to display a SAR image showing the feature points.
  • the designation accepting unit 118 may accept the user's selection of one or more of the feature points shown in the SAR image. The selection may be performed via an input device such as a mouse. The selected feature point is the designated feature point.
  • the designation accepting unit 118 may accept designation of a plurality of feature points.
  • the designation receiving unit 118 sends information on the designated feature points to the display mode determining unit 115.
  • the designated feature point information is, for example, an identification number or coordinates associated with each feature point.
  • the display mode determination unit 115 identifies candidate points related to the specified feature point. For example, the display mode determination unit 115 may cause the candidate point extraction unit 114 to extract candidate points related to the designated feature point and receive information on the extracted candidate points. Alternatively, when information that associates the feature points with the candidate points is stored in the storage unit 111, the display mode determination unit 115 may identify the candidate points based on the information.
  • the designation accepting unit 118 may accept designation of candidate points instead of designation of feature points. For example, the user may select any candidate point among the candidate points included in the point display image displayed by the process of step S125. The designation accepting unit 118 may accept the selection, specify the feature point related to the selected candidate point, and treat that feature point as the designated feature point.
  • the display mode determination unit 115 determines a display mode different from the display mode of other candidate points as the display mode of the identified candidate points. Then, the image generation unit 116 generates a point display image in which candidate points are displayed according to the determined display mode. By displaying this point display image on the display device 21, the viewer can see information on candidate points related to the designated feature point.
  • FIG. 18 is a diagram illustrating an example of a point display image generated by the information processing apparatus 12 according to the third modification.
  • candidate points related to a specific feature point are displayed in white, and other candidate points are displayed in black.
  • the transmittance of a figure indicating a candidate point located in the occlusion area among candidate points related to a specific feature point is 50%.
  • the display mode determination unit 115 may determine the display mode of the candidate points related to the specific feature point so that their display includes a display indicating the incident direction of the radar, as in the second modification.
  • Such a display can further suppress the possibility that the viewer misidentifies the position of the candidate point.
  • the candidate points located on the wall surface of the building shown in FIG. 18 are located on a surface different from the front surface of the building.
  • FIG. 19 is a block diagram illustrating a configuration of the information processing apparatus 10.
  • the information processing apparatus 10 includes a candidate point extraction unit 104, a display mode determination unit 105, and an image generation unit 106.
  • The candidate point extraction unit 104 extracts candidate points, which are points that may contribute to the signal at a target point, based on the position in the three-dimensional space of the target point, which is a point specified in the intensity map of the signal from the observed object acquired by the radar, and on the shape of the observed object.
  • the candidate point extraction unit 114 of each of the above embodiments is an example of the candidate point extraction unit 104.
  • the signal intensity map is, for example, a SAR image.
  • a point specified in the intensity map is associated with a point in the three-dimensional space.
  • An example of the target point is a feature point in the first embodiment.
  • the shape of the object to be observed is given by, for example, three-dimensional model data.
  • The display mode determination unit 105 determines the display mode of the display indicating the position of the candidate point in the spatial image in which the observed object is captured, based on the position of the candidate point in the three-dimensional space and the imaging condition of the spatial image.
  • the shape of the subject shown in the spatial image is given by, for example, three-dimensional model data.
  • the display mode determination unit 115 of each of the above embodiments is an example of the display mode determination unit 105.
  • the image generation unit 106 generates an image in which the position of the candidate point in the spatial image is displayed according to the determined display mode.
  • the association between points in the three-dimensional space and points in the spatial image may be performed in advance or may be performed by the image generation unit 106.
  • the image generation unit 106 generates an image indicating the position of the candidate point in the spatial image based on the position of the candidate point in the three-dimensional space and the relationship between the spatial image and the three-dimensional space.
  • the image generation unit 116 in each of the above embodiments is an example of the image generation unit 106.
  • FIG. 20 is a flowchart showing an operation flow of the information processing apparatus 10.
  • The candidate point extraction unit 104 extracts candidate points, which are points that may contribute to the signal at a target point, based on the position in the three-dimensional space of the target point, which is a point specified in the intensity map of the signal from the observed object acquired by the radar, and on the shape of the observed object (step S101).
  • The display mode determination unit 105 determines the display mode of the display indicating the position of the candidate point in the spatial image in which the observed object is captured, based on the position of the candidate point in the three-dimensional space and the shooting condition of the spatial image (step S102).
  • the image generation unit 106 generates an image in which the position of the candidate point in the spatial image is displayed according to the determined display mode (step S103).
  • This is because the candidate point extraction unit 104 extracts candidate points that may contribute to the signal at the target point based on the model data, and the image generation unit 106 generates an image in which the positions of the candidate points in the spatial image are displayed. Furthermore, the display mode of the display indicating the position of each candidate point is determined based on its position in the three-dimensional space and the imaging condition of the spatial image.
  • each component of each device represents a functional unit block.
  • Computer-readable storage media include, for example, portable media such as optical disks, magnetic disks, magneto-optical disks, and nonvolatile semiconductor memories, as well as storage devices such as ROMs (Read Only Memory) and hard disks built into computer systems.
  • A computer-readable storage medium also includes a medium that dynamically holds a program for a short time, such as a communication line used when the program is transmitted via a network such as the Internet or a communication line such as a telephone line, as well as a medium that temporarily holds a program, such as a volatile memory inside a computer system serving as a server or a client.
  • the program may be a program for realizing a part of the functions described above, or may be a program that realizes the functions described above in combination with a program already stored in the computer system.
  • the “computer system” is a system including a computer 900 as shown in FIG. 21 as an example.
  • the computer 900 includes the following configuration.
  • a CPU (Central Processing Unit) 901
  • a ROM 902 and a RAM (Random Access Memory) 903
  • a program 904A and storage information 904B loaded into the RAM 903
  • a storage device 905 that stores the program 904A and storage information 904B
  • a drive device 907 that reads / writes from / to the storage medium 906
  • a communication interface 908 connected to the communication network 909
  • each component of each device in each embodiment is realized when the CPU 901 loads into the RAM 903 and executes the program 904A that realizes the function of that component.
  • a program 904A for realizing the function of each component of each device is stored in advance in the storage device 905 or the ROM 902, for example. Then, the CPU 901 reads the program 904A as necessary.
  • the storage device 905 is, for example, a hard disk.
  • the program 904A may be supplied to the CPU 901 via the communication network 909, or may be stored in advance in the storage medium 906, read out to the drive device 907, and supplied to the CPU 901.
  • the storage medium 906 is a portable medium such as an optical disk, a magnetic disk, a magneto-optical disk, and a nonvolatile semiconductor memory.
  • each device may be realized by any feasible combination of a separate computer 900 and a program for each of its components.
  • a plurality of components included in each device may likewise be realized by any feasible combination of a single computer 900 and a program.
  • each device may be realized by other general-purpose or dedicated circuits, computers, or combinations thereof. These may be configured by a single chip or may be configured by a plurality of chips connected via a bus.
  • when some or all of the components of each device are realized by a plurality of computers, circuits, etc., those computers, circuits, etc. may be arranged in either a centralized or a distributed manner.
  • the computers, circuits, and the like may be realized in a form in which they are connected to one another via a communication network, as in a client-server system or a cloud computing system.
  • [Appendix 1] An information processing apparatus comprising: candidate point extracting means for extracting candidate points that contribute to the signal at a target point, which is a point specified in an intensity map of a signal from an observed object acquired by a radar, based on the position of the target point in a three-dimensional space and the shape of the observed object; display mode determining means for determining a display mode of an indication showing the position of each candidate point in a spatial image in which the observed object is captured, based on the position of the candidate point in the three-dimensional space and the imaging conditions of the spatial image; and image generating means for generating an image in which the position of the candidate point in the spatial image is displayed in the determined display mode.
  • [Appendix 2] The information processing apparatus according to appendix 1, wherein the display mode determining means determines a different display mode depending on whether or not the candidate point is located in an area that is a blind spot as seen from the imaging body that captured the spatial image.
  • [Appendix 3] The information processing apparatus according to appendix 2, wherein the display mode determining means determines a different display mode according to the number of times a half line from the candidate point toward the imaging body intersects the surface of a subject in the spatial image.
  • [Appendix 4] The information processing apparatus according to any one of appendices 1 to 3, wherein the display mode determining means determines the display mode of the indication showing the position of the candidate point such that the indication further shows the incident direction of the electromagnetic wave from the radar on which the signal to which the candidate point contributes is based.
  • [Appendix 5] The information processing apparatus according to any one of appendices 1 to 4, wherein, for at least one of the candidate points, the display mode determining means determines the display mode such that part or all of the indication showing the position of that candidate point is rendered in a transparent color.
  • [Appendix 6] The information processing apparatus according to any one of appendices 1 to 5, wherein the display mode determining means further determines, as the display mode of the indication showing the position of a candidate point that contributes to the signal at a target point specified by a predetermined method, a display mode different from that of the indications showing the positions of the other candidate points in the spatial image.
  • [Appendix 8] The information processing method according to appendix 7, wherein a different display mode is determined depending on whether or not the candidate point is located in an area that is a blind spot as seen from the imaging body that captured the spatial image.
  • [Appendix 9] The information processing method according to appendix 8, wherein a different display mode is determined according to the number of times a half line from the candidate point toward the imaging body intersects the surface of a subject in the spatial image.
  • [Appendix 10] The information processing method according to any one of appendices 7 to 9, wherein the display mode of the indication showing the position of the candidate point is determined such that the indication further shows the incident direction of the electromagnetic wave from the radar on which the signal to which the candidate point contributes is based.
  • [Appendix 11] The information processing method according to any one of appendices 7 to 10, wherein, for at least one of the candidate points, the display mode is determined such that part or all of the indication showing the position of that candidate point is rendered in a transparent color.
  • [Appendix 12] The information processing method according to any one of appendices 7 to 11, wherein, as the display mode of the indication showing the position of a candidate point that contributes to the signal at a target point specified by a predetermined method, a display mode different from that of the indications showing the positions of the other candidate points in the spatial image is determined.
  • [Appendix 14] The storage medium according to appendix 13, wherein the display mode determination process determines a different display mode depending on whether or not the candidate point is located in an area that is a blind spot as seen from the imaging body that captured the spatial image.
  • [Appendix 15] The storage medium according to appendix 14, wherein the display mode determination process determines a different display mode according to the number of times a half line from the candidate point toward the imaging body intersects the surface of a subject in the spatial image.
  • [Appendix 16] The storage medium according to any one of appendices 13 to 15, wherein the display mode determination process determines the display mode such that the indication showing the position of the candidate point further shows the incident direction of the electromagnetic wave from the radar on which the signal to which the candidate point contributes is based.
  • [Appendix 17] The storage medium according to any one of appendices 13 to 16, wherein, for at least one of the candidate points, the display mode determination process determines the display mode such that part or all of the indication showing the position of that candidate point is rendered in a transparent color.
  • [Appendix 18] The storage medium according to any one of appendices 13 to 17, wherein the display mode determination process further determines, as the display mode of the indication showing the position of a candidate point that contributes to the signal at a target point specified by a predetermined method, a display mode different from that of the indications showing the positions of the other candidate points in the spatial image.
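For illustration only (not part of the disclosure): the blind-spot determination recited in appendices 3, 9, and 15 counts how many times the half line from a candidate point toward the imaging body crosses the subject's surface. Assuming the surface is given as triangles, a minimal sketch using the standard Möller–Trumbore ray/triangle test could look like the following; all names and the triangle data are hypothetical sample values.

```python
import numpy as np

def ray_hits_triangle(origin, direction, tri, eps=1e-9):
    """Möller–Trumbore test: does the half line from `origin` along
    `direction` cross triangle `tri` in front of the origin?"""
    v0, v1, v2 = (np.asarray(v, float) for v in tri)
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                 # ray parallel to the triangle plane
        return False
    f = 1.0 / a
    s = np.asarray(origin, float) - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = f * np.dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return False
    t = f * np.dot(e2, q)
    return bool(t > eps)             # count hits in front of the origin only

def crossing_count(candidate, camera, triangles):
    """Number of surface crossings on the half line candidate -> camera.
    A count of 0 means the candidate is visible; otherwise it is occluded."""
    d = np.asarray(camera, float) - np.asarray(candidate, float)
    d = d / np.linalg.norm(d)
    return sum(ray_hits_triangle(candidate, d, tri) for tri in triangles)

# A wall between the candidate and the camera yields one crossing (blind spot);
# looking the other way yields none.
wall = [((5, -10, -10), (5, 10, -10), (5, 0, 10))]
print(crossing_count((0, 0, 0), (10, 0, 0), wall))   # 1
print(crossing_count((0, 0, 0), (-10, 0, 0), wall))  # 0
```

A display mode determination process could then, for example, map a count of 0 to an opaque marker and any positive count to a translucent one, consistent with appendices 14 and 15.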

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The present invention facilitates the understanding of a point on an observed object that contributes to the signal at a point in a region in which there is layover in an intensity map of a signal from the observed object acquired through radar. An information processing device (11) according to an embodiment is provided with: a candidate point extraction unit (114) for extracting, on the basis of the position in three-dimensional space of a target point specified in an intensity map (1111) of a signal from an observed object acquired through radar and of the shape of the observed object, a candidate point that contributes to the signal at the target point; a display mode determination unit (115) for determining, on the basis of the position in three-dimensional space of the target point and the photography conditions for a spatial image (1114) showing the observed object, a display mode for display indicating the position of the candidate point in the spatial image (1114); and an image generation unit (116) for generating an image indicating the position of the candidate point in the spatial image (1114) using the determined display mode.

Description

Information processing apparatus, information processing method, and computer-readable storage medium
This disclosure relates to the processing of data acquired by a radar.
For the purpose of observing the state of the earth's surface and the like, techniques for observing and analyzing a target area from the sky have become widespread.
Synthetic Aperture Radar (SAR) is one technique for observing the state of the earth's surface by emitting electromagnetic waves from above and acquiring the intensity of the reflected electromagnetic waves (hereinafter also referred to as "reflected waves"). From the data acquired by SAR, a two-dimensional map of reflected-wave intensity (hereinafter, a "SAR image") can be generated. A SAR image is a map in which each reflected wave is treated as a reflection from a defined reference plane (for example, the ground surface) and the intensity of the reflected wave is represented on a plane representing that reference plane.
The position at which the intensity of a reflected wave is represented in a SAR image is based on the distance between the position where the reflected wave originated and the position of the antenna that receives it. Therefore, the intensity of a reflected wave from a position away from the reference plane is represented in the SAR image at a position shifted toward the radar, relative to the actual position, by an amount depending on the height above the reference plane. As a result, the image formed in a SAR image by reflected waves from a non-planar object appears as if the actual shape of the object were distorted. The phenomenon in which such a distorted image is generated is called foreshortening.
To correct foreshortening, Patent Documents 1 and 2 disclose apparatuses that perform a correction process called ortho-correction.
Patent Document 3 discloses a technique for correcting not only foreshortening but also a phenomenon called layover. Layover is a phenomenon in which the reflected-wave signal from a position at a certain height and the reflected-wave signal from a different position overlap in a SAR image.
Patent Document 4, which is a document related to the present disclosure, contains a description of occlusion areas in images photographed by a camera.
Patent Document 1: JP 2007-248216 A
Patent Document 2: JP 2008-90808 A
Patent Document 3: JP 2008-185375 A
Patent Document 4: JP 2014-160405 A
The ortho-correction disclosed in Patent Documents 1 and 2 is not designed to correct a SAR image in which layover has occurred. Specifically, ortho-correction shifts the position of a point where distortion has occurred in the SAR image to the position estimated to be the true position from which the signal (reflected wave) represented at that point was emitted. In other words, ortho-correction is performed on the premise that there is exactly one candidate for the estimated true position of the reflected wave at the point being corrected.
With the ortho-correction disclosed in Patent Documents 1 and 2, points inside a region where layover has occurred cannot be corrected. This is because, when layover occurs, there can be multiple candidates for the estimated true position from which the signal represented at a point in the layover region was emitted.
Patent Document 3 discloses a method for correcting layover, but this method requires a plurality of SAR images with different distortions. Thus, without some supplementary information, it is in principle impossible to distinguish, within a single SAR image, the reflected waves from two or more locations that contribute to the signal at a point in a region where layover has occurred.
When layover is not corrected, that is, when the candidate locations contributing to the signal at a point in a SAR image cannot be narrowed down, it is customary for a person, looking at the SAR image and an optical image, to estimate the candidate locations contributing to that signal based on experience and various other information.
However, it is difficult to understand a SAR image and to estimate the candidate locations contributing to the signal indicated by a point in the SAR image.
One object of the present invention is to provide an apparatus, a method, and the like that facilitate understanding of the locations contributing to the signal at a point in a region of a SAR image where layover has occurred. However, the images used in the present invention are not limited to SAR images; they may be images acquired by other techniques that estimate the state of an object by observing the reflection of electromagnetic waves, such as images based on RAR (Real Aperture Radar).
An information processing apparatus according to one aspect of the present invention comprises: candidate point extracting means for extracting candidate points that contribute to the signal at a target point, which is a point specified in an intensity map of a signal from an observed object acquired by a radar, based on the position of the target point in a three-dimensional space and the shape of the observed object; display mode determining means for determining a display mode of an indication showing the position of each candidate point in a spatial image in which the observed object is captured, based on the position of the candidate point in the three-dimensional space and the imaging conditions of the spatial image; and image generating means for generating an image in which the position of the candidate point in the spatial image is displayed in the determined display mode.
An information processing method according to one aspect of the present invention comprises: extracting candidate points that contribute to the signal at a target point, which is a point specified in an intensity map of a signal from an observed object acquired by a radar, based on the position of the target point in a three-dimensional space and the shape of the observed object; determining a display mode of an indication showing the position of each candidate point in a spatial image in which the observed object is captured, based on the position of the candidate point in the three-dimensional space and the imaging conditions of the spatial image; and generating an image in which the position of the candidate point in the spatial image is displayed in the determined display mode.
A program according to one aspect of the present invention causes a computer to execute: a candidate point extraction process of extracting candidate points that contribute to the signal at a target point, which is a point specified in an intensity map of a signal from an observed object acquired by a radar, based on the position of the target point in a three-dimensional space and the shape of the observed object; a display mode determination process of determining a display mode of an indication showing the position of each candidate point in a spatial image in which the observed object is captured, based on the position of the candidate point in the three-dimensional space and the imaging conditions of the spatial image; and an image generation process of generating an image in which the position of the candidate point in the spatial image is displayed in the determined display mode. The program is stored, for example, in a computer-readable non-volatile storage medium.
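For illustration only, the three processes recited above can be mirrored in a minimal Python sketch. The data model, the orthographic stand-in for the imaging conditions, and the marker dictionaries are all hypothetical assumptions, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    pos3d: tuple           # position in the reference three-dimensional coordinate system
    occluded: bool = False # whether the point lies in a blind spot of the camera

def extract_candidates(precomputed):
    # In the apparatus, this step would intersect the target point's range and
    # azimuth constraints with the 3-D model; here precomputed points stand in.
    return [Candidate(pos, occ) for pos, occ in precomputed]

def determine_display_mode(c):
    # A different mode for blind-spot candidates, as in the appendices above.
    return {"shape": "circle", "alpha": 1.0 if not c.occluded else 0.3}

def project(pos3d, scale=2.0):
    # Stand-in for the real imaging conditions: orthographic top-down view.
    x, y, _ = pos3d
    return (round(x * scale), round(y * scale))

def generate_overlay(candidates):
    # One marker per candidate: projected 2-D position plus its display mode.
    return [(project(c.pos3d), determine_display_mode(c)) for c in candidates]

overlay = generate_overlay(extract_candidates([((3.0, 4.0, 0.0), False),
                                               ((3.0, 4.0, 12.0), True)]))
print(overlay)
# [((6, 8), {'shape': 'circle', 'alpha': 1.0}), ((6, 8), {'shape': 'circle', 'alpha': 0.3})]
```

The overlay pairs could then be drawn onto the spatial image by any rendering backend; the sketch deliberately stops at the marker list so no graphics library is assumed.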
According to the present invention, it is possible to facilitate understanding of the points on an observed object that contribute to the signal at a point in a region where layover has occurred in an intensity map of a signal from the observed object acquired by a radar.
FIG. 1 is a diagram for explaining layover.
FIG. 2 is an example of a SAR image.
FIG. 3 is a block diagram showing the configuration of the information processing apparatus according to the first embodiment of the present invention.
FIG. 4 is a diagram for explaining an example of candidate points.
FIG. 5 is a diagram for explaining another example of a method of calculating candidate points.
FIG. 6 is a diagram for explaining an occlusion area.
FIG. 7 is a diagram for explaining a method of determining whether a candidate point is located in an occlusion area.
FIG. 8 is a diagram showing examples of properties relating to display modes and their values.
FIG. 9 is a flowchart showing the flow of processing relating to the extraction of candidate points by the information processing apparatus according to the first embodiment.
FIG. 10 is a flowchart showing the flow of processing relating to the generation of a spatial image in which candidate points are indicated, by the information processing apparatus according to the first embodiment.
FIGS. 11 to 13 are diagrams each showing an example of a spatial image in which candidate points are indicated.
FIG. 14 is a diagram illustrating a spatial image in which candidate points are indicated when the display mode determination unit does not function.
FIG. 15 is an example of an image generated by the image generation unit in Modification 2 of the first embodiment.
FIG. 16 is a diagram showing an example of a figure indicating a three-dimensional direction.
FIG. 17 is a block diagram showing the configuration of the information processing apparatus according to Modification 3 of the first embodiment.
FIG. 18 is an example of an image generated by the image generation unit according to Modification 3.
FIG. 19 is a block diagram showing the configuration of an information processing apparatus according to an embodiment of the present invention.
FIG. 20 is a flowchart showing the flow of operation of the information processing apparatus according to the embodiment.
FIG. 21 is a block diagram showing an example of the hardware constituting each part of each embodiment of the present invention.
Prior to the description of the embodiments of the present invention, the principle by which layover occurs in observation by SAR will be explained.
FIG. 1 is a diagram for explaining layover. FIG. 1 shows an observation device S0 that performs observation by SAR and an object M that exists in the observed range. The observation device S0 is, for example, an artificial satellite or an aircraft equipped with a radar. While moving through the sky, the observation device S0 transmits electromagnetic waves with its radar and receives the reflected waves. In FIG. 1, the arrow indicates the traveling direction of the observation device S0, that is, the traveling direction of the radar (also called the azimuth direction). The electromagnetic waves emitted from the observation device S0 are reflected by backscattering at the ground surface and at the structure M on the ground, and part of the reflected waves returns to the radar and is received. Thereby, the distance between the position of the observation device S0 and the reflection point of the electromagnetic wave on the structure M is specified.
In FIG. 1, the point Qa is a point on the ground surface, and the point Qb is a point on the surface of the structure M away from the ground surface. Assume that the distance between the observation device S0 and the point Qa is equal to the distance between the observation device S0 and the point Qb. In addition, the straight line connecting the point Qa and the point Qb is perpendicular to the traveling direction of the radar. In such a case, the reflected wave from the point Qa and the reflected wave from the point Qb cannot be distinguished by the observation device S0. That is, the intensity of the reflected wave from the point Qa and the intensity of the reflected wave from the point Qb are observed mixed together.
An example of the image representing the intensity distribution of the reflected waves (hereinafter referred to as a "SAR image") generated in such a case is shown in FIG. 2. In FIG. 2, the arrow indicates the traveling direction of the radar. The SAR image is generated based on the intensity of the reflected waves received by the radar and the distance between the radar and the point from which each reflected wave was emitted. In SAR, reflected waves from two or more points that lie on the plane containing the radar's position and perpendicular to its traveling direction, and that are equidistant from the radar, are not distinguished. The point P reflects the intensity of the reflected wave from the point Qa, but the intensity indicated at the point P also reflects the intensity of the reflected wave from the point Qb. This phenomenon, in which the intensities of reflected waves from two or more points are superimposed at one point in a SAR image, is layover. In FIG. 2, the white region including the point P is a region where layover has occurred. The region painted black in FIG. 2 represents a region shadowed from the radar by the structure M. This region is also called radar shadow.
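For illustration only (not part of the original disclosure), the indistinguishability condition described above — two points lying in the same plane perpendicular to the azimuth direction and equidistant from the radar, as with Qa and Qb in FIG. 1 — can be checked numerically as follows. All names and the sample geometry are hypothetical.

```python
import numpy as np

def layover_pair(radar_pos, azimuth_dir, p1, p2, tol=1e-6):
    """True if p1 and p2 are superimposed in the SAR image: equal along-track
    (azimuth) coordinate and equal slant range from the radar."""
    radar_pos = np.asarray(radar_pos, float)
    d = np.asarray(azimuth_dir, float)
    d = d / np.linalg.norm(d)
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    same_azimuth = abs(np.dot(p1 - radar_pos, d) - np.dot(p2 - radar_pos, d)) < tol
    same_range = abs(np.linalg.norm(p1 - radar_pos) - np.linalg.norm(p2 - radar_pos)) < tol
    return bool(same_azimuth and same_range)

radar = (0.0, 0.0, 10.0)         # radar 10 units above the origin
azimuth = (0.0, 1.0, 0.0)        # travelling along the y axis
qa = (8.0, 0.0, 0.0)             # ground point: slant range sqrt(8^2 + 10^2) = sqrt(164)
qb = (np.sqrt(128.0), 0.0, 4.0)  # elevated point chosen so 128 + (10-4)^2 = 164

print(layover_pair(radar, azimuth, qa, qb))               # True: same SAR pixel
print(layover_pair(radar, azimuth, qa, (9.0, 0.0, 0.0)))  # False: different range
```

The second check fails because the slant range differs, which is exactly why that point would appear at a different position in the SAR image.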
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
<< First Embodiment >>
First, the first embodiment of the present invention will be described.
<Configuration>
In the following description, it is assumed that a reference three-dimensional space is defined for the processing performed by the information processing apparatus 11. A three-dimensional coordinate system is defined for this reference three-dimensional space. Hereinafter, this three-dimensional coordinate system is referred to as the reference three-dimensional coordinate system or the reference coordinate system. The reference coordinate system may be, for example, a geodetic system, or the coordinate system of the model data 1113, three-dimensional data described later.
In the following, when a point described in a first coordinate system can also be described in a second coordinate system, the first coordinate system is said to be related to the second coordinate system.
FIG. 3 is a block diagram showing the configuration of the information processing apparatus 11 according to the first embodiment. The information processing apparatus 11 includes a storage unit 111, a feature point extraction unit 112, a geocoding unit 113, a candidate point extraction unit 114, a display mode determination unit 115, an image generation unit 116, and a display control unit 117. The storage unit 111, the feature point extraction unit 112, the geocoding unit 113, the candidate point extraction unit 114, the display mode determination unit 115, and the image generation unit 116 are connected so as to be able to exchange data with one another. Data may be passed between the units of the information processing apparatus 11 directly via signal lines, or by reading from and writing to a shared storage area (for example, the storage unit 111). In the following description, the movement of data is described with the words "send data" and "receive data", but the method of transmitting the data is not limited to direct transmission.
The information processing apparatus 11 is communicably connected to a display device 21.
=== Storage Unit 111 ===
The storage unit 111 stores data necessary for the processing performed by the information processing apparatus 11. For example, the storage unit 111 stores SAR data 1111, SAR data parameters 1112, model data 1113, a spatial image 1114, and imaging condition information 1115.
The SAR data 1111 is data obtained by observation using SAR. The targets observed by SAR (hereinafter also referred to as "observed objects") are, for example, the ground surface and structures. The SAR data 1111 is data from which at least a SAR image expressed in a coordinate system related to the reference coordinate system can be generated. For example, the SAR data 1111 includes observation values and information associated with the observation values. An observation value is, for example, the intensity of an observed reflected wave. The information associated with an observation value includes, for example, the position and traveling direction of the radar that observed the reflected wave at the time of observation, and the distance between the radar and the reflection point derived from the observation of the reflected wave. The SAR data 1111 may also include information on the depression angle of the radar with respect to the observed object (the elevation angle of the radar as seen from the reflection point). Information on positions is described, for example, as a set of longitude, latitude, and altitude in a geodetic system.
The SAR data 1111 may be a SAR image itself.
In the description of this embodiment, observation data obtained by SAR is assumed as the data to be used, but in other embodiments, data of observation results obtained by, for example, RAR (Real Aperture Radar) may be used instead of SAR.
The SAR data parameters 1112 are parameters indicating the relationship between the data included in the SAR data 1111 and the reference coordinate system. In other words, the SAR data parameters 1112 are parameters for assigning positions in the reference coordinate system to the observation values included in the SAR data 1111.
For example, when the observation values in the SAR data 1111 are associated with information on the position and direction of the radar and the distance between the radar and the observed object described in a geodetic system, the SAR data parameters 1112 are parameters for converting that information into information described in the reference coordinate system.
 SARデータ1111がSAR画像である場合、SAR画像の座標系は、SARデータパラメータ1112によって、基準の座標系に関連づけられる。すなわち、SAR画像における任意の点は、基準の座標系における一点に対応づけられる。 When the SAR data 1111 is a SAR image, the coordinate system of the SAR image is related to the reference coordinate system by the SAR data parameter 1112. That is, an arbitrary point in the SAR image is associated with one point in the reference coordinate system.
The model data 1113 is data representing the shapes of objects, such as terrain and building structures, in three dimensions. The model data 1113 is, for example, a DEM (Digital Elevation Model). The model data 1113 may be a DSM (Digital Surface Model), which is data of the earth's surface including structures, or a DTM (Digital Terrain Model), which is data of the shape of the bare ground. The model data 1113 may also include a DTM and three-dimensional data of structures separately.
The coordinate system used for the model data 1113 is related to the reference coordinate system. That is, an arbitrary point in the model data 1113 can be described by coordinates in the reference coordinate system.
The spatial image 1114 is an image showing the space that includes the object observed by SAR. The spatial image 1114 may be, for example, any of an optical image such as a satellite photograph or an aerial photograph, a map, a topographic map, or a CG (Computer Graphics) image representing the terrain. The spatial image 1114 may also be a projection of the model data 1113. Preferably, the spatial image 1114 is an image in which the geographical shapes and arrangement of the objects in the represented space are intuitively easy to understand for the user of the information processing device 11 (that is, the person who views the images output by the information processing device 11).
The spatial image 1114 may be imported from outside the information processing device 11, or may be generated by the image generation unit 116, described later, by projecting the model data 1113.
The capturing condition information 1115 is information on the capturing conditions of the spatial image 1114. The capturing conditions of the spatial image 1114 indicate how the spatial image 1114 was acquired. The capturing condition information 1115 is information that can uniquely specify the captured range of the spatial image 1114. The capturing condition information 1115 is represented by, for example, the values of a plurality of parameters relating to the captured range of the spatial image 1114.
In the present disclosure, the spatial image is regarded as a captured image taken from a specific position, and the entity that performed the capturing (for example, an imaging device such as a camera) is referred to as the capturing body. When the spatial image 1114 is an image obtained without an actual capturing process by a device, such as when the spatial image 1114 is generated by projecting the model data 1113, the capturing body may be assumed virtually.
The capturing condition information 1115 is described by, for example, the position of the capturing body and information indicating the range of the captured subject. As an example, when the spatial image 1114 is rectangular, the capturing condition information 1115 may be described by the coordinates of the capturing body in the reference coordinate system and four coordinates in the reference coordinate system corresponding to the points appearing at the four corners of the spatial image 1114. In this case, the captured range is the region enclosed by the four half lines extending from the position of the capturing body to the four coordinates.
Strictly speaking, the position of the capturing body is the position of the viewpoint of the capturing body with respect to the spatial image 1114; in practice, however, the information on the position of the capturing body used by the display mode determination unit 115 need not be exact. As an example, the display mode determination unit 115 may use, as the information indicating the position of the capturing body, position information acquired by a device with a GPS (Global Positioning System) function mounted on the vehicle (aircraft, artificial satellite, etc.) that carries the capturing body.
The information representing a position in the capturing condition information 1115 is given by, for example, a set of values of parameters in the reference coordinate system (for example, longitude, latitude, and altitude). That is, based on the capturing condition information 1115, the position in the reference three-dimensional space of any point within the spatial range covered by the spatial image 1114 can be uniquely specified. Conversely, for any point in the reference three-dimensional space (at least the feature points and candidate points described later), if that point is included in the spatial image 1114, its position in the spatial image 1114 can be uniquely specified based on the capturing condition information 1115.
Each parameter of the capturing condition information 1115 may be a parameter of a coordinate system different from the reference coordinate system. In that case, the capturing condition information 1115 only needs to include conversion parameters for converting parameter values in that coordinate system into parameter values in the reference coordinate system.
The capturing condition information 1115 may also be described by, for example, the position, attitude, and angle of view of the capturing body. The attitude of the capturing body can be described by the capturing direction, that is, the optical-axis direction of the capturing body at the time of capturing, and a parameter indicating the relationship between the vertical direction of the spatial image 1114 and the reference coordinate system. When the spatial image 1114 is rectangular, for example, the angle of view can be described by parameters indicating the vertical viewing angle and the horizontal viewing angle.
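As a rough illustration of how capturing-condition parameters of this form (position, attitude, and angle of view) relate a point in the reference three-dimensional space to a position in the spatial image, the following pinhole-camera sketch may help. All function and parameter names are hypothetical, a Cartesian reference frame is assumed, and this is not the method prescribed by the disclosure.

```python
import math

def project_to_image(point, cam_pos, optical_axis, up, fov_h_deg, fov_v_deg):
    """Pinhole-camera sketch: project a point in the reference frame into
    normalized image coordinates in [-1, 1] ((0, 0) = image center), given
    the capturing body's position, attitude, and angles of view.
    Returns None if the point is outside the captured range."""
    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def norm(a):
        n = math.sqrt(dot(a, a))
        return (a[0] / n, a[1] / n, a[2] / n)

    fwd = norm(optical_axis)            # capturing direction
    right = norm(cross(fwd, up))        # image x axis in the reference frame
    true_up = cross(right, fwd)         # image y axis in the reference frame

    p = sub(point, cam_pos)
    depth = dot(p, fwd)
    if depth <= 0:
        return None  # behind the capturing body
    u = dot(p, right) / depth / math.tan(math.radians(fov_h_deg) / 2)
    v = dot(p, true_up) / depth / math.tan(math.radians(fov_v_deg) / 2)
    if abs(u) > 1 or abs(v) > 1:
        return None  # outside the angle of view
    return (u, v)
```

With such a function, any point of the reference space that falls inside the captured range maps to a unique image position, matching the property stated above.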
When the capturing body is sufficiently far from the subject, such as when the capturing body is a camera mounted on an artificial satellite, the information representing the position of the capturing body may be described by the values of parameters indicating the direction of the capturing body as viewed from the subject. For example, the information representing the position of the capturing body may be a pair of azimuth and elevation angle.
The capturing condition information 1115 may also include information on the capturing time.
Note that the storage unit 111 need not always hold the data inside the information processing device 11. For example, the storage unit 111 may record data on a device or recording medium external to the information processing device 11 and acquire the data as necessary. That is, the storage unit 111 only needs to be configured so that it can acquire the data requested by each unit in the processing, described below, of each unit of the information processing device 11.
=== Feature Point Extraction Unit 112 ===
The feature point extraction unit 112 extracts feature points from the SAR data 1111. In the present disclosure, a feature point is a point extracted by a predetermined method from among the points in the SAR data 1111 that show a non-zero signal intensity. That is, the feature point extraction unit 112 extracts one or more points from the SAR data 1111 by a predetermined point-extraction method. In the present disclosure, a point extracted from the SAR data 1111 is a group of data relating to one point in the SAR image (for example, a pair of an observation value and the information associated with that observation value).
The feature point extraction unit 112 extracts feature points by, for example, a method of extracting points that may provide useful information in the analysis of the SAR data 1111.
For example, the feature point extraction unit 112 may extract points by a technique called PS-InSAR (Permanent Scatterers Interferometric SAR). PS-InSAR is a technique for extracting, from a plurality of SAR images, points at which a change in signal intensity is observed based on phase shifts.
Alternatively, the feature point extraction unit 112 may extract, as feature points, points that satisfy a predetermined condition (for example, that the signal intensity exceeds a predetermined threshold). This predetermined condition may be set by, for example, the user or the designer of the information processing device 11. The feature point extraction unit 112 may also extract, as feature points, points selected by human judgment.
The feature point extraction unit 112 sends information on the extracted feature points to the geocoding unit 113. The feature point information includes at least information that can specify coordinates in the reference coordinate system. As an example, the feature point information is represented by the position and traveling direction of the observation device that acquired the SAR data of the range including the feature point, and the distance between the observation device and the signal reflection point at that feature point.
=== Geocoding Unit 113 ===
The geocoding unit 113 assigns coordinates in the reference coordinate system to each of the feature points extracted by the feature point extraction unit 112. The geocoding unit 113 receives, for example, the information on the extracted feature points from the feature point extraction unit 112. Based on the received feature point information and the SAR data parameter 1112, the geocoding unit 113 identifies the position in the reference three-dimensional space from which the signal of the feature point originated.
For example, when the feature point information is represented by the position and traveling direction of the observation device that acquired the SAR data of the range including the feature point, and the distance between the observation device and the signal reflection point at the feature point, the geocoding unit 113 first converts, based on the SAR data parameter 1112, that information into information represented by the position, traveling direction, and distance of the observation device in the reference coordinate system. The geocoding unit 113 then identifies the point (coordinates) in the reference coordinate system that satisfies all of the following conditions:
- The distance between the point and the position of the observation device equals the distance indicated by the feature point information.
- The point lies in the plane perpendicular to the traveling direction of the observation device.
- The point lies on the reference plane (the plane at an altitude of 0 in the reference coordinate system).
The coordinates of the identified point are the coordinates, in the reference coordinate system, of the feature point indicated by the feature point information. The geocoding unit 113 assigns, for example, the coordinates of the point identified in this way to that feature point.
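The three conditions above can be solved in closed form. The following is a minimal sketch under simplifying assumptions: a Cartesian reference frame with the reference plane at z = 0, a horizontal traveling direction, and a hypothetical `look_side` argument to choose between the two mirror-image solutions on either side of the track. It is an illustration, not the implementation of the geocoding unit 113.

```python
import math

def geocode_feature_point(sensor_pos, heading, slant_range, look_side="right"):
    """Find the point on the reference plane (z = 0) that lies in the plane
    perpendicular to the traveling direction through the sensor, at the given
    slant-range distance from the sensor.

    sensor_pos  : (x, y, z) of the observation device in the reference frame
    heading     : (x, y) horizontal traveling direction (need not be unit length)
    slant_range : distance R between the observation device and the point
    look_side   : which of the two mirror-image solutions to take
    """
    sx, sy, sz = sensor_pos
    hx, hy = heading
    n = math.hypot(hx, hy)
    hx, hy = hx / n, hy / n
    # Horizontal unit vector perpendicular to the heading (the "look" direction).
    lx, ly = (hy, -hx) if look_side == "right" else (-hy, hx)
    if slant_range < sz:
        raise ValueError("slant range shorter than sensor altitude")
    # Ground distance from the sub-sensor point (right triangle with the altitude).
    ground = math.sqrt(slant_range ** 2 - sz ** 2)
    return (sx + ground * lx, sy + ground * ly, 0.0)
```

The returned point satisfies all three conditions: its distance to the sensor is R, it lies in the plane perpendicular to the heading through the sensor, and its altitude is 0.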
=== Candidate Point Extraction Unit 114 ===
The candidate point extraction unit 114 associates, with each feature point to which coordinates in the reference coordinate system have been assigned, the points involved in that feature point (hereinafter, "candidate points"). The candidate points involved in a feature point are described below.
The signal intensity shown at a feature point located in a region where layover occurs (call this point P) may be the sum of the intensities of reflected waves from a plurality of points. In the present embodiment, the points in the three-dimensional space that may have contributed to the signal intensity shown at point P are called the candidate points involved in point P.
FIG. 4 is a diagram for explaining an example of candidate points. FIG. 4 is a cross-sectional view of the reference three-dimensional space cut by a plane that passes through point P and is perpendicular to the traveling direction (azimuth direction) of the radar.
The line GL is the cross-sectional line of the reference plane in the reference three-dimensional space, that is, of the plane on which the feature points are located. The line ML is the cross-sectional line of the three-dimensional structure represented by the model data 1113. Point S1 indicates the position of the radar. The position of point P is the coordinate position assigned by the geocoding unit 113. The distance between point P and point S1 is assumed to be R.
What is reflected in the signal intensity shown at point P are the reflected waves from the points whose distance from point S1 in the cross-sectional view is R. That is, the points involved in point P are the points at which the arc of radius R centered on point S1 intersects the line ML. In FIG. 4, points Q1, Q2, Q3, and Q4 are the points, other than point P, at which the arc of radius R centered on point S1 intersects the line ML. These points Q1, Q2, Q3, and Q4 are therefore the candidate points involved in point P.
In this way, the candidate point extraction unit 114 may extract, as candidate points, the points on the plane containing point P and perpendicular to the traveling direction of the radar whose distance from the radar equals the distance between the radar and point P.
However, since point Q3 is hidden from point S1 (that is, it lies in the so-called radar shadow), it is unlikely that the electromagnetic wave reflected at this point contributed to the signal intensity shown at point P. The candidate points extracted by the candidate point extraction unit 114 may therefore be points Q1, Q2, and Q4, excluding point Q3. That is, the candidate point extraction unit 114 may exclude point Q3 from the candidate points based on the fact that the line segment connecting point Q3 and point S1 intersects the line ML at a point other than Q3.
The information required for extracting candidate points as described above consists of the cross-sectional line of the model data 1113 in the plane of the reference three-dimensional space passing through point P and perpendicular to the azimuth direction, the positions of point S1 and point P, and the distance R between point S1 and point P.
When point S1 is sufficiently far away, the incident directions of the electromagnetic waves from point S1 onto the observed object can be approximated as all parallel to one another. Accordingly, when point S1 is sufficiently far away, as shown in FIG. 5, the candidate points can be identified by finding the intersections of the line ML with the straight line that passes through point P and is perpendicular to the incident line of the electromagnetic wave from the radar to point P. In FIG. 5, however, point Q3 may be excluded from the candidate points because the straight line passing through point Q3 parallel to the incident line of the electromagnetic wave from the radar intersects the line ML (that is, because point Q3 lies in the radar shadow). The candidate point extraction unit 114 may thus extract candidate points under the approximation that the incident directions of the electromagnetic waves from the observation device onto the observed object are all parallel to one another. With this method of extraction, the positions of the candidate points can be calculated using the azimuth and the depression angle θ of point S1 instead of the coordinates of point S1 and the distance R.
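Under this parallel-ray approximation, the candidate points in the cross-section can be found by intersecting the iso-range line through point P with the segments of the cross-sectional line ML. The sketch below illustrates this in 2D; the function name, the polyline representation of ML, and the sign conventions are assumptions, and the radar-shadow exclusion is omitted.

```python
import math

def candidate_points(profile, p, depression_deg):
    """Intersect the iso-range line through ground point P with a terrain
    cross-section, under the parallel-ray approximation (FIG. 5).

    profile        : list of (x, z) vertices of line ML, ordered by x
    p              : (x, z) of the geocoded feature point P (on the reference plane)
    depression_deg : depression angle θ; rays travel toward +x and -z
    Returns the intersection points other than P itself (candidate points),
    without the radar-shadow test.
    """
    th = math.radians(depression_deg)
    ux, uz = math.cos(th), -math.sin(th)  # incidence (ray) direction
    px, pz = p

    def range_coord(q):
        # Signed coordinate along the ray direction; equal values mean
        # equal slant range under the parallel-ray approximation.
        return (q[0] - px) * ux + (q[1] - pz) * uz

    cands = []
    for a, b in zip(profile[:-1], profile[1:]):
        ra, rb = range_coord(a), range_coord(b)
        if ra == rb:
            continue  # segment parallel to the iso-range line (or degenerate)
        if ra * rb <= 0.0:  # the iso-range line through P crosses this segment
            t = ra / (ra - rb)
            qx = a[0] + t * (b[0] - a[0])
            qz = a[1] + t * (b[1] - a[1])
            if abs(qx - px) > 1e-9 or abs(qz - pz) > 1e-9:
                cands.append((qx, qz))
    return cands
```

For a building-shaped profile, the intersections fall on the wall facing the radar and on the roof, matching the layover situation of FIG. 4.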
The candidate point extraction unit 114 sends the candidate points involved in the feature point to the display mode determination unit 115 and the image generation unit 116.
=== Display Mode Determination Unit 115 ===
The display mode determination unit 115 determines the display mode of each candidate point displayed in the image generated by the image generation unit 116, described later, based on the position of that candidate point in the three-dimensional space and the capturing condition information 1115.
A display mode is the manner of display determined by, for example, the shape, size, color, brightness, transparency, and motion of a displayed figure or the like, and their changes over time. In the present disclosure, the "display mode of a candidate point" means the display mode of the indication that shows the position of the candidate point, and "displaying a candidate point" means displaying an indication of the position of the candidate point.
As described above, the capturing conditions are indicated by the capturing condition information 1115.
In determining the display modes of the candidate points, the display mode determination unit 115 receives the coordinates of the candidate points from the candidate point extraction unit 114. The display mode determination unit 115 further reads, from the storage unit 111, the model data 1113 and the capturing condition information 1115 of the spatial image used for the image generated by the image generation unit 116.
Specific examples of how the display mode determination unit 115 determines the display mode are described below.
(1) Determining the display mode in consideration of blind spots
The display mode determination unit 115 determines, for example, different display modes depending on whether or not a candidate point is located, in the three-dimensional space, in a region that is a blind spot in the spatial image.
A region that is a blind spot in the spatial image is a region that, while included in the captured range of the spatial image, is blocked by an object appearing in the spatial image and therefore cannot be seen from the position of the capturing body that captured the spatial image. In the present embodiment, a region that is a blind spot in the spatial image is also referred to as an occlusion region.
FIG. 6 is a diagram for explaining the occlusion region. In FIG. 6, point S2 indicates the position of the capturing body. The solid M is a rectangular parallelepiped structure. The range of the three-dimensional region indicated by the dotted lines is the captured range. The straight lines Lc, Ld, and Le extend from the position of point S2 through the points Qc, Qd, and Qe on the solid M, respectively. The three-dimensional region indicated by hatching in FIG. 6 is the occlusion region.
For example, the display mode determination unit 115 first determines, for each candidate point, whether or not the candidate point is located within the occlusion region.
As an example, a method by which the display mode determination unit 115 determines whether point Q2 is located in the occlusion region is described below. FIG. 7 is a diagram for explaining the method of determining whether a candidate point is located in the occlusion region. FIG. 7 shows the plane containing point S2, which represents the position of the capturing body of the spatial image, and point Q2. The line GL is the cross-sectional line of the reference plane of the reference three-dimensional space, and the line ML is the cross-sectional line of the three-dimensional structure represented by the model data 1113.
The display mode determination unit 115 first calculates the line segment connecting the candidate point Q2 and point S2. The display mode determination unit 115 then determines whether this line segment intersects the line ML at a point other than Q2 (that is, whether it intersects the model data 1113). If the line segment intersects the line ML, the display mode determination unit 115 determines that the candidate point is located in the occlusion region. If the line segment does not intersect the line ML, the display mode determination unit 115 determines that the candidate point is not located in the occlusion region. In the example of FIG. 7, the line segment connecting point Q2 and point S2 intersects the line ML at point Qf. Therefore, the display mode determination unit 115 determines that the candidate point Q2 is located in the occlusion region.
In this manner, for example, the display mode determination unit 115 determines, for each candidate point included in the captured range of the spatial image, whether or not the candidate point is located in the occlusion region.
In the above description, the line segment connecting a candidate point and point S2 may instead be a half line extending from the candidate point toward the capturing body. When the position of point S2 is sufficiently far from the observed object, the direction toward the capturing body can be regarded as the same for every candidate point. That is, by regarding point S2 as sufficiently far away, the display mode determination unit 115 can perform the determination for all candidate points in the spatial image using half lines of the same direction. In this case, the computational cost of the determination can be reduced.
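The determination with a fixed half-line direction can be sketched as a 2D ray-versus-polyline test in the cross-sectional plane, in the spirit of FIG. 7. The representation of line ML as a polyline and all names are illustrative assumptions.

```python
import math

def in_occlusion_region(q, view_dir, profile, eps=1e-9):
    """Parallel-ray occlusion test in a 2D cross-section (the capturing body
    regarded as sufficiently far away).

    q        : (x, z) candidate point
    view_dir : direction from the scene toward the capturing body
    profile  : list of (x, z) vertices of line ML, ordered by x
    Returns True if the half line from q toward the capturing body crosses
    the model cross-section, i.e. the candidate point is in a blind spot.
    """
    qx, qz = q
    dx, dz = view_dir
    n = math.hypot(dx, dz)
    dx, dz = dx / n, dz / n
    for a, b in zip(profile[:-1], profile[1:]):
        ex, ez = b[0] - a[0], b[1] - a[1]
        denom = dx * ez - dz * ex  # 2D cross product of ray and segment
        if abs(denom) < eps:
            continue  # segment parallel to the viewing ray
        wx, wz = a[0] - qx, a[1] - qz
        # Solve q + t*d = a + s*(b - a) for t (along the ray) and s (along the segment).
        t = (wx * ez - wz * ex) / denom
        s = (wx * dz - wz * dx) / denom
        if t > eps and 0.0 <= s <= 1.0:
            return True
    return False
```

A ground point just behind a building (relative to the viewing direction) tests positive; a point far behind it, with an unobstructed line of sight, tests negative.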
After the determination, the display mode determination unit 115 determines the display mode of each candidate point so that the display mode of candidate points located in the occlusion region differs from the display mode of candidate points not located in the occlusion region.
The display mode determination unit 115 associates information representing the determined display mode with the candidate point. For example, the display mode determination unit 115 may set the values of properties related to the display mode of each candidate point. Alternatively, the display mode determination unit 115 may associate with the candidate point the identification information of an already prepared set of property values related to a display mode. Associating second information with first information means generating data indicating that the first information and the second information are associated with each other.
For example, the display mode determination unit 115 sets the transparency value of candidate points not located in the occlusion region to a value different from the transparency of candidate points located in the occlusion region. Transparency is a parameter indicating the degree to which a displayed figure contributes to the pixel values at the position where it is superimposed on an image. For example, when a figure with a transparency of 0% is superimposed, the pixel values at the superimposed position depend only on the color of the figure. When a figure with a transparency other than 0% is superimposed, the pixel values at the superimposed position also depend on the colors of the pixels before the superimposition. In other words, a figure whose transparency is not 0% is displayed as a semi-transparent figure.
FIG. 8 is a table showing examples of properties related to the display mode and examples of the values of each property for two display modes (a "first display mode" and a "second display mode"). The display mode determination unit 115 may, for example, store in advance data representing the contents of a table such as that shown in FIG. 8, or have the storage unit 111 store such data, and associate the "first display mode" or the "second display mode" with each candidate point. There may also be, among the properties related to the display mode, a property that specifies whether or not the figure is displayed.
When the table shown in FIG. 8 is used, for example, the display mode determination unit 115 may associate the first display mode as the display mode of candidate points not located in the occlusion region, and the second display mode as the display mode of candidate points located in the occlusion region. By doing so, the candidate points located in the occlusion region are displayed on the display device 21 in a transparent color.
There may be a display mode in which the value of the "transparency" property is "100%". The transparency may also be settable for each part of the displayed figure (such as the outline and the interior region).
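One possible way to represent such property sets and associate them with candidate points is sketched below. The property names and values are illustrative assumptions, not those of FIG. 8.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DisplayMode:
    shape: str
    size_px: int
    color: str
    transparency: int  # percent; 0 = opaque, 100 = fully transparent

# Illustrative property sets in the spirit of FIG. 8 (values are assumptions).
FIRST_DISPLAY_MODE = DisplayMode("circle", 8, "red", 0)
SECOND_DISPLAY_MODE = DisplayMode("circle", 8, "red", 50)

def assign_display_modes(candidates, is_occluded):
    """Associate each candidate point with a display mode: the
    semi-transparent mode for points in the occlusion region, the opaque
    mode otherwise."""
    return {
        q: (SECOND_DISPLAY_MODE if is_occluded(q) else FIRST_DISPLAY_MODE)
        for q in candidates
    }
```

The image generation unit can then look up, for each candidate point, the property set to use when drawing its indication.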
 (1′) Determination of the display mode in consideration of the degree of the blind spot
 In the example described above, the display mode determination unit 115 may determine, for each candidate point, a display mode that differs according to the number of times the half line extending from the candidate point toward the observation device intersects the line ML.
 For example, the display mode determination unit 115 may set the transmittance of a candidate point whose half line intersects the line ML once to 50%, and the transmittance of a candidate point whose half line intersects the line ML two or more times to 80%.
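Counting how many times the half line from a candidate point toward the observation device crosses the surface profile (the line ML) can be sketched in two dimensions as follows. This is an illustrative sketch only: the representation of the line ML as a list of segments, and all names and thresholds, are assumptions rather than details of the embodiment.

```python
def count_crossings(ray_origin, ray_dir, segments):
    """Count how many surface segments the half line starting at
    ray_origin and pointing along ray_dir crosses."""
    ox, oy = ray_origin
    dx, dy = ray_dir
    hits = 0
    for (ax, ay), (bx, by) in segments:
        ex, ey = bx - ax, by - ay
        denom = dx * ey - dy * ex
        if denom == 0:
            continue  # ray is parallel to this segment
        # Solve origin + t*dir == a + u*(b - a) for t (ray) and u (segment).
        t = ((ax - ox) * ey - (ay - oy) * ex) / denom
        u = ((ax - ox) * dy - (ay - oy) * dx) / denom
        if t > 1e-9 and 0.0 <= u <= 1.0:
            hits += 1
    return hits

def transmittance_for(crossings):
    """Map the crossing count to a transmittance, as in the example:
    0 crossings -> opaque, 1 -> 50%, 2 or more -> 80%."""
    if crossings == 0:
        return 0
    return 50 if crossings == 1 else 80
```

A candidate point behind one wall would thus be drawn at 50% transmittance, and one behind two or more surfaces at 80%.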
 This makes it possible to deepen the viewer's understanding of the positions of candidate points located in the occlusion area. For example, based on the degree of transmittance, the viewer can tell whether a candidate point displayed in a transmissive color is located on a wall of the building or on the ground farther behind it.
 (2) Determination of the display mode according to distance
 The display mode determination unit 115 may determine, for example, a display mode that changes according to the distance between the candidate point and the imaging platform.
 For example, the display mode determination unit 115 may determine the display mode of each candidate point so that the farther the candidate point is from the imaging platform, the smaller the graphic indicating the candidate point becomes. Alternatively, the display mode determination unit 115 may determine the display mode of each candidate point so that the farther the candidate point is from the imaging platform, the lower the brightness of the graphic indicating the candidate point becomes. Such a configuration makes the positional relationship between candidate points easier to understand.
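A distance-dependent display mode of this kind can be sketched as a simple linear interpolation. The parameter names and the default sizes and brightness values below are illustrative assumptions, not values taken from the embodiment:

```python
def marker_style(distance, d_near, d_far,
                 size_near=12, size_far=4,
                 value_near=1.0, value_far=0.4):
    """Interpolate marker size and brightness so that candidate points
    farther from the imaging platform are drawn smaller and darker."""
    # Normalize the distance into [0, 1] and clamp it.
    t = (distance - d_near) / (d_far - d_near)
    t = max(0.0, min(1.0, t))
    size = size_near + t * (size_far - size_near)
    value = value_near + t * (value_far - value_near)
    return size, value
```

A renderer would use the returned size as, for example, the circle radius in pixels and the returned value as an HSV brightness for the marker color.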
 After determining the display mode of each candidate point, the display mode determination unit 115 sends the display modes of the candidate points to the image generation unit 116.
 === Image Generation Unit 116 ===
 The image generation unit 116 generates the image that the display control unit 117 causes the display device 21 to display. Specifically, the image generation unit 116 generates an image in which a display indicating the positions of the candidate points is superimposed on a spatial image. In the present disclosure, "generating an image" means generating data for displaying an image. The format of the data generated by the image generation unit 116 is not limited to an image format; the generated image may be any data having the information necessary for the display device 21 to perform the display.
 The image generation unit 116 reads the spatial image used for the generated image from among the spatial images 1114 stored in the storage unit 111. The image generation unit 116 may determine which image to read based on, for example, an instruction from the user. For example, the image generation unit 116 may receive, from the user, information designating one of the plurality of spatial images 1114. Alternatively, for example, the image generation unit 116 may receive information designating a range in the three-dimensional space and read a spatial image that includes the designated range.
 Alternatively, the image generation unit 116 may receive information designating the feature points or candidate points that the user wishes to display. The image generation unit 116 may then specify a range in the reference three-dimensional space that includes the designated feature points or candidate points, and read a spatial image that includes the specified range. Note that the information designating the feature points or candidate points that the user wishes to display may be information designating the SAR data 1111.
 The image generation unit 116 may cut out a part of a spatial image 1114 stored in the storage unit 111 and read it as the spatial image to be used. For example, when reading a spatial image based on the candidate points that the user wishes to display, the image generation unit 116 may cut out, from the spatial image 1114, a range that includes all of those candidate points, and read the cut-out image as the spatial image to be used.
 The image generation unit 116 also receives, from the candidate point extraction unit 114, the coordinates of the candidate points extracted by the candidate point extraction unit 114. Furthermore, the image generation unit 116 acquires, from the display mode determination unit 115, information indicating the display mode of each candidate point.
 Then, the image generation unit 116 superimposes, on the read spatial image, a display indicating the positions of the candidate points extracted by the candidate point extraction unit 114 in the display modes determined by the display mode determination unit 115. A spatial image in which the candidate points are shown is thereby generated.
 Hereinafter, the spatial image generated by the image generation unit 116 in which the candidate points are shown is also referred to as a "point display image".
 A specific example of how the image generation unit 116 specifies the positions of the candidate points in the spatial image will now be described.
 The image generation unit 116 may specify the position of a candidate point in the spatial image by a calculation based on the shooting condition information 1115.
 For example, the image generation unit 116 specifies the shooting range and the shooting direction of the spatial image based on the shooting condition information 1115. Then, the image generation unit 116 obtains the cross section of the shooting range cut by a plane that contains the candidate point and is perpendicular to the shooting direction. The positional relationship between this cross section and the candidate point corresponds to the positional relationship between the spatial image and the candidate point. The image generation unit 116 may specify the coordinates of the candidate point obtained when the coordinates of the cross section are related to the coordinates of the spatial image. The specified coordinates are the coordinates of the candidate point in the spatial image.
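Under an orthographic approximation, the cut-plane construction in this paragraph can be sketched as follows. This is illustrative only: in practice the basis vectors and footprint would be derived from the shooting condition information 1115, and all names here are assumptions.

```python
def dot(a, b):
    """Dot product of two 3-vectors given as tuples."""
    return sum(x * y for x, y in zip(a, b))

def project_to_image(candidate, camera, right, up,
                     image_w, image_h, footprint_w, footprint_h):
    """Express the candidate point in a basis (right, up) spanning the
    plane perpendicular to the shooting direction, then map those
    plane coordinates onto pixel coordinates of the spatial image."""
    d = tuple(c - o for c, o in zip(candidate, camera))
    x_plane = dot(d, right)   # horizontal offset within the cut plane
    y_plane = dot(d, up)      # vertical offset within the cut plane
    # Map the plane footprint to pixels, with the origin at the image center.
    col = image_w / 2 + x_plane / footprint_w * image_w
    row = image_h / 2 - y_plane / footprint_h * image_h
    return col, row
```

A real sensor model (including orthorectification, as noted below) would replace this orthographic mapping, but the structure of the calculation is the same: plane coordinates first, then image coordinates.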
 Note that an optical satellite image may have been corrected by orthorectification or the like. When the optical satellite image is corrected, the positions at which the candidate points are shown are also corrected. The positions of the candidate points may be corrected using the correction parameters used in the correction of the optical satellite image.
 The method of specifying the positions of the candidate points in the spatial image described above is merely an example. The image generation unit 116 may specify the position of a candidate point in the spatial image based on the position of the candidate point in the reference coordinate system and the relationship between the spatial image and the reference coordinate system.
 The image generation unit 116 sends the generated point display image to the display control unit 117.
 === Display Control Unit 117 and Display Device 21 ===
 The display control unit 117 performs control for causing the display device 21 to display the point display image generated by the image generation unit 116. The display control unit 117 causes the display device 21 to display the point display image by, for example, outputting the point display image to the display device 21.
 The display device 21 is a display such as a liquid crystal monitor or a projector. The display device 21 may have a function as an input unit, like a touch panel. In the description of the present embodiment, the display device 21 is connected to the information processing device 11 as a device external to the information processing device 11, but the display device 21 may instead be included in the information processing device 11 as a display unit.
 A viewer who looks at the display on the display device 21 learns the result of the processing by the information processing device 11. Specifically, the viewer can observe the point display image generated by the image generation unit 116.
 <Operation>
 (Extraction of candidate points)
 The flow of operations of the information processing device 11 relating to the process of extracting candidate points will be described with reference to the flowchart of FIG. 9.
 The feature point extraction unit 112 of the information processing device 11 acquires the SAR data 1111 from the storage unit 111 (step S111). The acquired SAR data 1111 includes at least the SAR data of the range included in the spatial image used in step S121, described later.
 Then, the feature point extraction unit 112 extracts feature points from the acquired SAR data 1111 (step S112).
 Next, the geocoding unit 113 assigns, to each extracted feature point, coordinates indicating the position of the feature point in the reference coordinate system (step S113). The geocoding unit 113 sends the coordinates assigned to the extracted feature points to the candidate point extraction unit 114.
 The candidate point extraction unit 114 extracts the candidate points related to each feature point based on the coordinates of the feature point and the model data 1113 (step S114). That is, the candidate point extraction unit 114 specifies the coordinates of the candidate points related to the feature point. The candidate point extraction unit 114 then sends the coordinates of the candidate points to the display mode determination unit 115 and the image generation unit 116. The candidate point extraction unit 114 may also store the coordinates of the candidate points in the storage unit 111 in a format in which the feature points and the candidate points are associated with each other.
 (Generation of the point display image)
 The flow of operations of the information processing device 11 relating to the process of generating the point display image will be described with reference to the flowchart of FIG. 10.
 The display mode determination unit 115 reads the shooting condition information of the spatial image used in the process of step S124, described later (step S121). The display mode determination unit 115 also acquires the coordinates of the candidate points extracted by the candidate point extraction unit 114 from the candidate point extraction unit 114. When the coordinates of the candidate points are stored in the storage unit 111, the display mode determination unit 115 may read them from the storage unit 111. Then, the display mode determination unit 115 specifies the candidate points included in the range of the spatial image based on the shooting condition information (step S122).
 Then, the display mode determination unit 115 determines the display mode of each candidate point included in the range of the spatial image, based on the positions of the candidate points included in the range of the spatial image, the model data 1113, and the shooting condition information (step S123). The display mode determination unit 115 sends information on the determined display modes of the candidate points to the image generation unit 116.
 Then, the image generation unit 116 generates the point display image, which is a spatial image in which the positions of the candidate points are shown (step S124). Specifically, the image generation unit 116 reads the spatial image from the storage unit 111 and superimposes on that spatial image a display indicating the positions of the candidate points in the display modes determined by the display mode determination unit 115.
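The flow of steps S122 to S124 can be summarized in an illustrative sketch; the callables below stand in for the units of the embodiment and, like the data representation, are assumptions:

```python
def generate_point_display_image(base_image, candidates, in_image_range,
                                 decide_display_mode, draw_marker):
    """Select the candidate points falling inside the spatial image,
    decide a display mode for each, and superimpose one marker per
    candidate point on a copy of the image."""
    image = list(base_image)  # work on a copy of the spatial image
    visible = [c for c in candidates if in_image_range(c)]    # step S122
    modes = {c: decide_display_mode(c) for c in visible}      # step S123
    for c in visible:                                         # step S124
        image = draw_marker(image, c, modes[c])
    return image
```

Here the "image" may be any drawable representation; the sketch only fixes the order of the steps, not the rendering itself.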
 Note that the timing at which the spatial image used by the image generation unit 116 is determined may be before or after the timing at which the process of acquiring the SAR data is performed. That is, in one example, after the spatial image to be used is determined, the information processing device 11 may specify the SAR data 1111 that is the data of the range included in the determined spatial image, and execute the processes of steps S111 to S114 on the specified SAR data. In another example, before the spatial image to be used is determined, the information processing device 11 may execute the processes of steps S111 to S114 in advance on the SAR data 1111 of the range that can be included in the spatial images 1114.
 The image generation unit 116 sends the generated image to the display control unit 117.
 The display control unit 117 causes the display device 21 to display the point display image generated by the image generation unit 116. The display device 21 thereby displays the point display image generated by the image generation unit 116 (step S125). By looking at this display, the viewer can easily understand the candidate points related to the feature points extracted from the SAR data 1111.
 <Effect>
 According to the information processing device 11 of the first embodiment, the viewer can easily understand the points that contribute to the signal at a point in a region of the SAR image where layover occurs. This is because the candidate point extraction unit 114 extracts, based on the model data 1113, the candidate points that may have contributed to the signal at the feature point, and the image generation unit 116 generates the point display image, which is a spatial image in which the positions of the candidate points are displayed.
 By virtue of the display mode determination unit 115, the positions of the candidate points displayed in the point display image are shown in a more easily understandable display mode based on the positions of the candidate points and the shooting conditions.
 In the following, examples of point display images are introduced for the case where the display mode determination unit 115 determines, for each candidate point, a different display mode depending on whether or not the candidate point is located in a region that is a blind spot in the spatial image.
 FIG. 11 is a diagram showing one example of a point display image displayed by the display device 21. Note that the characters "Q5" and "Q6" and the curves pointing to the points are shown for convenience of explanation and are not included in the actually displayed image. In the example shown in FIG. 11, circles representing candidate points are superimposed on an optical satellite image showing a building. In this example, none of the candidate points is located in the occlusion area. Therefore, the display mode of each candidate point is uniform.
 FIG. 12 is a diagram showing an example of a point display image that uses, as the spatial image, an optical image of the building and candidate points shown in FIG. 11 captured from a different position. In the example shown in FIG. 12, the candidate points Q5 and Q6 are located in the occlusion area and are therefore displayed in a display mode different from that of the other candidate points.
 With such a display, a viewer looking at the point display image can recognize that the candidate points Q5 and Q6 are not located on the visible surfaces of the building. Specifically, if, in FIG. 12, the surface of the building containing the white squares (representing windows) is defined as the "front surface" for convenience of explanation, the viewer can recognize that the candidate points Q5 and Q6 are not located on the front surface of the building.
 Next, an example of a point display image is introduced for the case where the display mode determination unit 115 determines, for each candidate point, a different display mode according to the distance of the candidate point from the imaging platform. FIG. 13 is a diagram showing a point display image with the same angle of view as the point display image shown in FIG. 12, in the case where the display mode determination unit 115 has determined the display modes so that the size of the circle indicating a candidate point differs according to the distance. In the example shown in FIG. 13, the circles indicating the candidate points Q5 and Q6 are displayed smaller than the circles indicating the other candidate points located on the front of the building, so it can be seen that the candidate points Q5 and Q6 are not points located on the front surface of the building.
 FIG. 14 is a diagram showing a point display image with the same angle of view as the point display image shown in FIG. 13, in the hypothetical case where the display mode determination unit 115 does not function. If the display mode determination unit 115 did not function, the display mode of the candidate points Q5 and Q6 would be identical to that of the other candidate points. In that case, the viewer cannot determine, merely by looking at the point display image shown in FIG. 14, on which surface of the building the candidate points Q5 and Q6 are located. Therefore, for example, even if the viewer obtains useful information from the information represented by the point Q5, the viewer cannot determine to which position on the building that information relates. For example, in PS-InSAR, displacement of a feature point can be observed using two sets of SAR data 1111 acquired at different times. However, even if the viewer learns that a feature point to which the candidate point Q5 relates has been displaced, the viewer cannot determine which specific position has been displaced. Alternatively, the viewer may mistakenly recognize that the front surface of the building has been displaced.
 Furthermore, when the viewer attempts to judge, based on the positions of the candidate points, the degree to which each candidate point contributes to the signal of the feature point, the positions of the points Q5 and Q6 are undetermined, so the degree to which the points Q5 and Q6 contribute to the signal of the feature point cannot be judged.
 The display mode determination unit 115 resolves the inconveniences described above. That is, the information processing device 11 provides the viewer with more easily understandable information regarding the feature points.
 << Modification 1 >>
 In the operation example of the information processing device 11 described above, the order of the process of step S111 and the process of step S112 may be reversed. That is, the feature point extraction unit 112 may extract the feature points from among the points to which coordinates have been assigned by the geocoding unit 113.
 << Modification 2 >>
 The display mode determination unit 115 may determine the display mode so that the display indicating the position of a candidate point further indicates the direction of the radar that received the signal from the candidate point, or the incident direction of the electromagnetic waves from that radar.
 FIG. 15 is a diagram showing an example of a point display image in the case where the display mode determination unit 115 has determined, for each candidate point, a display mode that indicates the incident direction of the electromagnetic waves from the radar.
 In FIG. 15, the incident direction of the electromagnetic waves from the radar is indicated by an arrow overlapping the circle that indicates the candidate point. With such a display, the viewer can know the direction of the radar.
 The graphic indicating the incident direction may be a graphic that indicates not only a direction parallel to the image but also a three-dimensional direction. FIG. 16 is a diagram showing an example of a graphic indicating a three-dimensional direction. The arrow shown in FIG. 16 points to the lower right and into the page. For example, the incident direction is the direction indicated by the arrow. In that case, the direction opposite to the direction indicated by the arrow is the direction of the radar as seen from the candidate point. With such a display, the viewer can know the incident direction of the electromagnetic waves from the radar more concretely.
 Since a candidate point is a point on which the electromagnetic waves from the radar were incident, it is known that, at least when the candidate point is viewed from the indicated direction, the candidate point is not included in an occlusion area. Therefore, the display indicating the incident direction of the electromagnetic waves from the radar lets the viewer know a shooting direction in which the candidate point is not hidden. In addition, when there are a plurality of candidates for the position of a displayed candidate point in the three-dimensional space, points that could not be observed from the indicated direction can be excluded from the candidates. The viewer may therefore be able to deepen his or her understanding of the position of the candidate point in the three-dimensional space.
 << Modification 3 >>
 The display mode determination unit 115 may further be configured to determine the display modes so that the display mode of the candidate points related to a specific feature point differs from the display mode of the other candidate points.
 For example, the display mode determination unit 115 may determine the display modes so that the candidate points related to a feature point designated by the user are displayed in white and the other candidate points are displayed in black.
 The designation of a feature point by the user is received, for example, by a designation reception unit 118. FIG. 17 is a block diagram showing the configuration of an information processing device 12 that includes the designation reception unit 118.
 The designation reception unit 118 receives, for example, the designation of a feature point from the user of the information processing device 12. For example, the information processing device 12 may cause the display device 21 to display a SAR image in which the feature points are shown. The designation reception unit 118 may then receive the user's selection of one or more of the feature points shown in the SAR image. The selection may be performed via an input/output device such as a mouse. The selected feature point is the designated feature point. The designation reception unit 118 may receive the designation of a plurality of feature points.
 The designation reception unit 118 sends information on the designated feature points to the display mode determination unit 115. The information on the designated feature points is, for example, an identification number or coordinates associated with each feature point.
 The display mode determination unit 115 specifies the candidate points related to the designated feature point. For example, the display mode determination unit 115 may cause the candidate point extraction unit 114 to extract the candidate points related to the designated feature point, and receive information on the extracted candidate points. Alternatively, when information associating feature points with candidate points is stored in the storage unit 111, the display mode determination unit 115 may specify the candidate points based on that information.
 The designation reception unit 118 may receive the designation of a candidate point instead of the designation of a feature point. For example, the user may select one of the candidate points included in the point display image displayed by the process of step S125. The designation reception unit 118 may receive the selection and specify the feature point to which the selected candidate point relates. The designation reception unit 118 may then specify the candidate points related to that feature point.
 The display mode determination unit 115 determines, as the display mode of the specified candidate points, a display mode different from that of the other candidate points. The image generation unit 116 then generates a point display image in which the candidate points are displayed in the determined display modes. When this point display image is displayed by the display device 21, the viewer can see the information on the candidate points related to the designated feature point.
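The determination of display modes in this modification can be sketched as follows. The white/black colors follow the example described above; the 50% transmittance for occluded candidate points and the dictionary-based data representation are illustrative assumptions:

```python
def highlight_modes(candidates_by_feature, designated, in_occlusion):
    """Assign a (color, transmittance_pct) display mode to every
    candidate point: candidates of the designated feature point are
    white (semitransparent if occluded), all others are black."""
    modes = {}
    for feature, candidates in candidates_by_feature.items():
        for c in candidates:
            if feature == designated:
                modes[c] = ("white", 50 if in_occlusion(c) else 0)
            else:
                modes[c] = ("black", 0)
    return modes
```

The occlusion test `in_occlusion` would be the same judgment the display mode determination unit 115 already performs for the blind-spot display modes.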
 FIG. 18 is a diagram showing an example of a point display image generated by the information processing device 12 according to the present Modification 3. In FIG. 18, the candidate points related to a specific feature point are displayed in white, and the other candidate points are displayed in black. However, among the candidate points related to the specific feature point, the graphics indicating candidate points located in the occlusion area have a transmittance of 50%.
 The display mode determination unit 115 may also determine the display mode of the candidate points associated with the specific feature point so that their display includes an indication of the incident direction of the radar, as in the second modification.
 Such a display further reduces the possibility that the viewer misidentifies the position of a candidate point. For example, it is easily understood that the candidate point located on the wall surface of the building shown in FIG. 18 lies on a surface of the building different from its front surface.
 With the configuration of this modification, the viewer can easily identify the candidate points associated with a specific feature point.
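As a purely illustrative sketch (not part of the disclosed apparatus), the display-mode rule described for this modification — candidate points of the selected feature point in white, other candidate points in black, and occluded candidates of the selected feature point at the 50% transmittance of FIG. 18 — could be expressed as follows. All identifiers (`CandidatePoint`, `DisplayMode`, `decide_display_mode`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CandidatePoint:
    feature_id: int       # feature point this candidate point contributes to
    occluded: bool        # True if the point lies in an occlusion area

@dataclass
class DisplayMode:
    color: str
    transmittance: float  # 0.0 = opaque; 0.5 = the 50% transmittance of FIG. 18

def decide_display_mode(point: CandidatePoint, selected_feature_id: int) -> DisplayMode:
    # Candidate points of the selected feature point are white; those among
    # them in an occlusion area are drawn at 50% transmittance; all other
    # candidate points are black.
    if point.feature_id == selected_feature_id:
        return DisplayMode("white", 0.5 if point.occluded else 0.0)
    return DisplayMode("black", 0.0)
```

A renderer would then draw each candidate-point figure with the returned color and transmittance.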
 << Second Embodiment >>
 An information processing apparatus 10 according to an embodiment of the present invention will be described.
 FIG. 19 is a block diagram illustrating the configuration of the information processing apparatus 10. The information processing apparatus 10 includes a candidate point extraction unit 104, a display mode determination unit 105, and an image generation unit 106.
 The candidate point extraction unit 104 extracts candidate points, which are points contributing to the signal at a target point, based on the position in three-dimensional space of the target point, which is a point specified in the intensity map of the signal from the observed object acquired by the radar, and on the shape of the observed object. The candidate point extraction unit 114 of each of the above embodiments is an example of the candidate point extraction unit 104.
 The signal intensity map is, for example, a SAR image. A point specified in the intensity map is associated with a point in three-dimensional space. An example of the target point is the feature point in the first embodiment. The shape of the observed object is given, for example, by three-dimensional model data.
 The display mode determination unit 105 determines the display mode of the display indicating the position of a candidate point in a spatial image in which the observed object is captured, based on the position of the candidate point in three-dimensional space and on the imaging conditions of the spatial image. The shape of the subject shown in the spatial image is given, for example, by three-dimensional model data.
 The display mode determination unit 115 of each of the above embodiments is an example of the display mode determination unit 105.
 The image generation unit 106 generates an image in which the position of the candidate point in the spatial image is displayed in the determined display mode. The association between points in three-dimensional space and points in the spatial image may be established in advance or may be performed by the image generation unit 106. In other words, the image generation unit 106 generates an image indicating the positions of the candidate points in the spatial image based on the positions of the candidate points in three-dimensional space and on the relationship between the spatial image and the three-dimensional space.
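For illustration only, one common way to associate a point in three-dimensional space with a point in the spatial image is a perspective projection derived from the imaging conditions. The disclosure does not mandate any particular camera model; the 3×4 projection matrix below is an assumption introduced solely for this sketch.

```python
def project_to_image(point_3d, projection):
    """Project a 3D point (x, y, z) into pixel coordinates (u, v) using a
    3x4 projection matrix given as nested lists (hypothetical camera model)."""
    x, y, z = point_3d
    h = (x, y, z, 1.0)  # homogeneous coordinates
    u, v, w = (sum(row[i] * h[i] for i in range(4)) for row in projection)
    return (u / w, v / w)  # perspective division

# Example matrix: a camera looking down the z-axis with unit focal length
P = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0]]
```

With such a mapping, the image generation unit can place the display for each candidate point at the projected pixel position.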
 The image generation unit 116 of each of the above embodiments is an example of the image generation unit 106.
 FIG. 20 is a flowchart showing the operation flow of the information processing apparatus 10.
 The candidate point extraction unit 104 extracts candidate points, which are points contributing to the signal at a target point, based on the position in three-dimensional space of the target point, which is a point specified in the intensity map of the signal from the observed object acquired by the radar, and on the shape of the observed object (step S101).
 Next, the display mode determination unit 105 determines the display mode of the display indicating the positions of the candidate points in the spatial image in which the observed object is captured, based on the positions of the candidate points in three-dimensional space and on the imaging conditions of the spatial image (step S102).
 Then, the image generation unit 106 generates an image in which the positions of the candidate points in the spatial image are displayed in the determined display mode (step S103).
 According to this configuration, it becomes easy to understand which points on the observed object contribute to the signal at a point within a region where layover occurs in the intensity map of the signal from the observed object acquired by the radar. This is because the candidate point extraction unit 104 extracts the candidate points contributing to the signal at the target point based on the model data, and the image generation unit 106 generates an image in which the positions of the candidate points in the spatial image are displayed. Furthermore, the display mode of the display indicating the positions of the candidate points is determined based on their positions in three-dimensional space and on the imaging conditions of the spatial image.
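Steps S101 to S103 above can be sketched end to end as follows. This is a simplified, hypothetical illustration only: layover is modeled by collecting surface points whose slant range to the radar matches that of the target point, the occlusion test is supplied as a caller-provided function (in the spirit of counting how many times the half line from a point to the camera crosses the object's surface), and `project` stands for any 3D-to-image mapping derived from the imaging conditions. None of these function names come from the disclosure.

```python
import math

def extract_candidates(target_range, surface_points, radar_pos, tol=0.5):
    """Step S101 (sketch): surface points of the observed object whose distance
    to the radar equals the target point's slant range fold into the same SAR
    pixel under layover, so they are returned as candidate points."""
    return [p for p in surface_points
            if abs(math.dist(p, radar_pos) - target_range) <= tol]

def decide_mode(point, camera_pos, count_surface_hits):
    """Step S102 (sketch): a candidate hidden from the camera (the half line
    from the point toward the camera crosses the object's surface at least
    once) is drawn semi-transparently."""
    hits = count_surface_hits(point, camera_pos)  # supplied occlusion test
    return {"color": "white", "transmittance": 0.5 if hits > 0 else 0.0}

def render_markers(candidates, camera_pos, count_surface_hits, project):
    """Step S103 (sketch): pair each candidate's image position with its
    display mode; a renderer would draw one marker per pair."""
    return [(project(p), decide_mode(p, camera_pos, count_surface_hits))
            for p in candidates]
```

The occlusion-count and projection functions are deliberately left abstract, since the disclosure allows them to be computed from any representation of the model data and imaging conditions.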
 <Hardware Configuration for Implementing Each Unit of the Embodiments>
 In each of the embodiments of the present invention described above, each component of each apparatus represents a functional block.
 The processing of each component may be realized, for example, by a computer system reading and executing a program, stored on a computer-readable storage medium, that causes the computer system to execute the processing. A "computer-readable storage medium" is, for example, a portable medium such as an optical disk, a magnetic disk, a magneto-optical disk, or a nonvolatile semiconductor memory, or a storage device such as a ROM (Read Only Memory) or a hard disk built into a computer system. A "computer-readable storage medium" also includes media that dynamically hold a program for a short time, such as a communication line used when transmitting the program via a network such as the Internet or a communication line such as a telephone line, and media that temporarily hold a program, such as the volatile memory inside a computer system serving as the server or client in that case. The program may implement only a part of the functions described above, or may implement those functions in combination with a program already stored in the computer system.
 A "computer system" is, as an example, a system including a computer 900 as shown in FIG. 21. The computer 900 includes the following components:
 ・CPU (Central Processing Unit) 901
 ・ROM 902
 ・RAM (Random Access Memory) 903
 ・Program 904A and stored information 904B loaded into the RAM 903
 ・Storage device 905 that stores the program 904A and the stored information 904B
 ・Drive device 907 that reads from and writes to the storage medium 906
 ・Communication interface 908 connected to the communication network 909
 ・Input/output interface 910 for inputting and outputting data
 ・Bus 911 connecting the components
 For example, each component of each apparatus in each embodiment is realized by the CPU 901 loading into the RAM 903 and executing the program 904A that implements the function of that component. The program 904A implementing the function of each component of each apparatus is stored in advance in, for example, the storage device 905 or the ROM 902, and the CPU 901 reads it out as necessary. The storage device 905 is, for example, a hard disk. The program 904A may be supplied to the CPU 901 via the communication network 909, or may be stored in advance on the storage medium 906, read out by the drive device 907, and supplied to the CPU 901. The storage medium 906 is a portable medium such as an optical disk, a magnetic disk, a magneto-optical disk, or a nonvolatile semiconductor memory.
 There are various modifications to how each apparatus is realized. For example, each apparatus may be realized by a possible combination of a separate computer 900 and a program for each component. A plurality of components included in each apparatus may also be realized by a possible combination of a single computer 900 and a program.
 Some or all of the components of each apparatus may also be realized by other general-purpose or dedicated circuits, computers, or the like, or by combinations thereof. These may be configured as a single chip or as a plurality of chips connected via a bus.
 When some or all of the components of each apparatus are realized by a plurality of computers, circuits, or the like, these may be arranged in a centralized or a distributed manner. For example, the computers, circuits, and the like may be realized in a form in which they are connected to one another via a communication network, such as a client-server system or a cloud computing system.
 The present invention is not limited to the embodiments described above. Various changes that can be understood by those skilled in the art can be made to the configuration and details of the present invention within the scope of the present invention.
 Some or all of the above embodiments may also be described as in the following supplementary notes, but are not limited to the following.
 << Appendix >>
 [Appendix 1]
 An information processing apparatus comprising:
 candidate point extraction means for extracting candidate points, which are points contributing to a signal at a target point, based on a position in three-dimensional space of the target point, which is a point specified in an intensity map of the signal from an observed object acquired by a radar, and on a shape of the observed object;
 display mode determination means for determining a display mode of a display indicating a position of a candidate point in a spatial image in which the observed object is captured, based on the position of the candidate point in the three-dimensional space and on an imaging condition of the spatial image; and
 image generation means for generating an image in which the position of the candidate point in the spatial image is displayed in the determined display mode.
 [Appendix 2]
 The information processing apparatus according to Appendix 1, wherein the display mode determination means determines a different display mode depending on whether the candidate point is located in a region that is a blind spot as seen from the imaging body that captured the spatial image.
 [Appendix 3]
 The information processing apparatus according to Appendix 2, wherein the display mode determination means determines a different display mode according to the number of times a half line from the candidate point to the imaging body intersects a surface of a subject shown in the spatial image.
 [Appendix 4]
 The information processing apparatus according to any one of Appendices 1 to 3, wherein the display mode determination means determines the display mode of the display indicating the position of the candidate point such that the display further indicates an incident direction of the electromagnetic wave from the radar on which the signal to which the candidate point contributes is based.
 [Appendix 5]
 The information processing apparatus according to any one of Appendices 1 to 4, wherein the display mode determination means determines, for at least one candidate point, the display mode of the display indicating the position of the candidate point such that a part or all of the display is in a transparent color.
 [Appendix 6]
 The information processing apparatus according to any one of Appendices 1 to 5, wherein the display mode determination means further determines, as the display mode of the display indicating the positions of the candidate points contributing to the signal at the target point specified by a predetermined method, a display mode different from that of the display indicating the positions of the other candidate points in the spatial image.
 [Appendix 7]
 An information processing method comprising:
 extracting candidate points, which are points contributing to a signal at a target point, based on a position in three-dimensional space of the target point, which is a point specified in an intensity map of the signal from an observed object acquired by a radar, and on a shape of the observed object;
 determining a display mode of a display indicating a position of a candidate point in a spatial image in which the observed object is captured, based on the position of the candidate point in the three-dimensional space and on an imaging condition of the spatial image; and
 generating an image in which the position of the candidate point in the spatial image is displayed in the determined display mode.
 [Appendix 8]
 The information processing method according to Appendix 7, wherein a different display mode is determined depending on whether the candidate point is located in a region that is a blind spot as seen from the imaging body that captured the spatial image.
 [Appendix 9]
 The information processing method according to Appendix 8, wherein a different display mode is determined according to the number of times a half line from the candidate point to the imaging body intersects a surface of a subject shown in the spatial image.
 [Appendix 10]
 The information processing method according to any one of Appendices 7 to 9, wherein the display mode of the display indicating the position of the candidate point is determined such that the display further indicates an incident direction of the electromagnetic wave from the radar on which the signal to which the candidate point contributes is based.
 [Appendix 11]
 The information processing method according to any one of Appendices 7 to 10, wherein, for at least one candidate point, the display mode of the display indicating the position of the candidate point is determined such that a part or all of the display is in a transparent color.
 [Appendix 12]
 The information processing method according to any one of Appendices 7 to 11, wherein, as the display mode of the display indicating the positions of the candidate points contributing to the signal at the target point specified by a predetermined method, a display mode different from that of the display indicating the positions of the other candidate points in the spatial image is further determined.
 [Appendix 13]
 A computer-readable storage medium storing a program that causes a computer to execute:
 candidate point extraction processing of extracting candidate points, which are points contributing to a signal at a target point, based on a position in three-dimensional space of the target point, which is a point specified in an intensity map of the signal from an observed object acquired by a radar, and on a shape of the observed object;
 display mode determination processing of determining a display mode of a display indicating a position of a candidate point in a spatial image in which the observed object is captured, based on the position of the candidate point in the three-dimensional space and on an imaging condition of the spatial image; and
 image generation processing of generating an image in which the position of the candidate point in the spatial image is displayed in the determined display mode.
 [Appendix 14]
 The storage medium according to Appendix 13, wherein the display mode determination processing determines a different display mode depending on whether the candidate point is located in a region that is a blind spot as seen from the imaging body that captured the spatial image.
 [Appendix 15]
 The storage medium according to Appendix 14, wherein the display mode determination processing determines a different display mode according to the number of times a half line from the candidate point to the imaging body intersects a surface of a subject shown in the spatial image.
 [Appendix 16]
 The storage medium according to any one of Appendices 13 to 15, wherein the display mode determination processing determines the display mode of the display indicating the position of the candidate point such that the display further indicates an incident direction of the electromagnetic wave from the radar on which the signal to which the candidate point contributes is based.
 [Appendix 17]
 The storage medium according to any one of Appendices 13 to 16, wherein the display mode determination processing determines, for at least one candidate point, the display mode of the display indicating the position of the candidate point such that a part or all of the display is in a transparent color.
 [Appendix 18]
 The storage medium according to any one of Appendices 13 to 17, wherein the display mode determination processing further determines, as the display mode of the display indicating the positions of the candidate points contributing to the signal at the target point specified by a predetermined method, a display mode different from that of the display indicating the positions of the other candidate points in the spatial image.
 10, 11, 12 Information processing apparatus
 104 Candidate point extraction unit
 105 Display mode determination unit
 106 Image generation unit
 111 Storage unit
 112 Feature point extraction unit
 113 Geocoding unit
 114 Candidate point extraction unit
 115 Display mode determination unit
 116 Image generation unit
 117 Display control unit
 118 Designation accepting unit
 1111 SAR data
 1112 SAR data parameters
 1113 Model data
 1114 Spatial image
 1115 Imaging condition information
 21 Display device
 900 Computer
 901 CPU
 902 ROM
 903 RAM
 904A Program
 904B Stored information
 905 Storage device
 906 Storage medium
 907 Drive device
 908 Communication interface
 909 Communication network
 910 Input/output interface
 911 Bus

Claims (18)

  1.  レーダによって取得された被観測体からの信号の強度マップにおいて特定される点である対象点の、三次元空間における位置と、前記被観測体の形状とに基づいて、前記対象点における前記信号に寄与する点である候補点を抽出する候補点抽出手段と、
     前記被観測体が写った空間画像における前記候補点の位置を示す表示の表示態様を、前記候補点の前記三次元空間における位置と、当該空間画像の撮影条件と、に基づいて決定する表示態様決定手段と、
     前記空間画像における前記候補点の位置が前記決定された表示態様により表示された画像を生成する画像生成手段と、
     を備える情報処理装置。
    Based on the position in the three-dimensional space of the target point, which is the point specified in the intensity map of the signal from the observed object acquired by the radar, and the shape of the observed object, the signal at the target point Candidate point extracting means for extracting candidate points that are contributing points;
    A display mode in which a display mode indicating the position of the candidate point in the spatial image in which the observed object is captured is determined based on the position of the candidate point in the three-dimensional space and the imaging condition of the spatial image. A determination means;
    Image generating means for generating an image in which the position of the candidate point in the spatial image is displayed according to the determined display mode;
    An information processing apparatus comprising:
  2.  前記表示態様決定手段は、前記候補点が前記空間画像の撮影を行った撮影体から見て死角となる領域に位置するか否かに応じて、異なる表示態様を決定する、請求項1に記載の情報処理装置。 The display mode determining unit determines a different display mode according to whether or not the candidate point is located in a region that is a blind spot when viewed from a photographing body that has captured the spatial image. Information processing device.
  3.  前記表示態様決定手段は、前記候補点から前記撮影体への半直線が前記空間画像に写る被写体の表面と交わる回数に応じて、異なる表示態様を決定する、請求項2に記載の情報処理装置。 The information processing apparatus according to claim 2, wherein the display mode determination unit determines a different display mode according to the number of times a half line from the candidate point to the photographic object intersects the surface of the subject in the spatial image. .
  4.  前記表示態様決定手段は、前記候補点の位置を示す表示が、さらに、当該候補点が寄与する前記信号の基となる、前記レーダからの電磁波の、入射方向を示すように、前記候補点の位置を示す表示の表示態様を決定する、
     請求項1から3のいずれか一項に記載の情報処理装置。
    The display mode determining means is further configured to display the position of the candidate point such that an indication of an incident direction of an electromagnetic wave from the radar that is a basis of the signal contributed by the candidate point. Determine the display mode of the display showing the position,
    The information processing apparatus according to any one of claims 1 to 3.
  5.  前記表示態様決定手段は、少なくとも1つの前記候補点に対し、当該候補点の位置を示す表示の一部または全部が透過色であるように当該表示の表示態様を決定する、
     請求項1から4のいずれか一項に記載の情報処理装置。
    The display mode determining means determines the display mode of the display so that a part or all of the display indicating the position of the candidate point is a transparent color for at least one of the candidate points.
    The information processing apparatus according to any one of claims 1 to 4.
  6.  前記表示態様決定手段は、さらに、所定の方法により特定された前記対象点における前記信号に寄与する前記候補点の位置を示す表示の表示態様として、前記空間画像における他の前記候補点の位置を示す表示の表示態様と異なる表示態様を決定する、
     請求項1から5のいずれか一項に記載の情報処理装置。
    The display mode determination means further uses the position of the other candidate point in the spatial image as a display mode of display indicating the position of the candidate point contributing to the signal at the target point specified by a predetermined method. Determine a display mode different from the display mode of the display
    The information processing apparatus according to any one of claims 1 to 5.
  7.  レーダによって取得された被観測体からの信号の強度マップにおいて特定される点である対象点の、三次元空間における位置と、前記被観測体の形状とに基づいて、前記対象点における前記信号に寄与する点である候補点を抽出し、
     前記被観測体が写った空間画像における前記候補点の位置を示す表示の表示態様を、前記候補点の前記三次元空間における位置と、当該空間画像の撮影条件と、に基づいて決定し、
     前記空間画像における前記候補点の位置が前記決定された表示態様により表示された画像を生成する、
     情報処理方法。
    Based on the position in the three-dimensional space of the target point, which is the point specified in the intensity map of the signal from the observed object acquired by the radar, and the shape of the observed object, the signal at the target point Extract candidate points that contribute,
    A display mode of display indicating the position of the candidate point in the spatial image in which the observed object is captured is determined based on the position of the candidate point in the three-dimensional space and the imaging condition of the spatial image,
    Generating an image in which the position of the candidate point in the spatial image is displayed according to the determined display mode;
    Information processing method.
  8.  前記候補点が前記空間画像の撮影を行った撮影体から見て死角となる領域に位置するか否かに応じて、異なる表示態様を決定する、請求項7に記載の情報処理方法。 The information processing method according to claim 7, wherein a different display mode is determined depending on whether or not the candidate point is located in an area where a blind spot is seen from a photographing body that has captured the spatial image.
  9.  前記候補点から前記撮影体への半直線が前記空間画像に写る被写体の表面と交わる回数に応じて、異なる表示態様を決定する、請求項8に記載の情報処理方法。 The information processing method according to claim 8, wherein a different display mode is determined according to the number of times a half line from the candidate point to the photographic object intersects the surface of the subject in the spatial image.
  10.  前記候補点の位置を示す表示が、さらに、当該候補点が寄与する前記信号の基となる、前記レーダからの電磁波の、入射方向を示すように、前記候補点の位置を示す表示の表示態様を決定する、
     請求項7から9のいずれか一項に記載の情報処理方法。
    A display mode for indicating the position of the candidate point so that the display indicating the position of the candidate point further indicates the incident direction of the electromagnetic wave from the radar, which is the basis of the signal contributed by the candidate point. To decide,
    The information processing method according to any one of claims 7 to 9.
  11.  少なくとも1つの前記候補点に対し、当該候補点の位置を示す表示の一部または全部が透過色であるように当該表示の表示態様を決定する、
     請求項7から10のいずれか一項に記載の情報処理方法。
    The display mode of the display is determined so that a part or all of the display indicating the position of the candidate point is a transparent color for at least one of the candidate points.
    The information processing method according to any one of claims 7 to 10.
  12.  さらに、所定の方法により特定された前記対象点における前記信号に寄与する前記候補点の位置を示す表示の表示態様として、前記空間画像における他の前記候補点の位置を示す表示の表示態様と異なる表示態様を決定する、
     請求項7から11のいずれか一項に記載の情報処理方法。
    Further, the display mode indicating the position of the candidate point contributing to the signal at the target point specified by a predetermined method is different from the display mode indicating the position of the other candidate point in the spatial image. Determine the display mode,
    The information processing method according to any one of claims 7 to 11.
  13.  レーダによって取得された被観測体からの信号の強度マップにおいて特定される点である対象点の、三次元空間における位置と、前記被観測体の形状とに基づいて、前記対象点における前記信号に寄与する点である候補点を抽出する候補点抽出処理と、
     前記被観測体が写った空間画像における前記候補点の位置を示す表示の表示態様を、前記候補点の前記三次元空間における位置と、当該空間画像の撮影条件と、に基づいて決定する表示態様決定処理と、
     前記空間画像における前記候補点の位置が前記決定された表示態様により表示された画像を生成する画像生成処理と、
     をコンピュータに実行させるプログラムを記憶する、コンピュータ読み取り可能な記憶媒体。
    Based on the position in the three-dimensional space of the target point, which is the point specified in the intensity map of the signal from the observed object acquired by the radar, and the shape of the observed object, the signal at the target point Candidate point extraction processing for extracting candidate points that are contributing points;
    A display mode in which a display mode indicating the position of the candidate point in the spatial image in which the observed object is captured is determined based on the position of the candidate point in the three-dimensional space and the imaging condition of the spatial image. The decision process,
    An image generation process for generating an image in which the position of the candidate point in the spatial image is displayed according to the determined display mode;
    A computer-readable storage medium for storing a program for causing a computer to execute the program.
  14.  The storage medium according to claim 13, wherein the display mode determination process determines a different display mode depending on whether or not the candidate point is located in an area that is a blind spot as viewed from the imaging body that captured the spatial image.
  15.  The storage medium according to claim 14, wherein the display mode determination process determines a different display mode according to the number of times a half line from the candidate point toward the imaging body intersects the surface of a subject captured in the spatial image.
  16.  The storage medium according to any one of claims 13 to 15, wherein the display mode determination process determines the display mode of the display indicating the position of the candidate point such that the display further indicates the incident direction of the electromagnetic wave from the radar on which the signal to which the candidate point contributes is based.
  17.  The storage medium according to any one of claims 13 to 16, wherein the display mode determination process determines, for at least one of the candidate points, the display mode of the display indicating the position of that candidate point such that a part or all of the display is rendered in a transparent color.
  18.  The storage medium according to any one of claims 13 to 17, wherein the display mode determination process further determines, as the display mode of the display indicating the position of the candidate point contributing to the signal at the target point specified by a predetermined method, a display mode different from that of the displays indicating the positions of the other candidate points in the spatial image.
PCT/JP2017/016451 2017-04-26 2017-04-26 Information processing device, information processing method, and computer-readable storage medium WO2018198212A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/016451 WO2018198212A1 (en) 2017-04-26 2017-04-26 Information processing device, information processing method, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/016451 WO2018198212A1 (en) 2017-04-26 2017-04-26 Information processing device, information processing method, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2018198212A1 true WO2018198212A1 (en) 2018-11-01

Family

ID=63919532

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/016451 WO2018198212A1 (en) 2017-04-26 2017-04-26 Information processing device, information processing method, and computer-readable storage medium

Country Status (1)

Country Link
WO (1) WO2018198212A1 (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0933649A (en) * 1995-07-21 1997-02-07 Toshiba Corp Discrimination operation apparatus of isar image target
JP2002267749A (en) * 2001-03-14 2002-09-18 Mitsubishi Electric Corp Image radar equipment
JP2004333445A (en) * 2003-05-12 2004-11-25 Mitsubishi Electric Corp Device and program for assisting ground truth
JP2008185375A (en) * 2007-01-29 2008-08-14 Mitsubishi Electric Corp 3d shape calculation device of sar image, and distortion correction device of sar image
US20120274505A1 (en) * 2011-04-27 2012-11-01 Lockheed Martin Corporation Automated registration of synthetic aperture radar imagery with high resolution digital elevation models
JP2016090361A (en) * 2014-11-04 2016-05-23 国立研究開発法人情報通信研究機構 Extraction method of vertical structure from sar interferogram
WO2016125206A1 (en) * 2015-02-06 2016-08-11 三菱電機株式会社 Synthetic-aperture-radar-signal processing device
US20160259046A1 (en) * 2014-04-14 2016-09-08 Vricon Systems Ab Method and system for rendering a synthetic aperture radar image
US20170059702A1 (en) * 2015-05-07 2017-03-02 Thales Holdings Uk Plc Synthetic aperture radar


Similar Documents

Publication Publication Date Title
KR101809067B1 (en) Determination of mobile display position and orientation using micropower impulse radar
CN111436208B (en) Planning method and device for mapping sampling points, control terminal and storage medium
JP7247276B2 (en) Viewing Objects Based on Multiple Models
SG189284A1 (en) Rapid 3d modeling
JP2011239361A (en) System and method for ar navigation and difference extraction for repeated photographing, and program thereof
CN110703805B (en) Method, device and equipment for planning three-dimensional object surveying and mapping route, unmanned aerial vehicle and medium
JP2019194924A5 (en)
CN112967344A (en) Method, apparatus, storage medium, and program product for camera external reference calibration
CN115825067A (en) Geological information acquisition method and system based on unmanned aerial vehicle and electronic equipment
CN111527375B (en) Planning method and device for surveying and mapping sampling point, control terminal and storage medium
JP2020135764A (en) Three-dimensional object modeling method, three-dimensional object modeling device, server, three-dimensional model creation system, and program
JP7020418B2 (en) Information processing equipment, information processing methods, and programs
US10930079B1 (en) Techniques for displaying augmentations that represent cadastral lines and other near-ground features
US20130120373A1 (en) Object distribution range setting device and object distribution range setting method
US20230162442A1 (en) Image processing apparatus, image processing method, and storage medium
US9852542B1 (en) Methods and apparatus related to georeferenced pose of 3D models
WO2018198212A1 (en) Information processing device, information processing method, and computer-readable storage medium
CN114494563B (en) Method and device for fusion display of aerial video on digital earth
US20190156465A1 (en) Converting Imagery and Charts to Polar Projection
EP4075789A1 (en) Imaging device, imaging method, and program
WO2018211625A1 (en) Information processing device, information processing method, and storage medium having program stored thereon
JP2017182287A (en) Ledger generation device and ledger generation program
JP5164341B2 (en) Projection method and graphic display device
KR102329031B1 (en) Method of determining zoom level of visualization system for 3D modeling
WO2018220732A1 (en) Information providing device, information providing method, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17907249

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17907249

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP