CN110915201B - Image display system and method - Google Patents


Info

Publication number
CN110915201B
Authority
CN
China
Prior art keywords
image
group
computer system
adjacent
screen
Legal status
Active
Application number
CN201880001267.9A
Other languages
Chinese (zh)
Other versions
CN110915201A (en)
Inventor
仲村柄真人
植田良一
大西健太郎
Current Assignee
Hitachi Systems Ltd
Original Assignee
Hitachi Systems Ltd
Application filed by Hitachi Systems Ltd
Publication of CN110915201A
Application granted granted Critical
Publication of CN110915201B


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/38 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory with means for controlling the display position
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The invention relates to an image display system and method, and provides a technique that makes it easy to select an appropriate image and reduces the user's labor when selecting an image for a predetermined application from an image group captured by a camera. A computer system (1) of the image display system inputs a captured image group (201), displays a list of the image group (201) on a list screen (401), and displays a first image, selected from the image group (201) on the basis of a user operation, on an individual screen (402). The computer system (1) determines an image adjacent to the first image on the basis of a determination of the spatial positional relationship between the first image and a group of candidate images located in its periphery and a determination of the overlapping state of their imaging ranges, and, in accordance with a user operation, selects the adjacent image to the first image and displays it on the individual screen (402).

Description

Image display system and method
Technical Field
The present invention relates to a technique for processing and displaying an image group. The present invention also relates to a technique useful for assisting in the deterioration inspection of a structure.
Background
In various applications, a user captures an image group (moving picture or a plurality of still pictures) using a camera and inputs the image group into a computer such as a PC. The user displays the image group processed by the computer or an individual image selected from the image group on the display screen. The user confirms the image group or the individual image on the display screen and performs a job corresponding to the application. Examples of the job include a general editing job of a photographic image.
Examples of applications using the image group include a system and a function for inspecting or diagnosing a state such as deterioration of a surface of a structure from an image obtained by imaging the surface of the structure (which may be referred to as a structure deterioration inspection). A user who is an operator performing maintenance inspection work photographs the surface of a structure to be maintained and inspected with a camera manually or by using an unmanned aerial vehicle, a robot, or the like. The structure includes various buildings, infrastructures, and the like. The user inputs the image group photographed by the camera into the computer, and displays the image group processed by the computer in the display screen. The user selects an individual image of the inspection object from the image group on the display screen and displays the image on the display screen. The user performs, for example, a visual inspection operation. In this case, the user observes the individual images on the display screen and visually confirms the presence or absence of detection of a crack, corrosion, peeling, or the like (sometimes collectively referred to as deterioration) or a position.
Another example of an application using the image group is a system and function (sometimes referred to as structure three-dimensional model generation) for generating and acquiring a three-dimensional model of a structure from images captured of the structure's surface, based on processing such as the well-known SFM (Structure from Motion), and displaying the three-dimensional model on a display screen.
An example of a conventional technique for such image display is Japanese patent laid-open No. 2000-90232 (patent document 1). Patent document 1 describes, as a panoramic image synthesizing apparatus and the like, a procedure of capturing a subject in a plurality of divided shots with a digital camera so that parts of the images overlap, synthesizing the plurality of captured images to generate a panoramic image, and a procedure of simultaneously displaying a full preview and a partial preview of the panoramic image in the same window.
Further, an example of a conventional technique for the above-described structure deterioration inspection is Japanese patent laid-open No. 11-132961 (patent document 2). Patent document 2 describes a structure inspection device that captures an object to be inspected in a plurality of divided shots, stores the captured image data with corresponding position information or image-capturing information added, and, when an inspection position of the object is specified, selects the relevant image data and displays an image joining the image at the inspection position with a plurality of surrounding images.
Patent document 1: japanese patent laid-open No. 2000-90232
Patent document 2: japanese laid-open patent publication No. 11-132961
In a conventional image display system, when a user selects an individual image for a predetermined use from an image group on a display screen, it is difficult to select an appropriate image, and the work takes a long time. For example, in the case of structure deterioration inspection support, the user needs to select an inspection target image from a plurality of images on the display screen. In this case, it is difficult to grasp the overlapping portions between images and to select an appropriate image, and the deterioration inspection work takes a long time.
An object of the present invention is to provide a technique relating to image display, which can easily select an appropriate image when selecting an image for a predetermined application from an image group captured by a camera, and can reduce the labor of a user's work.
Disclosure of Invention
A typical embodiment of the present invention is an image display system having the following configuration. An image display system according to an embodiment is configured by a computer system that performs: inputting an image group including a plurality of images different in time, position, and direction of photographing; displaying a list of the image groups on a list screen; displaying a first image selected from the image group based on an operation by a user in an individual screen; determining an adjacent image to the first image based on a determination of a spatial positional relationship between the first image and a group of candidate images spatially located around the first image and a determination of an overlapping state with respect to an imaging range; and selecting the adjacent image to the first image as a second image and displaying the second image as a new first image in the individual screen according to the user's operation.
According to the exemplary embodiment of the present invention, in the image display technology, when an image for a predetermined application is selected from an image group captured by a camera, an appropriate image can be easily selected, and the work of a user can be reduced.
Drawings
Fig. 1 is a diagram showing a structure of a structure deterioration inspection support system including an image display system according to an embodiment of the present invention.
Fig. 2 is a diagram illustrating the structure of cooperation of a drone and a computer system in an embodiment.
Fig. 3 is a diagram showing a configuration of an image display system according to an embodiment.
Fig. 4 is a diagram showing a processing flow of the image display system of the embodiment.
Fig. 5 is a diagram showing a first example of an aerial photography method of the structure according to the embodiment.
Fig. 6 is a diagram showing a second example of an aerial photography method of the structure in the embodiment.
Fig. 7 is a diagram showing a shooting range in the embodiment.
Fig. 8 is a diagram showing an overlap between images in the direction in which the aerial photography is performed in the embodiment.
Fig. 9 is a diagram illustrating an overlap between images in a plan view of the images in the embodiment.
Fig. 10 is a diagram showing an example of adjacent images and repetition in the embodiment.
Fig. 11 is a diagram showing a repetitive state in the shooting range in the embodiment.
Fig. 12 is a diagram showing a repetition state with respect to adjacent images in the embodiment.
Fig. 13 is a diagram showing SFM in the embodiment.
Fig. 14 is a diagram showing a first example of a list screen in the embodiment.
Fig. 15 is a diagram showing a first example of an individual screen in the embodiment.
Fig. 16 is a diagram showing a second example of the individual screen in the embodiment.
Fig. 17 is a diagram showing a third example of the individual screen in the embodiment.
Fig. 18 is a diagram showing a fourth example of the individual screen in the embodiment.
Fig. 19 is a diagram showing a fifth example of the individual screen in the embodiment.
Fig. 20 is a diagram showing a sixth example of the individual screen in the embodiment.
Fig. 21 is a diagram showing an example of an individual screen (degradation check screen) in the embodiment.
Fig. 22 is a diagram showing an example of a three-dimensional model screen of a structure according to the embodiment.
Fig. 23 is a diagram showing a second example of the list screen in the embodiment.
Fig. 24 is a diagram showing a third example of the list screen in the embodiment.
Fig. 25 is a diagram showing an example of the image relationship between the side surfaces of the structure in the embodiment.
Fig. 26 is a diagram showing image selection between sides of a structure in the embodiment.
Fig. 27 is a diagram showing a third example of an aerial photography method of the structure in the embodiment.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the drawings. In all the drawings for describing the embodiments, the same components are denoted by the same reference numerals in principle, and redundant description thereof will be omitted.
[ problems of the prior art, etc. ]
Supplementary explanation is given of the underlying technology, technical problems, and the like. Conventionally, for example, in an application for assisting structure deterioration inspection or in a system therefor, the following operations and processing are performed. A user who is an operator arrives at the site of the structure to be inspected and images the surface of the structure using a camera. In this case, the entire surface area of the structure may be photographed, or the photographing may be focused on a partial area. In particular, a person with knowledge of the structure (a skilled person) may identify in advance, on a design drawing of the structure, positions where deterioration is likely to occur, and set those positions as imaging target positions. In this case, the number of images and imaging positions is limited, and the labor of the work can be reduced.
The user brings the data of the captured image group back to the office or the like and inputs the data into a computer such as a PC. The user selects an individual image as an inspection target image from the list of image groups on the display screen and displays the individual image on the display screen. In the case of performing a visual inspection, a user visually searches for the presence or absence and the position of a deterioration state such as a crack from within an image for each image.
Such manual imaging work and deterioration inspection work on a structure are troublesome. In addition, there are common problems such as a shortage of skilled engineers and high cost. As a countermeasure, there is a technique of imaging a structure using a flying object such as a drone (UAV) or a robot. In this technique, while the unmanned aerial vehicle navigates autonomously, a camera mounted on it captures, for example, the entire surface area of the structure, and an image group is obtained automatically. This reduces the user's work of narrowing down target positions and of manually capturing images on site. Further, the maintenance inspection of the structure and the deterioration inspection service can be made more efficient.
In addition, when a three-dimensional model of a structure is generated from an image group, it takes time to perform a manual image capturing operation and the like in the same manner. In this case, by using the unmanned aerial vehicle or the like, shooting work and the like can be reduced. The computer generates a three-dimensional model of the structure from the input image group based on the SFM processing. Thus, even when the structure has only a two-dimensional design drawing, for example, a three-dimensional model can be obtained and displayed on a display screen, and the three-dimensional model can be used for structure management and the like.
The image group obtained by the above-described manual operation, unmanned aerial vehicle, or the like is, for example, an image group in which the surface area of the object is imaged exhaustively, and there are usually many overlapping areas between the images. That is, between one image and another image located around it in a spatial positional relationship, a partial region reflecting the same position overlaps in the image content. For the deterioration inspection application, to inspect the surface region of the structure without omission, it is sufficient to capture the image group so that parts of the regions overlap and no region is missed. For the three-dimensional model generation application in particular, a sufficiently overlapping image group is required as a precondition for applying the well-known SFM processing; for example, an overlap of 80% or more is required in the lateral direction (corresponding to the moving direction of the camera) and 60% or more in the longitudinal direction.
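For illustration only (this sketch is not part of the patent), the overlap requirement above can be checked mechanically once inter-image overlap rates are known; the function name and input format below are assumptions:

    # Hypothetical check, assuming overlap rates are precomputed as 0.0-1.0 values.
    LATERAL_MIN = 0.80       # required overlap along the camera's moving direction
    LONGITUDINAL_MIN = 0.60  # required overlap between adjacent scan rows

    def meets_sfm_requirements(lateral_overlaps, longitudinal_overlaps):
        """Return True if every inter-image overlap rate suffices for SFM input."""
        return (all(ol >= LATERAL_MIN for ol in lateral_overlaps)
                and all(ol >= LONGITUDINAL_MIN for ol in longitudinal_overlaps))

    print(meets_sfm_requirements([0.82, 0.85], [0.65]))  # True
    print(meets_sfm_requirements([0.75], [0.65]))        # False: lateral overlap too low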
In addition, in the surface area of the appearance of a structure, similar shapes appear in many places due to the building structure, wall-surface design, and the like. That is, there are many similar positions in the image content of the image group. Therefore, when a user observes an image, it is sometimes difficult to recognize at which position the image was captured. Further, it may be difficult to grasp the spatial positional relationship, i.e., around which image a certain image is located.
Therefore, when the user selects an individual image to be the target of a job from the list of the image group on the display screen, it may be difficult to select an appropriate image. For example, in the deterioration inspection application, a user visually inspects a region in a certain image and then intends to visually inspect the region adjacent to it. However, since the image contents are similar to each other and it is difficult to see the differences and positional relationships between images, it is difficult for the user to select the image of the adjacent region. In addition, when the user simply selects the image listed next to a certain image in the image-group list, the selected image may be at a distant position rather than in an adjacent region.
Further, the user judges the image content and the spatial positional relationship from a certain image (first image) on the display screen, and selects the next image (second image), for example by judging that the two images reflect the same object. In this case, the next image usually includes, as an overlapping region, a region whose inspection has already been completed in the first image. When the surface region of a structure is imaged exhaustively, or when a three-dimensional model is generated, overlapping regions between images arise in the content of the image group depending on the application and circumstances. The overlapping area in the next image is therefore included again as an object of visual inspection, increasing the burden on the user. Since an overlapping area exists in each of the plurality of images, work such as the deterioration inspection is inefficient.
As described above, in the conventional image display system, for example, in the case of the use for the degradation inspection assistance of the structure or the three-dimensional model generation, it is difficult or time-consuming to select the individual image from the image group. That is, the conventional system has a technical problem in terms of work assistance or efficiency of the user.
In view of the above-described problems, the image display system according to the embodiment provides a configuration that assists the user in easily selecting an appropriate image when selecting an individual image from an image group on a display screen according to the application, and reduces the labor of the job. The configuration includes a graphical user interface (GUI). The present image display system has a configuration that, when selecting the next target image from a certain image in the image group, navigates to the next image and connects the images in consideration of the overlapping state between them. The present image display system automatically determines, for example, the image with the smallest degree of overlap (the adjacent image) and presents a link image so that it can be selected simply.
An image display system according to an embodiment inputs data of an image group in which, for example, the surface region of a structure is captured exhaustively so that parts of the images overlap. The image group may include a plurality of images differing in capturing time, position, direction, and the like. The image display system displays a list of the image group on a GUI screen. When an individual image (first image) is selected from the screen by a user operation, the present image display system displays the selected first image on the screen. The image display system determines, for each image (candidate image) spatially located around the first image, the spatial positional relationship between the images (the pair of the first image and the candidate image), and determines the overlapping state of the imaging ranges between the images. Specifically, the present image display system calculates the overlapping state as a repetition rate. The present image display system determines the spatial positional relationship between images using positional information of the images and the like, and determines the image adjacent to the first image based on the repetition rate, the positional relationship, and the like. For example, the present image display system selects, as the adjacent image, the candidate image whose inter-image repetition rate is the smallest at or below a predetermined repetition-rate threshold.
The present image display system presents an image such as the adjacent image as a candidate image (next image) to be selected next by the user. In this case, the present image display system displays a link image indicating the presence and positional relationship of the next image for the first image on the screen. When the link image has been selected for operation, the image display system displays a next image (second image) associated with the link image as an individual image (new first image).
With the image display system, the user can easily select an appropriate image with little overlap from a certain image, and can efficiently perform a job such as a deterioration inspection. As a spatial positional relationship, the first image and a candidate image (such as an adjacent image) may have imaging ranges lying in different planes and directions. In this case, the user can select and display a second image located in a different direction by operating the link image from the first image on the screen.
(embodiment mode)
An image display system according to an embodiment of the present invention will be described with reference to fig. 1 to 27.
[ Structure deterioration inspection support System ]
Fig. 1 shows the overall configuration of a structure deterioration inspection support system including the image display system according to the embodiment. The image display system according to the embodiment is constituted by a computer system 1. The deterioration inspection support system of fig. 1 includes the computer system 1 as the image display system, a drone 10 as a flying object, and a structure 5 as the object of deterioration inspection and the like. The computer system 1 and the drone 10 are connected by wireless communication. The computer system 1 is shown as a case of being constituted by a client-server system having a PC 2 and a server 3, for example. The PC 2 and the server 3 are connected via a communication network 6. The user (operator) operates the PC 2 to use the system. On the GUI screen 22 of the PC 2, the user can input instructions to the system, make user settings, and the like, and can confirm the status and results of the work.
The structure deterioration inspection support system of fig. 1 has a deterioration inspection support function including a three-dimensional model generation function. The deterioration inspection support function is realized by the deterioration inspection support software 100 and the like, and provides a GUI screen for supporting the deterioration inspection work of the structure 5 by the user (worker). The deterioration inspection includes a visual inspection in which the user visually checks the deterioration state in the images taken with the camera 4. The deterioration inspection may also include automatic diagnosis by a computer. The three-dimensional model generation function generates a three-dimensional model of the structure 5 from the image group of the camera 4 based on SFM processing and provides a screen displaying it.
The PC2 is a computer having the drone control function 21 and the like, and is a client terminal device used by each user (operator). The server 3 is a server device such as a cloud computing system or a data center of an enterprise, and performs computing processing in cooperation with the PC 2. A plurality of PCs 2 of a plurality of users and the like may be connected to the server 3 in the same manner. The PC2 performs navigation control of the drone 10 and shooting control of the camera 4 by wireless communication. The PC2 transmits navigation control information, shooting setting information, and the like to the unmanned aerial vehicle 10.
The structure 5 is a target object for degradation inspection and three-dimensional model generation, and is an object imaged by the camera 4 of the unmanned aerial vehicle 10. The structure 5 is, for example, a building or an infrastructure. Examples of the building include a common building, a house, and a public building. Examples of the infrastructure include electric power equipment, road traffic equipment, communication equipment, and bridges. A predetermined region on the surface of the structure 5 becomes a target region.
The drone 10 is a flying vehicle that flies autonomously under remote control from the PC 2 through wireless communication. In the surrounding space of the structure 5, the unmanned aerial vehicle 10 autonomously travels on a set track. The computer system 1 creates and sets, in advance, control information relating to the autonomous navigation and aerial photography of the unmanned aerial vehicle 10 as aerial photography setting information, based on the imaging plan data d2, so that the target region of the structure 5 can be aerially photographed. The aerial photography setting information includes the flight path of the unmanned aerial vehicle 10, the shooting timing (shooting time), the shooting setting information of the camera 4, and the like.
In addition, as a modification, the user may operate the navigation of the unmanned aerial vehicle 10 by using the PC2 or the like.
The unmanned aerial vehicle 10 mounts the camera 4, various sensors, and the like. While navigating on the track, the drone 10 takes aerial photographs of the structure 5 with the camera 4. The unmanned aerial vehicle 10 transmits image data, sensor data at the time of shooting, and the like to the PC 2 by wireless communication. The known sensor group of the unmanned aerial vehicle 10 can detect, as sensor data, the position, direction, speed, acceleration, and the like of the unmanned aerial vehicle 10. The position includes three-dimensional coordinate information, for example in the form of latitude, longitude, and altitude (above the ground surface), based on GPS, an altitude sensor, or another positioning system. When GPS is used, sufficient positioning accuracy is assumed.
The shooting setting information of the camera 4 includes a camera direction (shooting direction), shooting timing, shooting conditions, and the like. The camera direction is the direction toward the surface of the structure 5 with the camera position as the reference. The shooting timing is the timing at which a plurality of continuous images (still pictures) are shot, and is defined by a shooting interval or the like. The imaging conditions are defined by known parameters such as exposure.
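As a rough illustration (field names are assumptions, not from the patent), the shooting setting information and the aerial photography setting information described above could be laid out as follows:

    from dataclasses import dataclass

    # Illustrative layout of the setting information; all names are assumed.
    @dataclass
    class ShootingSettings:
        camera_direction: tuple     # unit vector toward the structure surface
        shooting_interval_s: float  # timing between successive still images
        exposure_ev: float          # example imaging-condition parameter

    @dataclass
    class AerialSettings:
        flight_path: list           # ordered (x, y, z) drone waypoints
        shooting: ShootingSettings

    settings = AerialSettings(
        flight_path=[(0.0, -5.0, 2.0), (1.0, -5.0, 2.0)],
        shooting=ShootingSettings((0.0, 1.0, 0.0), 1.0, 0.0),
    )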
The PC 2 has the drone control function 21, a user program 20, the GUI screen 22, and the like. The drone control function 21 controls the navigation of the drone 10 and the shooting by the camera 4. The user program 20 is the client-side program of the deterioration inspection support software 100. The user program 20 and the server program 30 cooperate through client-server communication. The user program 20 controls the drone control function 21, the image display function, the GUI screen 22, and the like. Various data and information used in processing by the user program 20 and the like are stored in the memory of the PC 2, including control information of the drone 10, image data and sensor data acquired from the drone 10, and various data acquired from the server 3.
The server 3 has the server program 30, DB31, and the like. The server program 30 is a server program in the degradation inspection assisting software 100. In particular, the server program 30 is responsible for processing with a high computational processing load, such as the deterioration inspection support processing and the three-dimensional model generation processing. The server program 30 executes predetermined processing in response to a request from the user program 20, and responds to the processing result information.
In the DB31, various data and information used in processing by the server program 30 and the user program 20 are stored. DB31 may be implemented by a DB server or the like. In the DB31, for example, structure data d1, imaging plan data d2, image data d3, inspection data d4, and the like are stored.
The structure data d1 includes basic information and management information relating to the structure 5, design drawing data, three-dimensional data, and the like. In the present image display system, the three-dimensional data includes three-dimensional model data generated by a three-dimensional model generation function. The three-dimensional data is not limited to this, and can be used even in the case of data created by a conventional three-dimensional CAD system or the like.
The imaging plan data d2 is plan data relating to the deterioration check and the aerial photography. The actual aerial photography setting information of the unmanned aerial vehicle 10 is created or set based on the photography plan data. The image data d3 is image group data captured by the camera 4. The inspection data d4 is data indicating the state, result, and the like of the deterioration inspection.
The mounting structure of the computer system 1 is not limited to the above example. For example, the PC2 and the server 3 may be integrated into one computer, or each function may be separated into a plurality of devices. The PC2 may be of another type having an unmanned control device. The present invention is not limited to the unmanned aerial vehicle 10, and can be applied to other flying objects, robots, and the like. The unmanned aerial vehicle 10 can be a general unmanned aerial vehicle, and can also be a special unmanned aerial vehicle for the image display system, which is installed with a special function.
[ unmanned plane, computer ]
Fig. 2 shows the outline configuration of the drone 10 and the drone control function 21 of the computer system 1. The unmanned aerial vehicle 10 includes a propeller drive unit 11, a navigation control unit 12, a sensor 13, a gimbal 14, the camera 4, an image storage unit 15, a wireless communication unit 16, a battery 17, and the like. The propeller drive unit 11 drives a plurality of propellers. The navigation control unit 12 controls the flight of the unmanned aerial vehicle 10 in accordance with navigation control information from the navigation control unit 102 of the PC 2; to do so, it controls the propeller drive unit 11 while using information from the sensor 13. The sensor 13 is a sensor group such as a known GPS receiver, electronic compass, gyro sensor, and acceleration sensor, and outputs predetermined sensor data. The gimbal 14 is a known mechanism that holds the camera 4 and automatically keeps it steady, without shaking, during navigation. The directions and positions of the drone 10, the gimbal 14, and the camera 4 are substantially independent parameters.
The camera 4 takes an aerial image of the object 5 in accordance with the imaging control information and imaging setting information from the imaging control unit 104 of the PC2, and outputs image data. The image storage unit 15 stores image data and the like. The wireless communication unit 16 includes a wireless communication interface device, and performs wireless communication with the computer system 1 through a predetermined wireless communication interface. The battery 17 supplies electric power to each part.
The computer system 1 has a processor 111, a memory 112, a communication device 113, an input device 114, a display device 115, and the like, which are connected to each other via a bus or the like. The processor 111 executes processing in accordance with programs read out from the memory 112. This realizes the functions of the deterioration inspection support software 100, the GUI unit 101, and the like.
The computer system 1 includes the GUI unit 101, an aerial photography setting unit 107, a navigation control unit 102, an imaging control unit 104, a sensor data storage unit 103, an image data storage unit 105, a wireless communication unit 106, a deterioration inspection support unit 121, a three-dimensional model generation unit 122, an image display control unit 123, and the like.
The GUI unit 101 configures the GUI screen 22 for the user based on applications such as an OS, middleware, and a Web browser, and displays it on the display device 115 (e.g., a touch panel). On the GUI screen 22, the user can perform user settings and instruction input operations, and can confirm the status, results, and the like of each function. As user settings, the user can enable or disable each function of the image display system, set the threshold values that control each function, and the like. The GUI unit 101 generates screen data corresponding to the list screen and the individual screen, which are described later.
The aerial photography setting unit 107 creates the aerial photography setting information based on user operations and the imaging plan data d2, and sets it in the computer system 1 and the unmanned aerial vehicle 10. The navigation control unit 102 controls the navigation of the unmanned aerial vehicle 10 based on the aerial photography setting information: it transmits navigation control information d101 to the unmanned aerial vehicle 10 and receives sensor data d102 indicating the navigation state from the unmanned aerial vehicle 10 by wireless communication. The imaging control unit 104 controls the imaging of the camera 4 based on the aerial photography setting information: it transmits imaging control information d103 to the unmanned aerial vehicle 10 and receives image data d104 and the like from the unmanned aerial vehicle 10 by wireless communication.
The image data storage 105 stores image data acquired from the unmanned aerial vehicle 10. In the image data, information management is performed by appropriately associating the imaging time, sensor data, imaging setting information, and the like. The wireless communication unit 106 includes a wireless communication interface device, and performs wireless communication with the unmanned aerial vehicle 10 via a predetermined wireless communication interface.
The deterioration inspection assisting unit 121 performs a process corresponding to the deterioration inspection assisting function. The deterioration inspection assisting unit 121 performs, for example, an assisting process of a visual inspection or an automatic diagnosis process. The three-dimensional model generation unit 122 performs a process corresponding to the three-dimensional model generation function. The image display control unit 123 performs control related to determination of the spatial positional relationship of the image group or image selection.
[ image display System ]
Fig. 3 shows the configuration of the image display system (computer system 1) according to the embodiment. The image display system has, as input (including storage and the like), the image group 201 based on the image data d3, image information 202, the structure data d1, the imaging plan data d2, and the like. In the embodiment, the input image data d3 (image group 201) is mainly obtained by the camera 4 of the drone 10, but is not limited to this; it may be an image group manually captured by an operator, or a combination of image groups obtained by various methods. The image group 201 is the body data of each image. The image information 202 is metadata or attribute information associated with each image, for example in the well-known Exif format. The image information 202 includes information such as an ID (identification information), shooting time, camera position, camera direction, camera type, focal length, angle of view, and image size (number of pixels). The camera position and the camera direction use data generated by the SFM processing unit 122A.
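For illustration, per-image metadata of the kind listed above can be read from Exif with, for example, Pillow; this is a plausible sketch rather than the patent's implementation, and the camera position and direction are produced here by the SFM processing rather than read from Exif:

    from PIL import Image
    from PIL.ExifTags import TAGS

    def read_image_info(path):
        """Read a few basic Exif fields of an image (illustrative only)."""
        img = Image.open(path)
        exif = {TAGS.get(tag, tag): val for tag, val in img.getexif().items()}
        return {
            "shooting_time": exif.get("DateTime"),
            "camera_model": exif.get("Model"),
            "image_size": img.size,  # (width, height) in pixels
        }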
The image display system includes, as processing (corresponding processing units), a list display unit 301, an individual display unit 302, an image display control unit 123, a deterioration inspection support unit 121, a three-dimensional model generation unit 122, and the like. The image display control unit 123 includes an adjacent image determination unit 303 and a repetition rate calculation unit 304. The image display system outputs screens such as a list screen 401 and an individual screen 402 as an output (including display, storage, and the like).
The list display unit 301 displays a list screen 401 based on the image group 201 and the image information 202 (for example, ID and shooting time). In the list screen 401, a plurality of images in the image group 201 are arranged and displayed in the order of using IDs or the like.
The individual display unit 302 displays the individual screen 402 based on the image group 201 and the first image selection information 321 on the list screen 401. The individual display unit 302 displays the first image 322 selected by the user on the individual screen 402. The individual display unit 302 displays the second image 324 corresponding to the link image based on the link image selection information 323 in the individual screen 402.
The image display control unit 123 grasps the spatial positional relationship and the degree of overlap between images and controls the link between images and the selection of the next image by using the adjacent image determination unit 303 or the like.
The adjacent image determination unit 303 determines the candidate images, the adjacent image, and the like located around the first image based on the image information 202 and the first image selection information 321 or the link image selection information 323 (i.e., the second image selection information). The adjacent image is the image adjacent to the first image with the minimum degree of overlap. The adjacent image determination unit 303 determines the positional relationship between images using, in particular, the positional information (camera position, etc.) and orientation information (camera direction, etc.) in the image information 202, and determines the adjacent image using the repetition rate calculated by the repetition rate calculation unit 304. The adjacent image determination unit 303 determines the distance between the position of a certain image (first image) and the positions of other images located around it, and determines the difference in direction of each image (the direction of the corresponding camera, the direction of the plane in which the imaging range of the image lies, or the like).
The repetition rate calculation unit 304 calculates the repetition rate between images based on the first image selection information 321 and the like and the candidate image information and the like determined by the adjacent image determination unit 303.
The image display control unit 123 displays the link image 325 corresponding to the image such as the adjacent image determined by the adjacent image determination unit 303 on the individual screen 402. The link image 325 is a link image (GUI component) for guiding an image (next image) selected as a candidate after the first image.
The deterioration inspection assisting unit 121 performs an assisting process of visual inspection or the like by a user operation with the individual screen 402 as a deterioration inspection screen, displays the state or result of the process on the deterioration inspection screen, and creates and stores inspection data d4 indicating the state or result of the process.
The three-dimensional model generation unit 122 includes an SFM processing unit 122A. The SFM processing unit 122A performs SFM processing on the input image group 201 to generate a three-dimensional model of the structure 5 and camera direction and camera position information, and creates the corresponding three-dimensional model data d5. The three-dimensional model generation unit 122 displays a three-dimensional model screen of the structure based on the three-dimensional model data d5, and records the obtained camera direction and camera position information in the image information 202 or manages it in association with the image data d3.
The user performs an operation OP1 (first image selection operation) of selecting a desired image from the image group as the first image on the list screen 401. Further, the user performs an operation OP2 (second image selection operation = link image selection operation) of selecting a link image on the individual screen 402. In addition, the user performs a deterioration inspection operation OP3 on the image of the individual screen 402 (deterioration inspection screen).
[ treatment procedure ]
Fig. 4 shows a process flow of the structure deterioration inspection support system including the image display system according to the embodiment. The flow of fig. 4 has steps S1 to S13. The following describes the sequence of steps.
(S1) The computer system 1 creates and sets the aerial photography setting information of the unmanned aerial vehicle 10 based on user operations, the imaging plan data d2 of the structure 5, and the like. The aerial photography of the unmanned aerial vehicle 10 is executed according to this setting information. The image data and the like captured by the camera 4 are transmitted to the PC 2, which forwards them to the server 3. The server 3 stores the image data d3 and the like in the DB 31.
(S2) The computer system 1 reads the image data d3 (image group 201, image information 202) from the DB 31 or the like, and holds it in memory for processing.
(S3) The computer system 1 (particularly the list display unit 301) displays the list of the image group on the list screen 401 (fig. 14) based on the image data d3 (image group 201, image information 202).
(S4) When the computer system 1 receives a predetermined user operation (first image selection operation OP1) on the list screen 401, it obtains the first image selection information 321 and selects the first image. The first image selection operation OP1 is, for example, a click or touch on one image.
(S5) The computer system 1 (particularly the individual display unit 302) displays the selected first image 322 on the individual screen 402 (fig. 15) in accordance with the operation of S4 and the first image selection information 321. The user can perform the deterioration inspection work using the individual screen 402 as a deterioration inspection screen (fig. 21).
(S6) The computer system 1 (particularly the adjacent image determination unit 303) searches for the images (candidate images) located in the periphery of the first image based on the image information 202 and the first image selection information 321 (or the link image selection information 323). At this time, the adjacent image determination unit 303 determines the positional relationship between the images using information such as the camera position and camera direction of each image. Specifically, the adjacent image determination unit 303 calculates the distance between the positions of the first image and each candidate image, and selects candidate images for the first image in increasing order of distance. The adjacent image determination unit 303 uses a preset distance threshold and narrows the candidates to images whose distance is within that threshold. At this stage, the adjacent image has not yet been determined.
In particular, in the present processing example, when selecting candidate images, the adjacent image determination unit 303 preferentially selects, as candidate images, images having substantially the same camera direction, for example, images arranged in the same plane.
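A minimal sketch of this candidate search follows; variable names and threshold values are illustrative assumptions, not values from the patent:

    import math

    DIST_THRESHOLD = 10.0   # assumed distance threshold between camera positions [m]
    ANGLE_THRESHOLD = 10.0  # assumed max camera-direction difference [deg]

    def angle_between(v1, v2):
        """Angle in degrees between two camera-direction vectors."""
        dot = sum(a * b for a, b in zip(v1, v2))
        norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

    def find_candidates(first, images):
        """first and images: dicts with 'position' and 'direction' 3-tuples."""
        def dist(img):
            return math.dist(first["position"], img["position"])
        candidates = [
            img for img in images
            if img is not first
            and dist(img) <= DIST_THRESHOLD
            and angle_between(first["direction"], img["direction"]) <= ANGLE_THRESHOLD
        ]
        return sorted(candidates, key=dist)  # nearest first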
(S7) The computer system 1 (particularly the repetition rate calculation unit 304) calculates the repetition rate (OL) between images for each candidate image obtained in S6, taking the first image and each candidate image as a pair in turn. As described later, the repetition rate calculation unit 304 calculates the imaging range of each image (fig. 7) and calculates the repetition rate OL using the imaging ranges (fig. 12 and the like).
(S8) The computer system 1 (particularly the adjacent image determination unit 303) determines the overlapping state by comparing the repetition rate OL obtained in S7 with a preset repetition rate threshold (denoted TOL). In the present processing example, the adjacent image determination unit 303 determines whether the OL value of a candidate image is greater than 0, at most the TOL value, and the smallest among the candidates. The computer system 1 selects the candidate image satisfying these conditions as the adjacent image (the image with the smallest overlap). The computer system 1 determines adjacent images in the same manner in each direction within the plane, such as up, down, left, and right.
In addition, the repetition rate threshold TOL is preset in the program of the deterioration inspection support software 100. However, the present invention is not limited to this; the user may also set the TOL value.
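The S7-S8 decision can be sketched as follows (illustrative only; overlap_rate is assumed to implement the imaging-range comparison described with figs. 7 to 12, and the TOL value shown is an assumption):

    TOL = 0.5  # repetition (overlap) rate threshold; an assumed example value

    def select_adjacent(first, candidates, overlap_rate):
        """Return the candidate with the smallest overlap OL where 0 < OL <= TOL."""
        best, best_ol = None, None
        for img in candidates:
            ol = overlap_rate(first, img)
            if 0.0 < ol <= TOL and (best_ol is None or ol < best_ol):
                best, best_ol = img, ol
        return best  # None corresponds to "not found" in S9, falling through to S10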
(S9) The computer system 1 confirms whether an adjacent image satisfying the overlap condition was found as a result of S8, and proceeds to S11 when found (yes) and to S10 when not found (no). A case where no image is found here means that no candidate satisfies the condition among the images having substantially the same camera direction, for example among the images arranged in the same plane.
(S10) In S10, the computer system 1 selects, from outside the searched images, the candidate image with the smallest distance between image positions as the next image. As described later, the next image is, for example, an image arranged on another surface (another side surface or the like) whose camera direction differs from that of the first image, or an image at a separated, non-overlapping position within the same plane as the first image.
(S11) The computer system 1 (image display control unit 123) displays the link image 325 (the link image 403 of fig. 15, the link image 404 of fig. 16, and the like) for the image (next image) selected in S8 or S10 on the individual screen 402. Specifically, the image display control unit 123 displays the link image 325 corresponding to the type of the next image in each direction with respect to the first image.
(S12) The computer system 1 accepts a predetermined user operation on the link image 325 on the individual screen 402 (second image selection operation OP2). The second image selection operation OP2 is a click, touch, or the like on a link image. The computer system 1 also accepts other operations on the individual screen 402, such as an operation to return to the list screen 401 or an end operation. When the selection operation on the link image 325 (second image selection operation OP2) is accepted (yes), the computer system 1 proceeds to S13; when the operation to return to the list screen 401 is accepted, the process returns to S3.
(S13) In S13, the computer system 1 (specifically, the individual display unit 302) displays the selected next image (second image) indicated by the link image 325 as a new first image on the individual screen 402, based on the link image selection information 323 (second image selection information) from S12. After S13, the process returns to S6, and the same processing is repeated. The user performs the deterioration inspection job by viewing the new first image on the individual screen 402 (deterioration inspection screen). In the same manner, the selection and display can subsequently transition from one image to another in the image group.
[ aerial photography mode (1) ]
Fig. 5 shows a first example of an aerial photography mode of the structure 5 using the unmanned aerial vehicle 10. The aerial photography mode is not limited to this example; an appropriate mode is selected in consideration of the processing and characteristics of the present image display system. The drone control function 21 of the computer system 1 sets the aerial photography path of the unmanned aerial vehicle 10, the imaging settings of the camera 4, and the like as the aerial photography setting information, based on the imaging plan data d2 and the like. The aerial photography setting information is stored in the DB 31 or the like.
Fig. 5 (A) shows an overhead view in the Z direction in the case where the structure 5 is a substantially rectangular parallelepiped. The structure 5 has four vertically standing side surfaces a1 to a4. The present image display system, for example, exhaustively photographs the entire surface area of these side surfaces a1 to a4 as the deterioration inspection object. The structure 5 has corner portions, such as the corner portion 501, connecting side surfaces facing different directions.
In this example, aerial photography is performed by dividing each side surface of the object 5 into groups (side surface groups). In this example, four paths R1 to R4 are set corresponding to four side surfaces a1 to a4 divided into four groups, and the aerial photograph is performed four times. In this example, the control is performed such that the path of each group is a substantially straight path (indicated by a broken-line arrow), and the camera direction (indicated by a one-dot-chain-line arrow) and the object distance D0 are substantially constant.
For example, the group of side surface a1 is first aerially photographed along path R1, with the camera direction being the Y direction (the direction Y1 from front to rear). Next, the group of side surface a2 is photographed along path R2, with the camera direction being the X direction (the direction X1 from right to left). The same applies to path R3 of side surface a3 and path R4 of side surface a4.
Fig. 5 (B) shows the structure 5 and the aerial photography path in a side view corresponding to (A). In particular, an example of the detailed setting of path R1 for the group of side surface a1 is shown. In this example, when the entire area of side surface a1 (the X-Z plane) is photographed exhaustively, a manner similar to so-called line-sequential scanning is used. First, the unmanned aerial vehicle 10 is moved linearly in the horizontal X direction, from one end of side surface a1 to the other, as a main scan. While moving in the horizontal direction, the direction of the camera 4 and the object distance D0 are controlled to be substantially fixed. After the unmanned aerial vehicle 10 reaches the other end of side surface a1, it is moved linearly in the vertical direction (Z direction) as a sub-scan. Then, it is folded back and moved linearly as a main scan in the direction opposite to the X direction, from the other end of side surface a1 back to the first. Thereafter, the main scan and the sub-scan are repeated in the same manner.
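A sketch of generating such a line-sequential path for one side surface follows; the dimensions, step sizes, and coordinate convention (wall in the X-Z plane at y = 0, camera at y = -d0) are assumptions for illustration:

    def line_sequential_waypoints(width, height, x_step, z_step, d0):
        """Yield (x, y, z) drone waypoints at object distance d0 from the face."""
        z, left_to_right = 0.0, True
        while z <= height:
            xs = [i * x_step for i in range(int(width // x_step) + 1)]
            if not left_to_right:
                xs.reverse()            # fold back: scan in the opposite direction
            for x in xs:
                yield (x, -d0, z)
            z += z_step                 # sub-scan step in the vertical direction
            left_to_right = not left_to_right

    path = list(line_sequential_waypoints(width=10.0, height=6.0,
                                          x_step=2.0, z_step=2.0, d0=5.0))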
In the example of the embodiment, the description has been given mainly for the side surface standing vertically, and the horizontal surface of the roof or the like of the structure 5 can be treated as the object in the same manner. In this case, the direction of the camera 4 during aerial photography is, for example, the Z direction downward. In the example of the embodiment, the case where the X direction or the Y direction, which is the horizontal direction of the side surface in a plan view, is described as the direction of the camera 4 in the aerial photographing, but the invention is not limited thereto, and the same processing can be performed in, for example, a diagonally downward direction.
In the setting of the aerial photography, the path of the unmanned aerial vehicle 10, the imaging timing (interval) of the camera 4, and the like are set so that the state of repetition between images is appropriate, taking into account the SFM processing of the three-dimensional model generation function in advance. For example, the state of repetition in the forward direction of the unmanned aerial vehicle 10 and the camera 4 is set to about 80%. By the aerial photography based on the setting, an image group in an appropriate repetitive state is obtained. The image group can be input reliably and SFM processing can be performed, and a highly accurate three-dimensional model can be obtained.
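The patent states the target overlap but not a formula for the shooting timing; under the simple assumption of constant speed and the imaging range SA of fig. 7, one plausible derivation of the interval is:

    # One plausible derivation (an assumption, not stated in the patent):
    # consecutive shots must advance by (1 - overlap) * SA along the path.
    def shooting_interval(target_overlap, imaging_range_m, speed_m_s):
        """Seconds between shots for a given forward overlap at constant speed."""
        advance = (1.0 - target_overlap) * imaging_range_m  # ground advance per shot
        return advance / speed_m_s

    # Example: 80% overlap, a 5 m imaging range, and 1 m/s speed -> 1.0 s interval.
    print(shooting_interval(0.80, 5.0, 1.0))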
However, when an image group prepared for the purpose of three-dimensional model generation is used for the deterioration inspection, the many overlapping regions between images mean that regions already inspected in one image are inspected again in other images, which is inefficient. Therefore, for an efficient deterioration inspection, it is preferable to select, as the deterioration inspection target image group, images with as little mutual overlap as possible. However, this image selection is difficult and time-consuming to perform manually. The present image display system therefore provides a GUI and related configuration that assist the user in selecting images efficiently.
[ aerial photography mode (2) ]
Fig. 6 similarly shows a second example of an aerial photography mode of the structure 5 using the unmanned aerial vehicle 10. The aerial photography mode is selected appropriately according to the shape of the structure 5 and the like. In this example, the structure 5 shown in fig. 6 (A) has a shape in which an additional portion is connected to the front side (direction Y2) of the right part (direction X2) of side surface a1 of the rectangular parallelepiped structure 5 shown in fig. 5. The structure 5 has six side surfaces a1 to a6, and the entire surface region is the inspection target. Further, the structure 5 has corners such as corner 601, corner 602, and corner 603. The corner 601 connects side surface a1 extending in the X direction and side surface a2 extending in the Y direction, and is concave when viewed from the drone 10. The corner 602 connects side surface a2 extending in the Y direction and side surface a3 extending in the X direction, and is convex when viewed from the drone 10. The corner 603 connects side surface a3 extending in the X direction and side surface a4 extending in the Y direction, and is also convex.
In this example, similarly, the side surfaces of the structure 5 are divided into groups and aerial photography is performed per group. Six groups are provided corresponding to the six side surfaces a1 to a6, with six paths r1 to r6, and aerial photography is performed six times. The path of each group is a linear path, controlled so that the camera direction and the object distance D0 are substantially constant. The camera directions fall into four types: direction Y1, direction Y2, direction X1, and direction X2. For example, in the path r1 of the group of the side face a1, the camera direction is the Y direction (direction Y1). In the path r2 of the group of the side face a2, the camera direction is the X direction (direction X2). In the path r3 of the side face a3, the camera direction is the Y direction (direction Y1). In the path r4 of the side face a4, the camera direction is the X direction (direction X1). The same applies to the paths of the other side faces.
Fig. 6 (B) shows the structure 5 and the aerial photography routes in a side view corresponding to (a). In particular, detailed setting examples of the path r1 of the side face a1, the path r3 of the side face a3, and the like are shown. As in fig. 5 (B), the same method as line-sequential scanning is used for each side face.
[ Shooting range ]
Fig. 7 is a schematic view for explaining the shooting range of an image and related quantities. As described later, the image display control section 123 of the image display system calculates the shooting range of each image. The shooting range is used for the calculation of the repetition rate. The shooting range is the physical size, such as the vertical and horizontal lengths, of the region captured in an image. For calculating the shooting range, the focal distance (denoted as F), the sensor size (denoted as SS), the subject distance (denoted as D), and the like are used. Fig. 7 shows, in an overhead view in the Z direction, a wall surface 701 (for example, the side surface a1) of the structure 5, the traveling direction K1 (for example, the X direction) of a part of the aerial route of the unmanned aerial vehicle 10, the position of the camera 4 (referred to as camera position C), the direction of the camera 4 (referred to as camera direction V), the sensor 702 of the camera 4, the shooting range of an image (referred to as SA; specifically, its width in the X direction), and the like.
The sensor 702 is the imaging element of the camera 4; fig. 7 shows its width in the X direction. The sensor 702 has a predetermined sensor size SS (the width in the X direction is shown). The sensor size SS depends on the camera model, and is defined by the width, height, diagonal distance, and the like. The sensor size SS may or may not be included in the image information 202; when not included, it can be obtained from the camera model or other information. The center position of the sensor 702 is shown as the camera position C. From the sensor 702 and the camera position C, the camera direction V (shown by a dashed arrow) points, for example, in the Y direction. The position at which the camera direction V intersects the wall surface 701 is shown as position Q. The position Q corresponds to the center position of the image and of the shooting range SA. The distance from the camera position C to the position Q of the wall surface 701 (side surface a1) along the camera direction V includes the focal distance F and the subject distance D. The focal distance F is obtained from the image information 202. The subject distance D is the distance to the subject, and is obtained based on the shooting plan data D2, for example from the object distance D0 of fig. 5.
The computer system 1 (particularly, the repetition rate calculation section 304) calculates the shooting range SA using the sensor size SS, the focal distance F, the subject distance D, and the like. The basic formula is [shooting range SA] = [sensor size SS] ÷ [focal distance F] × [subject distance D]. A specific example is as follows. The sensor size SS is 23.5 mm in the lateral direction (width) and 15.6 mm in the longitudinal direction (height). The focal distance F is 35 mm. The subject distance D is 5 m (= 5000 mm). When the width of the shooting range SA in the horizontal direction (X direction) is SAX and its height in the vertical direction (Z direction) is SAZ, they can be calculated as SAX = (23.5/35) × 5000 ≈ 3357 mm and SAZ = (15.6/35) × 5000 ≈ 2229 mm.
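As a supplement, this calculation can be written as a short sketch (illustrative Python; the function and variable names are not from the embodiment):

```python
# Minimal sketch of the shooting range formula SA = SS / F * D.
# All names are illustrative; units are millimetres throughout.

def shooting_range_mm(sensor_size_mm, focal_distance_mm, subject_distance_mm):
    """One side of the shooting range on the wall surface."""
    return sensor_size_mm / focal_distance_mm * subject_distance_mm

# Values from the worked example: 23.5 mm x 15.6 mm sensor,
# focal distance 35 mm, subject distance 5 m.
sax = shooting_range_mm(23.5, 35.0, 5000.0)  # ~3357 mm (width, X direction)
saz = shooting_range_mm(15.6, 35.0, 5000.0)  # ~2229 mm (height, Z direction)
print(f"SAX = {sax:.0f} mm, SAZ = {saz:.0f} mm")
```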
The computer system 1 similarly calculates the shooting range SA of each image of the image group. The computer system 1 (repetition rate calculation section 304) calculates the repetition rate between images using the shooting range SA of each image and the like. The computer system 1 (adjacent image determination unit 303) determines the positional relationship between images using information such as the camera position C, the camera direction V, and the shooting range SA.
[ Camera position and camera direction ]
The computer system 1 uses information of the camera position C and the camera direction V in its processing. This information can be obtained, for example, in the following ways.
(1) When information of the camera position C or the camera direction V is included in the image information 202 or in the sensor data recorded at the time of aerial photography, the computer system 1 simply refers to that information. For example, the camera 4 itself may be provided with sensors capable of detecting the camera position C or the camera direction V. In this case, the information can be referred to by associating the image data captured with the camera 4 with the detected camera position C or camera direction V.
(2) When information of the camera position C or the camera direction V is not included in the image information 202, the computer system 1 calculates them from other information. For example, the computer system 1 obtains information on the position and direction of the unmanned aerial vehicle 10 and of the gimbal 14 from the sensor data recorded during aerial photography, and obtains the camera position C and the camera direction V by calculation using these pieces of information. Specifically, the camera position C and the like can be calculated by composing the relative pose of the gimbal 14 with respect to the unmanned aerial vehicle 10 and the relative pose of the camera 4 with respect to the gimbal 14, as sketched below.
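A minimal sketch of this pose composition (illustrative Python with NumPy; the angle conventions and offset values are assumptions, not taken from the embodiment):

```python
import numpy as np

def rot_z(yaw):
    # heading of the unmanned aerial vehicle about the vertical (Z) axis
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(pitch):
    # tilt of the gimbal about the lateral (X) axis
    c, s = np.cos(pitch), np.sin(pitch)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def camera_pose(p_drone, yaw, gimbal_pitch, gimbal_offset):
    """Compose drone pose and gimbal pose into (camera position C, camera direction V)."""
    c_pos = np.asarray(p_drone) + rot_z(yaw) @ np.asarray(gimbal_offset)
    v_dir = rot_z(yaw) @ rot_x(gimbal_pitch) @ np.array([0.0, 1.0, 0.0])  # level camera looks along +Y
    return c_pos, v_dir

# Example: drone at (10, 2, 5) m, heading 0, gimbal level, camera 20 cm ahead of the body.
c, v = camera_pose((10.0, 2.0, 5.0), 0.0, 0.0, (0.0, 0.2, 0.0))
```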
(3) The computer system 1 obtains a camera position C and a camera direction V from an image group (for example, three consecutive images) by SFM processing. This method is used in the image display system according to the embodiment. The SFM processing unit 122A of the three-dimensional model generation unit 122 obtains the camera position C and the camera direction V by performing SFM processing on the input image group.
[ Repetition between images (1) ]
Fig. 8 shows the repetition between images, i.e., between shooting ranges, in the aerial traveling direction. Fig. 8 shows a part of a path along the wall surface 701 in an overhead view in the Z direction. The traveling direction K1 of the unmanned aerial vehicle 10 during aerial photography is, for example, the main scanning direction X2 of the path R1 in fig. 5 or of the path r1 in fig. 6. The camera direction V is the direction Y1. Along the traveling direction K1, positions P1 to P7 are indicated as dots as examples of the position P of the unmanned aerial vehicle 10. Correspondingly, positions C1 to C7 are shown as dots as examples of the camera position C. The object distance D0 is fixed at each position. The positions P1 to P7 and C1 to C7 are in time-series order. Further, the center positions of the images and shooting ranges are indicated by dots at positions Q1 to Q7 corresponding to positions C1 to C7. The images g1 to g7 have shooting ranges SA1 to SA7; the shooting range here is shown as the width in the X direction. For example, the image g1 captured at the position P1 and the camera position C1 has the shooting range SA1 and the center position Q1. In the example of fig. 8, the shooting ranges SA1 to SA7 of the images g1 to g7 overlap by about 80% between adjacent images in the traveling direction K1.
[ Repetition between images (2) ]
Fig. 9 corresponds to the example of fig. 8 and shows the repetition between the images in a view of the X-Z plane (for example, the side face a1). In fig. 9, the images g1 to g7 are drawn slightly shifted in the Z direction (the traveling side direction K2) so that the overlapping state can be easily understood. The traveling direction K1 is the direction X2 as in fig. 8, and the traveling side direction K2 is the vertical direction perpendicular to the traveling direction K1. The traveling direction K3 is the direction obtained when a shift (movement component) in the traveling side direction K2 is added to the traveling direction K1. In this example, the repeated region 901, shown by a diagonal line pattern, is the region where the image g1 (shooting range SA1) and the image g2 (shooting range SA2) overlap. The repeated region 902 is the region where the image g1 and the image g6 (shooting range SA6) overlap.
In this example, the images g1 to g7 are images captured so as to ensure the repetition rate (for example, about 80%) necessary for generating the three-dimensional model. For example, in the group of the image g1 and the next image g2, the repetition rate OL is 80% or more (for example, 85%). Similarly, with respect to the image g1, the repetition rate OL is 65% for the image g3, 45% for the image g4, 25% for the image g5, and 5% for the image g6. In the group of the image g1 and the image g7, there is no repetition; there is a gap between the images.
The repetition rate threshold TOL set for the deterioration inspection is, for example, 20%. With respect to the image g1, considering the candidate images located on its right side (direction X2) (the images g2 to g7, etc.), the image g6 is selected as the image satisfying the above-described repetition rate condition (fig. 4, S8): the repetition rate OL (= 5%) of the image g6 with respect to the image g1 is the smallest value that is larger than 0 and not more than the repetition rate threshold TOL (= 20%).
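This selection rule can be sketched as follows (illustrative Python; the identifiers and example rates follow the description of the images g2 to g7 above):

```python
TOL = 0.20  # repetition rate threshold for the deterioration inspection

def pick_adjacent(candidates, tol=TOL):
    """candidates: dict mapping image id -> repetition rate OL against the first image.
    Keep images that still overlap (OL > 0) but by no more than tol, take the smallest."""
    eligible = {img: ol for img, ol in candidates.items() if 0.0 < ol <= tol}
    return min(eligible, key=eligible.get) if eligible else None

ol_vs_g1 = {"g2": 0.85, "g3": 0.65, "g4": 0.45, "g5": 0.25, "g6": 0.05, "g7": 0.0}
print(pick_adjacent(ol_vs_g1))  # -> "g6" (g7 no longer overlaps, so it is excluded)
```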
In addition, not only the repetition rate in the traveling direction K1 (X direction, the lateral direction of the image) but also the repetition rate in the traveling side direction K2 (the longitudinal direction of the image) can be considered. The present image display system performs the same processing for the traveling side direction K2 using a predetermined repetition rate threshold.
[ Repetition between images (3) ]
Fig. 10 corresponds to the example of fig. 9, and shows an example of adjacent images in a view of the X-Z plane (for example, the side face a1). Suppose the first image selected by the user is, for example, the image g1. The present image display system investigates each image located around the image g1 in the three-dimensional space as a candidate image. In the example of fig. 10, as in fig. 9, the images g2 to g7 and the like exist in the right direction (direction X2) with respect to the image g1. For the right direction of the image g1, the image g6 is selected as the image with the smallest repetition (the adjacent image), as shown in fig. 9. This adjacent image g6 in the right direction is also denoted as image ga; the position Qa of the image ga is the position Q6 of the image g6. The image display system displays a link image representing the image ga on the individual screen 402.
In the left direction (direction X1), an image gb (position Qb) is the adjacent image with respect to the image g1; the images located between the image g1 and the image gb are omitted from the figure. Similarly, with respect to the image g1, there is an image gc (position Qc) as the adjacent image in the upward direction (direction Z1), and, for example, an image gd (position Qd) as the adjacent image in the downward direction (direction Z2). Likewise, in the diagonally lower right direction, there is, for example, an image ge (position Qe) as an adjacent image. Although not shown, there are also adjacent images in the diagonally upper right, diagonally upper left, and diagonally lower left directions with respect to the image g1. On the individual screen 402, the user can select a desired next image (second image) through the link images representing the adjacent images in the respective directions with respect to the first image.
In addition, in the present image display system, an image located in an oblique direction from a certain first image (for example, the image ge at the position Qe diagonally below and to the right of the image g1) can be determined in the following ways. In the first method, the images located directly in the oblique direction from the image g1 at the position Q1 are searched, the repetition rate between the image g1 and each such image is calculated, and the adjacent image is determined using a predetermined repetition rate threshold (set for the oblique direction). In the second method, the adjacent images of the image g1 located on the left and right or above and below are determined first, and the adjacent images of those images (for example, the image ga and the image gd) located above and below or on the left and right are then determined. That is, the adjacent image in the oblique direction is obtained as the image on which both two-step determinations from the first image agree, as sketched below.
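A minimal sketch of the second method (illustrative Python; the adjacency tables are invented for the example of fig. 10):

```python
# Horizontal and vertical adjacency determined in the first stage.
right = {"g1": "ga", "gd": "ge"}
down = {"g1": "gd", "ga": "ge"}

def diagonal_down_right(img):
    """Diagonal neighbour = neighbour-of-neighbour, accepted only when both orders agree."""
    via_down = right.get(down.get(img))
    via_right = down.get(right.get(img))
    return via_down if via_down is not None and via_down == via_right else None

print(diagonal_down_right("g1"))  # -> "ge"
```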
[ Calculation of repetition rate (1) ]
The calculation of the repetition rate will be described with reference to figs. 11 and 12. The basic definition of the repetition rate is as follows. Ideally, the repetitive state of the surface region of the structure 5 (the corresponding shooting ranges SA) is considered. The computer system 1 considers the overlapping region between images within the surface region, for images centered on the position Q of fig. 7. The computer system 1 calculates the sizes of the images and of the repeated region; the ratio indicating the degree of repetition between images is defined as the repetition rate, and the computer system 1 calculates it from these sizes. When the distance between the camera 4 and the surface of the structure 5 at the time of shooting (the object distance D0) is fixed, the size of the region reflected in each image corresponds almost directly to its real-world size. Therefore, in this case, a sufficient effect can be obtained even if the repetition rate is calculated using the region size in the image.
Fig. 11 first shows a state in which the shooting ranges SA of two images overlap, in an overhead view in the Z direction. An image G1 taken from the camera position C1 and an image G2 taken from the camera position C2 are shown, with the side face a1 and the like as the object. The position Q1 of the image G1 and the position Q2 of the image G2 are shown; the position C1, the position Q1, and the like each have three-dimensional position coordinates. The width W1 in the X direction of the shooting range SA of the image G1 and the width W2 in the X direction of the shooting range SA of the image G2 are shown. The width of the repeated region of the group of the image G1 and the image G2 is shown as the width W12. For example, the repetition rate of the image G1 and the image G2 in the X direction can be calculated using the width W1 of the image G1 and the width W12 of the repeated region.
Further, while the unmanned aerial vehicle 10 travels along the route, the position of the unmanned aerial vehicle 10, the position of the camera 4, the camera direction, and the like may shake or shift due to wind and the like. That is, the camera direction V may differ from image to image. Even in this case, the shooting range SA on the surface of the structure 5 and the repetition rate can be calculated in the same manner.
[ Calculation of repetition rate (2) ]
Fig. 12 corresponds to fig. 11, and shows the state of repetition of the images in a view of the X-Z plane. In this example, the image G1 at the position Q1 and the image G2 at the position Q2 have different image sizes. The repeated region 1201 (diagonal line pattern) of the image G1 and the image G2 is shown. The image G1 has a width W1 as the width W in the X direction and a height H1 as the height H in the Z direction; the image G2 has a width W2 and a height H2. For the group of the image G1 and the image G2, the width W12 and the height H12 of the repeated region 1201 are shown. Also shown are the width NW1 and height NH1 of the non-overlapping portion of the image G1, and the width NW2 and height NH2 of the non-overlapping portion of the image G2.
Here, the repetition rate OL is considered separately as the repetition rate OLW in the lateral direction (X direction) of the image and the repetition rate OLH in the longitudinal direction (Z direction) of the image. With respect to the first image, the repetition rate can be defined as the ratio of the length of the repeated portion to the original length. In this example, the lateral (X direction) repetition rate is calculated as OLW = W12/W1, and the longitudinal (Z direction) repetition rate as OLH = H12/H1. Further, in the processing of two-dimensional images, the three-dimensional position coordinates (X, Y, Z) are converted into two-dimensional position coordinates (x, y) as appropriate.
In the example of fig. 12, the coordinates of the position Q1 of the image G1 are (X1, Y1, Z1), and the coordinates of the position Q2 of the image G2 are (X2, Y2, Z2). For the shooting range SA of the image G1, the left end in the X direction is at the position X3 and the right end at X4; for the image G2, the left end is at X5 and the right end at X6. For the image G1, the upper end in the Z direction is at Z3 and the lower end at Z4; for the image G2, the upper end is at Z5 and the lower end at Z6. In this example, the positions in the Y direction are the same.
The positions of the upper, lower, left, and right ends (sides) of the image G1 are obtained as follows using the center position Q1, the width W1, and the height H1. The position X3 of the left end of the image G1 is obtained by X3 = X1 − (W1/2), and the position X4 of the right end by X4 = X1 + (W1/2). The position Z3 of the upper end is obtained by Z3 = Z1 + (H1/2), and the position Z4 of the lower end by Z4 = Z1 − (H1/2). Likewise, the positions of the respective ends of the image G2 (X5, X6, Z5, Z6) can be obtained. Using the positions of the respective ends of the images, the width W12, the height H12, and the like of the repeated region 1201 can be calculated: for example, W12 = X4 − X5 and H12 = Z5 − Z4.
A specific calculation example is as follows. The size of the image G1 (expressed in number of pixels) is W1 = 339 and H1 = 252, and the size of the image G2 is W2 = 373 and H2 = 244. The coordinates (X1, Z1) of the position Q1 are (1420, 2395), and the coordinates (X2, Z2) of the position Q2 are (1510, 2376). The coordinates of the image ends are X3 = 1251, X4 = 1589, X5 = 1324, X6 = 1696, Z3 = 2520, Z4 = 2269, Z5 = 2497, and Z6 = 2254. Using these values, the width W12 and the height H12 of the repeated region 1201 are obtained as W12 = 265 and H12 = 228. The lateral repetition rate is OLW = 265/339 ≈ 0.78, i.e., 78%, and the longitudinal repetition rate is OLH = 228/252 ≈ 0.90, i.e., 90%.
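The worked example can be reproduced with a short sketch (illustrative Python; the edge coordinates are the rounded pixel values given above):

```python
def overlap_1d(lo1, hi1, lo2, hi2):
    """Length of the overlap of two intervals (0 if they are disjoint)."""
    return max(0, min(hi1, hi2) - max(lo1, lo2))

# Image G1 spans X 1251..1589, Z 2269..2520; image G2 spans X 1324..1696, Z 2254..2497.
w12 = overlap_1d(1251, 1589, 1324, 1696)  # 265 = X4 - X5
h12 = overlap_1d(2269, 2520, 2254, 2497)  # 228 = Z5 - Z4
print(w12 / 339, h12 / 252)  # ~0.78 (OLW) and ~0.90 (OLH)
```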
The calculation of the repetition rate is not limited to the above-described method; for example, the following processing is also possible. Consider the state of the surface region as represented by the image content, such as the colors of pixels or feature points shown in the image. By known image processing, feature points, color regions, and the like can be extracted from an image. The computer system 1 determines the position of the same object (represented by feature points or the like) in each image, determines the overlapping region between the images based on that position, and calculates the repetition rate from the size (in number of pixels) of the overlapping region.
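As one way such feature-based processing might look, the following sketch estimates the overlap by matching feature points and warping the outline of one image onto the other (illustrative Python using OpenCV; this is not the embodiment's code, and it assumes images of similar scale):

```python
import cv2
import numpy as np

def overlap_rate_by_features(img1, img2):
    """Approximate repetition rate of img1 with respect to img2 from matched feature points."""
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    h_mat, _ = cv2.findHomography(src, dst, cv2.RANSAC)
    h, w = img1.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(corners, h_mat)  # img1's outline in img2's frame
    mask = np.zeros(img2.shape[:2], np.uint8)
    cv2.fillPoly(mask, [np.int32(warped.reshape(-1, 2))], 255)  # clipped to img2's frame
    return cv2.countNonZero(mask) / float(w * h)  # overlap area / area of img1, in pixels
```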
[ SFM processing ]
Fig. 13 is an explanatory diagram of the known SFM technique. SFM is a method of acquiring the three-dimensional coordinates of an object (represented by a plurality of feature points) together with the camera direction and camera position (the camera parameters), from a plurality of images of the same object captured from a plurality of viewpoints (camera positions). For example, three images from three camera positions can be used to acquire this information. In SFM, three-dimensional coordinates are estimated from how the same feature points of the object are displaced between images, using the parallax caused by the movement of the camera. The calculated three-dimensional coordinates represent the structure or shape of the object. In the example of fig. 13, three continuously captured images PIC1 to PIC3 of the object OB (for example, a cube) are shown, with the feature points A and B as examples of the feature points of the object OB. In the SFM processing to which the images PIC1 to PIC3 are input, a feature point group (for example, the feature points A and B) is extracted from each image, and the three-dimensional coordinates of the feature point group are obtained by a factorization method or the like: for example, the coordinates (Xa, Ya, Za) of the feature point A and the coordinates (Xb, Yb, Zb) of the feature point B. At the same time, the camera directions (for example, directions J1 to J3) and the camera positions (for example, positions E1 to E3) are estimated. Various methods based on SFM, as well as other methods, can be applied to the structure three-dimensional model generation function in the same manner.
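For two views, the SFM idea described above might be sketched as follows (illustrative Python using OpenCV; a full pipeline such as the embodiment's SFM processing unit 122A handles many images and further refinement):

```python
import cv2
import numpy as np

def two_view_sfm(pts1, pts2, K):
    """pts1, pts2: Nx2 float32 arrays of matched feature points; K: 3x3 intrinsic matrix.
    Returns the relative camera pose (R, t) and the triangulated 3D feature coordinates."""
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)  # camera direction / position up to scale
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return R, t, (pts4d[:3] / pts4d[3]).T  # e.g. (Xa, Ya, Za), (Xb, Yb, Zb), ...
```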
[ List screen (1) ]
Fig. 14 shows a first example of the list screen 401 in the embodiment. A list of the image group is displayed on the list screen 401. In this example, a plurality of images (for example, #1 to #24) of the image group 201 are arranged vertically and horizontally in the order of the IDs in the image information 202, i.e., in shooting-time order, and displayed as thumbnails. This image group is, for example, an image group captured with the predetermined repetition rate for three-dimensional model generation in mind. As another display example, not only the thumbnail but also part of the image information 202 may be displayed for each image.
[ Individual screen (1) ]
Fig. 15 shows a first example of the individual screen 402. On the individual screen 402, the first image selected by the user from the list screen 401 is displayed in a sufficiently large frame 1501 in the main area. When a second image is selected through a link image on the individual screen 402, the second image is displayed in the frame 1501. Further, a predetermined operation such as pressing the "return to list" button 1502 on the individual screen 402 returns to the list screen 401.
On the individual screen 402, link images 403 are displayed around the first image. In other words, a link image 403 is a navigation image. In the example of fig. 15, the link images 403 are displayed at the upper, lower, left, and right positions and at the oblique positions (upper right, lower right, upper left, and lower left) outside the first image (frame 1501). The link image 403 has an arrow shape, but is not limited to this; a GUI component, an icon image, or the like having a predetermined shape may be used, and a thumbnail image may also serve as a link image 403.
These link images 403 are images or components for selecting, as the next image, the adjacent image located within a given plane in the direction indicated by the arrow. The user can select a link image 403 by an operation such as a click.
In the GUI example of fig. 15, all the link images 403 are displayed in every direction, regardless of whether an adjacent image actually exists in each direction. The user selects and operates a desired link image 403 (for example, the right arrow). When an adjacent image exists in the direction indicated by the selected link image 403, that adjacent image is selected as the second image. That is, the screen display is switched so that the selected second image is displayed in the frame 1501 of the individual screen 402 instead of the first image. The second image then serves as the new first image, and the link images 403 are displayed in the same manner. When no adjacent image exists in the direction indicated by the link image 403 selected by the user, no switching is performed.
In this GUI example, when the user selects a desired link image 403 (for example, the right arrow), if an adjacent image exists in the indicated direction within the same plane as the first image (for example, the side face a1), that adjacent image is selected. When there is no adjacent image in the same plane in the indicated direction, but an image exists on another plane beyond the corner, toward the back or the front, that image is selected.
As another GUI example, the link image 403 may be displayed superimposed on the image in the frame 1501.
As another example of the GUI, the link images 403 may be hidden, and the adjacent image (second image) may be selected by a predetermined operation configured in advance, for example by user setting. For instance, keyboard keys may be assigned the link functions for the respective directions.
As another example of the GUI, instead of a link image 403, the adjacent image in the corresponding direction can be selected by a predetermined operation on the border line of the frame 1501. For example, operating the vicinity of the right border of the frame 1501 realizes the same function as the selection operation of the right arrow.
As another example of the GUI, instead of a link image 403, a well-known operation such as dragging or sliding may be used. For example, a slide operation from right to left within the frame 1501 exhibits the same function as the selection operation of the right arrow.
[ Individual screen (2) ]
Fig. 16 shows a second example of the individual screen. In the GUI example of fig. 16, when an image of another surface exists around the first image with a corner interposed between them (corresponding to the closest of the adjacent images), a specific link image 404 showing the type and direction of that image is displayed. The link image 404 differs in shape from the link image 403. For example, when there is no adjacent image on the right side of the first image on the same side face, but an image exists on another side face located toward the back or the front beyond the corner, the link image 404 representing that situation is displayed. In this example, when an adjacent image exists in the right-rear direction with respect to the first image (for example, near the corner 602 in fig. 25, the image #21 of the side face a3 with respect to the image #3 of the side face a2), the link image 404 indicating the right-rear direction is displayed. Likewise, when an adjacent image exists in the right-front direction with respect to the first image (for example, near the corner 601 in fig. 25, an image of the side face a2 with respect to an image of the side face a1), the link image 404 indicating the right-front direction is displayed.
As another example of a link image, a link image 405 is shown. When an adjacent image exists on the upper side (direction Z1) in the same plane as the first image, the link image 403 with an upward arrow is displayed. When there is no adjacent image on the upper side (direction Z1) in the same plane, but an image corresponding to a horizontal surface such as the roof of the structure 5 exists, the link image 405 indicating that situation is displayed. The link image 405 has a shape indicating the upper depth direction (for example, an obliquely curved shape). By selecting the link image 405, the image located in the upper depth direction can be selected as the next image.
As a further example, a link image 406 is shown. The link image 406 is displayed when, in a given direction in the same plane, there is no minimally overlapping adjacent image with respect to the first image, but an image exists at a separated position without overlapping. The link image 406 has a shape different from the link image 403. For example, when there is no adjacent image on the lower side (direction Z2) of the first image, but an image exists at a separated position, the link image 406 indicating the position separated in the downward direction is displayed. By selecting the link image 406, the image located at the separated position can be selected.
[ Individual screen (3) ]
Fig. 17 shows a third example of the individual screen. In this GUI example of the individual screen 402, the link images 403 are displayed selectively, according to whether an adjacent image exists in each direction. In this example, it is assumed that adjacent images in the same plane exist at the left, lower, and lower-left positions with respect to the first image, and do not exist at the other positions. In this case, as shown in the drawing, link images 403 showing the three arrows to the left, below, and lower left are displayed. Further, when an image exists on another side face to the right of the first image, in particular toward the right rear, the link image 404 showing the right arrow is displayed as shown in the drawing. In this GUI example, since the amount of displayed information is limited, the user can perform the work more easily.
[ Individual screen (4) ]
Fig. 18 shows a fourth example of the individual screen. On the individual screen 402, when the first image is displayed in the frame 1501, the image content of the region overlapping an adjacent image is not displayed as it is, but is displayed with the overlapping region deleted (in other words, masked). Specifically, the overlapping region is displayed, for example, in black or with shading. In this example, the first image (or second image) has a repeated region 408. The repeated region 408 is the region of the image currently selected and displayed in the frame 1501 that overlaps the vicinity of the right side of the image selected immediately before (the region is shown by a broken line for illustration).
By deleting (masking) the repeated region 408 in this way, the user does not need to look at the image content of the repeated region 408 in the selected image, and the amount of visual information is reduced, so that the work becomes easier. In this example, the user knows that the repeated region 408 has already undergone the deterioration inspection on the previous image, and therefore does not need to inspect the repeated region 408 again when inspecting this image. The deterioration inspection work is thus made more efficient.
As another GUI example, information indicating that the deterioration inspection is completed may be displayed as a mark in the repeated region 408. As yet another GUI example, the repeated region 408 may be displayed as a frame-line representation.
[ Individual screen (5) ]
Fig. 19 shows a fifth example of the individual screen. On the individual screen 402, a non-captured region (defective region), i.e., a region for which no corresponding image exists, is displayed in a predetermined representation so that the user can recognize it. Fig. 19 (a) shows an example of the relationship between two images: the image 191 and the image 192 overlap each other with a large repetition rate. The non-captured region 193 is located outside the two images, and is an example of a region that is not captured in the image group including these two images. For such a region of, for example, a side surface of the structure 5, no image exists, so the non-captured region 193 cannot be a target of the deterioration inspection or the like. On the individual screen 402, the position of such a non-captured region 193 is displayed so that the user can recognize it. The user can thereby recognize the position omitted from the deterioration inspection, and can, for example, capture an additional image of that position and inspect it.
Another example, the non-captured region 193b, is also shown. The non-captured region 193b is a gap between the image 192 and the image 194. Basically, imaging is performed so that the images overlap, but such a gap may arise for some reason (for example, an error during aerial photography), producing the non-captured region 193b. In this case as well, the presence of the non-captured region 193b is displayed so that it can be recognized.
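Detecting such a gap along one row of shooting ranges can be sketched as follows (illustrative Python; the interval values are invented):

```python
def gaps_along_row(ranges):
    """ranges: list of (left, right) X extents of shooting ranges, sorted by left edge.
    Returns the un-photographed intervals between consecutive images."""
    gaps = []
    for (l1, r1), (l2, r2) in zip(ranges, ranges[1:]):
        if l2 > r1:  # the next image neither overlaps nor touches
            gaps.append((r1, l2))
    return gaps

print(gaps_along_row([(0, 100), (80, 180), (200, 300)]))  # -> [(180, 200)]
```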
Fig. 19 (B) and (C) show display examples of the non-captured region 193 on the individual screen 402, corresponding to (a). The example of (B) shows the selected image 191 displayed in the frame 1501 at the image size of the original image 191; for illustration, the overlapping image 192 is also shown with frame lines. In this state, the non-captured region 193 lies outside the frame 1501. When the non-captured region 193 exists outside the frame 1501 in this way, an image showing the non-captured region 193 can be displayed in a predetermined representation (for example, a dot pattern).
The example of (C) shows the case where, starting from the state of (B), the image 191 is moved (translated) and displayed in response to a predetermined user operation. The image display system also has functions for moving the image displayed in the frame 1501 of the individual screen 402 and for displaying it enlarged or reduced. The moving display can be performed by an operation such as dragging or sliding the first image, and the enlarging/reducing display can be performed by a pinch operation or the like on the first image.
In this example, the moving operation moves the center position of the image 191 from the position Q1a in the state of (B) to, for example, the position Q1b slightly to the upper left. As a result, a part of the image 191, a part 193c of the non-captured region 193, and a part 192c of the image 192 enter the frame 1501. On the individual screen 402, the image portions that have entered the frame 1501 are displayed, and the part 193c of the non-captured region 193 is displayed in the predetermined representation.
[ Individual screen (6) ]
Fig. 20 shows a sixth example of the individual screen. On the individual screen 402, when an overlapping adjacent image exists with respect to a certain first image, the adjacent image is indicated by frame lines or the like so that the user can recognize its presence. In this example, when an adjacent image exists at a position to the right of the first image in the frame 1501, not only the link image 403 but also an outline image 409 (for example, a broken line) showing the outline (edges) of the adjacent image is displayed. The user thereby knows that an adjacent image exists, for example on the right side of the first image, and also sees the degree of repetition. To select the adjacent image, the user operates the link image 403 or the outline image 409; the adjacent image is then selected and displayed in the frame 1501.
In the example of fig. 20, a GUI component 1503 (repetition rate threshold setting field) that allows the user to set the repetition rate threshold TOL is displayed on the individual screen 402. The GUI component 1503 displays the currently set repetition rate threshold TOL, and provides a slider or the like so that the user can change the TOL value. The user can thus confirm the TOL value on the individual screen 402, and can change the setting by operating the GUI component 1503 according to the application, the state of the work, and the like. In addition, as in the example of fig. 20, the actual repetition rate between the first image and the adjacent image may be displayed.
The repetition rate threshold TOL may also be set not as a single value but as a range defined by two values. For example, when the range of the repetition rate is set to 40% to 60%, images whose repetition rate falls within that range can be selected.
[ Deterioration inspection screen ]
Fig. 21 shows an example of the deterioration inspection screen. In the image display system of the embodiment, the individual screen 402 also serves as the deterioration inspection screen. The user performs the deterioration inspection work directly on the images while selecting individual images on the individual screen 402. On the deterioration inspection screen, the selected image is displayed in the frame 1501 as the deterioration inspection target image.
Examples of the deterioration inspection include visual inspection by a human and automatic diagnosis by a computer; the present image display system supports both. In this example, the user visually inspects the image within the deterioration inspection screen, observing the image and determining the presence and position of deterioration. For example, a deterioration position 2101 such as a crack exists in the image. On this deterioration inspection screen, as an example of deterioration inspection assistance, a deterioration position found by the user in the visual inspection can be marked and displayed. For example, the user marks the deterioration position 2101 with a predetermined operation, such as enclosing it with a frame by dragging. As an example of the marker image, a red frame 2102 is displayed.
Further, image information, structure information, and the like may be displayed outside the frame 1501. Inspection items for the deterioration inspection (with an input state of inspected/not inspected), items for the presence or absence of deterioration, a comment input field, and the like may also be provided outside the frame 1501.
When the automatic diagnosis function is used, the computer system 1 takes the target image as input and estimates the deterioration positions. The image display system displays the automatic diagnosis result information from the computer system 1 on the deterioration inspection screen; for example, the estimated deterioration positions are displayed in a predetermined representation. The user reviews the result information and makes the confirmation or final judgment.
Further, in a modified example of the image display system, a deterioration inspection screen separate from the individual screen 402 may be provided, with switching between the two screens performed by a predetermined operation (for example, a double click).
[ Three-dimensional model screen of structure ]
Fig. 22 shows an example of the three-dimensional model screen of a structure. When a predetermined operation is received on the list screen 401 or the like, the present image display system displays the three-dimensional model screen of the structure. The image display system displays the three-dimensional model of the structure in a region of the screen based on the three-dimensional model data d5, and displays the structure information and the like together with it. In this region, for example, the three-dimensional model of the structure is displayed superimposed on a map (of the site or the like). The direction, position, and the like from which the three-dimensional model of the structure is observed can be changed by predetermined operations on the screen.
The user can then select and designate a part of the three-dimensional model of the structure by a predetermined operation. For example, when a certain side face is selected and designated, the image display system displays the image group corresponding to that side face on the list screen 401. Alternatively, when a certain position in the three-dimensional model of the structure is selected and designated, the image corresponding to the selected position may be displayed on the individual screen 402.
[ List screen (2) ]
Fig. 23 shows a second example of the list screen. On this list screen 401, the image group to be used for the deterioration inspection is displayed as a list, out of the original image group. First, the user selects the deterioration inspection target images through the individual screen 402. The image display system then displays the deterioration inspection target image group (the selected image group) as a list on the list screen 401. The selected image group can be displayed by a predetermined operation on the list screen 401 (for example, pressing a selected-image-group display button). On this screen, the deterioration inspection information (for example, inspection completed, presence or absence of deterioration, and the like) can be displayed for each image of the deterioration inspection target image group (selected image group).
[ List screen (3) ]
Fig. 24 shows a third example of the list screen. This list screen 401 shows, as a modification of the list screen 401 of fig. 23, another mode of displaying the list of the deterioration inspection target image group. On this list screen 401, the deterioration inspection target image group is arranged and displayed in the positional relationship along the shape of the surface region of the structure 5 (or of the generated three-dimensional model of the structure). For example, in a region 2401 of the list screen 401, the deterioration inspection target image group (adjacent image group) is displayed for each selected side-face group. For example, for the side face a2 of fig. 25, the images #1 to #12 form the adjacent image group. First, the region 2401 corresponding to the side face of the selected group (i.e., a part of the plane region) is displayed, and within it the plurality of adjacent images (images #1 to #12) are arranged according to their actual positional relationship. Each image (thumbnail) may be displayed in a predetermined representation so that its state, such as inspection completed or presence or absence of deterioration, can be recognized. When one image in the region 2401 is selected by operation, that image is displayed on the individual screen 402.
In addition, link images 2402 for selecting other side-face groups may be displayed in the region 2401. For example, when the link image 2402 of the right arrow is selected, the group of the side face a3 is selected and displayed in the region 2401 in the same manner. Although fig. 24 shows the link images 2402 in units of side-face groups as left and right arrow shapes, various representations are possible. For example, as with the link image 404 of fig. 16, a link image 2402 may have an arrow shape representing a change of direction toward the back or the front in the three-dimensional space; for instance, the link image 2402 linking from the side face a2 to the side face a3 may be displayed as an arrow curving toward the right rear.
[ Determination of adjacent images ]
The determination of adjacent images (next images) in the spatial positional relationship, the GUI, and the like will be described with reference to figs. 25 and 26. In particular, as in the example of fig. 6, the case will be described in which the positional relationship between images on side faces meeting at a corner of the structure 5 is determined and the next image is selected. The plurality of images (and their shooting ranges) are arranged in a three-dimensional space. When the image group captures corners, different side faces, curved surfaces, or the like of the three-dimensional structure 5, the camera directions V of the respective images differ, and between images with different camera directions V it may be difficult to determine the repetition rate or the adjacent images. Therefore, the present image display system (particularly, the adjacent image determination unit 303) has the following function: when the camera direction differs between images, the positional relationship between the images is determined using the information on the positions and directions of the images (for example, the camera position C and the camera direction V), and the next image for a given image is determined. Furthermore, when calculating the repetition rate of the image group in space, the present image display system makes the determination using the position information of the images, in consideration of efficiency.
Fig. 25 shows an example of the relationship between images on the side faces of the structure 5, corresponding to the example of fig. 6. In this example, adjacent images on the side face a2 and the side face a3 of the structure 5 are shown. The adjacent images here are the image group selected as suitable for the deterioration inspection. Each image is shown with a dotted frame, but in reality the end portions of the images partially overlap one another.
On the side face a2, the corner 601 is on the left side of the drawing, on the back side in the Y direction (direction Y1), and the corner 602 is on the right side, on the front side in the Y direction (direction Y2). On the side face a3, the corner 602 is on the left side of the drawing (direction X1), and the corner 603 is on the right side (direction X2). In the path r2 of the group of the side face a2, an image group is captured by aerial photography of the scanning type described above; among this image group, the images #1 to #12 show an example of the adjacent image group selected so as to minimize the repetition. Likewise, in the path r3 of the group of the side face a3, an image group is captured by aerial photography of the scanning type described above, and the images #21 to #32 show an example of the adjacent image group selected so as to minimize the repetition. These image numbers also correspond to the order of the shooting times. The images not selected because of their large repetition rates are not shown in the figure.
The present image display system displays one image selected by the user on the individual screen 402. Suppose, for example, that the image #3 is selected as the first image. If only the Y-Z plane of the side face a2 is considered, the adjacent images are the images #2, #4, and #5. Considering the three-dimensional space, as shown in the drawing, the left end of the side face a3 is connected to the right end of the side face a2 at the corner 602. When the user views the image #3 on the individual screen 402, there is no adjacent image straight to the right (direction Y2) of the image #3, because the structure 5 does not continue in that direction. However, following the surface shape of the structure 5 around the bend from the side face a2 to the side face a3, and considering the three-dimensional positional relationship, the image #21 of the side face a3 exists in the direction turning to the right rear from the image #3. Similarly, viewed from the image #21 of the side face a3, the image #3 of the side face a2 exists in the direction turning to the left rear.
In applications such as the deterioration inspection, it is useful if images in the positional relationship of the image #3 and the image #21 described above can also be selected and displayed. Hence, the present image display system has a function of enabling selection and display between an image on one side face and a continuous image on another side face (i.e., an image at a close position with a different camera direction). For this purpose, the present image display system determines the three-dimensional positional relationship between the images using the information on the positions and directions of the images (fig. 26), determines the adjacent images across the side faces as well, and presents them as link images. For example, when the individual screen 402 displays the image #3, not only the link images indicating the left or lower directions but also the link image indicating the right rear (fig. 17, link image 404) is displayed.
For such adjacent images across side faces, there may be no overlapping region between the images. Therefore, when determining adjacent images across side faces, the present image display system makes the determination mainly based on the three-dimensional positional relationship and the distance, rather than on the repetition rate within the same plane (fig. 4, S10).
Similarly, in the relationship between the side face a2 and the side face a3, the set of the image #4 and the image #26, the set of the image #9 and the image #27, and the set of the image #10 and the image #32 are obtained as adjacent images. Adjacent images are obtained in the same manner for the side faces across the other corners.
Fig. 26 corresponds to fig. 25, and shows an example of the image selection determination across the side faces of the structure 5. Fig. 26 shows the side face a1, the side face a2, the side face a3, the corner 601, the corner 602, and the like in an overhead view. On the path r1 of the side face a1, positions c1, c2, and c3 are shown as examples of the camera position C at the time of shooting; these are positions near the corner 601 of the side face a1. At each of these positions, the camera direction is the direction Y1 and the object distance D0 is fixed. For example, the image g11 (specifically, the width of its shooting range) captured at the position c3 is shown. On the path r2 of the side face a2, positions c4, c5, c6, and the like are shown as examples of the camera position C; these are positions near the corner 601 of the side face a2. At each of these positions, the camera direction is the direction X2 and the object distance D0 is fixed. For example, the image g12 (specifically, the width of its shooting range) captured at the position c4 (substantially the same position as the position c3) is shown.
With respect to the image g11 of the side face a1, the present image display system determines not only the adjacent images on the same side face but also an image on a different side face (the image g12) as an adjacent image. In this case, the present image display system calculates and compares the distances between the camera positions C using the position information of the camera position C of each image. For the position c3 of the image g11, the present image display system determines the image whose camera position C is at the smallest distance, excluding the images within the same side-face plane (the images at the positions c1, c2, and the like). In this example, the distance from the position c3 is smallest at the position c4, so the image g12 at the position c4 is picked up as the adjacent image across the side faces. The present image display system associates the image g12 with the image g11 as an adjacent image across the side faces (referred to as an adjacent image even when there is no overlap across the side faces), and displays the corresponding link image.
As another processing example, the image display system may determine the distance using the information on the position P of the unmanned aerial vehicle 10 instead of the camera position C. As yet another processing example, the present image display system may determine the distance using the information on the center positions Q of the images and their shooting ranges instead of the camera positions C. For example, let the center position of the image g11 be the position q3 and the center position of the image g12 be the position q4. The image display system determines the image whose center position is at the smallest distance, excluding the images on the same side face. In this example, the distance from the position q3 is smallest at the position q4, so the image g12 at the position q4 is picked up as the adjacent image across the side faces.
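A minimal sketch of this inter-side determination (illustrative Python; the data layout and the coordinate values are assumptions):

```python
import math

def nearest_on_other_side(first, candidates):
    """first, candidates: dicts with 'side' and 'pos' (coordinates of camera position C)."""
    others = [c for c in candidates if c["side"] != first["side"]]
    return min(others, key=lambda c: math.dist(first["pos"], c["pos"]), default=None)

g11 = {"id": "g11", "side": "a1", "pos": (10.0, 0.0, 5.0)}  # camera position c3
pool = [
    {"id": "g12", "side": "a2", "pos": (10.2, 0.1, 5.0)},  # camera position c4
    {"id": "g10", "side": "a1", "pos": (8.0, 0.0, 5.0)},   # same side face: excluded
]
print(nearest_on_other_side(g11, pool)["id"])  # -> "g12"
```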
The relationship between the side face a2 and the side face a3 across the corner 602 can be determined in the same manner. For example, when the image g13 captured at the position c13 (or the position q13) of the side face a2 is the first image, the image g14 at the position c14 (or the position q14) of the side face a3 can be picked up as the adjacent image.
When the individual screen 402 displays the image g11, a link image indicating the right front (direction Y2) is displayed; selecting it moves the display to the image g12. When the individual screen 402 displays the image g12, a link image indicating the left front (direction X1) is displayed; selecting it moves the display to the image g11. When the individual screen 402 displays the image g13, a link image indicating the right rear (direction X2) is displayed; selecting it moves the display to the image g14. When the individual screen 402 displays the image g14, a link image indicating the left rear (direction Y1) is displayed; selecting it moves the display to the image g13.
Such switching of the displayed image through the link images also corresponds to a change of the user's line-of-sight direction, matching the change of the camera direction. In the GUI of the present image display system, images can thus be selected and the displayed image switched along the directions of the surface shape of the three-dimensional structure 5. This enables the user to work more efficiently in applications such as the deterioration inspection.
The method of determining the positional relationship of the images is not limited to the above-described use of the information on the positions and directions of the images. As a modification, the positional relationship of the images may be determined based on feature points or the like extracted from the images by image processing.
[ Aerial photography method (3) ]
Fig. 27 shows, as an application example, a third example of an aerial photography method for another structure 5. The structure 5 of this example has a cylindrical shape. When the region of the curved side face a30 of the structure 5 is the deterioration inspection target, the following aerial photography method is used. Fig. 27 (a) shows a side view, and (B) shows an overhead view in the Z direction. In the aerial photography setting of this example, as shown in the drawing, the unmanned aerial vehicle 10 and the camera 4 are set so that the object distance D0 to the curved side face a30 is substantially constant, following, for example, a path R30 that moves in a circle in the horizontal direction. On the path R30, the camera direction is controlled so as to face the curved side face a30 perpendicularly, for example. The image g31 at the camera position c31 on the path R30, the image g32 at the camera position c32, the image g33 at the camera position c33, and so on are shown.
When an image group is obtained by such an aerial photography method, the processing of the present image display system can be applied in the same manner, with substantially the same effect. The system similarly determines the repetition rate, the adjacent images, and the like for the plurality of images (imaging ranges) formed along the side surface a30. In this case, each image can be selected on the individual screen 402 in the direction along the side surface a30. In this aerial photography, fixing the target distance D0 in particular facilitates the deterioration inspection and helps ensure the accuracy of the SFM processing. The aerial photography is not limited to this method; the path may instead be divided into a plurality of linear paths as described above.
[Processing timing]
In the image display system according to the embodiment, as in the flow of Fig. 4 (S6), the computer system 1 executes the repetition rate calculation and adjacent image determination each time a certain first image is selected by a user operation. However, the processing timing is not limited to this. In a modification, the computer system 1 (e.g., the server 3) performs the repetition rate calculation and adjacent image determination on the image group as a batch process, either at the timing when the data of the image group is input in advance or at a timing designated by the user. As a result of this processing, a target image group suited to a use such as a deterioration inspection can be selected comprehensively from the original image group. For example, as in the example of Fig. 25, an adjacent image group having a positional relationship in a three-dimensional space can be constructed. The computer system 1 manages the information of the adjacent image group constructed in this processing in association with the three-dimensional model data of the structure 5 and the like, and can display that information on a GUI screen (for example, Figs. 23 and 24) in response to a user operation.
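A batch version of the adjacency computation might be sketched as below, under one reading of the selection rule (minimize the repetition rate among candidates that still meet the threshold); the overlap callable and the dictionary shape are assumptions:

```python
def batch_adjacency(shots, overlap, threshold):
    """Precompute one adjacent image per image: among candidates whose
    repetition rate meets the threshold, take the one with the smallest rate.
    `shots` is a list of dicts with an "id" key; `overlap(a, b)` returns
    the repetition rate of the two imaging ranges in [0, 1]."""
    adjacency = {}
    for first in shots:
        eligible = []
        for cand in shots:
            if cand is first:
                continue
            rate = overlap(first, cand)
            if rate >= threshold:
                eligible.append((rate, cand))
        adjacency[first["id"]] = (
            min(eligible, key=lambda rc: rc[0])[1]["id"] if eligible else None
        )
    return adjacency
```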
There are also cases where the input image group changes; for example, some images are added later. For instance, when an area was left uncaptured in the first round of shooting, that area is captured as the subject in a second round. In such a case, the computer system 1 performs the same processing for the added images as well: it runs the repetition rate calculation and adjacent image determination again, treating each added image as the first image and the already-processed image group as the candidate images. The added images are thereby incorporated into the adjacent image group, and the relationships within the image group are reconstructed and updated.
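Continuing the previous sketch, a naive way to fold in later-added images is simply to rerun the batch computation with the newcomers included (adequate for modest image groups; an incremental variant would only revisit entries near the new images):

```python
def add_images(adjacency, shots, new_shots, overlap, threshold):
    """Incorporate later-added images: extend the processed image group and
    rebuild the adjacency table so the relationships are reconstructed."""
    shots.extend(new_shots)
    adjacency.clear()
    adjacency.update(batch_adjacency(shots, overlap, threshold))
    return adjacency
```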
[Effects and the like]
As described above, according to the image display system of the embodiment, when selecting images for a use such as a deterioration inspection from the image group captured by the camera 4 of the unmanned aerial vehicle 10, an appropriate image can be selected easily, reducing the user's labor. Depending on the use, the user often views a certain image (first image) to confirm it or work on it, then selects an image located around it and views that image (second image) in turn. The present image display system makes such image selection and work efficient. In the embodiment, in the use of deterioration inspection, the user can perform the visual inspection effectively. In the case of automatic diagnosis by a computer, the computer can efficiently process the selected image group.
As described above, the image display system according to the embodiment determines the overlapping state of the objects in a plurality of two-dimensional images having a positional relationship in a three-dimensional space, and provides a GUI for navigation that selects or switches between the images according to that overlapping state. In particular, the present image display system can select or switch images along the positional relationships and directions of the three-dimensional surface shape of the structure 5.
In the present image display system, when the user selects a first image and displays it on the individual screen 402, the system searches for images located around the first image, determines the adjacent images based on the repetition rate, and presents the corresponding link images. Because this processing is performed efficiently, the user's waiting time can be reduced.
The present image display system is not limited to the uses of deterioration inspection and three-dimensional model generation, and can be applied to other uses. When applied to another use, an appropriate image can be selected from the image group based on a repetition rate suited to that use, and the work or processing for that use can be performed efficiently.
The image display system according to the modified example may not have the function of generating a three-dimensional model.
In the present image display system, the input image group is obtained manually or by the unmanned aerial vehicle 10, but the image group is not limited to this and may be obtained by any method. The description has focused on the case where an image group is used for both the deterioration inspection and the three-dimensional model generation, and in particular on the case where a partial image group suitable for the deterioration inspection is selected from an image group captured so as to allow three-dimensional model generation. However, the present image display system is not limited to this, and is also effective when a second image group for some second use is selected from a first image group captured for some first use.
The present invention has been described specifically based on the embodiments, but the present invention is not limited to the embodiments described above, and various modifications can be made without departing from the scope of the present invention.
Description of the reference numerals
1 … computer system; 201 … image group; 202 … image information; d1 … construction data; d2 … shooting plan data; d4 … examining the data; d5 … three-dimensional model data; 121 … deterioration checking auxiliary part; 122 … a three-dimensional model generating unit; 122a … SFM processing unit; 123 … image display control unit; 301 … list display part; 302 … individual display section; 303 … adjacent image judging section; 304 … repetition rate calculation unit; 401 … list screen; 402 … individual pictures.

Claims (28)

1. An image display system is composed of a computer system,
the computer system performs:
inputting an image group including a plurality of images different in time, position, and direction of photographing;
displaying a list of the image groups on a list screen;
displaying a first image selected from the image group based on an operation by a user in an individual screen;
determining an adjacent image to the first image based on a determination of a spatial positional relationship between the first image and a group of candidate images spatially located around the first image and a determination of an overlapping state with respect to an imaging range; and
selecting, in accordance with an operation by the user, the adjacent image of the first image as a second image, and displaying the second image as a new first image on the individual screen,
the photographing positions in the image group include a case where a plurality of different positions are used for photographing a three-dimensional shape of an object in a three-dimensional space,
the computer system calculates a repetition rate of the imaging range as the overlapping state in the group of the first image and the candidate image, and selects the candidate image having the smallest repetition rate within a range of a repetition rate threshold as the adjacent image.
2. The image display system of claim 1,
the computer system displays a link image for the first image in the individual screen, the link image indicating the presence and direction of the adjacent image, selects the adjacent image as the second image according to the operation of the link image by the user, and displays the second image as a new first image.
3. The image display system of claim 1,
the computer system calculates a distance between the position of the first image and the position of the candidate image in the set of the first image and the candidate image as the spatial positional relationship using image information including the imaging position of each image in the image group, and selects the candidate image having the smallest distance as the adjacent image.
4. The image display system of claim 1,
the computer system uses image information including a capturing position of each image of the image group to classify a capturing direction of the first image and a capturing direction of the candidate image into substantially the same case and a different case as the spatial positional relationship in the group of the first image and the candidate image, and determines the adjacent image in each case.
5. The image display system according to claim 2,
the computer system displays, as the link image, a link image having a shape indicating a direction in which the adjacent image exists with respect to the first image on the individual screen.
6. The image display system according to claim 2,
as the spatial positional relationship, in a case where a second direction in a second plane in which the adjacent image is arranged is present with respect to a first direction in a first plane in which the imaging range of the first image is arranged,
the computer system displays, as the link image, a link image having a shape indicating a change from the first direction to the second direction on the individual screen.
7. The image display system of claim 1,
in the case where there is a repeated region of the first image and the adjacent image,
the computer system displays, in the individual screen, the repeated region in the first image in a predetermined expression in which the image content is deleted.
8. The image display system of claim 1,
in the case where there is an unphotographed region around the first image,
the computer system displays, in the individual screen, the unphotographed region around the first image in a predetermined expression indicating that the unphotographed region is present.
9. The image display system of claim 1,
in the case where there is a repeated region of the first image and the adjacent image,
the computer system displays the repeated region in the first image in an outline of the repeated region or a predetermined expression showing the repeated region in the individual screen.
10. The image display system of claim 1,
the computer system displays a list of image groups selected by the user operation on the individual screen among the image groups on the list screen.
11. The image display system of claim 1,
the computer system arranges an image group selected by the user operation on the individual screen among the image groups so as to match the spatial positional relationship, and displays the image group on the list screen.
12. The image display system of claim 1,
the image group is a group of images captured with a camera for the purpose of degradation inspection of a surface region of a structure,
the individual screen is a degradation check screen that accepts an operation of the user for the task of the degradation check.
13. The image display system of claim 1,
the image group is a group of images captured by a camera for the purpose of generating a three-dimensional model of a structure.
14. The image display system of claim 1,
the image group is an image group obtained by aerial photography by a flying object equipped with a camera.
15. An image display method for an image display system comprising a computer system,
the steps executed in the computer system include:
a step of inputting an image group including a plurality of images different in time, position, and direction of shooting;
a step of displaying a list of the image groups on a list screen;
a step of displaying a first image selected from the image group based on an operation by a user on an individual screen;
determining an adjacent image to the first image based on a determination of a spatial positional relationship between the first image and a group of candidate images spatially located around the first image and a determination of an overlapping state with respect to an imaging range;
selecting the adjacent image to the first image as a second image and displaying the second image as a new first image in the individual screen in accordance with an operation by the user; and
a step in which the computer system calculates a repetition rate of the imaging range as the overlapping state in the group of the first image and the candidate images, and selects the candidate image having the smallest repetition rate within a range of a repetition rate threshold as the adjacent image,
wherein the shooting positions in the image group include a case where a plurality of different positions are used for shooting the three-dimensional shape of a subject in a three-dimensional space.
16. The image display method according to claim 15,
comprises the following steps:
the computer system displays a link image for the first image in the individual screen, the link image indicating the presence and direction of the adjacent image, selects the adjacent image as the second image according to the operation of the link image by the user, and displays the second image as a new first image.
17. The image display method according to claim 15,
comprises the following steps:
the computer system calculates a distance between the position of the first image and the position of the candidate image in the set of the first image and the candidate image as the spatial positional relationship using image information including the imaging position of each image in the image group, and selects the candidate image having the smallest distance as the adjacent image.
18. The image display method according to claim 15,
comprises the following steps:
the computer system uses image information including a capturing position of each image of the image group to classify a capturing direction of the first image and a capturing direction of the candidate image into substantially the same case and a different case as the spatial positional relationship in the group of the first image and the candidate image, and determines the adjacent image in each case.
19. The image display method according to claim 16,
comprises the following steps:
the computer system displays, as the link image, a link image having a shape indicating a direction in which the adjacent image exists with respect to the first image on the individual screen.
20. The image display method according to claim 16,
comprises the following steps:
in the case where, as the spatial positional relationship, a second direction in a second plane in which the adjacent image is arranged is present with respect to a first direction in a first plane in which the imaging range of the first image is arranged, the computer system displays, as the link image, a link image having a shape indicating a change from the first direction to the second direction on the individual screen.
21. The image display method according to claim 15,
comprises the following steps:
in a case where there is a repeated region of the first image and the adjacent image, the computer system displays the repeated region in the first image in a predetermined expression in which image content is deleted in the individual screen.
22. The image display method according to claim 15,
comprises the following steps:
in a case where there is an unphotographed region around the first image, the computer system displays, in the individual screen, the unphotographed region around the first image in a predetermined expression indicating that the unphotographed region is present.
23. The image display method according to claim 15,
comprises the following steps:
in a case where there is a repetition region of the first image and the adjacent image, the computer system displays the repetition region in the first image in an outline line of the repetition region or a predetermined expression showing the repetition region in the individual screen.
24. The image display method according to claim 15,
comprises the following steps:
the computer system displays a list of image groups selected by the user operation on the individual screen among the image groups on the list screen.
25. The image display method according to claim 15,
comprises the following steps:
the computer system arranges an image group selected by the user operation on the individual screen among the image groups so as to match the spatial positional relationship, and displays the image group on the list screen.
26. The image display method according to claim 15,
the image group is a group of images captured with a camera for the purpose of degradation inspection of a surface region of a structure,
the individual screen is a degradation check screen that accepts an operation of the user for the task of the degradation check.
27. The image display method according to claim 15,
the image group is a group of images captured by a camera for the purpose of generating a three-dimensional model of a structure.
28. The image display method according to claim 15,
the image group is an image group obtained by aerial photography by a flying object equipped with a camera.

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2018-124978 2018-06-29
JP2018124978A JP7051616B2 (en) 2018-06-29 2018-06-29 Image display system and method
PCT/JP2018/031364 WO2020003548A1 (en) 2018-06-29 2018-08-24 Image display system and method

Publications (2)

Publication Number Publication Date
CN110915201A CN110915201A (en) 2020-03-24
CN110915201B true CN110915201B (en) 2021-09-28

Family

ID=68985372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880001267.9A Active CN110915201B (en) 2018-06-29 2018-08-24 Image display system and method

Country Status (4)

Country Link
JP (1) JP7051616B2 (en)
CN (1) CN110915201B (en)
SG (1) SG11201808735TA (en)
WO (1) WO2020003548A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111510678B (en) * 2020-04-21 2021-12-24 上海歌尔泰克机器人有限公司 Unmanned aerial vehicle image transmission control method, device and system
JP7003352B1 (en) 2021-04-12 2022-01-20 株式会社三井E&Sマシナリー Structure inspection data management system
WO2022244206A1 (en) * 2021-05-20 2022-11-24 日本電気株式会社 Measurement condition optimization system, three-dimensional data measurement system, measurement condition optimization method, and non-transitory computer-readable medium
WO2023188510A1 (en) * 2022-03-29 2023-10-05 富士フイルム株式会社 Image processing device, image processing method, and program
CN114778558B (en) * 2022-06-07 2022-09-09 成都纵横通达信息工程有限公司 Bridge monitoring device, system and method based on video image

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000090232A (en) * 1998-09-08 2000-03-31 Olympus Optical Co Ltd Panoramic image synthesizing device and record medium storing panoramic image synthesizing program
JP2006098256A (en) * 2004-09-30 2006-04-13 Ricoh Co Ltd Three-dimensional surface model preparing system, image processing system, program, and information recording medium
JP2006099497A (en) * 2004-09-30 2006-04-13 Seiko Epson Corp Synthesis of panoramic image
CN102819752A (en) * 2012-08-16 2012-12-12 北京理工大学 System and method for outdoor large-scale object recognition based on distributed inverted files
JP2018074757A (en) * 2016-10-28 2018-05-10 株式会社東芝 Patrol inspection system, information processing apparatus, and patrol inspection control program
CN108141511A (en) * 2015-09-30 2018-06-08 富士胶片株式会社 Image processing apparatus, photographic device, image processing method and program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013058124A (en) 2011-09-09 2013-03-28 Sony Corp Information processing apparatus, information processing method, and program

Also Published As

Publication number Publication date
JP7051616B2 (en) 2022-04-11
CN110915201A (en) 2020-03-24
SG11201808735TA (en) 2020-01-30
WO2020003548A1 (en) 2020-01-02
JP2020005186A (en) 2020-01-09

Similar Documents

Publication Publication Date Title
CN110915201B (en) Image display system and method
US10818099B2 (en) Image processing method, display device, and inspection system
US10748269B2 (en) Structure member specification device and structure member specification method
EP2538241B1 (en) Advanced remote nondestructive inspection system and process
EP3683647B1 (en) Method and apparatus for planning sample points for surveying and mapping
JP7332353B2 (en) Inspection system and inspection method
JP2016138788A (en) Survey data processing device, survey data processing method and program
WO2021200432A1 (en) Imaging instruction method, imaging method, imaging instruction device, and imaging device
CN112469967B (en) Mapping system, mapping method, mapping device, mapping apparatus, and recording medium
JP2019175383A (en) Input device, input method of input device, output device and output method of output device
JP4588369B2 (en) Imaging apparatus and imaging method
JP7366618B2 (en) Field collaboration system and management device
JP2964402B1 (en) Method and apparatus for creating a three-dimensional map database
JP2002181536A (en) Photogrammetric service system
JP7334460B2 (en) Work support device and work support method
JP2007322404A (en) Image processing device and its processing method
WO2017155005A1 (en) Image processing method, display device, and inspection system
JP2005310044A (en) Apparatus, method and program for data processing
JP2017046106A (en) Imaging apparatus, imaging method, and imaging program
CN111868656A (en) Operation control system, operation control method, device, equipment and medium
JP3924576B2 (en) Three-dimensional measurement method and apparatus by photogrammetry
KR200488998Y1 (en) Apparatus for constructing indoor map
JPH10246628A (en) Photographing control method
JP6841977B2 (en) Shooting support device, shooting support method, and shooting support program
US20230326098A1 (en) Generating a digital twin representation of an environment or object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant