CN117121479A - Information processing apparatus and information processing method - Google Patents


Info

Publication number
CN117121479A
Authority
CN
China
Prior art keywords
unit
display
image
information
dimensional
Prior art date
Legal status
Pending
Application number
CN202280023403.0A
Other languages
Chinese (zh)
Inventor
铃木友规
Current Assignee
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date
Filing date
Publication date
Application filed by Ricoh Co Ltd
Priority claimed from PCT/JP2022/011915 (WO2022202536A1)
Publication of CN117121479A

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Stereoscopic And Panoramic Photography (AREA)
  • Indication In Cameras, And Counting Of Exposures (AREA)
  • Studio Devices (AREA)

Abstract

An information processing apparatus includes a display control unit configured to display, on a display unit, a three-dimensional image determined based on an output of a light receiving unit that receives light projected onto and reflected from an object. The display control unit is configured to display, on the display unit, a display image that includes the three-dimensional image and position identification information identifying the position of the light receiving unit at the time when the light receiving unit received the light reflected from the object, the position identification information being based on that position and on the three-dimensional image.

Description

Information processing apparatus and information processing method
Technical Field
The present invention relates to an information processing apparatus and an information processing method.
Background
PTL 1 discloses a distance measuring device capable of stably and accurately measuring a distance to an object.
PTL 2 discloses an image pickup apparatus that can perform image processing to reduce the effect of reflection when light reflection occurs from a finger or the like of a person.
PTL 3 discloses a three-dimensional synthesis processing system including a measurement position display unit that extracts blocks whose measurement data density is lower than a predetermined threshold value, and outputs coordinates within the extracted block range as measurement positions where three-dimensional measurement devices should be set.
Disclosure of Invention
Technical problem
An object of the present invention is to provide an information processing apparatus and an information processing method with which a three-dimensional image can easily be compared with the actual situation at the place where the three-dimensional image was captured.
Solution to the problem
An information processing apparatus according to the present invention includes a display control unit configured to display, on a display unit, a three-dimensional image determined based on an output of a light receiving unit that receives light projected onto and reflected from an object. The display control unit is configured to display, on the display unit, a display image that includes the three-dimensional image and position identification information identifying the position of the light receiving unit at the time when the light receiving unit received the light reflected from the object, the position identification information being based on that position and on the three-dimensional image.
Advantageous effects of the invention
According to the present invention, an information processing apparatus and an information processing method can be provided with which a three-dimensional image can easily be compared with the actual situation at the place where the three-dimensional image was captured.
Other objects, features and advantages of the present invention will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings.
Drawings
Fig. 1 is a view illustrating an example of an external appearance of an image pickup apparatus according to an embodiment of the present invention.
Fig. 2 is a schematic diagram showing the configuration of an image pickup apparatus according to the present embodiment.
Fig. 3A is a view showing a state of use of the image pickup apparatus according to the present embodiment.
Fig. 3B is a view showing a state of use of the image pickup apparatus according to the present embodiment.
Fig. 3C is a view showing a state of use of the image pickup apparatus according to the present embodiment.
Fig. 3D is a view showing a state of use of the image pickup apparatus according to the present embodiment.
Fig. 4 is a schematic diagram showing an example of the configuration of a processing block of the processing circuit according to the present embodiment.
Fig. 5 is a flowchart showing an example of the operation of the processing circuit of the image pickup apparatus according to the present embodiment.
Fig. 6A is a flowchart showing generation of omnidirectional image data according to the present embodiment.
Fig. 6B is a flowchart showing generation of omnidirectional image data according to the present embodiment.
Fig. 7 is a flowchart showing a case where a nearby object is determined according to the present embodiment.
Fig. 8 is a view showing display contents of the display unit according to the present embodiment.
Fig. 9 is a view showing the appearance of an image pickup apparatus according to a variation of the embodiment of the present invention.
Fig. 10 is a schematic diagram showing a configuration of a processing block of a processing circuit according to a variation.
Fig. 11 is a view depicting the appearance of an image pickup apparatus according to a second variation of the embodiment of the present invention.
Fig. 12 is a schematic diagram showing the configuration of processing blocks of a processing circuit according to the second variation.
Fig. 13 is a flowchart of determination regarding a nearby object according to the second variation.
Fig. 14 is a view showing the configuration of an image pickup apparatus according to a third variation of the embodiment of the present invention.
Fig. 15 is a flowchart of determination regarding a highly reflective object according to an embodiment of the present invention.
Fig. 16 is a flowchart showing determination regarding a distant object and a low-reflection object according to the present embodiment.
Fig. 17 is a flowchart of determination regarding image blur according to the present embodiment.
Fig. 18A is a determination flowchart according to a fourth variation of the embodiment of the present invention.
Fig. 18B is a determination flowchart according to a fourth variation of the embodiment of the present invention.
Fig. 18C is a determination flowchart according to a fourth variation of the embodiment of the present invention.
Fig. 19 is a schematic diagram showing an example of the configuration of a processing block of a processing circuit according to a fifth variation of the embodiment of the present invention.
Fig. 20 is a schematic diagram showing a configuration example of an information processing system according to a sixth variation of the embodiment of the present invention.
Fig. 21 shows an example of a configuration of an information processing system according to a seventh variation of the present invention.
Fig. 22 is a view showing display contents of a display unit according to the fifth to seventh variations.
Fig. 23A is a view showing a three-dimensional image displayed on a display unit.
Fig. 23B is a view showing a three-dimensional image displayed on the display unit.
Fig. 23C is a view showing a three-dimensional image displayed on the display unit.
Fig. 24 is a determination flowchart according to the fifth to seventh variations.
Fig. 25 shows display contents of a display unit according to the fifth to seventh variations.
Fig. 26 is a flowchart showing a process according to the fifth to seventh variations.
Fig. 27 is a flowchart showing a process according to the fifth to seventh variations.
Detailed Description
Hereinafter, embodiments of an image pickup apparatus and an image pickup processing method will be described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic diagram showing an example of the appearance of an image pickup apparatus according to an embodiment of the present invention. Fig. 2 is a schematic diagram showing the configuration of the image pickup apparatus. Fig. 2 depicts a configuration inside the image pickup apparatus of fig. 1.
The image pickup apparatus 1 is an example of an information processing apparatus that outputs three-dimensional information determined based on received light. An image pickup unit (camera) 11, a projection unit (portion corresponding to a light emitting unit of the distance sensor) 12 that projects light other than visible light, and a ranging information acquisition unit (portion corresponding to a light receiving unit of the distance sensor) 13 that acquires ranging information based on the light projected by the projection unit 12 are contained in the housing 10 in an integrated manner. Each section is electrically connected to the processing circuit 14 in the housing 10 through a synchronization signal line L and operates in synchronization with the processing circuit 14.
The user inputs an image pickup instruction signal to the processing circuit 14 using the image pickup switch 15. The display unit 20, which is a liquid crystal display or the like, displays content corresponding to the output signal of the processing circuit 14. The display unit 20 is also a touch panel or the like and can receive operation input from the user. Based on the image pickup instruction, the processing circuit 14 controls each section, acquires RGB image data and ranging information, and reconstructs high-density three-dimensional point cloud data from the acquired ranging information based on the RGB image data.
Although three-dimensional point cloud data can be reconstructed from the ranging information data, in this case, the accuracy of the three-dimensional point cloud data is limited by the number of pixels (resolution) of the ranging information acquisition unit 13. The process of reconstructing high-density three-dimensional point cloud data using the three-dimensional point cloud data will be described below. The reconstructed data is output to an external Personal Computer (PC) through a portable recording medium or through communication, and is used to display the three-dimensional restoration model.
The various components and processing circuitry 14 are powered by a battery contained within the housing 10. Alternatively, power may be supplied through a connection cable outside the housing 10.
The image pickup unit 11 captures two-dimensional image information, and includes image pickup devices 11A and 11A, fisheye lenses (wide angle lenses) 11B and 11B, and the like. The projection unit 12 includes light source units 12A and 12A, wide-angle lenses 12B and 12B, and the like. The ranging information acquisition unit 13 includes time-of-flight (TOF) sensors 13A and 13A, wide-angle lenses 13B and 13B, and the like. Although not depicted, each unit may include an optical system such as a prism or a lens group.
For example, an optical system for imaging the light collected by the fisheye lenses 11B and 11B onto the image pickup devices 11A and 11A may be included in the image pickup unit 11. Further, an optical system for guiding light from the light source units 12A and 12A to the wide-angle lenses 12B and 12B may be included in the projection unit 12. Further, an optical system for imaging the light collected by the wide-angle lenses 13B and 13B onto the TOF sensors 13A and 13A may be included in the ranging information acquisition unit 13. Each optical system can be prepared as appropriate according to the configuration and arrangement of the image pickup devices 11A and 11A, the light source units 12a and 12A, and the TOF sensors 13A and 13A. A description of such an optical system, for example a prism or a lens group, is omitted hereinafter.
The image pickup devices 11A and 11A, the light source units 12A and 12A, and the TOF sensors 13A and 13A are integrally included in the housing 10. The fisheye lens 11b, the wide angle lens 12b, the wide angle lens 13b, and the display unit 20 are disposed on a first surface of the front surface of the housing 10. On the first surface, the respective inner sides of the fisheye lens 11b, the wide angle lens 12b, and the wide angle lens 13b are opened.
The fisheye lens 11B, the wide angle lens 12B, the wide angle lens 13B, and the image pickup switch 15 are provided on the second surface of the back surface of the housing 10. On the second surface, the respective inner sides of the fisheye lens 11B, the wide angle lens 12B, and the wide angle lens 13B are opened.
The image pickup devices 11A and 11A are image sensors (area sensors) having two-dimensional resolution. Each image pickup device has an image pickup area in which a plurality of light receiving elements (photodiodes) are arranged in two-dimensional directions as the respective pixels. The image pickup area is provided with red (R), green (G), and blue (B) color filters for receiving visible light, such as a Bayer array, and light passing through the color filters is stored in the photodiodes. According to the present embodiment, image sensors with a large number of pixels are used, so that a high-resolution, wide-angle two-dimensional image can be acquired (for example, over a 180-degree celestial-hemisphere range with the image pickup direction facing forward, as depicted in fig. 2).
The image pickup devices 11A and 11A convert the light imaged in the image pickup area into an electric signal by the pixel circuit of each pixel, and output a high-resolution RGB image. Each of the fisheye lenses 11B and 11B collects light over a wide angle (for example, a 180-degree celestial-hemisphere range with the image pickup direction facing forward, as depicted in fig. 2) and images the light onto the image pickup region of the corresponding one of the image pickup devices 11A and 11A.
The light source units 12A and 12A are semiconductor lasers that emit laser light over a wavelength band (e.g., infrared) outside the visible light region for measuring distances. Each of the light source units 12A and 12A may use one semiconductor laser, or may use a plurality of semiconductor lasers in combination. A surface emitting laser such as a Vertical Cavity Surface Emitting Laser (VCSEL) may be used as the semiconductor laser.
Furthermore, the light of the semiconductor laser may be shaped by an optical lens so as to be vertically elongated, and may be scanned in a one-dimensional direction of the measurement range using a light deflecting element such as a micro-electromechanical systems (MEMS) mirror. In the present embodiment, however, the light source units 12A and 12A are configured to expand the light of the semiconductor lasers to a wide-angle range through the wide-angle lenses 12B and 12B without using a light deflecting element such as a MEMS mirror.
The wide-angle lenses 12B and 12B of the light source units 12A and 12A expand the light emitted from the light source units 12A and 12A to a wide-angle range (for example, a 180-degree celestial-hemisphere range with the image pickup direction facing forward, as depicted in fig. 2).
The wide-angle lenses 13B and 13B of the ranging information acquisition unit 13 capture the reflected light after the light of the light source units 12A and 12A is projected by the projection unit 12 in each direction of the wide-angle measurement range (for example, a 180-degree celestial-hemisphere range with the image pickup direction facing forward, as shown in fig. 2), and image the light onto the light receiving areas of the TOF sensors 13A and 13A. The measurement range includes one or more projection targets (e.g., buildings), and the light reflected by the projection targets (reflected light) is incident on the wide-angle lenses 13B and 13B. The reflected light may be captured through, for example, a filter provided over the entire surface of the wide-angle lenses 13B and 13B that cuts off light of wavelengths shorter than the infrared region. Note that the embodiment of the present invention is not limited thereto; since it is sufficient that light in the infrared region be incident on the light receiving region, a device for passing light in the infrared region (for example, a filter in the optical path from the wide-angle lenses 13B and 13B to the light receiving region) may be provided.
The TOF sensors 13A and 13A are two-dimensional resolution optical sensors. The TOF sensors 13A and 13A have light receiving regions in which a plurality of light receiving elements (photodiodes) are arranged in two-dimensional directions. The TOF sensors 13A and 13A may be referred to as "second image pickup light receiving units". The TOF sensors 13A and 13A receive reflected light from each region (each region may be referred to as a position) of the measurement range with a light receiving element corresponding to each region, and measure (calculate) a distance to each region based on the light detected by each light receiving element.
According to the present embodiment, the distance is measured by a phase difference detection method. In the phase difference detection method, the measurement range is irradiated with laser light whose amplitude is modulated at a reference frequency, the reflected light is received, the phase difference between the irradiated light and the reflected light is measured to obtain the travel time, and the distance is then calculated by multiplying the time by the speed of light. An advantage of this method is that the necessary resolution can be expected.
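For reference, the phase-difference calculation described above corresponds to the standard indirect TOF relationship shown below (a hedged sketch with generic symbols; no specific modulation frequency is given in this description):

\[
d = \frac{c}{2} \cdot \frac{\Delta\varphi}{2\pi f_{\mathrm{mod}}}
\]

where c is the speed of light, Δφ is the measured phase difference between the irradiated and reflected light, f_mod is the reference modulation frequency, and the factor 1/2 accounts for the round trip of the light.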
The TOF sensors 13A and 13A are driven in synchronization with illumination of light of the projection unit 12, and each light receiving element (corresponding to a pixel) calculates a distance corresponding to each pixel from a phase difference with respect to reflected light, and outputs ranging information image data (also referred to as "ranging image" or "TOF image") in which information indicating the distance to each region within a measurement range is associated with pixel information. The TOF sensors 13A and 13A can output phase information image data, wherein the phase information is associated with pixel information; ranging information image data may then be obtained in a post-process based on the phase information image data.
The number of areas into which the measurement range can be divided is determined by the resolution of the light receiving area. Therefore, if the resolution of the light receiving area is low for the purpose of miniaturization, the number of sets of pixel information of the ranging image data is reduced, so that the number of points included in each three-dimensional point cloud is reduced.
Alternatively, the distance may be measured by a pulse method instead of the phase difference detection method. In this case, for example, the light source units 12A and 12A emit an irradiation pulse P1, an ultrashort pulse of high peak power with a rise time of a few nanoseconds (ns), and the TOF sensors 13A and 13A, operating in synchronization with the light source units 12A and 12A, measure the time (t) required until the reflected pulse P2 (i.e., the reflected light of the irradiation pulse P1 emitted by the light source units 12A and 12A) is received.
With this method, sensors in which a circuit for measuring time is mounted on the output side of each light receiving element are used as the TOF sensors 13A and 13A. Each circuit converts the time from the emission of the irradiation pulse P1 by the light source units 12A and 12A to the reception of the reflected pulse P2 into a distance for each light receiving element, thereby obtaining the distance to each region.
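In such a pulse (direct TOF) scheme, the measured round-trip time t converts to distance by the standard relationship, shown here only for illustration:

\[
d = \frac{c\,t}{2}
\]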
This method is suitable for widening the angle of view of the image pickup apparatus 1 because intense light can be output as peak light. Further, when a configuration in which the light is deflected (scanned) using a MEMS mirror or the like is used, intense light can be emitted to a distant place while suppressing the spreading of the light, thereby increasing the measurement distance. In this case, an arrangement is provided in which the laser light emitted from the light source units 12A and 12A is scanned (deflected) by the MEMS mirrors toward the wide-angle lenses 12B and 12B.
The effective angle of view of the image pickup unit 11 and the effective angle of view of the ranging information acquisition unit 13 are preferably equal to each other, for example, 180 degrees or more, but they do not have to be equal. The effective angle of view of the image pickup unit 11 and that of the ranging information acquisition unit 13 can be reduced if necessary. According to the present embodiment, the number of effective pixels is reduced so that the angle of view is limited to, for example, a range of 100 degrees to 180 degrees for each of the image pickup unit 11 and the ranging information acquisition unit 13, so that the main body of the image pickup apparatus 1 and the ranging information acquisition unit 13 are not included in the angle of view.
The resolution of the TOF sensors 13A and 13A may be set lower than that of the image pickup devices 11A and 11A in order to miniaturize the image pickup apparatus 1. Since the resolution of the TOF sensors 13A and 13A is lower than that of the image pickup devices 11A and 11A, the size of the light receiving area can be reduced, thereby reducing the size of the image pickup apparatus 1. In this case, the TOF sensors 13A and 13A have low resolution, and the three-dimensional point cloud acquired by the TOF sensors 13A and 13A therefore has low density. However, since the processing circuit 14 is provided as the "acquisition unit", the point cloud can be converted into a high-density three-dimensional point cloud. The process by which the processing circuit 14 converts the point cloud into a high-density three-dimensional point cloud will be described later.
For example, in the present embodiment, the image pickup device 11a, the light source unit 12a, and the TOF sensor 13a are arranged in a straight line along the longitudinal direction of the housing 10. Similarly, the image pickup device 11A, the light source unit 12A, and the TOF sensor 13A are arranged in a straight line along the longitudinal direction of the housing 10. Hereinafter, description will be made using the image pickup device 11a, the light source unit 12a, and the TOF sensor 13a as an example.
As depicted in fig. 2, the image pickup region (image pickup surface) of the image pickup device 11a or the light receiving region (light receiving surface) of the TOF sensor 13a may be disposed in a direction perpendicular to the longitudinal direction, or may be disposed along the longitudinal direction with a prism or the like that changes the straight propagation direction (optical path) of the light by 90 degrees before the light is incident on the image pickup region or the light receiving region. The orientations of the sensors may thus be set depending on the configuration. In any case, the image pickup device 11a, the light source unit 12a, and the TOF sensor 13a are provided for the same measurement range, and the image pickup unit 11, the projection unit 12, and the ranging information acquisition unit 13 are directed toward the measurement range on the corresponding side of the housing 10.
In this regard, the image pickup device 11a and the TOF sensor 13a need to be set so as to share the same baseline in order to realize a parallel stereo configuration. Even though the image pickup device 11a is only a single image pickup device, setting the image pickup device 11a and the TOF sensor 13a in a parallel stereo configuration makes it possible to use the output of the TOF sensor 13a to acquire parallax data. The light source unit 12a is configured to irradiate the measurement range of the TOF sensor 13a with light.
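For reference, in such a parallel stereo arrangement the distance obtained from parallax follows the standard relationship below (generic symbols, not reference numerals of this embodiment):

\[
Z = \frac{f\,B}{d}
\]

where B is the baseline between the image pickup device and the TOF sensor, f is the focal length expressed in pixels, d is the parallax (disparity) in pixels, and Z is the distance to the object.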
(processing Circuit)
Next, the processing of the processing circuit 14 will be described. The TOF images obtained from the TOF sensors 13A and 13A alone have low resolution. Thus, in the present embodiment, an example is described in which the processing circuit 14 increases the resolution and reconstructs high-density three-dimensional point cloud data. Some or all of the following processing performed by the processing circuit 14 as an "information processing unit" may instead be performed by an external device.
As described above, the three-dimensional point cloud data reconstructed by the image pickup apparatus 1 is output to an external apparatus such as a PC or the like through a portable recording medium or through communication, and is used to display a three-dimensional restoration model.
Therefore, compared with the case where the image pickup apparatus 1 itself displays the three-dimensional restoration model, the image pickup apparatus 1 can be made faster, smaller, and lighter, and thus more portable.
However, after the three-dimensional information has been reconstructed by an external device at a place far from where it was acquired, it may be found that the photographed image unintentionally contains the photographer himself or a tripod, or that the acquired three-dimensional information does not have the desired layout. In such a case, it takes time to revisit the place where the three-dimensional information was captured.
To solve this problem, it is conceivable to bring an external device (e.g., a PC) to the site, but in this case the advantages of high speed, small size, and light weight would be lost.
In addition, the captured three-dimensional information could be transmitted to an external device through a communication line and the reconstructed three-dimensional information received back from the external device. However, since the amount of three-dimensional information is large, the speed advantage is lost, and it is still difficult to determine visually whether the photographer himself or a tripod is included in the photographed image.
In particular, in the case of omnidirectional three-dimensional information, it is very difficult to visually determine whether a photographed image contains the photographer himself or his tripod.
In view of the foregoing, it is an object of the present embodiment to provide an image pickup apparatus 1 with which it can be easily determined in real time whether the photographer himself, a tripod, or the like appears in the photographed image, or whether the obtained three-dimensional information does not have the desired layout.
Fig. 3A to 3D are schematic diagrams showing a state of use of the image pickup apparatus according to the present embodiment.
In the state depicted in fig. 3A, the photographer M and the self-timer lever 1A supporting the image pickup apparatus 1 are not included in the omnidirectional image pickup range R, and the photographer M and the self-timer lever 1A are not present in the photographed omnidirectional image.
In the state depicted in fig. 3B, the photographer M is included in the omnidirectional image pickup range R, and the photographer M appears in the photographed omnidirectional image.
In the state depicted in fig. 3C, the tripod 1B supporting the image pickup apparatus 1 is included in the omnidirectional image pickup range R, and the tripod 1B appears in the omnidirectional picked-up image.
In the state depicted in fig. 3D, although the photographer M and the self-timer lever 1A supporting the image pickup apparatus 1 are not included in the omnidirectional image pickup range R and therefore do not appear in the omnidirectional image, the external light (e.g., sunlight or illumination) is strong, so something may be mistakenly judged to appear in the omnidirectional image even though nothing is actually there.
In the states depicted in fig. 3B and 3C, since the color, type, and appearance vary from object to object, it is likewise difficult to determine reliably whether an object appears in the photographed image.
In the above-described states of fig. 3A to 3D, it is difficult to determine whether a specific object (a nearby object such as the photographer or a tripod) is present based only on the ranging information image data output by the TOF sensors 13A and 13A, because it cannot be distinguished whether the specific object is actually present or whether the external light is simply too intense.
That is, when the charge storage amount associated with a specific pixel of the TOF sensors 13A and 13A is saturated, it is difficult to determine, based only on the outputs of the TOF sensors 13A and 13A, whether a specific object is present or whether the external light is too intense.
In view of the foregoing, it is another object of an embodiment of the present invention to provide an image pickup apparatus 1 capable of accurately determining whether a specific object (e.g., a photographer himself or a tripod) appears in a photographed image by distinguishing the influence of external light. The present embodiment also has another object of determining not only a nearby object but also whether a highly reflective object, a distant object, a low reflective object, or an image blur or the like appears in a captured image.
Fig. 4 is a schematic diagram showing an example of the configuration of the processing block of the processing circuit 14. The processing circuit 14 depicted in fig. 4 includes a control unit 141, an RGB image data acquisition unit 142, a monochrome processing unit 143, a TOF image data acquisition unit 144, a resolution increasing unit 145, a matching processing unit 146, a re-projection processing unit 147, a semantic segmentation unit 148, a parallax calculation unit 149, a three-dimensional reconstruction processing unit 150, a determination unit 160, a display control unit 170 as an example of an output unit, and a transmission and reception unit 180 as an example of an output unit. In fig. 4, solid arrows indicate signal flow, and broken arrows indicate data flow.
When receiving an on signal (image pickup start signal) from the image pickup switch 15, the control unit 141 outputs synchronization signals to the image pickup devices 11A and 11A, the light source units 12a and 12A, and the TOF sensors 13A and 13A, and controls the entire processing circuit 14. The control unit 141 first outputs an instruction signal for transmitting an ultrashort pulse to the light source units 12A and 12A, and simultaneously outputs an instruction signal for generating TOF image data to the TOF sensors 13A and 13A. The control unit 141 also outputs instruction signals for image pickup to the image pickup devices 11A and 11A. Note that the image pickup by the image pickup devices 11A and 11A may be performed during the period in which the light source units 12A and 12A emit light, or in a period immediately before or after the light source units 12A and 12A emit light.
The RGB image data acquiring unit 142 acquires the RGB image data captured by the image pickup devices 11A and 11A based on an image pickup instruction from the control unit 141, and outputs omnidirectional RGB image data. The monochrome processing unit 143 performs processing to obtain a common data type for the matching processing with the TOF image data obtained from the TOF sensors 13A and 13A. In the present embodiment, the monochrome processing unit 143 performs processing of converting the omnidirectional RGB image data into omnidirectional monochrome image data.
The TOF image data acquisition unit 144 acquires the TOF image data generated by the TOF sensors 13A and 13A based on an instruction signal for generating the TOF image data from the control unit 141, and outputs omnidirectional TOF image data.
The resolution increasing unit 145 regards the omnidirectional TOF image data as monochrome image data and increases its resolution. Specifically, the resolution increasing unit 145 replaces the distance value corresponding to each pixel of the omnidirectional TOF image data with a gray value of omnidirectional monochrome image data. Then, the resolution increasing unit 145 increases the resolution of the omnidirectional monochrome image data to the resolution of the omnidirectional RGB image data acquired from the image pickup devices 11A and 11A. The resolution increase, i.e., the conversion to higher resolution data, may be implemented by a general conversion process, for example. As another method of increasing the resolution, a plurality of frames of continuously generated omnidirectional TOF image data may be acquired and used to perform super-resolution processing that inserts distance data between neighboring points.
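A minimal sketch of this kind of up-conversion is shown below; the use of OpenCV's bilinear resize and the function name are illustrative assumptions, and the multi-frame super-resolution variant mentioned above is not shown:

```python
import numpy as np
import cv2  # assumed available; any bilinear resampler would do


def upsample_tof_to_rgb(tof_depth: np.ndarray, rgb_shape: tuple) -> np.ndarray:
    """Treat the low-resolution TOF distance map as a gray image and scale it
    to the RGB resolution (a simple stand-in for the 'resolution increasing'
    step; interpolation inserts distance values between neighboring points)."""
    h, w = rgb_shape[:2]
    return cv2.resize(tof_depth.astype(np.float32), (w, h),
                      interpolation=cv2.INTER_LINEAR)
```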
The matching processing unit 146 extracts feature amounts of textured portions from both the omnidirectional monochrome data obtained by increasing the resolution of the omnidirectional TOF image data and the omnidirectional monochrome image data obtained from the omnidirectional RGB image data, and performs matching processing based on the extracted feature amounts. For example, the matching processing unit 146 extracts edges from each monochrome image and performs matching processing between the extracted sets of edge information. Alternatively, the matching process may be implemented using a scale-invariant feature transform (SIFT) or the like, in which texture variations are represented as feature quantities. Note that the matching process is a process of searching for corresponding pixels.
A specific method of matching processing is, for example, block matching. Block matching is a method in which a block of M by M pixels (M is a positive integer) centered on a reference pixel is extracted from one image, blocks of the same size centered on candidate pixels are extracted from the other image, the similarity between the pixel values of the blocks is calculated, and the center pixel of the block with the highest similarity is taken as the corresponding pixel.
The similarity can be calculated in various ways. For example, a formula representing the normalized cross-correlation (NCC) coefficient may be used. The higher the value of the NCC coefficient, the higher the similarity, and the NCC coefficient is 1 if the pixel values of the blocks are identical to each other.
Further, because distance data relative to the non-textured regions may be included in the omnidirectional TOF image data, the matching process may include weighting based on each region. For example, when calculation is performed using a formula representing NCC coefficients, weighting may be performed on non-edge portions (non-texture regions).
Alternatively, a Selective Correlation Coefficient (SCC) or the like may be used instead of the formula representing the NCC coefficient.
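The following is a minimal sketch of NCC-based block matching along one image row, assuming the parallel stereo layout described earlier; the function names and the search window size are illustrative, and the weighting of non-texture regions and the SCC alternative are omitted:

```python
import numpy as np


def ncc(block_a: np.ndarray, block_b: np.ndarray) -> float:
    """Normalized cross-correlation of two equally sized blocks (1.0 = identical)."""
    a = block_a.astype(np.float64) - block_a.mean()
    b = block_b.astype(np.float64) - block_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0


def match_pixel(ref_img, search_img, y, x, m=7, search_radius=16):
    """Find the pixel on the same row of search_img whose m-by-m neighborhood
    is most similar to the neighborhood of (y, x) in ref_img.
    Assumes (y, x) lies far enough from the image border."""
    r = m // 2
    ref_block = ref_img[y - r:y + r + 1, x - r:x + r + 1]
    best_x, best_score = x, -1.0
    for dx in range(-search_radius, search_radius + 1):
        cx = x + dx
        if cx - r < 0 or cx + r + 1 > search_img.shape[1]:
            continue
        cand = search_img[y - r:y + r + 1, cx - r:cx + r + 1]
        score = ncc(ref_block, cand)
        if score > best_score:
            best_score, best_x = score, cx
    return best_x, best_score  # the parallax is x - best_x (sign depends on layout)
```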
The re-projection processing unit 147 performs processing of re-projecting the omnidirectional TOF image data, which represents the distance of each position in the measurement range, onto the two-dimensional coordinates (screen coordinate system) of the image pickup unit 11. Re-projection means identifying, for the three-dimensional points calculated by the TOF sensors 13A and 13A, the corresponding coordinates in the images of the image pickup devices 11A and 11A. The omnidirectional TOF image data gives the positions of the three-dimensional points in a coordinate system centered on the ranging information acquisition unit 13 (mainly the wide-angle lenses 13B and 13B). Therefore, the three-dimensional points indicated by the omnidirectional TOF image data are re-projected onto a coordinate system centered on the image pickup unit 11 (mainly the fisheye lenses 11B and 11B).
For example, the re-projection processing unit 147 translates the coordinates of the three-dimensional points of the omnidirectional TOF image data into coordinates of three-dimensional points centered on the image pickup unit 11, and then converts those three-dimensional point coordinates into coordinates in the two-dimensional coordinate system (screen coordinate system) of the omnidirectional RGB image data. In this way, the re-projection processing unit 147 associates the coordinates of the three-dimensional points of the omnidirectional TOF image data with the coordinates of the omnidirectional two-dimensional image information captured by the image pickup unit 11.
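A simplified sketch of this re-projection, for an equidistant cylindrical (equirectangular) screen coordinate system, is shown below; the translation-only transform and the function name are simplifying assumptions (a real setup may also need a rotation and lens calibration):

```python
import numpy as np


def reproject_tof_points(points_tof: np.ndarray, t_tof_to_cam: np.ndarray,
                         width: int, height: int) -> np.ndarray:
    """Map (N, 3) three-dimensional points given in the ranging (TOF) coordinate
    system into equirectangular pixel coordinates of the image pickup unit."""
    p = points_tof + t_tof_to_cam                    # re-center on the image pickup unit
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    lon = np.arctan2(x, z)                           # longitude in [-pi, pi]
    lat = np.arcsin(y / np.linalg.norm(p, axis=1))   # latitude in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * width            # screen (equirectangular) coordinates
    v = (0.5 - lat / np.pi) * height
    return np.stack([u, v], axis=1)
```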
The parallax calculation unit 149 calculates the parallax of each position from the shift between the corresponding pixels obtained by the matching process. In the parallax matching process, the search can be limited to pixels surrounding the re-projection coordinates obtained by the re-projection processing unit 147, which shortens the processing time and allows more detailed, higher-resolution ranging information to be obtained.
The segmentation data obtained by the semantic segmentation processing of the semantic segmentation unit 148 can also be used for the parallax matching process. In this case, still more detailed and higher-resolution ranging information can be obtained.
Further, the parallax matching process may be performed only at edges or only at portions having large feature amounts. For the other portions, propagation processing may be performed using, for example, the omnidirectional TOF image data together with features of the omnidirectional RGB image, a probabilistic method, or the like.
The semantic segmentation unit 148 uses a deep learning technique to provide the input image with segmentation labels indicative of objects relative to the measurement range. This further improves the reliability of the computation, as each pixel of the omnidirectional TOF image data can be bound to any one of a plurality of distance areas classified on a per distance basis.
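As an illustration of how segmentation labels can bind pixels to plausible distance ranges, consider the sketch below; the label identifiers and distance ranges are purely hypothetical, and the actual unit uses a learned model rather than a fixed table:

```python
import numpy as np

# Hypothetical per-label plausible distance ranges in meters (illustrative only).
LABEL_RANGES = {0: (0.2, 3.0),    # e.g. "furniture"
                1: (1.0, 50.0),   # e.g. "building"
                2: (0.2, 2.0)}    # e.g. "person"


def constrain_by_segmentation(depth: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Mark TOF distances that fall outside the distance range expected for the
    segmentation label of the same pixel as invalid (NaN)."""
    out = depth.astype(np.float32).copy()
    for label, (lo, hi) in LABEL_RANGES.items():
        mask = (labels == label) & ((depth < lo) | (depth > hi))
        out[mask] = np.nan
    return out
```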
The three-dimensional reconstruction processing unit 150 acquires omnidirectional RGB image data from the RGB image data acquisition unit 142, reconstructs omnidirectional three-dimensional data based on the ranging information output by the parallax calculation unit 149, and outputs a high-density omnidirectional three-dimensional point cloud in which color information is added with respect to each three-dimensional point. The three-dimensional reconstruction processing unit 150 is an example of a three-dimensional information determination unit that determines three-dimensional information.
The determination unit 160 acquires the omnidirectional RGB image data from the RGB image data acquisition unit 142, acquires from the re-projection processing unit 147 the omnidirectional TOF image data converted into the two-dimensional coordinate system of the omnidirectional RGB image data, determines based on these data whether a specific object appears in the captured image, and outputs the determination result to the display control unit 170.
The display control unit 170 acquires the omnidirectional RGB image data from the RGB image data acquisition unit 142, and displays two-dimensional image information on the display unit 20 based on the acquired omnidirectional RGB image data. The display control unit 170 displays a display image on the display unit 20, the display image including information indicating the determination result obtained from the determination unit 160, and including two-dimensional image information.
The display control unit 170 is an example of an output unit that outputs two-dimensional image information captured by the image pickup unit 11 in addition to three-dimensional information, and the display unit 20 is an example of an output destination of the two-dimensional image information.
The display control unit 170 may acquire the omnidirectional three-dimensional data from the three-dimensional reconstruction processing unit 150 and display three-dimensional information on the display unit 20. In this case, the display control unit 170 may select, depending on a predetermined condition, whether to display two-dimensional image information or three-dimensional information on the display unit 20. Accordingly, the display control unit 170 may output two-dimensional image information in addition to the three-dimensional information.
The transmitting and receiving unit 180 communicates with an external device by wired or wireless technology, and transmits (outputs) the omnidirectional three-dimensional data output from the three-dimensional reconstruction processing unit 150 and the omnidirectional two-dimensional image information output from the RGB image data acquiring unit 142, through the network 400, to the external device 300, which performs the three-dimensional restoration processing.
In the present embodiment, the two-dimensional image information captured by the image pickup unit 11 is either "original two-dimensional image information" used for generating "two-dimensional image data for display", or the "two-dimensional image data for display" itself. For example, in some cases the "two-dimensional image data for display" is generated from the "original two-dimensional image information" inside the image pickup apparatus 1, and in other cases the "original two-dimensional image information" is transmitted from the image pickup apparatus 1 to an external device, which then generates the "two-dimensional image data for display".
The transmitting and receiving unit 180 is an example of an output unit that outputs three-dimensional information, and the external device 300 is an example of an output destination of the three-dimensional information.
The transmitting and receiving unit 180 may transmit only the omni-directional three-dimensional data, and not the omni-directional two-dimensional image information. The transmitting and receiving unit 180 may include an interface circuit associated with a portable storage medium such as an SD card, a personal computer, or the like.
(operation of processing Circuit)
Fig. 5 is a flowchart showing an example of the operation of the processing circuit 14 of the image pickup apparatus 1. When the user turns on the image pickup switch 15 and inputs an image pickup instruction signal, the control unit 141 of the processing circuit 14 performs an operation of generating a high-density three-dimensional point cloud by the following method (an example of an image pickup processing method and an information processing method).
First, in step S1, the control unit 141 drives the light source units 12a and 12A, the TOF sensors 13A and 13A, and the image pickup devices 11A and 11A to capture an image of the measurement range. The control unit 141 drives the light source units 12A and 12A to emit infrared light (an example of a projection step), and the TOF sensors 13A and 13A receive the reflected light (an example of a light receiving step). Further, the image pickup devices 11A and 11A capture images of the measurement range at the start of driving of the light source units 12A and 12A, or in a period immediately before or after the start of driving (an example of an image pickup step).
Next, in step S2, the RGB image data acquiring unit 142 acquires RGB image data of the measurement range from the image pickup devices 11A and 11A. In step S3, the display control unit 170 acquires omnidirectional RGB image data from the RGB image data acquisition unit 142, and displays two-dimensional image information on the display unit 20 based on the acquired omnidirectional RGB image data.
The display control unit 170 displays two-dimensional image information of a part of the acquired omnidirectional RGB image data on the display unit 20, and changes the area of the two-dimensional image information displayed on the display unit 20 according to any one of various inputs by the user. The various inputs by the user may be implemented by operation switches other than the image pickup switch 15, or by the display unit 20 configured to function as an input unit of a touch panel or the like.
At this stage, the photographer may find that the two-dimensional image information displayed on the display unit 20 contains an image of the photographer himself or of a tripod, or that the desired layout has not been obtained.
Next, in step S4, the TOF image data acquisition unit 144 acquires, from the TOF sensors 13A and 13A, TOF image data representing the distance of each position in the two-dimensional region.
Next, in step S5, the monochrome processing unit 143 converts the RGB image data into monochrome image data. The TOF image data and the RGB image data have different data types (ranging data and RGB data, respectively) and cannot be matched directly. Therefore, each type of data is first converted into monochrome image data. For the TOF image data, the monochrome processing unit 143 converts the value representing the distance of each pixel into a monochrome image value before the resolution is increased by the resolution increasing unit 145.
Next, in step S6, the resolution increasing unit 145 increases the resolution of the TOF image data. Next, in step S7, the matching processing unit 146 extracts feature quantities of textured portions in each monochrome image, and performs matching processing using the extracted feature quantities.
Next, in step S8, the parallax calculation unit 149 calculates the parallax of each position from the shift between the corresponding pixels, and calculates the distance.
Next, the determination unit 160 acquires the omnidirectional RGB image data from the RGB image data acquisition unit 142, acquires from the re-projection processing unit 147 the omnidirectional TOF image data converted into the two-dimensional coordinate system of the RGB image data, determines based on these data sets whether a nearby object, as a specific object, appears in the captured image, and outputs the determination result to the display control unit 170 (an example of a determination step).
In step S9, the display control unit 170 displays the information indicating the determination result obtained from the determination unit 160 on the display unit 20, either by superimposing it on the two-dimensional image information or by making the two-dimensional image information include it (an example of a display step). In step S9, the determination unit 160 also determines whether a highly reflective object, a distant object, a low-reflection object, image blur, or the like is present, in addition to a nearby object, as a specific object, and outputs the determination result to the display control unit 170.
In step S10, the three-dimensional reconstruction processing unit 150 acquires RGB image data from the RGB image data acquisition unit 142, reconstructs three-dimensional data based on the ranging information output from the parallax calculation unit 149, and outputs a high-density three-dimensional point cloud in which color information is added to each three-dimensional point.
Next, in step S11 (an example of a three-dimensional information output step), the transmitting and receiving unit 180 transmits the three-dimensional data output from the three-dimensional reconstruction processing unit 150 and the two-dimensional image information output from the RGB image data acquiring unit 142 to the external apparatus 300 that performs the three-dimensional restoration process through the network 400.
The transmitting and receiving unit 180 may transmit the three-dimensional data output from the three-dimensional reconstruction processing unit 150 without transmitting the two-dimensional image information output from the RGB image data acquiring unit 142.
As described above, the image pickup apparatus 1 includes the image pickup unit 11 and the display control unit 170, and the display control unit 170 outputs the two-dimensional image information captured by the image pickup unit 11 in addition to the three-dimensional information.
Therefore, without using the three-dimensional information, the photographer can easily find out from the two-dimensional image information that the photographer himself, a tripod, or the like appears in the photographed image, or that the desired layout has not been obtained.
Thus, the three-dimensional information can be acquired again at the place where it is captured. This makes it unnecessary to revisit that place, in contrast to the case where the photographer discovers only after leaving the place that the photographer himself, a tripod, or the like appears in the photographed image, or that three-dimensional information with the desired layout has not been acquired.
The three-dimensional information includes omnidirectional three-dimensional information. Even when omnidirectional three-dimensional information is acquired, it is difficult to notice from the three-dimensional information itself that the photographer himself, a tripod, or the like appears in the photographed image or that three-dimensional information with the desired layout has not been acquired, whereas it is easy to notice this from the two-dimensional image information captured by the image pickup unit 11.
The two-dimensional image information G is output in step S3 before the transmitting and receiving unit 180 transmits (outputs) the three-dimensional information in step S11, and the display control unit 170 outputs the two-dimensional image information G in step S3 before the three-dimensional reconstruction processing unit 150 determines the three-dimensional information in step S10.
Therefore, before the three-dimensional information is checked, it can be determined from the two-dimensional image information whether the photographer himself, a tripod, or the like appears in the photographed image, or whether the desired layout has not been obtained.
The display control unit 170 displays two-dimensional image information on the display unit 20. The image pickup apparatus 1 includes a display unit 20.
Therefore, it is possible to easily find out from the two-dimensional image information displayed on the display unit 20 that the photographer himself, a tripod, or the like appears in the photographed image, or that a desired layout is not acquired.
The display control unit 170 outputs the two-dimensional image information to the display unit 20, which is a different output destination from the external device 300 to which the transmitting and receiving unit 180 outputs the three-dimensional information.
Therefore, without checking the three-dimensional information output to the external device 300, it is possible to find, from the two-dimensional image information output to the display unit 20, that the photographer himself, a tripod, or the like appears in the photographed image, or that three-dimensional information with the desired layout has not been acquired.
The image pickup apparatus 1 includes a three-dimensional reconstruction processing unit 150 that determines three-dimensional information based on the output of the ranging information acquisition unit 13. The three-dimensional reconstruction processing unit 150 determines three-dimensional information based on the outputs of the ranging information acquisition unit 13 and the two-dimensional image information.
Therefore, without checking the three-dimensional information determined by the three-dimensional reconstruction processing unit 150, it is possible to find out from the two-dimensional image information captured by the image pickup unit 11 that the photographer himself, a tripod, or the like is present in the photographed image, or that a desired layout is not acquired.
Fig. 6A and 6B are flowcharts of generation of omnidirectional image data according to the present embodiment.
Fig. 6A is a flowchart showing a process of generating omnidirectional RGB image data corresponding to step S2 described in fig. 5.
In step S201, the RGB image data acquiring unit 142 acquires two RGB image data sets in the fisheye image format.
In step S202, the RGB image data acquiring unit 142 converts each RGB image data set into data in the equidistant cylindrical (equirectangular) image format. The RGB image data acquisition unit 142 converts the two RGB image data sets into equidistant cylindrical image format data based on the same coordinate system, which facilitates the subsequent image connection. If necessary, the RGB image data may be converted into image data in one or more image formats other than the equidistant cylindrical image format. For example, the RGB image data may be converted into data having image coordinates obtained by perspective projection onto an arbitrary plane or perspective projection onto each surface of an arbitrary polyhedron.
The equidistant cylindrical image format will now be described. The equidistant cylindrical image format is an image format capable of expressing an omnidirectional image; it is the format of equirectangular images generated by equidistant cylindrical (equirectangular) projection. Equidistant cylindrical projection is a method of expressing a three-dimensional direction (e.g., the longitude and latitude of a celestial sphere) using two variables, and provides a two-dimensional expression in which longitude and latitude are perpendicular to each other. Thus, an equirectangular image is an image generated by the equidistant cylindrical projection and expressed by coordinates in which the two angular variables of a spherical coordinate system are taken as the variables of the two axes.
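A minimal sketch of the conversion performed in step S202 is shown below for an ideal equidistant fisheye model; the function name, the exactly 180-degree field of view, and the absence of lens calibration are simplifying assumptions:

```python
import numpy as np


def equirect_to_fisheye_map(out_w, out_h, fish_w, fish_h, fov_deg=180.0):
    """For every pixel of a half equirectangular output image, compute the source
    pixel in a fisheye image whose radius is proportional to the angle from the
    optical axis (equidistant model)."""
    j, i = np.meshgrid(np.arange(out_w), np.arange(out_h))
    lon = (j / out_w - 0.5) * np.pi                  # -90 to +90 degrees of longitude
    lat = (0.5 - i / out_h) * np.pi                  # +90 to -90 degrees of latitude
    x = np.cos(lat) * np.sin(lon)                    # unit direction vector
    y = np.sin(lat)                                  # (z is the optical axis)
    z = np.cos(lat) * np.cos(lon)
    theta = np.arccos(np.clip(z, -1.0, 1.0))         # angle from the optical axis
    r = theta / np.radians(fov_deg / 2.0)            # normalized fisheye image radius
    phi = np.arctan2(y, x)
    u = fish_w / 2.0 + r * (fish_w / 2.0) * np.cos(phi)
    v = fish_h / 2.0 - r * (fish_h / 2.0) * np.sin(phi)
    return u.astype(np.float32), v.astype(np.float32)
```

The resulting maps can then be fed to an image remapping routine (for example cv2.remap), and the two converted halves connected together as in step S203.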
In step S203, the RGB image data acquiring unit 142 connects the two RGB image data sets generated in step S202 together, and generates one omni-directional RGB image data set. The two RGB image data sets used each cover an area with a total viewing angle exceeding 180 degrees. Thus, the omni-directional RGB image data generated by properly concatenating the two RGB image data sets may cover the entire celestial region.
Further, the connection processing in step S203 may connect a plurality of images together using a known technique, and the method is not particularly limited.
Fig. 6B is a flowchart showing generation processing of omnidirectional TOF image data corresponding to step S4 described in fig. 5.
In step S401, the TOF image data acquisition unit 144 acquires two fisheye image-format ranging image data sets.
In step S402, the TOF image data acquisition unit 144 converts the two fisheye-image-format TOF image data sets into equidistant cylindrical image format data. As described above, the equidistant cylindrical image format is a format capable of expressing an omnidirectional image. In step S402, the two TOF image data sets are converted into equidistant cylindrical image format data based on the same coordinate system, which facilitates the image connection in step S403.
In step S403, the TOF image data acquisition unit 144 connects the two TOF image data sets generated in step S402 together, and generates one omni-directional TOF image data set. The two TOF image datasets used each cover an area with a total view angle exceeding 180 degrees. Thus, omnidirectional TOF image data generated by properly connecting two TOF image data sets together may cover the entire celestial region.
Further, the connection processing in step S403 may connect a plurality of images using a known technique, and the method is not particularly limited.
Fig. 7 is a flowchart showing a process of identifying a nearby object according to the present embodiment, that is, a process of determining whether a nearby object appears in the captured image, and corresponds to step S9 described in fig. 5.
In step S801, the determination unit 160 determines, based on the omnidirectional TOF image data acquired from the re-projection processing unit 147, whether the omnidirectional TOF image data contains a pixel whose charge storage amount is saturated, that is, for example, greater than or equal to a predetermined value.
In step S802, when a pixel whose charge storage amount is saturated exists in step S801, the determination unit 160 determines, based on the omnidirectional RGB image data acquired from the RGB image data acquisition unit 142, whether the charge storage amount is also saturated (for example, greater than or equal to a predetermined value) at the pixel of the omnidirectional RGB image data having the same coordinates as the pixel found to be saturated in step S801.
When the charge storage amount is saturated in step S802, the determination unit 160 determines that the saturation found in step S801 is caused by external light (e.g., sunlight or illumination), and outputs error information to the display control unit 170. In step S803, the display control unit 170 displays, on the display unit 20, a display image including the error information and the two-dimensional image information, based on the error information acquired from the determination unit 160.
When the charge storage amount is not saturated in step S802, the determination unit 160 determines that the saturation found in step S801 is caused by the presence of a nearby object, and outputs the coordinate position information of the saturated pixel to the display control unit 170. In step S804, the display control unit 170 displays, on the display unit 20, a display image including identification information for identifying the nearby object and the two-dimensional image information, based on the pixel coordinate position information acquired from the determination unit 160.
In step S805, when there is no pixel whose charge storage amount is saturated in step S801, the determination unit 160 determines whether there is a pixel having ranging information of 0.5m or less in the omnidirectional TOF image data based on the omnidirectional TOF image data acquired from the re-projection processing unit 147.
When there is no pixel having ranging information of 0.5m or less in step S805, the determination unit 160 ends the process.
When there is a pixel having ranging information of 0.5 m or less in step S805, the determination unit 160 proceeds to step S804 described above. The determination unit 160 determines that the ranging information of 0.5 m or less found in step S805 is caused by the presence of a nearby object, and outputs the coordinate position information of that pixel to the display control unit 170. In step S804, the display control unit 170 displays, on the display unit 20, a display image including identification information for identifying the nearby object and the two-dimensional image information, based on the pixel coordinate position information acquired from the determination unit 160.
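The decision flow of steps S801 to S805 can be summarized per pixel as in the sketch below. The saturation level, the 0.5 m limit, and the per-pixel treatment are illustrative assumptions; the flowchart itself branches once per image rather than per pixel.

```python
import numpy as np

SATURATION_LEVEL = 4095   # assumed full-well / ADC maximum
NEAR_LIMIT_M = 0.5        # nearby-object distance threshold from the text

def classify_pixels(tof_charge, rgb_charge, tof_range_m):
    """Return per-pixel labels: 0 = ok, 1 = external light, 2 = nearby object."""
    labels = np.zeros(tof_charge.shape, dtype=np.uint8)
    tof_sat = tof_charge >= SATURATION_LEVEL                  # step S801
    rgb_sat = rgb_charge >= SATURATION_LEVEL                  # step S802
    labels[tof_sat & rgb_sat] = 1    # saturated by sunlight/illumination (S803)
    labels[tof_sat & ~rgb_sat] = 2   # saturated by a nearby object (S804)
    labels[~tof_sat & (tof_range_m <= NEAR_LIMIT_M)] = 2      # step S805 -> S804
    return labels
```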
As described above, when it is determined that a nearby object exists, the display control unit 170 superimposes the identification information on the two-dimensional image information or includes the identification information in the two-dimensional image information; when it is determined that no nearby object exists, the display control unit 170 neither superimposes the identification information on the two-dimensional image information nor includes it in the two-dimensional image information.
That is, the display control unit 170 causes the display unit 20 to perform display in different manners depending on the presence or absence of the adjacent object.
The display control unit 170 displays, on the display unit 20, a display image that includes identification information for identifying the nearby object based on the coordinate position information of the pixels acquired from the determination unit 160, and that includes the two-dimensional image information.
That is, the display control unit 170 causes the display unit 20 to perform different display at the position of the display unit 20 corresponding to the position of the adjacent object.
Fig. 8 is a schematic diagram showing display contents of a display unit according to an embodiment.
Fig. 8 is a schematic diagram corresponding to step S2 depicted in fig. 5 and steps S803 and S804 depicted in fig. 7.
The display control unit 170 displays the two-dimensional image information G on the display unit 20. Under control of the display control unit 170, the display unit 20 displays a display image including identification information G1 and G2 for identifying nearby objects (e.g., a finger and a tripod), error information G3, and the two-dimensional image information G. The error information G3 may be expressed with a mark such as "sun or illumination", as depicted in fig. 8.
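A display image of this kind could be composed as in the sketch below, which overlays identification marks and the error text on the two-dimensional image. The box coordinates are assumed to come from the saturated-pixel positions reported by the determination unit; the drawing style, function name, and use of OpenCV are illustrative assumptions only.

```python
import cv2

def compose_display_image(rgb_2d, nearby_boxes, show_external_light):
    """Overlay identification marks (G1, G2) and error text (G3) on the
    two-dimensional image G, mirroring the display content of Fig. 8."""
    img = rgb_2d.copy()
    for (x0, y0, x1, y1) in nearby_boxes:                     # G1, G2
        cv2.rectangle(img, (x0, y0), (x1, y1), (0, 0, 255), 2)
    if show_external_light:                                   # G3
        cv2.putText(img, "sun or illumination", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
    return img
```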
As described above, the image pickup apparatus 1 includes the image pickup unit 11 for capturing an object, the projection unit 12 for projecting light to the object, the ranging information acquisition unit 13 for receiving light reflected from the object, and the display control unit 170 for causing the display unit 20 to perform display differently depending on the presence or absence of an object (e.g., a neighboring object determined based on the output of the image pickup unit 11).
Thus, the photographer can accurately find himself or herself or a nearby object (e.g., a tripod) appearing in the photographed image, distinguishing the photographer or the nearby object from the influence of external light.
The image pickup apparatus 1 includes a display unit 20. This allows the photographer to determine whether or not a nearby object is present in the photographed image.
The display control unit 170 causes the display unit 20 to perform different display at the position of the display unit 20 corresponding to the position of the adjacent object. This allows the photographer to recognize the position of the neighboring object appearing in the photographed image.
The display control unit 170 displays the image information G captured by the image pickup unit 11 on the display unit 20, and displays a display image including identification information G1 and G2 for identifying the adjacent object and the image information on the display unit 20. This ensures that the photographer can identify the position of the neighboring object appearing in the photographed image.
The image pickup apparatus 1 includes a determination unit 160 that determines that a nearby object is present when the charge storage amount of a pixel with respect to the light received by the ranging information acquisition unit 13 is saturated (for example, greater than or equal to a predetermined value) and the charge storage amount of the pixel of the image pickup unit 11 having the same coordinates is not saturated (for example, not greater than or equal to the predetermined value).
This allows the photographer to accurately find out that an adjacent object appears in the photographed image, thereby distinguishing the object from the influence of external light.
Fig. 9 is a view showing the appearance of an image pickup apparatus according to a variation of the present embodiment; fig. 10 is a schematic diagram showing the configuration of a processing block of a processing circuit according to the present variation.
In this variation, the display control unit 170 acquires omnidirectional RGB image data from the RGB image data acquisition unit 142, and displays two-dimensional image information on the display unit 520 of the display device 500 based on the acquired omnidirectional RGB image data. The display unit 520 is an example of a two-dimensional image information output destination.
Therefore, it can be easily found from the two-dimensional image information displayed on the display unit 520 that the photographer himself, a tripod, or the like appears in the photographed image, or that a desired layout is not acquired.
The display control unit 170 outputs two-dimensional image information to the display unit 520, unlike the external device 300 to which the transmission and reception unit 180 outputs three-dimensional information.
Therefore, without checking the three-dimensional information output to the external device 300, it is possible to confirm from the two-dimensional image information output to the display unit 520, which is different from the external device 300, that the photographer himself or herself, the tripod, or the like does not appear in the photographed image, and thus to obtain three-dimensional information having the desired layout.
The display control unit 170 may acquire three-dimensional data of the omni-directional image from the three-dimensional reconstruction processing unit 150 and display the three-dimensional information on the display unit 520. Specifically, the display control unit 170 may select a case where two-dimensional image information is displayed on the display unit 520 or a case where three-dimensional information is displayed on the display unit 520 according to a predetermined condition. Accordingly, the display control unit 170 may output two-dimensional image information in addition to the three-dimensional information.
The display control unit 170 displays a display image including the error information based on the error information acquired from the determination unit 160 and the two-dimensional image information on the display unit 520.
The display control unit 170 displays a display image including identification information for identifying a neighboring object based on the coordinate position information of the pixel acquired from the determination unit 160, and including two-dimensional image information on the display unit 520.
That is, the display control unit 170 causes the display unit 520 to perform display in different manners depending on the presence or absence of the nearby object determined based on the output of the ranging information acquisition unit 13 and the output of the image pickup unit 11.
Thus, the photographer can accurately find himself or herself or a nearby object (e.g., a tripod) appearing in the photographed image, distinguishing the photographer or the nearby object (e.g., the tripod) from the influence of the external light.
The display control unit 170 causes the display unit 520 to perform different displays at positions of the display unit 520 corresponding to positions of adjacent objects. This allows the photographer to recognize the position of the neighboring object appearing in the photographed image.
The display control unit 170 displays the image information captured by the image pickup unit 11 on the display unit 520, and displays a display image including identification information for identifying the nearby object on the display unit 520. This ensures that the photographer can recognize the position of the nearby object appearing in the photographed image.
Fig. 11 is a view showing the appearance of an image pickup apparatus according to a second variation of the embodiment of the present invention. Fig. 12 is a diagram showing a configuration of a processing block of a processing circuit according to the second modification.
In the second modification depicted in fig. 11, the image pickup apparatus 1 includes a plurality of display units 20A and 20B instead of the display unit 20 shown in fig. 1. The display units 20A and 20B include LEDs or the like, and blink or light up continuously according to the output signal of the processing circuit 14.
The display unit 20A is disposed on a first surface, namely the front surface, of the housing 10, and the display unit 20B is disposed on a second surface, namely the rear surface, of the housing 10.
In the second modification depicted in fig. 12, the display control unit 170 displays information indicating the determination result acquired from the determination unit 160 on the display units 20A and 20B. For example, when a nearby object exists on each side of the image pickup apparatus 1, the display units 20A and 20B may blink red.
The transmitting and receiving unit 180 transmits (outputs) the omnidirectional two-dimensional image information output from the RGB image data acquiring unit 142 to the display device 500 through the network 400. The display device 500 is an example of an output destination that outputs two-dimensional image information.
That is, in the second variation, in step S3 shown in fig. 5, the transmitting and receiving unit 180 acquires the omnidirectional RGB image data from the RGB image data acquiring unit 142, and transmits (outputs) the two-dimensional image information to the display device 500 based on the acquired omnidirectional RGB image data.
The transmitting and receiving unit 510 of the display device 500 receives the two-dimensional image information transmitted from the transmitting and receiving unit 180 of the image pickup device 1.
The display control unit 530 of the display device 500 displays the two-dimensional image information received by the transmitting and receiving unit 510 on the display unit 520. The display device 500 including the display control unit 530 is an example of an information processing device.
As described above, the image pickup apparatus 1 includes the image pickup unit 11 and the transmitting and receiving unit 180, and the transmitting and receiving unit 180 outputs two-dimensional image information captured by the image pickup unit 11 in addition to three-dimensional information.
Therefore, it can be easily found from the two-dimensional image information that the photographer himself, a tripod, or the like appears in the photographed image, or that the desired layout is not acquired with respect to the three-dimensional information without checking the three-dimensional information.
Thus, if necessary, the three-dimensional information may be acquired again at the place where the three-dimensional information is acquired. Therefore, it is unnecessary to access the place where the three-dimensional information is acquired again, as compared with a case where the photographer himself, the tripod, or the like is found to appear in the photographed image after leaving the place where the three-dimensional information is acquired, or the desired three-dimensional information layout is not obtained.
The transmitting and receiving unit 180 transmits (outputs) the two-dimensional image information G in step S3, and then transmits (outputs) the three-dimensional information in step S11. Before the three-dimensional reconstruction processing unit 150 determines three-dimensional information in step S10, the transmitting and receiving unit 180 transmits (outputs) the two-dimensional image information G in step S3.
Therefore, before checking the three-dimensional information, it may be determined that the photographer himself, a tripod, or the like appears in the photographed image based on the two-dimensional image information, or that a desired layout is not acquired.
The transmitting and receiving unit 180 transmits the two-dimensional image information to the display device 500, and the display device 500 displays the two-dimensional image information on the display unit 520.
Therefore, it can be easily found from the two-dimensional image information displayed on the display unit 520 that the photographer himself, a tripod, or the like appears in the photographed image or that a desired layout is not acquired.
The transmitting and receiving unit 180 transmits the two-dimensional image information to the display device 500, unlike the external device 300 to which the three-dimensional information is output.
Therefore, it is possible to find that the photographer himself, a tripod, or the like appears in the photographed image or that a desired layout is not acquired with respect to the three-dimensional information from the two-dimensional image information output to the display unit 520 of the display device 500 other than the external device 300 without checking the three-dimensional information output to the external device 300.
The transmitting and receiving unit 180 may transmit the three-dimensional information to the display device 500. Specifically, the transmitting and receiving unit 180 may select a case of transmitting two-dimensional image information to the display device 500 or a case of transmitting three-dimensional information to the display device 500 according to a predetermined condition. Accordingly, the transmitting and receiving unit 180 may transmit two-dimensional image information in addition to three-dimensional information to the display device 500.
Fig. 13 is a flowchart of identifying a nearby object according to the second variation, that is, a flowchart showing a process of determining whether a nearby object appears in the captured image, and corresponds to step S9 described in fig. 5.
In step S811, the determination unit 160 determines, based on the omnidirectional TOF image data acquired from the re-projection processing unit 147, whether the omnidirectional TOF image data contains a pixel whose charge storage amount is saturated, that is, for example, greater than or equal to a predetermined value.
In step S812, when a pixel whose charge storage amount is saturated exists in step S811, the determination unit 160 determines, based on the omnidirectional RGB image data acquired from the RGB image data acquisition unit 142, whether the charge storage amount is also saturated (for example, greater than or equal to a predetermined value) at the pixel of the omnidirectional RGB image data having the same coordinates as the pixel found to be saturated in step S811.
When the charge storage amount is saturated in step S812, the determination unit 160 determines that the saturation found in step S811 is caused by external light, and outputs error information to the display control unit 170. In step S813, the display control unit 170 displays the error information on the display units 20A and 20B based on the error information acquired from the determination unit 160.
When the charge storage amount is not saturated in step S812, the determination unit 160 determines that the saturation found in step S811 is caused by the presence of a nearby object, and outputs the coordinate position information of the saturated pixel to the display control unit 170. In step S814, the display control unit 170 determines whether the coordinate position information indicates the front side of the housing 10 based on the coordinate position information of the pixel acquired from the determination unit 160.
In step S815, when there is no pixel saturated in charge storage amount in step S811, the determination unit 160 determines whether there is any pixel having ranging information of 0.5m or less in the omnidirectional TOF image data based on the omnidirectional TOF image data acquired from the re-projection processing unit 147.
When there is no pixel having ranging information of 0.5m or less in step S815, the determination unit 160 ends the process.
When there is a pixel having ranging information of 0.5 m or less in step S815, the determination unit 160 proceeds to step S814 described above, determines that the ranging information of 0.5 m or less found in step S815 is caused by the presence of a nearby object, and outputs the coordinate position information of the pixel having ranging information of 0.5 m or less to the display control unit 170. The display control unit 170 determines whether the coordinate position information indicates the front side of the housing 10 based on the pixel coordinate position information acquired from the determination unit 160.
In step S816, when it is determined in step S814 that the coordinate position information indicates the front side, the display control unit 170 blinks the display unit 20A arranged on the front side of the housing 10.
In step S817, if it is not determined in step S814 that the coordinate position information indicates the front side, the display control unit 170 blinks the display unit 20B arranged on the rear side of the housing 10.
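Steps S814 to S817 could be sketched as below. The mapping from an equirectangular x coordinate to "front" or "rear" and the LED objects with a blink() method are assumptions made for illustration; the text only states that the decision is based on the coordinate position information.

```python
def blink_for_nearby_object(pixel_x, image_width, front_led, rear_led):
    """Decide which indicator to blink from the pixel position of the
    detected nearby object (steps S814-S817)."""
    longitude = (pixel_x / image_width) * 360.0 - 180.0   # degrees
    if -90.0 <= longitude <= 90.0:     # assumed to face the front surface
        front_led.blink()              # step S816 (display unit 20A)
    else:
        rear_led.blink()               # step S817 (display unit 20B)
```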
As described above, when it is determined that a nearby object exists, the display control unit 170 causes the display unit 20A or the display unit 20B to blink, and when it is determined that no nearby object exists, the display control unit 170 does not cause either the display unit 20A or the display unit 20B to blink.
That is, the display control unit 170 causes the display unit 20A and the display unit 20B to perform different displays depending on the presence or absence of the nearby object.
Therefore, the photographer can accurately find that the photographer himself, a neighboring object (e.g., a tripod, etc.), etc. appear in the photographed image, and distinguish the photographer himself, the neighboring object (e.g., a tripod, etc.), etc. from the influence of the external light.
The display control unit 170 blinks the display unit 20A or the display unit 20B based on the pixel coordinate position information acquired from the determination unit 160.
That is, the display control unit 170 performs display at a display unit at a different position (i.e., the display unit 20A or the display unit 20B) according to the position of the nearby object. This allows the photographer to identify the position of the nearby object appearing in the photographed image.
The display control unit 170 causes whichever of the display units 20A and 20B is nearer to the nearby object to perform a different display depending on the presence or absence of the nearby object. This allows the photographer to recognize the position of the specific object appearing in the photographed image.
Fig. 14 is a schematic diagram showing the configuration of an image pickup apparatus according to a third variation of the embodiment of the present invention.
In a third variation depicted in fig. 14, the image pickup apparatus 1 includes, in addition to the configuration depicted in fig. 2, another image pickup unit 111 including other image pickup devices 111A and other fisheye lenses (wide angle lenses) 111B and 111B.
In the third variation, the RGB image pickup unit 11 and the other image pickup unit 111 are arranged along the same baseline. In this case, multi-view processing can be performed in the processing circuit 14. That is, by driving the image pickup unit 11 and the other image pickup unit 111 at the same time, separated by a predetermined distance on one plane, RGB images from two viewpoints are obtained. This allows the parallax calculated from the two RGB images to be used, further improving the distance accuracy over the entire measurement range.
Specifically, since the RGB image pickup unit 11 and the other image pickup unit 111 are provided, multi-baseline stereo (MBS) using SSD, EPI processing, or the like can be used as in conventional parallax calculation. Therefore, with this configuration, the reliability of the parallax improves, and high spatial resolution and accuracy are obtained.
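As context for the SSD-based parallax calculation mentioned above, the sketch below shows minimal SSD block matching between two rectified grayscale views; multi-baseline stereo repeats this over several camera pairs and sums the costs before taking the minimum. Parameter values and the use of OpenCV are illustrative assumptions, not the patented processing itself.

```python
import numpy as np
import cv2

def ssd_disparity(left, right, max_disp=64, win=5):
    """Per-pixel disparity by SSD block matching on rectified grayscale images."""
    h, w = left.shape
    cost = np.full((h, w, max_disp), np.inf, dtype=np.float32)
    left_f = left.astype(np.float32)
    right_f = right.astype(np.float32)
    for d in range(max_disp):
        shifted = np.zeros_like(right_f)
        shifted[:, d:] = right_f[:, : w - d]
        diff2 = (left_f - shifted) ** 2
        # sum of squared differences over the matching window
        cost[:, :, d] = cv2.boxFilter(diff2, -1, (win, win), normalize=False)
    return np.argmin(cost, axis=2).astype(np.float32)
```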
As described above, the image pickup apparatus 1 includes the other image pickup unit 111, and the three-dimensional reconstruction processing unit 150 determines three-dimensional information based on the output of the ranging information acquisition unit 13, two-dimensional image information, and other two-dimensional image information captured by the other image pickup unit 111.
Alternatively, the image pickup apparatus 1 may include the other image pickup unit 111 and a three-dimensional information determination unit that determines three-dimensional information based on the two-dimensional image information and the other two-dimensional image information captured by the other image pickup unit 111, without using the output of the ranging information acquisition unit 13.
Therefore, without checking the three-dimensional information determined by the three-dimensional reconstruction processing unit 150 based on the two-dimensional image information, it can be found from the two-dimensional image information captured by the image pickup unit 11 that the photographer himself or herself, a tripod, or the like appears in the photographed image, or that the desired layout of the three-dimensional information is not acquired.
Fig. 15 is a flowchart for identifying a highly reflective object according to an embodiment of the present invention, and is a flowchart for showing a process for determining whether a highly reflective object is present in a captured image, corresponding to step S9 depicted in fig. 5.
In step S21, the determination unit 160 determines, based on the omnidirectional TOF image data acquired from the re-projection processing unit 147, whether the omnidirectional TOF image data contains a pixel whose charge storage amount is saturated, that is, for example, greater than or equal to a predetermined value.
In step S22, when a pixel whose charge storage amount is saturated is found in step S21, the determination unit 160 determines, based on the omnidirectional RGB image data acquired from the RGB image data acquisition unit 142, whether the pixel of the omnidirectional RGB image data having the same coordinates as the saturated pixel corresponds to reference information on a highly reflective object. As the reference information on the highly reflective object, a model image may be used, and the degree of consistency between the RGB image data and the model image may be determined through an image recognition process. Alternatively, parameters such as a spectrum or a color may be used as the reference information and compared with the RGB image data against a predetermined threshold to determine the degree of consistency. The reference information may also be stored as table data, or a learning model may be used.
The processing circuit 14 stores an image of a highly reflective object (e.g., an image of a metal or mirror) as model image information. In step S22, the determination unit 160 uses a determiner (e.g., a determiner using artificial intelligence techniques) to determine whether the acquired image corresponds to any of the stored images of highly reflective objects.
In step S23, when it is determined that the image acquired in step S22 corresponds to any one of the stored images of the highly reflective object, the determination unit 160 outputs the coordinate position information of the pixel found in step S22 to the display control unit 170. The display control unit 170 displays a display image on the display unit 20 or 520 based on the pixel coordinate position information acquired from the determination unit 160 (step S24), the display image including identification information for identifying the highly reflective object and the two-dimensional image information, and ends the process.
Step S22 and step S23 are examples of the determination step, and step S24 is an example of the display step.
In step S25, when it is determined in step S22 that the acquired image does not correspond to any of the stored images of highly reflective objects, the determination unit 160 proceeds to the determination of a nearby object and executes the nearby-object determination flow depicted in fig. 7.
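The Fig. 15 flow (steps S21 to S25) can be summarized as below. The function `matches_reflective_model` stands in for the image-recognition or learned determiner described in the text and is an assumed placeholder, as is the saturation level.

```python
def check_highly_reflective(tof_charge, rgb_patch, matches_reflective_model,
                            saturation_level=4095):
    """Classify one saturated location following the Fig. 15 flow."""
    if tof_charge < saturation_level:           # step S21: not saturated
        return "no_saturation"
    if matches_reflective_model(rgb_patch):     # step S22: matches mirror/metal model
        return "highly_reflective_object"       # steps S23/S24: report and display
    return "run_nearby_object_flow"             # step S25: fall back to the Fig. 7 flow
```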
As described above, the image pickup apparatus 1 includes the determination unit 160 for determining whether or not a highly reflective object exists based on both the output of the ranging information acquisition unit 13 and the output of the image pickup unit 11; and a display control unit 170 for causing the display unit 20 or 520 to perform different displays depending on the presence or absence of the highly reflective object.
This allows the photographer to accurately find highly reflective objects (e.g., mirrors), if any, present in the captured image, distinguishing the highly reflective objects from the effects of nearby objects or external light.
The image pickup apparatus 1 includes a display unit 20. This allows the photographer to ensure that highly reflective objects appear in the photographed image.
The display control unit 170 causes the display unit 20 or 520 to perform different display at the position of the display unit 20 or 520 corresponding to the position of the highly reflective object. This allows the photographer to recognize the position of the highly reflective object.
Similar to the case of the nearby object described above with reference to fig. 13, the display unit 20 includes the plurality of display units 20A and 20B, and the display control unit 170 causes whichever of the display units 20A and 20B is nearer to the highly reflective object to perform a different display depending on the presence or absence of the object. This allows the photographer to identify the position of the highly reflective object.
Similar to the case of the neighboring object described above with reference to fig. 3, the display control unit 170 displays the image information G captured by the image pickup unit 11 on the display unit 20 or 520, and displays a display image including identification information for identifying the highly reflective object and the image information G on the display unit 20 or 520. This allows the photographer to determine the position of the highly reflective object.
The determination unit 160 determines that a highly reflective object is present when the charge storage amount of a pixel with respect to the light received by the ranging information acquisition unit 13 is saturated (for example, greater than or equal to a predetermined value) and the image information captured by the image pickup unit 11 coincides with the model image information serving as the reference information on the highly reflective object.
This allows the photographer to accurately find out that a highly reflective object appears in the photographed image, distinguishing the highly reflective object from an adjacent object or the influence of external light.
The image pickup apparatus 1 acquires ranging information with respect to an object based on the light received by the ranging information acquisition unit 13. In this case, the photographer can understand that the reason why the required ranging information cannot be acquired is not a nearby object or external light, but a highly reflective object.
The image pickup apparatus 1 includes a transmitting and receiving unit 180 that outputs three-dimensional information determined based on the ranging information acquired from the ranging information acquisition unit 13. In this case, the photographer can understand that the reason why the required three-dimensional information cannot be acquired is a highly reflective object, not a nearby object or external light.
Fig. 16 is a flowchart showing the determination with respect to the distant object and the low-reflection object in the present embodiment, and is a flowchart depicting a flow of determining whether the distant object or the low-reflection object appears in the captured image, corresponding to step S9 depicted in fig. 5.
In step S41, the determination unit 160 determines whether there is a pixel in the omnidirectional TOF image data whose charge storage amount is less than or equal to a threshold value at which ranging information can be acquired based on the omnidirectional TOF image data acquired from the re-projection processing unit 147.
In step S42, when there is no pixel whose charge storage amount is less than or equal to the threshold value in step S41, the determination unit 160 determines whether there is a pixel having ranging information of 10 m or more in the omnidirectional TOF image data based on the omnidirectional TOF image data acquired from the re-projection processing unit 147. When there is a pixel having ranging information of 10 m or more, the determination unit 160 determines that the pixel corresponds to a distant object, and outputs the coordinate position information of the pixel to the display control unit 170.
The display control unit 170 displays a display image on the display unit 20 or 520 based on the pixel coordinate position information acquired from the determination unit 160 (step S43), the display image including identification information for identifying the distant object and the two-dimensional image information, and ends the process.
When there is no pixel having ranging information of 10m or more in step S42, the determination unit 160 ends the processing.
In step S44, when there is a pixel whose charge storage amount is less than or equal to the threshold value in step S41, the determination unit 160 determines, based on the omnidirectional RGB image data acquired from the RGB image data acquisition unit 142, whether the charge storage amount is less than or equal to a threshold value at which an object can be identified, for the pixel of the omnidirectional RGB image data having the same coordinates as the pixel whose charge storage amount is less than or equal to the threshold value in step S41.
When it is determined in step S44 that the charge storage amount is less than or equal to the threshold value at which the object can be identified, the determination unit 160 determines that the pixel corresponds to the low reflection object, and outputs the coordinate position information of the pixel to the display control unit 170.
The display control unit 170 displays a display image on the display unit 20 or 520 based on the pixel coordinate position information acquired from the determination unit 160 (step S45), the display image including identification information for identifying the low-reflection object and the two-dimensional image information, and ends the process.
In step S46, when it is determined in step S44 that the charge storage amount is greater than the threshold value at which the object can be identified, the determination unit 160 determines the distance for the RGB image data containing the pixel found in step S44, based on model information in which a distance is associated with an image, as an example of reference information. When a model image is used as the reference information in this way, the degree of consistency between the RGB image data and the model image can be determined by image recognition. Alternatively, parameters of the reference information and the RGB image data may be compared against predetermined thresholds to determine the degree of consistency. Further, the reference information may be stored in a table, or a learning model may be used.
The processing circuit 14 stores an image associated with each of the plurality of distances as model information. In step S46, the determination unit 160 uses a determiner (e.g., a determiner using artificial intelligence techniques) to determine whether the acquired image coincides with the image of each of the distances included in the plurality of distances.
In step S47, the determination unit 160 determines whether the distance associated with the image acquired in step S46 is 10m or more; and, when the determination distance is 10m or more, the determination unit 160 determines that the pixel corresponds to a distant object, outputs coordinate position information of the pixel to the display control unit 170, and proceeds to step S43.
When the distance associated with the image acquired in step S46 is smaller than 10m, the determination unit 160 determines that the pixel corresponds to a low reflection object, outputs coordinate position information of the pixel to the display control unit 170 (step S47), and proceeds to step S45.
Steps S41, S42, S44, and S47 are examples of the determination step, and steps S43 and S45 are examples of the display step.
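The Fig. 16 flow can be sketched per pixel as follows. The charge thresholds, the 10 m limit, and the `distance_from_model` estimator of step S46 follow the text, but the concrete values and the estimator itself are assumptions made for illustration.

```python
def classify_weak_return(tof_charge, rgb_charge, tof_range_m, rgb_patch,
                         distance_from_model,
                         tof_min_charge=50, rgb_min_charge=20):
    """Distinguish a distant object from a low-reflection object (Fig. 16)."""
    if tof_charge > tof_min_charge:            # ranging possible (S41: "no")
        return "distant_object" if tof_range_m >= 10.0 else "ok"          # step S42
    if rgb_charge <= rgb_min_charge:           # step S44: RGB also too dark
        return "low_reflection_object"         # step S45
    est = distance_from_model(rgb_patch)       # step S46: model-based distance
    return "distant_object" if est >= 10.0 else "low_reflection_object"   # step S47
```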
As described above, the image pickup apparatus 1 includes the determination unit 160 for determining whether there is a distant object or a low reflection object based on the output of the ranging information acquisition unit 13 and the output of the image pickup unit 11; and a display control unit 170 for causing the display unit 20 or 520 to perform different displays depending on the presence or absence of a distant object or a low reflection object.
This allows the photographer to accurately determine that a distant object or a low reflection object (such as a black object) appears in the photographed image.
The image pickup apparatus 1 includes a display unit 20. This allows the photographer to know that the captured image contains an image of a distant object or a low reflection object.
The display control unit 170 causes the display unit 20 or 520 to perform different displays at the position of the display unit 20 or 520 corresponding to the position of the distant object or the low reflection object. This allows the photographer to identify the location of a distant object or a low reflection object.
Similar to the case of the nearby object described above with reference to fig. 13, the display unit 20 includes the plurality of display units 20A and 20B, and the display control unit 170 causes whichever of the display units 20A and 20B is nearer to the distant object or the low-reflection object to perform a different display depending on the presence or absence of the object. This allows the photographer to identify the position of the distant object or the low-reflection object.
Similar to the case of the nearby object described above with reference to fig. 3, the display control unit 170 displays the image information G captured by the image pickup unit 11 on the display unit 20 or 520, and displays a display image including identification information for identifying a distant object or a low-reflection object and the image information G on the display unit 20 or 520.
When the charge storage amount of a pixel with respect to the light received by the ranging information acquisition unit 13 is less than or equal to the threshold value, the determination unit 160 determines that the pixel corresponds to an object that is a low reflection object or a distant object based on the output of the image pickup unit 11. This allows the photographer to accurately find out that a low reflection object or a distant object appears in the photographed image.
The determination unit 160 determines that a low-reflection object is present when the charge storage amount corresponding to the pixel with respect to the light received by the ranging information acquisition unit 13 is less than or equal to the threshold value, and the charge storage amount corresponding to the pixel of the image pickup unit 11 is less than or equal to the threshold value. This allows the photographer to accurately find out that a low reflection object appears in the photographed image.
When the charge storage amount corresponding to the pixel with respect to the light received by the ranging information acquisition unit 13 is equal to or less than the threshold value, the charge storage amount corresponding to the pixel of the image pickup unit 11 is equal to or more than the threshold value, and the distance determined based on the pixel is equal to or more than the threshold value, the determination unit 160 determines that the pixel corresponds to a distant object.
This allows the photographer to accurately find out that a distant object appears in the photographed image.
The image pickup device 1 acquires ranging information of an object based on the light received by the ranging information acquisition unit 13. In this case, the photographer can understand that the reason why the required ranging information cannot be acquired is a distant object or a low reflection object.
The image pickup apparatus 1 includes a transmitting and receiving unit 180 that outputs three-dimensional information determined based on the ranging information acquired from the ranging information acquisition unit 13 as an example of an output unit. In this case, the photographer can understand that the reason why the required three-dimensional information cannot be acquired is a distant object or a low-reflection object.
Fig. 17 is a flowchart showing a determination process for whether there is image blur in the captured image, corresponding to step S9 described in fig. 5.
In step S51, the determination unit 160 determines, based on the omnidirectional RGB image data acquired from the RGB image data acquisition unit 142, whether the omnidirectional RGB image contains pixels belonging to an edge peripheral region of the image.
The determination unit 160 detects edges appearing in the captured image from changes in the luminance values of the pixels, or by comparing the first derivative or the second derivative of the luminance with, for example, a threshold value, and thereby identifies the pixels belonging to the edge peripheral region; however, edges may also be detected by another method.
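One possible realization of this edge detection is sketched below using a first-derivative (Sobel gradient) threshold followed by a small dilation to cover the peripheral region. The threshold value and the use of OpenCV are assumptions; the text allows any edge detector.

```python
import numpy as np
import cv2

def edge_peripheral_mask(gray, grad_thresh=50.0):
    """Return a binary mask of edge-peripheral pixels (step S51)."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)                       # first-derivative magnitude
    edges = (mag > grad_thresh).astype(np.uint8)
    # include the immediate neighborhood of each edge (the "peripheral region")
    return cv2.dilate(edges, np.ones((3, 3), np.uint8), iterations=1)
```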
Next, in step S52, when pixels belonging to the edge peripheral region are found in step S51, the determination unit 160 refers to the TOF image data, included in the omnidirectional TOF image data acquired from the re-projection processing unit 147, at the pixels having the same coordinates as the pixels identified in step S51, and determines based on that TOF image data whether the edges of the TOF phase images are shifted. When it is determined that the edges are shifted, the determination unit 160 outputs the coordinate position information of the pixels found in step S51 to the display control unit 170.
The display control unit 170 displays a display image on the display unit 20 or 520 based on the pixel coordinate position information acquired from the determination unit 160 (step S53), the display image including identification information for indicating image blur and two-dimensional image information, and ends the process.
Steps S51 and S52 are examples of the determination step, and step S53 is an example of the display step.
When there are no pixels of the image corresponding to the image including the edge peripheral region in step S51, or there is no shift in the edge of the TOF phase image in step S52, the determination unit 160 ends the process.
In the present embodiment, the distance is measured by the phase difference detection method, and for each of the phases of 0 °, 90 °, 180 °, and 270 °, the image pickup apparatus 1 acquires N TOF phase images of the same phase and adds them.
By adding together the N phase images of the same phase, the dynamic range of the phase image for the corresponding phase is enlarged. Further, the capturing time of each of the N phase images added together for each phase is short, so that phase images with higher positional accuracy, less affected by image blurring or the like, are obtained. Therefore, using the phase images with the enlarged dynamic range, the process of detecting the image shift amount described below can be performed accurately.
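For context, the standard four-bucket relation used in indirect phase-difference ranging is sketched below; the patent does not spell out its exact demodulation, so this is background rather than the disclosed processing. Each q* is assumed to be the sum of the N same-phase TOF images mentioned above.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def tof_distance(q0, q90, q180, q270, f_mod):
    """Distance from four accumulated phase images (0, 90, 180, 270 degrees)
    for a modulation frequency f_mod, within one ambiguity interval."""
    phase = np.arctan2(q270 - q90, q0 - q180)     # -pi .. pi
    phase = np.mod(phase, 2.0 * np.pi)            # 0 .. 2*pi
    return C * phase / (4.0 * np.pi * f_mod)
```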
The determination unit 160 may calculate the pixel shift amount for each phase by a general optical-flow computation or by using the machine learning technique disclosed in the reference cited below, and may compare a value obtained by adding together the pixel shift amounts calculated for the respective phases with a threshold value, thereby finally determining whether image blur is present. However, the determination unit 160 may determine whether image blur is present by other methods.
Paper name: packing thread-dimensional ToF Artifacts Through Learning and the FLAT Dataset
The authors: qi Guo (sea, harvard University); iuri frame; orazio Gallo; todd ziegler (sea, harvard University); jan Kautz
Monday, september 10,2018
The publication: ECCV (European computer vision conference) 2018
URL (uniform resource locator):
https://research.nvidia.com/publication/2018-09_Tackling-three-dimensional-ToF
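An optical-flow-based version of the pixel-shift check could look like the sketch below; the learning-based estimator of the cited FLAT paper is not reproduced here, and the threshold, flow parameters, and assumption of 8-bit grayscale phase images are illustrative.

```python
import numpy as np
import cv2

def blur_detected(phase_images, shift_thresh=1.0):
    """Estimate per-phase pixel shift with dense optical flow and flag image
    blur when the summed mean shift exceeds a threshold.

    phase_images: list of 8-bit grayscale phase images (e.g. 0/90/180/270 deg).
    """
    total_shift = 0.0
    for prev, nxt in zip(phase_images[:-1], phase_images[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        total_shift += float(np.mean(np.hypot(flow[..., 0], flow[..., 1])))
    return total_shift > shift_thresh
```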
as described above, the image pickup apparatus 1 includes the determination unit 160 for determining whether or not there is image blur based on both the output of the ranging information acquisition unit 13 and the output of the image pickup unit 11; and a display control unit 170 for causing the display unit 20 or 520 to perform different displays depending on whether image blurring exists or does not exist.
This allows the photographer to accurately find that image blur appears in the photographed image.
The image pickup apparatus 1 includes a display unit 20. This allows the photographer to accurately find that image blur appears in the photographed image.
The display control unit 170 causes the display unit 20 or 520 to perform different display at the position of the display unit 20 or 520 corresponding to the position where the image blur occurs. This allows the photographer to recognize the position where the image blur occurs.
Similar to the case of the nearby object described above with reference to fig. 13, the display unit 20 includes the plurality of display units 20A and 20B, and the display control unit 170 causes whichever of the display units 20A and 20B is nearer to the position where the image blur occurs to perform a different display depending on the presence or absence of the image blur. This enables the photographer to identify the position of the image blur.
Similar to the case of the nearby object described above with reference to fig. 3, the display control unit 170 displays the image information G captured by the image pickup unit 11 on the display unit 20 or 520, and displays a display image including identification information for indicating image blur and the image information G on the display unit 20 or 520. This allows the photographer to recognize the position of the image blur.
When an image edge is detected based on the image information captured by the image pickup unit 11 and an offset of the corresponding pixel with respect to the light received by the ranging information acquisition unit 13 is detected, the determination unit 160 determines that there is an image blur.
This allows the photographer to accurately find that image blur appears in the photographed image.
The image pickup apparatus 1 acquires ranging information with respect to an object based on the light received by the ranging information acquisition unit 13. In this case, the photographer may find that the reason why the required ranging information cannot be acquired is image blur.
The image pickup apparatus 1 includes a transmitting and receiving unit 180 that outputs three-dimensional information determined based on the ranging information acquired from the ranging information acquisition unit 13 as an example of an output unit. In this case, the photographer may find that the reason why the required three-dimensional information cannot be acquired is image blur.
Fig. 18A-18C are schematic diagrams of determining flows according to a fourth variant of an embodiment of the invention.
In step S9 described in fig. 5, the determination unit 160 determines the presence or absence of a specific object (e.g., a neighboring object), and the display control unit 170 causes the display unit 20 or 520 to perform different displays depending on the presence or absence of the specific object. However, in the fourth modification, the determination unit 160 does not determine the presence or absence of the specific object, nor does the display control unit 170 cause the display unit 20 or 520 to perform different display depending on the presence or absence of the specific object, but allows the user to recognize the specific object as will be described below.
In the flow depicted in fig. 18A, in step S31, the determination unit 160 determines, based on the omnidirectional TOF image data acquired from the re-projection processing unit 147, whether there is a pixel whose charge storage amount, that is, whose ranging output, is greater than or equal to a threshold value (for example, a pixel whose charge storage amount is saturated), and when such a pixel exists, outputs the coordinate position information of the pixel to the display control unit 170.
Based on the coordinate position information of the pixel acquired from the determination unit 160, the display control unit 170 displays a display image including position identification information for identifying the position and two-dimensional image information on the display unit 20 or 520 in the same manner as in the case of the adjacent object described above with reference to fig. 3 (step S32), and ends the processing.
When the charge storage amount is smaller than the threshold value in step S31, the determination unit 160 ends the process.
In the flow depicted in fig. 18B, the determination unit 160 determines whether or not there is a pixel whose charge storage amount is less than or equal to a threshold value at which ranging information can be acquired in the omnidirectional TOF image data based on the omnidirectional TOF image data acquired from the re-projection processing unit 147, and when there is a pixel whose charge storage amount is less than or equal to the threshold value, outputs coordinate position information of the pixel to the display control unit 170 (step S33).
In step S34, the display control unit 170 displays a display image including position identification information for identifying a position and two-dimensional image information on the display unit 20 or 520 based on the pixel coordinate position information acquired from the determination unit 160 in the same manner as in the case of the adjacent object described above with reference to fig. 3, and ends the processing.
When the charge storage amount is greater than the threshold value in step S33, the determination unit 160 ends the process.
In the flow depicted in fig. 18C, the determination unit 160 determines, based on the omnidirectional TOF image data acquired from the re-projection processing unit 147, whether the omnidirectional TOF image data contains a pixel at which the TOF phase images are shifted and for which ranging information therefore cannot be acquired. When such a pixel exists, the determination unit 160 outputs the coordinate position information of the pixel to the display control unit 170 (step S35).
The determination unit 160 determines whether the TOF phase image is offset by the same method as described with respect to step S52 of fig. 17.
The display control unit 170 displays, on the display unit 20 or 520, a display image including position identification information for identifying the position and the two-dimensional image information, based on the coordinate position information of the pixels acquired from the determination unit 160, in the same manner as in the case of the nearby object described above with reference to fig. 3 (step S36), and ends the processing.
When there is no pixel at which the TOF phase images are shifted, the determination unit 160 ends the processing.
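The three checks of figs. 18A to 18C can be gathered into one sketch that collects the pixel positions at which the ranging output is too strong, too weak, or shifted between phase images, so that position identification information can be overlaid on the two-dimensional image. The threshold values and the precomputed `phase_shift` map are assumptions for illustration.

```python
import numpy as np

def unmeasurable_positions(tof_charge, phase_shift,
                           upper=4095, lower=50, shift_thresh=1.0):
    """Return pixel coordinates where ranging information cannot be acquired."""
    too_strong = tof_charge >= upper          # fig. 18A, step S31
    too_weak = tof_charge <= lower            # fig. 18B, step S33
    blurred = phase_shift > shift_thresh      # fig. 18C, step S35
    mask = too_strong | too_weak | blurred
    ys, xs = np.nonzero(mask)
    return list(zip(xs.tolist(), ys.tolist()))  # positions to identify on the display
```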
As described above, the image pickup apparatus 1 includes the display control unit 170 for displaying, on the display unit 20 or 520, a display image that includes the two-dimensional image information G captured by the image pickup unit 11, which captures an image of the object, and position identification information determined based on the position information of positions at which the determination unit 160 has determined that the output of the ranging information acquisition unit 13 is greater than or equal to a threshold value or less than or equal to a threshold value.
Therefore, by identifying on the two-dimensional image G the positions where the output of the ranging information acquisition unit 13 is greater than or equal to the threshold value or less than or equal to the threshold value, that is, where the output is too strong or too weak to obtain the desired output, the reason why the desired output cannot be acquired can be understood.
The image pickup apparatus 1 also includes the display control unit 170 for displaying, on the display unit 20 or 520, a display image that includes position identification information identifying, based on the position information determined by the determination unit 160, the positions at which ranging information on the object cannot be acquired from the output of the ranging information acquisition unit 13, and that includes the two-dimensional image information G captured by the image pickup unit 11, which captures an image of the object.
Therefore, by identifying a position where the ranging information with respect to the object cannot be acquired from the two-dimensional image G, the cause of the failure to acquire the ranging information with respect to the object can be understood.
The determination units 160, 560, and 660 determine that the ranging information on the object cannot be acquired not only when the output of the ranging information acquisition unit 13 is greater than or equal to the threshold value or less than or equal to the threshold value, but also when image blur is detected from the output of the ranging information acquisition unit 13.
Fig. 19 is a schematic diagram showing an example of the configuration of a processing block of a processing circuit according to a fifth variation of the embodiment of the present invention.
The processing block of the processing circuit according to the fifth modification depicted in fig. 19 differs from the processing block of the processing circuit 14 according to the present embodiment depicted in fig. 4 in that the determination unit 160 outputs its determination result to the transmitting and receiving unit 180, and in that the display control unit 170 acquires the omnidirectional three-dimensional data from the three-dimensional reconstruction processing unit 150.
In addition to the omnidirectional three-dimensional data output by the three-dimensional reconstruction processing unit 150 and the omnidirectional two-dimensional image information output by the RGB image data acquisition unit 142, the transmission and reception unit 180 transmits (outputs) the determination result of the determination unit 160 to the external apparatus 300 that performs the three-dimensional restoration processing via the network 400.
The display control unit 170 displays a three-dimensional image on the display unit 20 based on the omnidirectional three-dimensional data acquired from the three-dimensional reconstruction processing unit 150, and displays a display image on the display unit 20, the display image including identification information for identifying a specific object based on a result of determination by the determination unit 160, and including a three-dimensional image, the determination unit 160 determining whether or not the specific object is present based on both the output of the image pickup unit 11 and the output of the ranging information acquisition unit 13. The particular object may be a nearby object, a highly reflective object, a distant object, a low reflective object, or an image blur area.
This allows the cause of the desired three-dimensional image 3G not being displayed, namely a nearby object, a highly reflective object, a distant object, a low reflection object, or image blur, to be identified by viewing the three-dimensional image 3G.
Fig. 20 is a schematic diagram depicting a configuration example of an information processing system according to a sixth variation of the embodiment of the present invention.
The information processing system according to the sixth variation depicted in fig. 20 includes the image pickup apparatus 1 and the display apparatus 500.
The image pickup apparatus 1 depicted in fig. 20 includes image pickup devices 11a and 11A, TOF sensors 13a and 13A, light source units 12a and 12A, and an image pickup switch 15, configured to be the same as or similar to the corresponding devices depicted in fig. 4.
The processing circuit 14 of the image pickup apparatus 1 depicted in fig. 20 includes a control unit 141, an RGB image data acquisition unit 142, a TOF image data acquisition unit 144, and a transmitting and receiving unit 180. The control unit 141 is configured the same as or similar to the control unit 141 depicted in fig. 4.
Similar to fig. 4, the RGB image data acquisition unit 142 acquires RGB image data captured by the image pickup devices 11a and 11A based on an image pickup instruction from the control unit 141, and outputs omnidirectional RGB image data. However, the output destination of the RGB image data differs from that in fig. 4 and is the transmitting and receiving unit 180.
Similar to fig. 4, the TOF image data acquisition unit 144 acquires the TOF image data generated by the TOF sensors 13a and 13A in accordance with an instruction for generating the TOF image data from the control unit 141, and outputs omnidirectional TOF image data. However, the output destination of the TOF image data acquisition unit 144 differs from that in fig. 4 and is the transmitting and receiving unit 180.
Unlike fig. 4, the transmitting and receiving unit 180 transmits (outputs) the omnidirectional RGB image data output from the RGB image data acquiring unit 142 and the omnidirectional TOF image data output from the TOF image data acquiring unit 144 to the display apparatus 500.
The display apparatus 500 depicted in fig. 20 includes a transmitting and receiving unit 510, a display unit 520, and a display control unit 530 that are the same as or similar to the corresponding units of the second variation depicted in fig. 12; and further includes an RGB image data acquisition unit 542, a monochrome processing unit 543, a TOF image data acquisition unit 544, a resolution improvement unit 545, a matching processing unit 546, a re-projection processing unit 547, a semantic segmentation unit 548, a parallax calculation unit 549, a three-dimensional reconstruction processing unit 550, and a determination unit 560.
The transmitting and receiving unit 510 receives the omnidirectional RGB image data and the omnidirectional TOF image data transmitted from the image pickup apparatus 1.
The RGB image data acquisition unit 542 acquires omnidirectional RGB image data from the transmitting and receiving unit 510, and the TOF image data acquisition unit 544 acquires omnidirectional TOF image data from the transmitting and receiving unit 510. Except for these points, the configurations of the RGB image data acquisition unit 542 and the TOF image data acquisition unit 544 are the same as or similar to those of the RGB image data acquisition unit 142 and the TOF image data acquisition unit 144, respectively.
The monochrome processing unit 543, the TOF image data acquisition unit 544, the resolution improvement unit 545, the matching processing unit 546, the re-projection processing unit 547, the semantic segmentation unit 548, the parallax calculation unit 549, the three-dimensional reconstruction processing unit 550, and the determination unit 560 are configured to be the same as or similar to the monochrome processing unit 143, the TOF image data acquisition unit 144, the resolution improvement unit 145, the matching processing unit 146, the re-projection processing unit 147, the semantic segmentation unit 148, the parallax calculation unit 149, the three-dimensional reconstruction processing unit 150, and the determination unit 160 depicted in fig. 4, respectively.
The display control unit 530 may acquire omnidirectional RGB image data from the RGB image data acquisition unit 542 and display a two-dimensional image on the display unit 520 based on the acquired omnidirectional RGB image data, or may acquire omnidirectional three-dimensional data from the three-dimensional reconstruction processing unit 550 and display a three-dimensional image on the display unit 520.
The display control unit 530 displays, on the display unit 520, a display image including information indicating the determination result acquired from the determination unit 560 and including the two-dimensional image or the three-dimensional image.
As described above, the display apparatus 500 includes the transmitting and receiving unit 510, as an example of the receiving unit, which receives the output of the image pickup unit 11 capturing an image of the object and the output of the ranging information acquisition unit 13 projecting light onto the object and receiving light reflected from the object; the determination unit 560 for determining whether or not a specific object exists based on the output of the ranging information acquisition unit 13 and the output of the image pickup unit 11 received by the transmitting and receiving unit 510; and the display control unit 530 for causing the display unit 520 to perform different displays depending on the presence or absence of the specific object based on the determination result of the determination unit 560.
The particular object may be a nearby object, a highly reflective object, a distant object, a low reflective object, or an image blur area.
The display device 500 includes a display control unit 530 for displaying a display image on the display unit 520, the display image including identification information for identifying a specific object based on a determination result of the determination unit 560, the determination unit 560 determining the presence or absence of the specific object based on both an output of the image pickup unit 11 capturing an image of the object and an output of the ranging information acquisition unit 13 receiving light projected onto and reflected from the object; the display image further includes a three-dimensional image 3G determined by the three-dimensional reconstruction processing unit 550.
Fig. 21 is a schematic diagram depicting an example of a configuration of an information processing system according to a seventh variation of the embodiment of the present invention.
The information processing system according to the seventh modification depicted in fig. 21 includes the image pickup apparatus 1, the display apparatus 500, and the server 600.
The image pickup apparatus 1 shown in fig. 21 is configured to be the same as or similar to the image pickup apparatus 1 shown in fig. 20, and the display apparatus 500 shown in fig. 21 is configured to be the same as or similar to the display apparatus 500 shown in fig. 12.
The server 600 depicted in fig. 21 includes a receiving unit 610, an RGB image data acquisition unit 642, a monochrome processing unit 643, a TOF image data acquisition unit 644, a resolution improvement unit 645, a matching processing unit 646, a re-projection processing unit 647, a semantic segmentation unit 648, a parallax calculation unit 649, a three-dimensional reconstruction processing unit 650, a determination unit 660, and a transmitting unit 680.
The reception unit 610 receives the omnidirectional RGB image data and the omnidirectional TOF image data transmitted by the image pickup apparatus 1 via the network 400.
The RGB image data acquisition unit 642 acquires omnidirectional RGB image data from the receiving unit 610, and the TOF image data acquisition unit 644 acquires omnidirectional TOF image data from the receiving unit 610; except for these functions, these units 642 and 644 are configured to be the same as or similar to the RGB image data acquisition unit 142 and the TOF image data acquisition unit 144 depicted in fig. 4, respectively.
The monochrome processing unit 643, the TOF image data acquisition unit 644, the resolution improvement unit 645, the matching processing unit 646, the re-projection processing unit 647, the semantic segmentation unit 648, the parallax calculation unit 649, the three-dimensional reconstruction processing unit 650, and the determination unit 660 are the same as or similar to the monochrome processing unit 143, the TOF image data acquisition unit 144, the resolution improvement unit 145, the matching processing unit 146, the re-projection processing unit 147, the semantic segmentation unit 148, the parallax calculation unit 149, the three-dimensional reconstruction processing unit 150, and the determination unit 160 depicted in fig. 4.
The transmitting unit 680 transmits (outputs) the omnidirectional three-dimensional data output from the three-dimensional reconstruction processing unit 650, the omnidirectional two-dimensional image information output from the RGB image data acquiring unit 642, and the determination result of the determining unit 660 to the display apparatus 500 through the network 400.
The transmitting and receiving unit 510 of the display apparatus 500 receives the omnidirectional three-dimensional data, the two-dimensional image information, and the determination result of the determination unit 660 transmitted from the server 600.
The display control unit 530 of the display apparatus 500 may acquire omnidirectional RGB image data from the transmitting and receiving unit 510, and display a two-dimensional image on the display unit 520 based on the acquired omnidirectional RGB image data; and may acquire omnidirectional three-dimensional data from the transmitting and receiving unit 510 and display a three-dimensional image on the display unit 520.
The display control unit 530 displays a display image including information indicating the determination result acquired from the transmitting and receiving unit 510 and including a two-dimensional image or a three-dimensional image on the display unit 520.
As described above, the display apparatus 500 includes the transmitting and receiving unit 510 for receiving, from the determination unit 660 of the server 600, a determination result regarding whether a specific object exists, the determination being based on both the output of the image pickup unit 11 for capturing an image of the object and the output of the ranging information acquisition unit 13 for receiving light projected onto and reflected from the object; and the display control unit 530 for causing the display unit 520 to perform different displays depending on the presence or absence of the specific object based on the determination result received by the transmitting and receiving unit 510. The specific object may be a nearby object, a highly reflective object, a distant object, a low reflection object, or an image blur area.
The display device 500 includes a display control unit 530 for displaying a display image on the display unit 520, the display image including identification information for identifying a specific object based on a determination result of the determination unit 660, the determination unit 660 determining whether the specific object exists based on both an output of the image pickup unit 11 for capturing an image of the object and an output of the ranging information acquisition unit 13 for receiving light projected to and reflected from the object; the display image further includes a three-dimensional image 3G determined by the three-dimensional reconstruction processing unit 650.
Fig. 22 is a schematic diagram showing display contents of a display unit according to fifth to seventh modifications.
As depicted in fig. 22, identification information 3Ga, 3Gb, or 3Gc including information for identifying a specific object is displayed on the display unit 520 by the display control unit 530. The identification information 3Ga, 3Gb, or 3Gc may be position identification information that identifies the position of the specific object.
In fig. 22, the display unit 520 is depicted, but a three-dimensional image 3G including identification information 3Ga, 3Gb, or 3Gc for identifying a specific object is also displayed on the display unit 20 by the display control unit 170.
In fig. 22, a blind spot is identified by identification information 3Ga highlighted in pink or the like, a reflective object is identified by identification information 3Gb highlighted in orange or the like, and a distant object is identified by identification information 3Gc highlighted in mosaic processing or the like.
All of the items of identification information 3Ga, 3Gb, and 3Gc may be displayed at the same time, or only one or two of them may be displayed.
Fig. 23A to 23C show three-dimensional images displayed on a display unit according to an embodiment of the present invention (including the embodiment and variations of the embodiment).
Fig. 23A depicts the position of the virtual camera and a predetermined area in a case where the omnidirectional image is expressed as a three-dimensional sphere. The virtual camera IC corresponds to the position of the viewpoint of the user viewing the omnidirectional image CE expressed as a three-dimensional sphere.
Fig. 23B is a three-dimensional perspective view of fig. 23A, and fig. 23C depicts an image of a predetermined area displayed on a display screen.
Fig. 23B depicts the omnidirectional image CE depicted in fig. 23A expressed as a three-dimensional sphere CS. When the omnidirectional image CE is thus expressed as a three-dimensional sphere CS, as depicted in fig. 23A, the virtual camera IC is located inside the omnidirectional image CE.
The predetermined area T in the omnidirectional image CE is an image pickup area of the virtual camera IC, and is specified by predetermined area information including an image pickup direction and a viewing angle of the virtual camera IC with respect to a three-dimensional virtual space including the omnidirectional image CE.
Scaling of the predetermined area T may be achieved by moving the virtual camera IC closer to or farther from the omnidirectional image CE. The predetermined area image Q is an image of a predetermined area T of the omnidirectional image CE. Accordingly, the predetermined area T can be specified by the angle of view α and the distance f between the virtual camera IC and the omnidirectional image CE.
That is, the display control unit 170 or 530 may change the position and orientation of the virtual camera IC at the viewpoint position at which the three-dimensional image 3G is viewed, thereby changing the display area of the three-dimensional image 3G to be displayed on the display unit 20 or 520.
In the foregoing, a three-dimensional image displayed by the display unit has been described using an omnidirectional image as an example. The same applies to the case of using three-dimensional point cloud data: the three-dimensional point cloud is arranged in a virtual space, the virtual camera is arranged in the same virtual space, and a three-dimensional image is obtained by projecting the three-dimensional point cloud onto a predetermined projection plane in the virtual space based on predetermined area information indicating the viewpoint position, image pickup direction, and angle of view of the virtual camera. The viewpoint position and orientation of the virtual camera may be changed to change the display area of the three-dimensional image.
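As a non-limiting illustration of the projection described above, the following Python sketch projects a three-dimensional point cloud onto a projection plane for a virtual camera defined by a viewpoint position, viewing direction, and angle of view; the pinhole model, function names, and parameters are assumptions for illustration only.

```python
import numpy as np

def project_point_cloud(points, cam_pos, forward, up, fov_deg, width, height):
    """Project 3-D points in the virtual space onto the image plane of a
    virtual camera (simple pinhole model).
    points: (N, 3) array; cam_pos: viewpoint position; forward/up: camera axes."""
    forward = np.asarray(forward, float) / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    R = np.stack([right, true_up, forward])          # world -> camera axes

    p_cam = (np.asarray(points, float) - np.asarray(cam_pos, float)) @ R.T
    p = p_cam[p_cam[:, 2] > 1e-6]                    # keep points in front of the camera

    # Focal length in pixels from the horizontal angle of view (alpha in the text).
    f = (width / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
    u = f * p[:, 0] / p[:, 2] + width / 2.0
    v = f * p[:, 1] / p[:, 2] + height / 2.0
    keep = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    return u[keep], v[keep]
```

Changing cam_pos, forward, or fov_deg corresponds to changing the viewpoint position, orientation, and angle of view of the virtual camera IC, which changes the display area of the three-dimensional image.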
Fig. 24 is a determination flowchart of fifth to seventh variations. In step S61, the determination unit 160, 560, or 660 determines whether or not there is an area (coordinate) having a density smaller than a threshold value with respect to the point cloud data in the omnidirectional three-dimensional data based on the omnidirectional three-dimensional data acquired from the three-dimensional reconstruction processing unit 150, 550, or 650.
In step S62, when it is determined in step S61 that there is an area (coordinates) whose density with respect to the point cloud data is less than the threshold value, the determination unit 160, 560, or 660 determines, based on the output of the image pickup unit 11 and in accordance with the flow depicted in fig. 16, whether the pixels having the same coordinates as that area include a pixel determined to correspond to a distant object. When such a pixel is included, the coordinate position information of the pixel is output to the display control unit 170 or 530.
As depicted in fig. 22, the display control unit 170 or 530 displays a display image on the display unit 20 or 520 (step S63), the display image including position identification information 3Gc for identifying the position of the distant object based on the pixel coordinate position information acquired from the determination unit 160, 560, or 660, and including the three-dimensional image 3G, and ends the processing.
When, in step S62, the pixels having the same coordinates as the region having a density smaller than the threshold value with respect to the point cloud data do not include a pixel determined to correspond to a distant object, the determination unit 160, 560, or 660 determines in step S64, based on the output of the image pickup unit 11 and in accordance with the flow depicted in fig. 16, whether a pixel determined to correspond to a low reflection object is included. When such a pixel is included, the coordinate position information of the pixel is output to the display control unit 170 or 530.
As depicted in fig. 22, the display control unit 170 or 530 displays a display image on the display unit 20 or 520 (step S65), the display image including position identification information 3Gb for identifying the position of the low reflection object based on the coordinate position information of the pixel acquired from the determination unit 160, 560, or 660, and including the three-dimensional image 3G, and ends the processing.
When, in step S64, the pixels having the same coordinates as the region having a density smaller than the threshold value with respect to the point cloud data do not include a pixel determined to correspond to a low reflection object, the determination unit 160, 560, or 660 determines that those pixels correspond to a blind spot, and outputs the coordinate position information of the pixels to the display control unit 170 or 530.
As depicted in fig. 22, the display control unit 170 or 530 displays a display image on the display unit 20 or 520 (step S66), the display image including position identification information 3Ga for identifying the blind spot position based on the pixel coordinate position information acquired from the determination unit 160, 560, or 660, and including the three-dimensional image 3G, and ends the processing. Steps S61, S62, and S64 are examples of the determination step, and steps S63, S65, and S66 are examples of the display step.
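By way of non-limiting illustration, the decision of steps S61 to S66 might be sketched in Python as follows; the thresholds, data structures, and label names are assumptions introduced for illustration only.

```python
def classify_low_density_pixels(pixels, rgb_brightness, depth,
                                dark_thresh=30, far_thresh=10.0):
    """Rough sketch of the branching in fig. 24 for pixels whose point-cloud
    density is below the threshold (step S61).
    rgb_brightness / depth: per-pixel values derived from the image pickup unit 11.
    Returns a label that selects 3Gc (distant), 3Gb (low reflection), or 3Ga (blind spot)."""
    labels = {}
    for xy in pixels:
        if depth.get(xy, 0.0) >= far_thresh:              # step S62: distant object
            labels[xy] = "distant"                        # displayed with 3Gc (step S63)
        elif rgb_brightness.get(xy, 255) <= dark_thresh:  # step S64: low reflection
            labels[xy] = "low_reflection"                 # displayed with 3Gb (step S65)
        else:
            labels[xy] = "blind_spot"                     # displayed with 3Ga (step S66)
    return labels
```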
As described above, the image pickup apparatus 1 and the display apparatus 500 include the display control units 170 and 530, which cause the display units 20 and 520 to display a display image and to perform different displays. The display image includes identification information 3Ga, 3Gb, or 3Gc that identifies a specific object based on the determination results of the determination units 160, 560, and 660, which determine whether or not the specific object is present based on both the output of the image pickup unit 11 capturing an image of the object and the output of the ranging information acquisition unit 13 projecting light onto the object and receiving light reflected from the object; the display image also includes the three-dimensional image 3G determined, based on the output of the ranging information acquisition unit 13, by the three-dimensional reconstruction processing units 150, 550, and 650, which are examples of the three-dimensional information determination unit.
The specific object may be not only a distant object, a low reflection object or a blind spot, but also a nearby object, a high reflection object or an image blur area.
This allows it to be determined, by viewing the three-dimensional image 3G, which of a distant object, a low reflection object, a blind spot, a nearby object, a highly reflective object, or image blur results in the desired three-dimensional image 3G not being displayed.
The image pickup apparatus 1 and the display apparatus 500 include the display control units 170 and 530 for displaying, on the display units 20 and 520, a three-dimensional image 3G that is determined based on the output of the ranging information acquisition unit 13, which projects light onto an object and receives light reflected from the object. The display control units 170 and 530 display a display image on the display units 20 and 520, the display image including (i) position identification information 3Ga, 3Gb, or 3Gc for identifying, based on position information indicating the corresponding position, the position of at least one of a distant object far from the ranging information acquisition unit 13 when the light reflected from the object is received, a low reflection object having low reflectance with respect to the projected light, or a blind spot with respect to the ranging information acquisition unit 13 when the light reflected from the object is received; and (ii) the three-dimensional image 3G.
This allows it to be determined, by viewing the three-dimensional image 3G, which of a distant object, a low reflection object, or a blind spot is causing the desired three-dimensional image 3G not to be displayed. Accordingly, appropriate measures, for example capturing an image again, can be taken depending on the cause.
The three-dimensional image 3G is determined by the three-dimensional reconstruction processing unit 150, 550, or 650, and the three-dimensional reconstruction processing unit 150, 550, or 650 is an example of a three-dimensional information determining unit.
The display control units 170 and 530 may display, on the display units 20 and 520, a display image including the position identification information of any one of 3Ga, 3Gb, and 3Gc together with the three-dimensional image 3G, based on position information of any one of the distant object, the low reflection object, and the blind spot; or may display a display image including the position identification information of any two or all of 3Ga, 3Gb, and 3Gc together with the three-dimensional image 3G, based on position information of any two or all of the distant object, the low reflection object, and the blind spot.
As depicted in fig. 19, when the information processing apparatus is the image pickup device 1, the image pickup device 1 includes the ranging information acquisition unit 13 and the three-dimensional reconstruction processing unit 150.
When the information processing apparatus is the display apparatus 500, as depicted in fig. 20 and 21, the display apparatus 500 does not include the ranging information acquisition unit 13; the image pickup apparatus 1 includes the ranging information acquisition unit 13, and transmits the output of the ranging information acquisition unit 13 to the display apparatus 500 or the server 600.
As depicted in fig. 20, the display device 500 may or may not include a three-dimensional reconstruction processing unit 550.
As depicted in fig. 21, when the display apparatus 500 does not include the three-dimensional reconstruction processing unit 550, the image pickup apparatus 1 may include the three-dimensional reconstruction processing unit 150 and transmit a three-dimensional image to the display apparatus 500, or the server 600 may include the three-dimensional reconstruction processing unit 650 and transmit a three-dimensional image to the display apparatus 500.
The display control units 170 and 530 display a display image on the display units 20 and 520, the display image including position identification information 3Ga, 3Gb, or 3Gc and including the three-dimensional image 3G, the position identification information being based on position information indicating a position where at least one of a distant object, a low reflection object, or a blind spot exists in an area where the density of the point cloud data included in the three-dimensional image 3G is lower than the threshold value.
Thus, by viewing the three-dimensional image 3G, it can be determined which of a distant object, a low reflection object, or a blind spot is the reason why the density of the point cloud data is lower than the threshold value.
The display control units 170 and 530 display a display image on the display units 20 and 520, the display image including position identification information 3Ga, 3Gb, or 3Gc and including the three-dimensional image 3G, the position identification information being based on position information indicating the position of at least one of a distant object, a low reflection object, or a blind spot in the three-dimensional image 3G, determined based on the output of the image pickup unit 11 capturing an image of the object.
Therefore, based on the output of the image pickup unit 11, it can be accurately determined which of the distant object, the low-reflection object, or the blind spot is the cause of not displaying the desired three-dimensional image 3G.
As depicted in fig. 19, when the information processing apparatus is the image pickup apparatus 1, the image pickup apparatus 1 includes the image pickup unit 11. As depicted in fig. 20 and 21, when the information processing apparatus is the display apparatus 500, the display apparatus 500 does not include the image pickup unit 11; the image pickup apparatus 1 includes the image pickup unit 11, and the output of the image pickup unit 11 is transmitted to the display apparatus 500 or the server 600.
The image pickup apparatus 1 and the display apparatus 500 include determination units 160, 560, and 660 for determining the positions of distant objects, low-reflection objects, or blind spots in the three-dimensional image 3G. The display control units 170 and 530 display images on the display units 20 and 520 based on the determination results of the determination units 160, 560, and 660, the display images including the position identification information 3Ga, 3Gb, or 3Gc and the three-dimensional image 3G.
As depicted in fig. 19, when the information processing apparatus is the image pickup apparatus 1, the image pickup apparatus 1 includes the determination unit 160.
When the information processing apparatus is the display apparatus 500, the display apparatus 500 may include the determination unit 560 as depicted in fig. 20, or may not include the determination unit 560.
When the display apparatus 500 does not include the determination unit 560, the image pickup apparatus 1 may include the determination unit 160 to transmit the determination result to the display apparatus 500, or the server 600 may include the determination unit 660 shown in fig. 21 to transmit the determination result to the display apparatus 500.
Fig. 25 is another view showing display contents of the display device according to the fifth to seventh modifications.
As shown in fig. 25, by the display control unit 530, a three-dimensional image 3G including position identification information 3G1 and 3G2 for identifying the position of the ranging information acquisition unit 13 acquired when the ranging information acquisition unit 13 receives light reflected from an object is displayed on the display unit 520.
The three-dimensional image 3G is determined based on the output of the ranging information acquiring unit 13 located at the first position and the output of the ranging information acquiring unit 13 located at the second position different from the first position. The position identification information 3G1 is an example of first position identification information identifying a first position, and the position identification information 3G2 is an example of second position identification information identifying a second position.
In fig. 25, the display unit 520 is depicted, but the display control unit 170 also displays a three-dimensional image 3G on the display unit 20, the three-dimensional image 3G including position identification information 3G1 and 3G2 for identifying the position of the ranging information acquisition unit 13 acquired when receiving light reflected from an object.
As depicted in fig. 22, the display control unit 170 or 530 displays a display image including the three-dimensional image 3G and the identification information 3Ga, 3Gb, or 3Gc (an example of low-density identification information) on the display unit 20 or 520. Meanwhile, as depicted in fig. 25, position identification information 3G1 and 3G2 for identifying the position of the ranging information acquisition unit 13 obtained when receiving light reflected from an object may be included in the display image.
Fig. 26 is a flowchart showing the processing in the fifth to seventh modifications.
In step S71, the three-dimensional reconstruction processing unit 150, 550, or 650 reads the omnidirectional high-density three-dimensional point cloud data, and in step S72, acquires the origin with respect to the three-dimensional point cloud data as the position information indicating the image pickup position of the ranging information acquisition unit 13 acquired when the ranging information acquisition unit receives the light reflected from the object.
In step S73, the three-dimensional reconstruction processing unit 150, 550 or 650 determines whether there is three-dimensional point cloud data previously read. If there is no previously read three-dimensional point cloud data, the three-dimensional point cloud data read in step S71 and the position information obtained in step S72 are output to the display control unit 170 or 530.
As depicted in fig. 25, the display control unit 170 or 530 displays a display image on the display unit 20 or 520 based on the three-dimensional point cloud data and the position information acquired from the three-dimensional reconstruction processing unit 150, 550 or 650 (step S74), which includes position identification information 3G1 for identifying the position of the ranging information acquisition unit 13 acquired when the ranging information acquisition unit 13 receives light reflected from an object and the three-dimensional image 3G, and ends the process.
In step S75, when it is determined in step S73 that previously read three-dimensional point cloud data exists, the three-dimensional reconstruction processing unit 150, 550, or 650 merges the three-dimensional point cloud data read in step S71 with the previously read three-dimensional point cloud data.
In step S76, the three-dimensional reconstruction processing unit 150, 550, or 650 calculates, for each of the origin of the three-dimensional point cloud data read in step S71 and the origin of the previously read three-dimensional point cloud data, coordinates in the three-dimensional point cloud data merged in step S75 as position information of the corresponding image pickup position, and outputs the merged three-dimensional point cloud data and the calculated information indicating the plurality of origins to the display control unit 170 or 530.
In step S74, the display control unit 170 or 530 displays a display image on the display unit 20 or 520, as depicted in fig. 25, the display image including a plurality of sets of position identification information 3G1 and 3G2 for identifying the position of the ranging information acquisition unit 13 acquired when the ranging information acquisition unit 13 receives light reflected from an object, and a three-dimensional image 3G based on three-dimensional point cloud data and a plurality of sets of position information acquired from the three-dimensional reconstruction processing unit 150, 550 or 650.
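A non-limiting Python sketch of the merging in steps S75 and S76 follows; the alignment transforms are assumed to be given, and the function and variable names are illustrative only.

```python
import numpy as np

def merge_scans(scans):
    """Merge several three-dimensional point clouds and return, together with the
    merged cloud, the image pickup position (origin) of each scan expressed in the
    merged coordinate system; these positions correspond to 3G1, 3G2, and so on.
    scans: list of (points, transform), where `points` is an (N, 3) array in the
    scan's own coordinate system (origin = image pickup position) and `transform`
    is a 4x4 matrix aligning that scan to the merged coordinate system."""
    merged_points = []
    pickup_positions = []
    for points, transform in scans:
        homog = np.hstack([points, np.ones((points.shape[0], 1))])
        merged_points.append((homog @ transform.T)[:, :3])
        # The scan origin (0, 0, 0) mapped into the merged frame is the position of
        # the ranging information acquisition unit 13 for that scan.
        pickup_positions.append((transform @ np.array([0.0, 0.0, 0.0, 1.0]))[:3])
    return np.vstack(merged_points), pickup_positions
```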
Fig. 27 is another flowchart showing the processing in the fifth to seventh modifications.
In step S81, the three-dimensional reconstruction processing unit 150, 550 or 650 reads omnidirectional high-density three-dimensional point cloud data. In step S82, the determination unit 160, 560, or 660 performs steps S61, S62, and S64 of the flow depicted in fig. 24 based on the omnidirectional three-dimensional data acquired from the three-dimensional reconstruction processing unit 150, 550, or 650 to extract a low-density portion having a density lower than a threshold value with respect to the point cloud data.
When the virtual camera IC depicted in fig. 23A to 23C is located at the position of the position identification information 3G1 or 3G2 depicted in fig. 25, the display control unit 170 or 530 performs steps S63, S65, and S66 of the flow depicted in fig. 24 to change the orientation of the virtual camera IC so that at least one of the identification information 3Ga, 3Gb, or 3Gc (examples of the low-density identification information) depicted in fig. 22 is included in the display image (step S83).
As described above, the image pickup apparatus 1 and the display apparatus 500 include the display control units 170 and 530 for displaying the three-dimensional image 3G determined based on the output of the ranging information acquiring unit 13 on the display units 20 and 520. Based on the position information indicating the position of the ranging information acquiring unit 13 acquired when the ranging information acquiring unit 13 receives the reflected light from the object, the display control units 170 and 530 display images including the position identification information 3G1 and 3G2 for identifying the position of the ranging information acquiring unit 13 acquired when the ranging information acquiring unit 13 receives the reflected light from the object, and including the three-dimensional image 3G on the display units 20 and 520.
Therefore, the positional relationship between the image pickup position, which indicates the position of the ranging information acquisition unit 13 acquired when the ranging information acquisition unit 13 receives the light reflected from the object, and the position of the specific object can be understood from the three-dimensional image 3G.
The three-dimensional image 3G and the position information are determined by the three-dimensional reconstruction processing units 150, 550, and 650.
When the information processing apparatus is the image pickup apparatus 1, the image pickup apparatus 1 includes the ranging information acquisition unit 13 and the three-dimensional reconstruction processing unit 150, as depicted in fig. 19.
When the information processing apparatus is the display apparatus 500, as depicted in fig. 20 and 21, the display apparatus 500 does not include the ranging information acquisition unit 13; the image pickup apparatus 1 includes the ranging information acquisition unit 13, and transmits the output of the ranging information acquisition unit 13 to the display apparatus 500 or the server 600.
As depicted in fig. 20, the display apparatus 500 may include the three-dimensional reconstruction processing unit 550, or need not include the three-dimensional reconstruction processing unit 550.
When the display apparatus 500 does not include the three-dimensional reconstruction processing unit 550, the image pickup apparatus 1 may include the three-dimensional reconstruction processing unit 150 and transmit the three-dimensional image and the positional information to the display apparatus 500, or the server 600 may include the three-dimensional reconstruction processing unit 650 (as depicted in fig. 21) and transmit the three-dimensional image and the positional information to the display apparatus 500.
The display control units 170 and 530 display, on the display units 20 and 520, a display image including identification information 3Ga, 3Gb, or 3Gc, which is an example of low-density identification information for identifying a region based on region information indicating a region whose density with respect to the point cloud data is lower than a threshold value with respect to the three-dimensional image 3G, and including the three-dimensional image 3G.
In this case, since the positional relationship between the image pickup position and the area where the density with respect to the point cloud data is lower than the threshold value can be understood, the cause of the density with respect to the point cloud data being lower than the threshold value can be identified. For example, when the area is far from the image pickup position, it may be determined that a distant object is a cause, when the area is located at a blind spot with respect to the image pickup position, it may be determined that a blind spot is a cause, or when the area is not far nor at a blind spot, it may be determined that a low reflection object is a cause.
The display control units 170 and 530 change the position and orientation of the virtual camera IC at the viewpoint position of viewing the three-dimensional image 3G, thereby changing the display area of the three-dimensional image 3G to be displayed on the display units 20 and 520.
When the position of the virtual camera IC is located at the position 3G1 or 3G2 identified by the position identification information, the display control units 170 and 530 change the orientation of the virtual camera IC to a predetermined orientation. The predetermined orientation means an orientation in which the display area includes a position at which an image should be taken again (e.g., a low-density point cloud area), a position previously set as an inspection position for a site survey, or any position at which the photographer or another person performing inspection work wishes to perform an inspection. Specific examples of inspection positions preset for a site survey include: a location where the site is constantly changing (a material yard), the location of each component of the building, the space between components, space for new installation, space for temporary installation (yards and scaffolding used during construction and subsequently removed, etc.), storage space for heavy machinery (forklifts, cranes, etc.), working space (the rotation range of a robotic arm, material movement routes, etc.), resident movement routes (detour routes during construction), and the like.
This allows the line of sight of the user at the image pickup position to be directed to a particular object desired to be viewed in the field.
The display control units 170 and 530 change the orientation of the virtual camera IC such that the display area includes previously set coordinates or a low-density portion in which the density with respect to the point cloud data is lower than the threshold value with respect to the three-dimensional image 3G. The previously set coordinates are independent of the image content; even if, for example, the image at the predetermined coordinates changes before and after the three-dimensional point cloud data is merged in step S75 of fig. 26, the previously set coordinates remain unchanged.
This allows the line of sight of the user at the image pickup position to be directed to a specific object corresponding to the low-density portion in the three-dimensional image 3G.
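By way of non-limiting illustration, the reorientation of the virtual camera IC toward such a target might be sketched as follows; the coordinate conventions and function names are assumptions for illustration only.

```python
import numpy as np

def look_at(cam_pos, target):
    """Return yaw and pitch that orient the virtual camera IC, located at the image
    pickup position (3G1 or 3G2), toward a target such as the centroid of a
    low-density portion or previously set inspection coordinates."""
    d = np.asarray(target, float) - np.asarray(cam_pos, float)
    yaw = np.arctan2(d[1], d[0])                    # rotation about the vertical axis
    pitch = np.arctan2(d[2], np.hypot(d[0], d[1]))  # elevation angle
    return yaw, pitch
```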
The display control units 170 and 530 display, on the display units 20 and 520, a display image including first position identification information 3G1 for identifying a first position, second position identification information 3G2 for identifying a second position, and the three-dimensional image 3G, which is determined based on the output of the ranging information acquisition unit 13 located at the first position and the output of the ranging information acquisition unit 13 located at the second position different from the first position.
Therefore, the positional relationship between the first and second image pickup positions and the specific object can be understood from the three-dimensional image 3G.
< summary of examples >
As described above, the image pickup apparatus 1 according to the embodiment of the present invention includes the image pickup unit 11 for capturing an image of an object, the projection unit 12 for projecting light onto the object, the ranging information acquisition unit 13 (an example of a light receiving unit) for receiving light reflected from the object, the determination unit 160 for determining whether or not a highly reflective object is present based on the output of the ranging information acquisition unit 13 and the output of the image pickup unit 11; and a display control unit 170 for causing the display unit 20 or 520 to perform different display depending on the presence or absence of the highly reflective object.
This allows the photographer to accurately find a highly reflective object (e.g., a mirror) included in the photographed image, thereby distinguishing the object from an adjacent object or the influence of external light.
The image pickup apparatus 1 includes the display unit 20. This allows the photographer to find that a highly reflective object is included in the captured image.
The display control unit 170 causes the display unit 20 or 520 to perform different display on the position of the display unit 20 or 520 corresponding to the position of the highly reflective object. This allows the photographer to recognize the position of the highly reflective object.
The display unit 20 includes a plurality of display units 20a and 20A, and the display control unit 170 causes the display unit closer to the highly reflective object among the plurality of display units 20a and 20A to perform different displays depending on the presence or absence of the object. This allows the photographer to recognize the position of the highly reflective object.
The display control unit 170 displays the image information G captured by the image pickup unit 11 on the display unit 20 or 520, and displays a display image including identification information for identifying a highly reflective object and the image information G on the display unit 20 or 520. This allows the photographer to recognize the position of the highly reflective object.
The determination unit 160 determines that a highly reflective object is present when the charge storage amount of a pixel due to the light received by the ranging information acquisition unit 13 is saturated (an example of the charge storage amount being greater than or equal to a predetermined value) and the image information captured by the image pickup unit 11 matches model image information (an example of reference information indicating a highly reflective object).
This allows the photographer to accurately determine that a highly reflective object appears in the photographed image, thereby distinguishing the object from the influence of an adjacent object or external light.
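The following Python sketch is a non-limiting illustration of this determination; the saturation level, matching threshold, and similarity measure are assumptions introduced for illustration and are not specified by the embodiment.

```python
import numpy as np

def similarity(a, b):
    """Placeholder normalized-correlation score between two equally sized patches."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def is_highly_reflective(charge, rgb_patch, model_patches,
                         saturation_level=4000, match_thresh=0.8):
    """A region is treated as a highly reflective object when the charge storage
    amount is saturated AND the captured image information matches model image
    information (the reference information indicating a highly reflective object)."""
    if charge < saturation_level:
        return False
    return any(similarity(rgb_patch, m) >= match_thresh for m in model_patches)
```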
The image pickup apparatus 1 acquires ranging information with respect to an object based on the light received by the ranging information acquisition unit 13. In this case, the photographer can determine that the reason why the required ranging information cannot be acquired is not a nearby object or external light, but a highly reflective object.
The image pickup apparatus 1 includes a transmitting and receiving unit 180 that outputs three-dimensional information determined based on the ranging information acquired from the ranging information acquisition unit 13 as an example of an output unit. In this case, the photographer can determine that the reason why the required three-dimensional information is not obtained is a highly reflective object, not a nearby object or external light.
The image processing method according to the embodiment of the invention comprises the following steps: an image pickup step of capturing an image of an object by the image pickup unit 11; a projection step of projecting light to the object by the projection unit 12; a light receiving step of receiving light reflected from the object by the ranging information acquisition unit 13; a determining step of determining whether or not a highly reflective object is present by the determining unit 160, 560 or 660 based on the output of the ranging information acquiring unit 13 and the output of the image pickup unit 11; and a display step of causing the display control units 170 and 530 to perform different displays depending on the presence or absence of the highly reflective object.
The image pickup apparatus 1 and the display apparatus 500 are provided, as examples of the information processing apparatus according to the embodiment of the present invention, with display control units 170 and 530 that cause the display units 20 and 520 to perform different displays depending on the presence or absence of the highly reflective object, based on the determination results of the determination units 160, 560, and 660 based on the output of the image pickup unit 11 (capturing an image of the object) and the output of the ranging information acquisition unit 13 (projecting light onto the object and receiving light reflected from the object).
The display apparatus 500 is an example of the information processing apparatus according to the embodiment of the present invention, and includes the transmitting and receiving unit 510, as an example of the receiving unit, which receives the determination result from the determination unit 160 of the image pickup apparatus 1 or from the determination unit 660 of the server 600, the determination result indicating whether or not a specific object is present based on both the output of the image pickup unit 11 capturing an image of the object and the output of the ranging information acquisition unit 13 projecting light onto the object and receiving light reflected from the object; and the display control unit 530, which causes the display unit 520 to perform different displays depending on the presence or absence of the specific object based on the determination result received by the transmitting and receiving unit 510. The specific object may be a nearby object, a highly reflective object, a distant object, a low reflection object, a blind spot, or an image blur area.
The display apparatus 500 is an example of the information processing apparatus according to the embodiment of the present invention, and includes the transmitting and receiving unit 510, as an example of the receiving unit, which receives the output of the image pickup unit 11 capturing an image of the object and the output of the ranging information acquisition unit 13 receiving light projected onto and reflected from the object; the determination unit 560 for determining whether a specific object exists based on both the output of the ranging information acquisition unit 13 and the output of the image pickup unit 11 received by the transmitting and receiving unit 510; and the display control unit 530 for causing the display unit 520 to perform different displays depending on the presence or absence of the specific object based on the determination result of the determination unit 560. The specific object may be a nearby object, a highly reflective object, a distant object, a low reflection object, a blind spot, or an image blur area.
The image pickup apparatus 1 and the display apparatus 500, as examples of the information processing apparatus according to the embodiment of the present invention, include display control units 170 and 530 for displaying a display image including identification information 3Ga, 3Gb, or 3Gc on display units 20 and 520 for identifying a specific object based on the determination results of determination units 160 and 560, determination units 160 and 560 for determining whether the specific object exists or not according to both the output of the image pickup unit 11 for capturing an image of the object and the output of the ranging information acquisition unit 13 for receiving light projected to and reflected from the object, and include a three-dimensional image 3G. The specific object may be not only a distant object, a low reflection object or a blind spot, but also a nearby object, a high reflection object or an image blur area.
The three-dimensional image 3G is determined by the three-dimensional reconstruction processing unit 150, 550, or 650 (which is an example of a three-dimensional information determining unit) based on the output of the ranging information acquiring unit 13.
This allows to determine which of the distant object, the low reflection object, the blind spot, the adjacent object, the high reflection object, and the image blur is the cause of failing to display the desired three-dimensional image 3G by viewing the three-dimensional image 3G.
The image pickup apparatus 1 and the display apparatus 500, as examples of the information processing apparatus according to the embodiment of the present invention, include the display control units 170 and 530 for displaying a display image on the display units 20 and 520, the display image including position identification information for identifying a position based on position information indicating a position for which the determination units 160 and 560 determine that the output of the ranging information acquisition unit 13, which receives light projected onto and reflected from the object, is greater than or equal to the threshold value or less than or equal to the threshold value, and including the two-dimensional image information G captured by the image pickup unit 11 capturing an image of the object.
Therefore, by identifying from the two-dimensional image G a position where the output of the ranging information acquisition unit 13 is greater than or equal to the threshold value or less than or equal to the threshold value, that is, a position where the output of the ranging information acquisition unit 13 is too strong or too weak to acquire the desired output, the cause of the failure to acquire the desired output can be determined.
The image pickup apparatus 1 and the display apparatus 500, as examples of the information processing apparatus according to the embodiment of the present invention, include the display control units 170 and 530 for displaying, on the display units 20 and 520, a display image including position identification information for identifying a position based on position information indicating a position for which the determination units 160 and 560 determine that ranging information with respect to the object cannot be acquired based on the output of the ranging information acquisition unit 13, which receives light projected onto and reflected from the object, and including the two-dimensional image information G captured by the image pickup unit 11 capturing an image of the object.
Therefore, by identifying a position from the two-dimensional image G where the ranging information with respect to the object cannot be acquired, it is possible to find a cause of the failure to acquire the ranging information with respect to the object.
The determination unit 160, 560, or 660 determines that ranging information with respect to an object cannot be acquired not only when the output of the ranging information acquisition unit 13 is greater than or equal to a threshold value or less than or equal to a threshold value, but also when an image blur is detected from the output of the ranging information acquisition unit 13.
As described above, when the information processing apparatus is the image pickup apparatus 1, the image pickup apparatus 1 includes the image pickup unit 11, the ranging information acquisition unit 13, the three-dimensional reconstruction processing unit 150, and the determination unit 160 as shown in fig. 19.
When the information processing apparatus is the display apparatus 500, the display apparatus 500 does not include the image pickup unit 11 and the ranging information acquisition unit 13; as depicted in fig. 20 and 21, the image pickup apparatus 1 includes the image pickup unit 11 and the ranging information acquisition unit 13, and transmits the outputs of these units to the display apparatus 500 or the server 600.
As depicted in fig. 20, the display apparatus 500 may or may not need to include the determination unit 560.
When the display apparatus 500 does not include the determination unit 560, the image pickup apparatus 1 may include the determination unit 160 to transmit the determination result to the display apparatus 500, or the server 600 may include the determination unit 660 depicted in fig. 21 to transmit the determination result to the display apparatus 500.
Similarly, the display device 500 may include the three-dimensional reconstruction processing unit 550 depicted in fig. 20, or need not include the three-dimensional reconstruction processing unit 550.
When the display apparatus 500 does not include the three-dimensional reconstruction processing unit 550, the image pickup apparatus 1 may include the three-dimensional reconstruction processing unit 150 and transmit a three-dimensional image to the display apparatus 500, or the server 600 may include the three-dimensional reconstruction processing unit 650 and transmit a three-dimensional image to the display apparatus 500, as illustrated in fig. 21.
As described above, the image pickup apparatus 1 according to the embodiment of the present invention includes the image pickup unit 11 for capturing an image of an object, the projection unit 12 for projecting light onto the object, the ranging information acquisition unit 13 (an example of a light receiving unit) for receiving light reflected from the object, the determination unit 160 for determining whether a distant object or a low reflection object exists according to both the output of the ranging information acquisition unit 13 and the output of the image pickup unit 11; and a display control unit 170 for causing the display unit 20 or 520 to perform different displays depending on the presence or absence of a distant object or a low reflection object.
This allows the photographer to accurately determine that a distant object or a low reflection object (e.g., a black object) appears in the photographed image.
The image pickup apparatus 1 includes a display unit 20. This ensures that the photographer will find out that distant objects or low reflection objects appear in the photographed image.
The display control unit 170 causes the display unit 20 or 520 to perform different displays at the position of the display unit 20 or 520 corresponding to the position of the distant object or the low reflection object. This allows the photographer to identify the location of a distant object or a low reflection object.
The display unit 20 includes a plurality of display units 20a and 20A, and the display control unit 170 causes the display unit closer to the distant object or the low reflection object among the plurality of display units 20a and 20A to perform different displays depending on the presence or absence of the object. This allows the photographer to identify the position of the distant object or the low reflection object.
The display control unit 170 displays the image information G captured by the image pickup unit 11 on the display unit 20 or 520, and displays a display image including identification information for identifying a distant object or a low-reflection object and the image information G on the display unit 20 or 520.
When the amount of charge stored for a pixel by the light received by the ranging information acquisition unit 13 is less than or equal to a threshold value, the determination unit 160 determines, based on the output of the image pickup unit 11, whether a low reflection object or a distant object is present. This allows the photographer to accurately find out that a low reflection object or a distant object appears in the photographed image.
When the amount of charge stored for a pixel by the light received by the ranging information acquisition unit 13 is less than or equal to the threshold value and the amount of charge stored for the corresponding pixel of the image pickup unit 11 is also less than or equal to a threshold value, the determination unit 160 determines that a low reflection object is present. This allows the photographer to accurately find out that a low reflection object appears in the photographed image.
When the amount of charge stored for a pixel by the light received by the ranging information acquisition unit 13 is less than or equal to the threshold value, the amount of charge stored for the corresponding pixel of the image pickup unit 11 is greater than or equal to a threshold value, and the distance determined based on the pixel is greater than or equal to a threshold value, the determination unit 160 determines that a distant object is present.
This allows the photographer to accurately find out that a distant object appears in the photographed image.
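For illustration only, the determination described in the preceding paragraphs can be sketched in Python as follows. The function name, the normalized inputs, and the threshold values are assumptions made for this sketch, not values taken from the embodiment.

```python
def classify_pixel(tof_charge, rgb_value, distance,
                   charge_th=0.1, rgb_th=0.1, distance_th=10.0):
    """Classify a pixel as 'low_reflection', 'distant', or 'ok'.

    tof_charge: charge stored for the pixel of the TOF sensor (normalized 0..1)
    rgb_value:  charge stored for the corresponding pixel of the image sensor (0..1)
    distance:   distance measured for the pixel, in meters
    All thresholds are hypothetical parameters.
    """
    if tof_charge > charge_th:
        return "ok"              # enough reflected light was received
    if rgb_value <= rgb_th:
        return "low_reflection"  # dark in the camera image as well
    if distance >= distance_th:
        return "distant"         # bright in the camera image but far away
    return "ok"                  # neither condition applies
```

In practice the three thresholds would have to be tuned to the sensors actually used.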
The image pickup apparatus 1 acquires ranging information with respect to an object based on the light received by the ranging information acquisition unit 13. In this case, the photographer can recognize that the reason why the required ranging information cannot be acquired is a distant object or a low reflection object.
The image pickup apparatus 1 includes, as an example of an output unit, a transmitting and receiving unit 180 that outputs three-dimensional information determined based on the ranging information acquired from the ranging information acquisition unit 13. In this case, the photographer can recognize that the reason why the required three-dimensional information cannot be acquired is a distant object or a low-reflection object.
The image processing method according to an embodiment of the present invention includes: an image pickup step of capturing an image of an object by the image pickup unit 11; a projection step of projecting light onto the object by the projection unit 12; a light receiving step of receiving light reflected from the object by the ranging information acquisition unit 13; a determining step of determining whether a distant object or a low reflection object exists by the determination unit 160, 560, or 660 based on both the output of the ranging information acquisition unit 13 and the output of the image pickup unit 11; and a display step of causing the display unit 20 or 520 to perform different displays by the display control unit 170 or 530 depending on the presence or absence of a distant object or a low reflection object.
The image pickup apparatus 1 and the display apparatus 500, as examples of the information processing apparatus according to the embodiment of the present invention, include display control units 170 and 530 that cause the display units 20 and 520 to perform different displays depending on the presence or absence of a distant object or a low reflection object, based on the determination results of the determination units 160, 560, and 660, which determine the presence or absence of such an object based on both the output of the image pickup unit 11 that captures an image of the object and the output of the ranging information acquisition unit 13 that projects light onto the object and receives the light reflected from it.
As described above, the image pickup apparatus 1 according to the embodiment of the present invention includes the image pickup unit 11 for capturing an image of an object, the projection unit 12 for projecting light onto the object, the ranging information acquisition unit 13 (an example of a light receiving unit) for receiving light reflected from the object, the determination unit 160 for determining whether or not image blurring occurs based on both the output of the ranging information acquisition unit 13 and the output of the image pickup unit 11, and the display control unit 170 for causing the display unit 20 or 520 to perform different displays depending on whether or not image blurring occurs.
This allows the photographer to accurately find that image blur appears in the photographed image.
The image pickup apparatus 1 includes a display unit 20. This allows the photographer to find that image blur appears in the photographed image.
The display control unit 170 causes the display unit 20 or 520 to perform different display at the position of the display unit 20 or 520 corresponding to the position of the image blur. This allows the photographer to recognize the position of the image blur.
The display unit 20 includes a plurality of display units 20A and 20a, and the display control unit 170 causes the display unit, of the plurality of display units 20A and 20a, that is near the image blur position to perform different displays depending on the presence or absence of image blur. This allows the photographer to recognize the position of the image blur.
The display control unit 170 displays the image information G captured by the image pickup unit 11 on the display unit 20 or 520, and displays a display image including identification information for identifying image blur and the image information G on the display unit 20 or 520. This allows the photographer to recognize the position of the image blur.
The determination unit 160 determines that image blur has occurred when an edge of the image is detected based on the image information captured by the image pickup unit 11 and a pixel shift is detected with respect to the light received by the ranging information acquisition unit 13.
This allows the photographer to accurately find that image blur appears in the photographed image.
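A minimal sketch of such a blur check is shown below, assuming that a grayscale image from the image pickup unit and two TOF frames covering the same pixels are available as NumPy arrays, and reading the "pixel shift" as a frame-to-frame displacement estimated by phase correlation, which is one possible interpretation; the function and threshold names are illustrative, not the embodiment's actual processing.

```python
import numpy as np

def detect_blur(rgb_gray, tof_frame_a, tof_frame_b,
                edge_th=0.2, shift_th=1.0):
    """Return True if image blur is likely: an edge is present in the RGB
    image and the two TOF frames captured for the same pixels are shifted.

    rgb_gray, tof_frame_a, tof_frame_b: 2-D arrays of the same shape.
    edge_th and shift_th are hypothetical parameters.
    """
    # Edge strength in the RGB image (simple gradient magnitude).
    gy, gx = np.gradient(rgb_gray.astype(float))
    has_edge = np.hypot(gx, gy).max() > edge_th

    # Estimate the shift between the two TOF frames by phase correlation.
    f = np.fft.fft2(tof_frame_a) * np.conj(np.fft.fft2(tof_frame_b))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy
    dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx
    has_shift = np.hypot(dx, dy) >= shift_th

    return has_edge and has_shift
```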
The image pickup apparatus 1 acquires ranging information with respect to an object based on the light received by the ranging information acquisition unit 13. In this case, the photographer can understand that the reason why the required ranging information is not obtained is image blur.
The image pickup apparatus 1 includes, as an example of an output unit, a transmitting and receiving unit 180 that outputs three-dimensional information determined based on the ranging information acquired from the ranging information acquisition unit 13. In this case, the photographer can recognize that the reason why the required three-dimensional information cannot be acquired is image blur.
The image processing method according to an embodiment of the present invention includes: an image pickup step of capturing an image of the object by the image pickup unit 11; a projection step of projecting light to the object by the projection unit 12; a light receiving step of receiving light reflected from the object by the ranging information acquisition unit 13; a determining step of determining whether or not image blurring occurs by the determining unit 160, 560, or 660 based on both the output of the ranging information acquiring unit 13 and the output of the image pickup unit 11; and a display step of causing the display unit 20 or 520 to perform different display by the display control unit 170 or 530 depending on whether or not image blurring occurs.
The image pickup apparatus 1 and the display apparatus 500, as examples of the information processing apparatus according to the embodiment of the present invention, include display control units 170 and 530 for causing the display units 20 and 520 to perform different displays depending on the presence or absence of image blur, based on the determination results of the determination units 160, 560, and 660, which determine whether image blur occurs based on both the output of the image pickup unit 11 for capturing an image of an object and the output of the ranging information acquisition unit 13 for receiving light that has been projected onto and reflected from the object.
The image pickup apparatus 1 and the display apparatus 500, as examples of the information processing apparatus according to the embodiment of the present invention, include display control units 170 and 530 for displaying, on the display units 20 and 520, a three-dimensional image 3G determined based on the output of the ranging information acquisition unit 13, which is an example of a light receiving unit that receives light projected onto and reflected from an object. The display control units 170 and 530 display, on the display units 20 and 520, a display image including (i) position identification information 3Ga, 3Gb, or 3Gc for identifying, in the three-dimensional image 3G, the position of at least one of a distant object, a low reflection object, or a blind spot, the position being determined as the position of at least one of an object far from the ranging information acquisition unit 13 when the ranging information acquisition unit 13 receives the light reflected from the object, an object having a low reflectance with respect to the projected light, or a blind spot with respect to the ranging information acquisition unit 13 when the ranging information acquisition unit 13 receives the light reflected from the object; and (ii) the three-dimensional image 3G.
Therefore, since the reason why the desired three-dimensional image 3G cannot be acquired can be recognized as a distant object, a low-reflection object, or a blind spot by viewing the three-dimensional image 3G, necessary measures (e.g., re-shooting an image) can be taken depending on the reason.
The three-dimensional image 3G is determined by the three-dimensional reconstruction processing units 150, 550, and 650, and the three-dimensional reconstruction processing units 150, 550, and 650 are examples of the three-dimensional information determining unit.
The display control unit 170 or 530 may display, on the display unit 20 or 520, a display image that includes the three-dimensional image 3G and any one of the position identification information 3Ga, 3Gb, and 3Gc, based on the position information of the corresponding one of a distant object, a low reflection object, and a blind spot, or may display, on the display unit 20 or 520, a display image that includes the three-dimensional image 3G and any two or all of the position identification information 3Ga, 3Gb, and 3Gc, based on the position information of the corresponding two or all of a distant object, a low reflection object, and a blind spot.
When the information processing apparatus is the image pickup apparatus 1, the image pickup apparatus 1 includes the ranging information acquisition unit 13 and the three-dimensional reconstruction processing unit 150, as depicted in fig. 19.
When the information processing apparatus is the display apparatus 500, as shown in fig. 20 and 21, the display apparatus 500 does not include the ranging information acquisition unit 13; instead, the image pickup apparatus 1 includes the ranging information acquisition unit 13 and transmits the output of the ranging information acquisition unit 13 to the display apparatus 500 or the server 600.
The display device 500 may include the three-dimensional reconstruction processing unit 550 or need not include the three-dimensional reconstruction processing unit 550. When the display apparatus 500 does not include the three-dimensional reconstruction processing unit 550, the image pickup apparatus 1 may include the three-dimensional reconstruction processing unit 150 and transmit a three-dimensional image to the display apparatus 500, or, as depicted in fig. 21, the server 600 may include the three-dimensional reconstruction processing unit 650 and transmit a three-dimensional image to the display apparatus 500.
The display control unit 170 or 530 displays, on the display unit 20 or 520, a display image including (i) position identification information 3Ga, 3Gb, or 3Gc based on position information indicating a position where the density of point cloud data included in the three-dimensional image 3G is lower than a threshold value and which is determined to correspond to at least one of a distant object, a low-reflection object, or a blind spot, and (ii) the three-dimensional image 3G.
Therefore, the reason why the density of the point cloud data is lower than the threshold value can be recognized as a distant object, a low reflection object, or a blind spot by viewing the three-dimensional image 3G.
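As an illustration, a region whose point cloud density falls below a threshold could be found by counting points per voxel, as in the sketch below; the voxel size and minimum point count are hypothetical parameters, not values from the embodiment.

```python
import numpy as np

def low_density_voxels(points, voxel_size=0.1, min_points=20):
    """Return voxel indices whose point count falls below min_points.

    points: (N, 3) array of x, y, z coordinates from the three-dimensional image.
    voxel_size and min_points are hypothetical parameters.
    """
    idx = np.floor(points / voxel_size).astype(int)   # voxel index per point
    voxels, counts = np.unique(idx, axis=0, return_counts=True)
    return voxels[counts < min_points]                # sparsely occupied voxels
```

Strictly empty voxels inside the scanned volume would also need attention in practice; this sketch only flags sparsely occupied ones.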
The display control unit 170 or 530 displays, on the display unit 20 or 520, a display image including (i) position identification information 3Ga, 3Gb, or 3Gc based on position information indicating the position of at least one of a distant object, a low-reflection object, or a blind spot, the position being determined based on the output of the image pickup unit 11 that captures an image of the object, and (ii) the three-dimensional image 3G.
Therefore, the reason why the required three-dimensional image 3G is not displayed can be accurately recognized as a distant object, a low-reflection object, or a blind spot based on the output of the image pickup unit 11.
When the information processing apparatus is the image pickup apparatus 1, the image pickup apparatus 1 includes the image pickup unit 11, as depicted in fig. 19. When the information processing apparatus is the display apparatus 500, as depicted in fig. 20 and 21, the display apparatus 500 does not include the image pickup unit 11; instead, the image pickup apparatus 1 includes the image pickup unit 11, and the output of the image pickup unit 11 is transmitted to the display apparatus 500 or the server 600.
The image pickup apparatus 1 and the display apparatus 500 include determination units 160 and 560 configured to determine the position of at least one of a distant object, a low-reflection object, or a blind spot in the three-dimensional image 3G, and the display control units 170 and 530 display, on the display units 20 and 520, a display image including (i) position identification information 3Ga, 3Gb, or 3Gc based on the determination results of the determination units 160 and 560, and (ii) the three-dimensional image 3G.
When the information processing apparatus is the image pickup apparatus 1, the image pickup apparatus 1 includes a determination unit 160, as depicted in fig. 19.
When the information processing apparatus is the display apparatus 500, the display apparatus 500 may include the determination unit 560, as shown in fig. 20, or need not include the determination unit 560.
When the display apparatus 500 does not include the determination unit 560, the image pickup apparatus 1 may include the determination unit 160 and transmit the determination result to the display apparatus 500, or the server 600 may include the determination unit 660 as depicted in fig. 21 and transmit the determination result to the display apparatus 500.
The display control unit 170 or 530 changes the position and orientation of the virtual camera IC at the viewpoint position from which the three-dimensional image 3G is viewed, thereby changing the display area of the three-dimensional image 3G to be displayed on the display unit 20 or 520.
The image pickup apparatus 1 and the display apparatus 500, as examples of an information processing apparatus according to an embodiment of the present invention, include display control units 170 and 530 for displaying, on the display units 20 and 520, a three-dimensional image 3G determined based on the output of the ranging information acquisition unit 13, which is an example of a light receiving unit that receives light projected onto and reflected from an object. The display control units 170 and 530 display, on the display units 20 and 520, a display image including (i) position identification information 3G1 and 3G2 for identifying the positions of the ranging information acquisition unit 13 acquired when the ranging information acquisition unit 13 receives the light reflected from the object, based on the position information of the ranging information acquisition unit 13 acquired at that time; and (ii) the three-dimensional image 3G.
Therefore, the positional relationship between the image pickup position, which indicates the position of the ranging information acquisition unit 13 acquired when the ranging information acquisition unit 13 receives the light reflected from the object, and a specific object can be understood from the three-dimensional image 3G. That is, the positional relationship between the image pickup position and the specific object at the place where the three-dimensional image was captured can be easily compared with the positional relationship between the image pickup position and the specific object in the three-dimensional image.
The three-dimensional image 3G and the position information are determined by the three-dimensional reconstruction processing units 150, 550, and 650, and the three-dimensional reconstruction processing units 150, 550, and 650 are examples of the three-dimensional information determining unit.
When the information processing apparatus is the image pickup apparatus 1, the image pickup apparatus 1 includes a ranging information acquisition unit 13 and a three-dimensional reconstruction processing unit 150.
When the information processing apparatus is the display apparatus 500, the display apparatus 500 does not include the ranging information acquisition unit 13; instead, the image pickup apparatus 1 includes the ranging information acquisition unit 13 and transmits the output of the ranging information acquisition unit 13 to the display apparatus 500 or the server 600.
The display apparatus 500 may include the three-dimensional reconstruction processing unit 550 or need not include the three-dimensional reconstruction processing unit 550. When the display apparatus 500 does not include the three-dimensional reconstruction processing unit 550, the image pickup apparatus 1 may include the three-dimensional reconstruction processing unit 150 and transmit the three-dimensional image and the position information to the display apparatus 500, or the server 600 may include the three-dimensional reconstruction processing unit 650 and transmit the three-dimensional image and the position information to the display apparatus 500.
The display control unit 170 or 530 displays a display image on the display unit 20 or 520, the display image including (i) identification information 3Ga, 3Gb, or 3Gc, which is an example of low-density identification information for identifying a region based on region information indicating a region where the density of the point cloud with respect to the three-dimensional image 3G is lower than a threshold value, and (ii) the three-dimensional image 3G.
In this case, since the positional relationship between the image pickup position and an area where the density of the point cloud data is lower than the threshold value can be understood, the cause of the density being lower than the threshold value can be determined. For example, when the area is far from the image pickup position, it can be found that a distant object is the cause; when the area is in a blind spot with respect to the image pickup position, it can be found that the blind spot is the cause; and when the area is neither distant nor in a blind spot, it can be found that a low reflection object is the cause.
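The reasoning in the preceding paragraph can be written as a simple classifier, sketched below under the assumption that an occlusion test (for example, ray casting against the rest of the point cloud) is supplied separately; the distance threshold is a hypothetical parameter.

```python
import numpy as np

def classify_low_density_cause(region_center, pickup_position, occluded,
                               distance_th=10.0):
    """Guess why the point cloud density of a region is below the threshold.

    region_center, pickup_position: 3-element (x, y, z) coordinates.
    occluded: True if the region is hidden from the image pickup position,
              as determined by a separate visibility test.
    distance_th is a hypothetical parameter.
    """
    offset = np.asarray(region_center, float) - np.asarray(pickup_position, float)
    if np.linalg.norm(offset) >= distance_th:
        return "distant object"
    if occluded:
        return "blind spot"
    return "low reflection object"
```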
The display control unit 170 or 530 changes the position and orientation of the virtual camera IC at the viewpoint position at which the three-dimensional image 3G is viewed, thereby changing the display area of the three-dimensional image 3G to be displayed on the display unit 20 or 520.
When the position of the virtual camera IC is at the position 3G1 or 3G2 identified by the position identification information, the display control unit 170 or 530 changes the orientation of the virtual camera IC to a predetermined orientation.
In this way, the line of sight of the user at the image pickup position can be directed toward a specific object to be viewed in the field.
The display control unit 170 or 530 changes the orientation of the virtual camera IC such that the display area includes a predetermined coordinate or a low-density portion in which the density of the point cloud data with respect to the three-dimensional image 3G is lower than a threshold value.
In this way, the line of sight of the user at the image pickup position can be directed to a predetermined coordinate or a specific object corresponding to the low-density portion in the three-dimensional image 3G.
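A minimal sketch of the look-at computation implied here is given below, assuming the virtual camera is placed at an image pickup position and aimed at a predetermined coordinate or at the centroid of a low-density portion; the numeric coordinates and the camera-axis convention are illustrative assumptions, not the embodiment's rendering pipeline.

```python
import numpy as np

def look_at(camera_position, target, up=(0.0, 0.0, 1.0)):
    """Return a view direction and a 3x3 rotation that orients a virtual
    camera at camera_position toward target (e.g., a predetermined coordinate
    or the centroid of a low-density portion of the point cloud).
    Assumes the view direction is not parallel to the up vector.
    """
    camera_position = np.asarray(camera_position, dtype=float)
    target = np.asarray(target, dtype=float)
    forward = target - camera_position
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    # Rows are the camera's right, up, and -forward axes.
    return forward, np.stack([right, true_up, -forward])

# Example: place the virtual camera at an image pickup position and aim it
# at the centroid of a low-density region (illustrative coordinates).
forward, rotation = look_at(camera_position=(0.0, 0.0, 1.5),
                            target=(3.0, 2.0, 1.0))
```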
The display control unit 170 or 530 displays a three-dimensional image 3G on the display unit 20 or 520, the three-dimensional image 3G being determined based on the output of the ranging information acquisition unit 13 located at a first position and the output of the ranging information acquisition unit 13 located at a second position different from the first position; and the display control unit 170 or 530 displays a display image including (i) first position identification information 3G1 for identifying the first position and second position identification information 3G2 for identifying the second position, and (ii) a three-dimensional image 3G on the display unit 20 or 520.
Therefore, the positional relationship between the first and second image pickup positions and the specific object can be understood from the three-dimensional image 3G.
Although the information processing apparatus and the information processing method have been described with reference to the embodiments, the present application is not limited to the embodiments, and various modifications and improvements may be made without departing from the scope of the claimed application.
The present application is based on and claims priority from Japanese patent application No. 2021-048090 filed on March 23, 2021, and Japanese patent application No. 2021-048245 filed on March 23, 2021. Japanese patent application No. 2021-048090 and Japanese patent application No. 2021-048245 are incorporated herein by reference in their entirety.
Description of symbols
1 image pickup apparatus (example of the information processing apparatus)
3G three-dimensional image
3Ga, 3Gb, 3Gc identification information
3G1, 3G2 position identification information
10 housing
11 image pickup unit
11a, 11A image sensor
11b, 11B fish-eye lens
12 projection unit
12a, 12A light source unit
12b, 12B wide-angle lens
13 ranging information acquisition unit (example of the light receiving unit)
13a, 13A TOF sensor
13b, 13B wide-angle lens
14 processing circuit
15 image pickup switch
20 display unit
20A, 20a display unit
111 other image pickup unit
150, 550, 650 three-dimensional reconstruction processing unit (example of the three-dimensional information determination unit)
160, 560, 660 determination unit
170 display control unit (example of the output unit)
180 transmitting and receiving unit (example of the output unit)
300 external device (example of the output destination)
500 display apparatus (example of the output destination and the information processing apparatus)
520 display unit (example of the output destination)
530 display control unit (example of the output unit)
600 server
L synchronization signal line
List of references
Patent literature
[PTL 1] Japanese Unexamined Patent Application Publication No. 2018-077071
[PTL 2] Japanese Patent No. 5423287
[PTL 3] Japanese Patent No. 6192938

Claims (13)

1. An information processing apparatus comprising:
a display control unit configured to display a three-dimensional image on a display unit, the three-dimensional image being determined based on an output of a light receiving unit that receives light projected onto an object and reflected from the object,
wherein,
the display control unit is configured to display a display image on the display unit, the display image including position identification information that identifies a position of the light receiving unit acquired when the light receiving unit receives the light reflected from the object, based on position information indicating the position of the light receiving unit acquired when the light receiving unit receives the light reflected from the object, and including the three-dimensional image.
2. The information processing apparatus according to claim 1, wherein
The display control unit is configured to display a display image on the display unit based on area information indicating an area where a density of point cloud data with respect to the three-dimensional image is lower than a threshold, the display image including low density identification information identifying the area.
3. The information processing apparatus according to claim 1,
wherein,
the display control unit is configured to display a display image including position identification information and the three-dimensional image on the display unit, the position identification information identifying a position of at least one of a distant object, a low-reflection object, or a blind spot based on second position information indicating a position in the three-dimensional image determined to correspond to at least one of an object distant from the light receiving unit when the light receiving unit receives the light reflected from the object, a low-reflection object having a low reflectance with respect to the projected light, or a blind spot with respect to the light receiving unit when the light receiving unit receives the light reflected from the object.
4. The information processing apparatus according to claim 3,
wherein,
the display control unit is configured to display the display image on the display unit based on the second position information indicating a position determined to be lower than a threshold with respect to a density of point cloud data of the three-dimensional image and determined to correspond to at least one of a distant object, a low-reflection object, or a blind spot.
5. The information processing apparatus according to claim 3 or 4,
wherein,
the display control unit is configured to display the display image on the display unit based on the second position information indicating a position corresponding to at least one of a distant object, a low-reflection object, or a blind spot in the three-dimensional image, which is determined based on an output of an image pickup unit that captures an image of the object.
6. The information processing apparatus according to any one of claims 3 to 5, further comprising
a determining unit configured to determine a position corresponding to at least one of a distant object, a low-reflection object, or a blind spot in the three-dimensional image,
wherein,
the display control unit is configured to display the display image on the display unit based on a determination result of the determination unit.
7. The information processing apparatus according to any one of claims 1 to 6,
wherein,
the display control unit is configured to change a display area of the three-dimensional image to be displayed on the display unit by changing a position and an orientation of a virtual camera located at a viewpoint position at which the three-dimensional image is viewed.
8. The information processing apparatus according to claim 7,
wherein,
the display control unit is configured to change an orientation of the virtual camera to a predetermined direction when a position of the virtual camera is located at the position identified by the position identification information.
9. The information processing apparatus according to claim 4,
wherein,
the display control unit is configured to change the orientation of the virtual camera such that predetermined coordinates in the three-dimensional image are included in the display image.
10. The information processing apparatus according to claim 4 or 5,
wherein,
the display control unit is configured to change the orientation of the virtual camera such that a low density portion having a density of point cloud data with respect to the three-dimensional image below a threshold value is included in the display image.
11. The information processing apparatus according to any one of claims 1 to 10,
wherein,
the display control unit is configured to display, on the display unit, a three-dimensional image determined based on an output of the light receiving unit located at a first position and an output of the light receiving unit located at a second position different from the first position, and to display, on the display unit, a display image including first position identification information identifying the first position and second position identification information identifying the second position.
12. An information processing method for displaying a three-dimensional image determined based on an output of a light receiving unit that receives light projected onto an object and reflected from the object on a display unit,
the information processing method comprises the following steps:
identifying a position of a light receiving unit acquired when the light receiving unit receives light reflected from the object based on position information indicating the position of the light receiving unit acquired when the light receiving unit receives light reflected from the object,
displaying a display image on the display unit, the display image including position identification information identifying the position identified in the identifying, and including the three-dimensional image.
13. The information processing method according to claim 12, further comprising
determining a position in the three-dimensional image based on an output of an image pickup unit configured to capture an image of the object, the position corresponding to at least one of an object distant from the light receiving unit when the light receiving unit receives the light reflected from the object, a low reflection object having a low reflectance with respect to the projected light, or a blind spot with respect to the light receiving unit when the light receiving unit receives the light reflected from the object,
wherein,
the displaying includes displaying the display image on the display unit based on second position information indicating the position determined in the determining, the display image including second position identification information identifying the position corresponding to the at least one of the distant object, the low-reflection object, or the blind spot.