WO2022044806A1 - Information processing device and method - Google Patents

Information processing device and method

Info

Publication number
WO2022044806A1
WO2022044806A1 (PCT Application No. PCT/JP2021/029625)
Authority
WO
WIPO (PCT)
Prior art keywords
projection
unit
image
information processing
projected
Prior art date
Application number
PCT/JP2021/029625
Other languages
French (fr)
Japanese (ja)
Inventor
直樹 小林
清登 染谷
Original Assignee
Sony Group Corporation
Priority date
Filing date
Publication date
Application filed by Sony Group Corporation
Publication of WO2022044806A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/74 Projection arrangements for image reproduction, e.g. using eidophor

Definitions

  • The present disclosure relates to an information processing device and method, and more particularly to an information processing device and method capable of suppressing a decrease in the accuracy of projection correction.
  • In conventional techniques, the projector, camera, and screen are modeled and estimated, so if the accuracy of the model estimation is low, the accuracy of the final corrected image may decrease.
  • The present disclosure has been made in view of such a situation, and makes it possible to suppress a decrease in the accuracy of projection correction.
  • An information processing apparatus according to one aspect of the present technology projects a three-dimensional point position, which is the projection position in three-dimensional space corresponding to a pixel of interest of a projection unit that projects an input image, onto the light receiving area of a virtual imaging unit, which is a virtual image capture unit that captures the projected image from a position and orientation serving as the viewpoint of the projected image projected by the projection unit, and derives a correction vector corresponding to the pixel of interest by converting the projection point, which indicates the position in the light receiving area onto which the three-dimensional point position is projected, into the coordinate system of the input image.
  • An information processing method according to one aspect of the present technology is an information processing method for deriving a correction vector corresponding to a pixel of interest by projecting the three-dimensional point position, which is the projection position in three-dimensional space corresponding to the pixel of interest of the projection unit that projects the input image, onto the light receiving area of the virtual imaging unit, which is a virtual image capture unit that captures the projected image from the position and orientation serving as the viewpoint of the projected image projected by the projection unit, and converting the projection point, which indicates the position in the light receiving area onto which the three-dimensional point position is projected, into the coordinate system of the input image.
  • That is, in one aspect of the present technology, the three-dimensional point position, which is the projection position in three-dimensional space corresponding to the pixel of interest of the projection unit that projects the input image, is projected onto the light receiving area of the virtual imaging unit, which is a virtual image capture unit that captures the projected image from the position and orientation serving as the viewpoint of the projected image projected by the projection unit, and the correction vector corresponding to the pixel of interest is derived by converting the projection point, which indicates the position in the light receiving area onto which the three-dimensional point position is projected, into the coordinate system of the input image.
  • In conventional, general projection imaging systems (for example, Flexible Display Technologies), a camera had to be placed at a position facing the screen, and the projector had to be fixedly installed.
  • In addition, when adjusting the size, position, inclination, and the like of the image after projection correction, the operator must correct each of the positions of the four corners of the projected image as shown in A of FIG. 1. Since keystone adjustment is also performed visually at the same time, not only is complicated work required, but the correction accuracy may also depend on the skill of the operator.
  • Further, in methods in which the projector, camera, and screen are modeled and estimated, the correction accuracy may depend on the accuracy of the model. Therefore, if the accuracy of the model estimation is low, the accuracy of the final corrected image may be reduced.
  • Therefore, in the present technology, the three-dimensional point position, which is the projection position in three-dimensional space corresponding to the pixel of interest of the projection unit that projects the input image, is projected onto the light receiving area of a virtual imaging unit, which is a virtual image capture unit that captures the projected image from a position and orientation serving as the viewpoint of the projected image projected by the projection unit, and the projection point, which indicates the position in the light receiving area onto which the three-dimensional point position is projected, is converted into the coordinate system of the input image to derive the correction vector corresponding to the pixel of interest.
  • By doing so, projection correction becomes possible without modeling the projection unit; that is, projection correction can be performed with an accuracy that does not depend on the accuracy of a model, so a decrease in the accuracy of projection correction can be suppressed. Further, as shown in B of FIG. 1, adjustment can be performed only by zoom, shift, and roll while keeping the corrected image, so the user does not need to consider or adjust keystone correction.
  • FIG. 2 is a block diagram showing a main configuration example of a projection imaging system which is an embodiment of an information processing system to which the present technology is applied.
  • The projection imaging system 100 includes a portable terminal device 101, a projector 102-1, and a projector 102-2, and is a system that projects an image on a screen 120 and captures images of the screen 120.
  • the portable terminal device 101, the projector 102-1, and the projector 102-2 are connected to each other so as to be able to communicate with each other via the communication path 110.
  • the communication path 110 is arbitrary and may be wired or wireless.
  • the portable terminal device 101, the projector 102-1, and the projector 102-2 can send and receive control signals, image data, and the like via the communication path 110.
  • the portable terminal device 101 is a user-portable device such as a smartphone, a tablet terminal, a notebook personal computer, or the like.
  • the portable terminal device 101 has a communication function, an information processing function, and an image pickup function.
  • the portable terminal device 101 can control the image projection by the projector 102-1 and the projector 102-2.
  • the portable terminal device 101 can perform projection correction of the projector 102-1 and the projector 102-2.
  • the portable terminal device 101 can capture a projected image projected on the screen 120 by the projector 102-1 or the projector 102-2.
  • Projector 102-1 and projector 102-2 are projection devices that project images.
  • the projector 102-1 and the projector 102-2 are similar devices to each other.
  • Hereinafter, when it is not necessary to distinguish the projector 102-1 and the projector 102-2 from each other, they are referred to as the projector 102.
  • the projector 102 can project an input image onto the screen 120 under the control of the portable terminal device 101.
  • Projector 102-1 and projector 102-2 can project images in cooperation with each other.
  • the projector 102-1 and the projector 102-2 can project an image at the same position with each other to realize high brightness of the projected image.
  • Further, the projector 102-1 and the projector 102-2 can project images so that their projected images are arranged side by side and the two projected images form one image, thereby realizing an enlarged (higher-resolution) projected image.
  • Further, the projector 102-1 and the projector 102-2 can project images so that their projected images partially overlap each other, or so that one projected image contains the other projected image.
  • By such cooperative projection, the projector 102-1 and the projector 102-2 can realize not only higher brightness and a larger screen but also, for example, a higher dynamic range and a higher frame rate of the projected image.
  • the projector 102 can geometrically correct the projected image under the control of the portable terminal device 101 so that the projected image is superimposed at the correct position.
  • the projector 102-1 projects the input image in the pixel region 121-1 with geometric correction like the corrected image 122-1.
  • the projector 102-2 projects the input image in the pixel region 121-2 with geometric correction like the corrected image 122-2.
  • That is, the image of the pixel area 121-1 is projected by the projector 102-1 like the projected image 123-1, and the image of the pixel area 121-2 is projected by the projector 102-2 like the projected image 123-2. In the portion where the projected image 123-1 and the projected image 123-2 overlap, the corrected image 122-1 and the corrected image 122-2 are projected at the same position without distortion (in a rectangular shape), as in the projected image 124.
  • the screen 120 is, for example, a flat screen in which the projection surface is formed by a flat surface.
  • the portable terminal device 101 can perform projection correction of the projector 102 three-dimensionally.
  • the projection imaging system 100 is composed of one portable terminal device 101 and two projectors 102, but the number of each device is arbitrary and is not limited to this example.
  • the projection imaging system 100 may have a plurality of portable terminal devices 101, or may have three or more projectors 102.
  • the portable terminal device 101 may be integrally configured with any of the projectors 102.
  • FIG. 3 is a diagram showing a main configuration example of a portable terminal device 101, which is an embodiment of an information processing device to which the present technology is applied.
  • the portable terminal device 101 includes an information processing unit 151, an image pickup unit 152, an input unit 161, an output unit 162, a storage unit 163, a communication unit 164, and a drive 165.
  • The information processing unit 151 is a computer that has, for example, a CPU (Central Processing Unit), a ROM (Read Only Memory), and a RAM (Random Access Memory), and that can realize various functions by executing application programs (software) using them. For example, the information processing unit 151 can install and execute an application program (software) that performs processing related to projection correction.
  • the computer includes a computer embedded in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
  • the image pickup unit 152 has an optical system, an image sensor, and the like, and can capture an image of a subject to generate an image.
  • the image pickup unit 152 can supply the generated captured image to the information processing unit 151.
  • the input unit 161 has, for example, input devices such as a keyboard, a mouse, a microphone, a touch panel, and an input terminal, and can supply information input via those input devices to the information processing unit 151.
  • The output unit 162 has, for example, output devices such as a display (display unit), a speaker (audio output unit), and an output terminal, and can output information supplied from the information processing unit 151 via those output devices.
  • the storage unit 163 has, for example, a storage medium such as a hard disk, a RAM disk, or a non-volatile memory, and can store the information supplied from the information processing unit 151 in the storage medium.
  • the storage unit 163 can read out the information stored in the storage medium and supply it to the information processing unit 151.
  • the communication unit 164 has, for example, a network interface, can receive information transmitted from another device, and can supply the received information to the information processing unit 151.
  • the communication unit 164 can transmit the information supplied from the information processing unit 151 to another device.
  • The drive 165 has an interface for a removable recording medium 171 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, reads information recorded on the removable recording medium 171 mounted on the drive 165, and can supply the read information to the information processing unit 151.
  • the drive 165 can record the information supplied from the information processing unit 151 on the writable removable recording medium 171 attached to the drive 165.
  • the information processing unit 151 loads and executes, for example, the application program stored in the storage unit 163. At that time, the information processing unit 151 can appropriately store data and the like necessary for executing various processes.
  • the application program, data, and the like can be recorded and provided on a removable recording medium 171 as a package media or the like, for example. In that case, the application program, data, and the like are read out by the drive 165 equipped with the removable recording medium 171 and installed in the storage unit 163 via the information processing unit 151.
  • the application program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
  • the application program, data, and the like are received by the communication unit 164 and installed in the storage unit 163 via the information processing unit 151. Further, the application program, data, and the like can be installed in advance in the ROM and the storage unit 163 in the information processing unit 151.
  • FIG. 4 shows a function realized by the information processing unit 151 executing an application program as a functional block.
  • By executing the application program, the information processing unit 151 can have, as functional blocks, a corresponding point detection unit 181, a camera posture estimation unit 182, a screen reconstruction unit 183, a correction vector derivation unit 184, a projection control unit 185, and a projection area adjustment unit 186.
  • The corresponding point detection unit 181 detects corresponding points among the captured images based on the captured images of the projected image projected on the screen 120.
  • the corresponding point detection unit 181 supplies the corresponding point information indicating the detected corresponding point to the camera posture estimation unit 182.
  • The camera posture estimation unit 182 estimates, based on the corresponding point information, the position and posture of the camera corresponding to each captured image (that is, the position and posture at which the portable terminal device 101 (image pickup unit 152) captured the image). Further, the camera posture estimation unit 182 derives the three-dimensional point position, which is the projection position in three-dimensional space corresponding to the pixel of interest of the projector 102, based on the estimated position and posture of the camera. The camera posture estimation unit 182 supplies camera posture information indicating the estimated positions and postures, the three-dimensional point positions, and the like to the screen reconstruction unit 183 together with the corresponding point information.
  • the screen reconstruction unit 183 sets a projection surface (virtual screen) on which the projector 102 projects an image based on the corresponding point information and the camera posture information.
  • For example, when the screen 120 is a flat screen, the screen reconstruction unit 183 sets a flat projection surface.
  • Further, the screen reconstruction unit 183 sets a viewpoint for viewing the projected image projected on the projection surface (that is, a viewpoint corresponding to the projection surface), and sets a virtual viewpoint camera (that is, a viewpoint camera corresponding to the projection surface) representing how the projected image appears when viewed from that viewpoint.
  • The screen reconstruction unit 183 generates temporary screen information, which is information about the projection surface set in this way, and viewpoint camera information, which is information about the viewpoint camera set in this way, and supplies them, together with the camera posture information and the like, to the correction vector derivation unit 184.
  • The correction vector derivation unit 184 derives, based on these pieces of information, a correction vector indicating how each pixel of the input image should be corrected. That is, the correction vector derivation unit 184 projects the three-dimensional point position, which is the projection position in three-dimensional space corresponding to the pixel of interest of the projection unit that projects the input image, onto the light receiving area of the virtual imaging unit (viewpoint camera), which virtually captures the projected image from the position and orientation serving as the viewpoint of the projected image projected by the projection unit, and converts the projection point, which indicates the position in the light receiving area onto which the three-dimensional point position is projected, into the coordinate system of the input image, thereby deriving the correction vector corresponding to the pixel of interest.
  • the correction vector derivation unit 184 can also acquire the projection correction information prepared in advance indicating the method of projection correction, and derive the correction vector based on the projection correction information.
  • Further, the correction vector derivation unit 184 can acquire, from the projection area adjustment unit 186, a projection position parameter, which is a parameter related to control of the projection area (its position, size, inclination, and the like), that is, the area on the projection surface where the projected image is projected, and can derive the correction vector based on that projection position parameter.
  • the correction vector derivation unit 184 supplies the derived correction vector to the projection control unit 185.
  • the projection control unit 185 supplies the correction vector to the projector 102 to be controlled. Further, the projection control unit 185 supplies a projection instruction of the corrected image to the projector 102 to project the corrected image.
  • the projection area adjustment unit 186 generates, for example, a user interface (UI (User Interface)) image, supplies it to the output unit 162, and displays the UI image on the monitor. Further, the projection area adjustment unit 186 acquires the user instruction for the UI image received by the input unit 161.
  • the projection area adjustment unit 186 controls the position, size, inclination, and the like of the projection area, which is the area on the projection surface on which the projection image is projected, based on the user's instruction. For example, the projection area adjustment unit 186 displays a UI image that accepts instructions such as shift, zoom, and roll of the projection area on the monitor, and acquires the input user instructions regarding shift, zoom, roll, and the like of the projection area.
  • the projection area adjusting unit 186 supplies the correction vector deriving unit 184 with projection position parameters corresponding to the user's instructions, that is, projection position parameters for shifting, zooming, and rolling the projection area as instructed.
  • FIG. 5 is a diagram showing a main configuration example of the projector 102, which is an embodiment of an information processing apparatus to which the present technology is applied.
  • the projector 102 has an information processing unit 201, a projection unit 202, an input unit 211, an output unit 212, a storage unit 213, a communication unit 214, and a drive 215.
  • the information processing unit 201 is a computer that has, for example, a CPU, ROM, RAM, etc., and can realize various functions by executing an application program (software) using them.
  • the information processing unit 201 may install and execute an application program (software) that performs processing related to image projection.
  • the computer includes a computer embedded in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
  • the projection unit 202 has an optical device, a light source, and the like, and can be controlled by the information processing unit 201 to project a desired image.
  • the projection unit 202 can project an image supplied from the information processing unit 201.
  • the input unit 211 has, for example, input devices such as a keyboard, mouse, microphone, touch panel, and input terminal, and can supply information input via those input devices to the information processing unit 201.
  • The output unit 212 has, for example, output devices such as a display (display unit), a speaker (audio output unit), and an output terminal, and can output information supplied from the information processing unit 201 via those output devices.
  • the storage unit 213 has a storage medium such as a hard disk, a RAM disk, or a non-volatile memory, and can store the information supplied from the information processing unit 201 in the storage medium.
  • the storage unit 213 can read out the information stored in the storage medium and supply it to the information processing unit 201.
  • the communication unit 214 has, for example, a network interface, can receive information transmitted from another device, and can supply the received information to the information processing unit 201.
  • the communication unit 214 may transmit the information supplied from the information processing unit 201 to another device.
  • The drive 215 has an interface for a removable recording medium 221 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, reads information recorded on the removable recording medium 221 mounted on the drive 215, and can supply the read information to the information processing unit 201.
  • the drive 215 can record the information supplied from the information processing unit 201 on the writable removable recording medium 221 attached to the drive 215.
  • the information processing unit 201 loads and executes, for example, the application program stored in the storage unit 213. At that time, the information processing unit 201 can appropriately store data and the like necessary for executing various processes.
  • the application program, data, and the like can be recorded and provided on a removable recording medium 221 as a package media or the like, for example. In that case, the application program, data, and the like are read out by the drive 215 equipped with the removable recording medium 221 and installed in the storage unit 213 via the information processing unit 201.
  • the application program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
  • the application program, data, and the like are received by the communication unit 214 and installed in the storage unit 213 via the information processing unit 201. Further, the application program, data, and the like can be installed in advance in the ROM or the storage unit 213 in the information processing unit 201.
  • FIG. 6 shows a function realized by the information processing unit 201 executing an application program as a functional block.
  • the information processing unit 201 can have a correction vector acquisition unit 231, an image acquisition unit 232, and a correction image generation unit 233 as functional blocks by executing an application program.
  • the correction vector acquisition unit 231 acquires the correction vector supplied from the portable terminal device 101 and supplies it to the correction image generation unit 233.
  • the image acquisition unit 232 acquires the input image and supplies it to the correction image generation unit 233.
  • the corrected image generation unit 233 corrects the input image using the correction vector and generates the corrected image.
  • the corrected image generation unit 233 supplies the corrected image to the projection unit 202 and projects the corrected image.
  • With the above configuration, the portable terminal device 101 can perform projection correction without modeling the projector 102 (projection unit 202); that is, it can perform projection correction with an accuracy that does not depend on the accuracy of a model, so it can suppress a decrease in the accuracy of projection correction. Further, the portable terminal device 101 can adjust the position, size, inclination, and the like of the projection area only by zoom, shift, and roll while keeping the corrected image, so the user does not need to consider keystone correction in this adjustment. In other words, the projection imaging system 100 can suppress a decrease in the accuracy of projection correction.
  • In step S101, the corresponding point detection unit 181 detects corresponding points.
  • It is assumed that the internal parameters of the camera of the image pickup unit 152 (for example, focal length, principal point position, and lens distortion) have been calibrated in advance.
  • First, the projector 102-1 and the projector 102-2 each project a sensing pattern, which is a predetermined pattern image.
  • The handheld camera (the image pickup unit 152 of the portable terminal device 101) captures the projected images from three places.
  • The portable terminal device 101 decodes these captured images to acquire the corresponding points of the projector pixels on the captured images.
  • The pattern images used for this sensing are composed of the same Structured Light pattern in mutually different colors.
  • For example, the pattern image 301 projected by the projector 102-1 may have a pattern in which black dots are arranged at equal intervals on a red background, and the pattern image 302 projected by the projector 102-2 may have a pattern in which black dots are arranged at equal intervals on a blue background.
  • the projector 102-1 and the projector 102-2 simultaneously project these pattern images on the screen 120. At that time, the projected images of the pattern images may be superimposed on each other.
  • the handheld camera (imaging unit 152 of the portable terminal device 101) is moved by the user to capture the projected image from three places.
  • For example, the image pickup unit 152, whose position is moved by the user, captures images from three locations, from the left, the front, and the right of the projection area on the screen 120 (the left viewpoint camera 311, the front camera 312, and the right viewpoint camera 313 in FIG. 8).
  • Note that this imaging only needs to be performed from a plurality of mutually different positions, so the number of shooting positions is arbitrary as long as it is two or more. In general, the larger the number of shooting positions (the number of shots), the more accurately the screen can be reconstructed.
  • Further, each shooting position is arbitrary.
  • However, the screen can be reconstructed more accurately when the shooting positions are spread more widely (positions whose shooting directions differ more from each other), for example, as in the above example.
  • For example, the position of the front camera 312 in the above example only needs to be between the left viewpoint camera 311 and the right viewpoint camera 313; it may be directly in front of the screen 120 (a position facing the screen 120), but it does not have to be.
  • For example, the position of the front camera 312 may be a position substantially in front of (near directly in front of) the screen 120 at which the user holds the camera by hand.
  • these captured images may include both the projected image projected by the projector 102-1 and the projected image projected by the projector 102-2.
  • the corresponding point detection unit 181 decodes such a captured image and separates a plurality of pattern images included in the captured image and projected by different projectors 102.
  • The method of separating the patterns is arbitrary. For example, a separated image for each color may be generated based on a color model that represents the relationship between the color information of the captured image, obtained by capturing the mixed image of the projected images projected with mutually different colors from the plurality of projectors 102, and the color information of each projected image and the background.
  • The color model uses as parameters the color information of each projected image, modified according to the spectral characteristics of the projection unit 202 and of the image pickup unit 152 that acquires the captured image, an attenuation coefficient indicating the attenuation that occurs in the mixed image captured by the image pickup unit 152, and the color information of the background. A separated image for each color is then generated based on the color model using the parameters that minimize the difference between the color information of the captured image and the color information estimated by the color model.
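  • As a rough illustration of such color-based separation (a minimal sketch, not the specific color model described above), the following assumes known reference colors for the two projected patterns and the background and solves, per pixel, for non-negative mixing weights; the reference colors and the use of scipy.optimize.nnls are illustrative assumptions.

```python
# Hedged sketch: per-pixel separation of two color-coded Structured Light
# patterns from one captured RGB image, using a simple linear color model.
# The reference colors below are illustrative assumptions, not calibrated values.
import numpy as np
from scipy.optimize import nnls

def separate_patterns(captured_rgb, c_proj1, c_proj2, c_background):
    """captured_rgb: HxWx3 float image in [0, 1].
    c_proj1, c_proj2, c_background: length-3 reference colors (assumed known,
    e.g. estimated from the spectral characteristics of the projectors and camera)."""
    h, w, _ = captured_rgb.shape
    # Columns of A are the colors that may mix at each pixel.
    A = np.stack([c_proj1, c_proj2, c_background], axis=1)  # 3x3
    weights = np.zeros((h, w, 3))
    for y in range(h):
        for x in range(w):
            # Non-negative weights approximating the observed color.
            weights[y, x], _ = nnls(A, captured_rgb[y, x])
    # The weight maps act as separated images: where weights[..., 0] is high,
    # the pattern of projector 102-1 dominates, and similarly for 102-2.
    return weights[..., 0], weights[..., 1]

# Example usage with a synthetic capture (red-background and blue-background
# patterns plus a gray wall), purely for illustration.
if __name__ == "__main__":
    img = np.random.rand(8, 8, 3)
    sep1, sep2 = separate_patterns(img,
                                   c_proj1=np.array([0.9, 0.1, 0.1]),
                                   c_proj2=np.array([0.1, 0.1, 0.9]),
                                   c_background=np.array([0.5, 0.5, 0.5]))
```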
  • the Structured Light pattern used here may be any pattern as long as it is capable of color separation and decoding in one imaging. Further, when the camera is fixedly installed on a tripod or the like instead of being held by hand, a pattern such as Gray Code that decodes using information of a plurality of patterns in the time direction may be used. When the camera is fixed, the color separation process is not required, and the projector 102-1 and the projector 102-2 may project the pattern image at different timings in time.
  • the corresponding point detection unit 181 detects the corresponding point based on the plurality of captured images as described above.
  • the projected image 321 of the screen 120 is a projected image of the pattern image 301 projected by the projector 102-1.
  • the projected image 322 is a projected image of the pattern image 302 projected by the projector 102-2.
  • The captured image 331 is a captured image generated by the left viewpoint camera 311 capturing the screen 120 on which the projected image 321 and the projected image 322 are projected.
  • The captured image 332 is a captured image generated by the front camera 312 capturing the screen 120 on which the projected image 321 and the projected image 322 are projected.
  • The captured image 333 is a captured image generated by the right viewpoint camera 313 capturing the screen 120 on which the projected image 321 and the projected image 322 are projected.
  • The corresponding point detection unit 181 detects, as corresponding points (for example, the white circles in the figure), the pixels in each of the captured image 331, the captured image 332, and the captured image 333 that display points corresponding to each other, that is, a predetermined position of the pattern image 301 or the pattern image 302.
  • In step S102, the camera posture estimation unit 182 estimates the positions and orientations of the left viewpoint camera 311 to the right viewpoint camera 313 so that they are three-dimensionally consistent, based on the two-dimensional corresponding point information between the three captured images obtained by the above-described corresponding point detection process.
  • First, corresponding point information of two of the captured images is used. For example, focusing on the captured image 331 generated by the left viewpoint camera 311 and the captured image 332 generated by the front camera 312, the homography matrix (H12) that transforms the corresponding points of the left viewpoint captured image 331 into the corresponding points of the front captured image 332 is obtained, as shown in A of FIG. 10.
  • This homography is obtained by RANSAC (Random Sample Consensus), a robust estimation algorithm, so that the result is not significantly affected even if there are outliers among the corresponding points.
  • By RT decomposition of this homography matrix, the relative position and relative posture of the front camera 312 with respect to the left viewpoint camera 311 are derived.
  • As the RT decomposition method, for example, the method described in "Journal of the Society of Image Electronics and Electronics, Vol. 40 (2011), No. 3, pp. 421-427" is used.
  • the scale is indefinite, so the scale is determined by some rule.
  • Next, triangulation is performed using the position and orientation of the front camera 312 with respect to the left viewpoint camera 311 obtained here and the corresponding point information, to obtain the three-dimensional points of the corresponding points.
  • When finding a three-dimensional point by triangulation, the corresponding rays may not intersect each other; in that case, the midpoint of the line segment connecting the points where the corresponding rays come closest to each other is taken as the three-dimensional point.
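  • The following is a minimal sketch of this pairwise estimation and midpoint triangulation, using OpenCV for the RANSAC homography and its decomposition; the function names and the omission of solution selection are illustrative assumptions, not the exact processing of the camera posture estimation unit 182.

```python
# Hedged sketch of the pairwise pose estimation and triangulation described
# above. K is the (pre-calibrated) intrinsic matrix of the image pickup unit
# 152; pts_left / pts_front are Nx2 arrays of corresponding points.
import numpy as np
import cv2

def relative_pose_from_homography(pts_left, pts_front, K):
    # Robust homography estimation with RANSAC, so that outliers among the
    # corresponding points do not significantly affect the result.
    H, inlier_mask = cv2.findHomography(pts_left, pts_front, cv2.RANSAC, 3.0)
    # RT decomposition of the homography (the scale remains indefinite and
    # must later be fixed by some rule, e.g. matching average point distances).
    n_solutions, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    # Selecting the physically valid solution (points in front of both
    # cameras) is omitted here for brevity; all candidates are returned.
    return Rs, ts, inlier_mask

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint triangulation: when the two corresponding rays (origin o,
    unit direction d) do not intersect, return the midpoint of the shortest
    segment connecting them, as described in the text."""
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    s = (b * e - c * d) / denom if abs(denom) > 1e-12 else 0.0
    t = (a * e - b * d) / denom if abs(denom) > 1e-12 else 0.0
    p1 = o1 + s * d1   # closest point on ray 1
    p2 = o2 + t * d2   # closest point on ray 2
    return 0.5 * (p1 + p2)
```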
  • the front camera 312 and the right viewpoint camera 313 are focused on, and the same processing is performed to obtain the relative position and the relative posture of the right viewpoint camera 313 with respect to the front camera 312.
  • the scales of the front camera 312 and the right viewpoint camera 313 are indefinite, so the scale is determined by some rule.
  • the three-dimensional points of the corresponding points are obtained by performing triangulation using the positions and postures of the front camera 312 and the right viewpoint camera 313 and their corresponding point information.
  • Then, the scales of the front camera 312 and the right viewpoint camera 313 are corrected so that the average distance from the cameras (the left viewpoint camera 311 and the front camera 312) of the three-dimensional points of the corresponding points obtained from the left viewpoint camera 311 and the front camera 312 matches the average distance from the cameras (the front camera 312 and the right viewpoint camera 313) of the three-dimensional points of the corresponding points obtained from the front camera 312 and the right viewpoint camera 313.
  • the scale is modified by changing the lengths of the translational component vectors of the front camera 312 and the right viewpoint camera 313.
  • Finally, with the left viewpoint camera 311 fixed as a reference, the positions and orientations of the front camera 312 and the right viewpoint camera 313 are optimized with respect to the internal parameters, the external parameters, and the world coordinate point cloud.
  • The evaluation value is the sum of squares of the distances from the three-dimensional points of the corresponding points to the corresponding three rays, and the optimization is performed so that this value becomes smallest.
  • Here, the three-dimensional points of the corresponding points for the three rays are obtained from the triangulation points of the corresponding rays of the left viewpoint camera 311 and the front camera 312 and the triangulation points of the corresponding rays of the front camera 312 and the right viewpoint camera 313.
  • In this way, the positions and orientations of the three cameras (that is, the left viewpoint camera 311, the front camera 312, and the right viewpoint camera 313) are estimated.
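  • A minimal sketch of this refinement step follows: with the left viewpoint camera fixed as the reference, the other two poses and the 3D points are adjusted so that the sum of squared point-to-ray distances is minimized. The rotation-vector parameterization and the use of scipy.optimize.least_squares are assumptions for illustration.

```python
# Hedged sketch of the optimization over camera poses and 3D points.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def point_to_ray_distance(point, origin, direction):
    # Distance from a 3D point to the ray (origin, unit direction).
    v = point - origin
    return np.linalg.norm(v - (v @ direction) * direction)

def residuals(params, rays_per_camera, n_points):
    # params: [rvec_front(3), t_front(3), rvec_right(3), t_right(3), X(3*n)]
    r_f, t_f = params[0:3], params[3:6]
    r_r, t_r = params[6:9], params[9:12]
    X = params[12:].reshape(n_points, 3)
    poses = {
        "left":  (np.eye(3), np.zeros(3)),                    # fixed reference
        "front": (Rotation.from_rotvec(r_f).as_matrix(), t_f),
        "right": (Rotation.from_rotvec(r_r).as_matrix(), t_r),
    }
    res = []
    for cam, rays in rays_per_camera.items():
        R, t = poses[cam]
        cam_center = -R.T @ t
        for i, d_cam in rays:            # (point index, ray direction in camera frame)
            d_world = R.T @ d_cam
            d_world = d_world / np.linalg.norm(d_world)
            res.append(point_to_ray_distance(X[i], cam_center, d_world))
    return np.asarray(res)

# least_squares(residuals, x0, args=(rays_per_camera, n_points)) would then
# minimize the sum of squared point-to-ray distances, i.e. the evaluation
# value described above.
```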
  • In step S103, the screen reconstruction unit 183 reconstructs the screen based on the position and orientation of each camera estimated as described above.
  • More specifically, from the position and orientation of each camera (the left viewpoint camera 311, the front camera 312, and the right viewpoint camera 313) estimated by the process of step S102 and the corresponding point information acquired by the process of step S101, the three-dimensional points shown in A of FIG. 13 are obtained.
  • The plane that best fits these three-dimensional points is then obtained, as shown in B of FIG. 13, and is referred to as the temporary flat screen 351.
  • At that time, the RANSAC method is used in order to suppress the influence of outliers mixed in the three-dimensional point cloud.
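  • The following is a minimal sketch of such a RANSAC plane fit to the triangulated point cloud; the thresholds, iteration counts, and SVD refit are illustrative choices rather than the exact procedure of the screen reconstruction unit 183.

```python
# Hedged sketch of fitting the temporary flat screen with RANSAC so that
# outliers in the 3D corresponding points do not pull the plane.
import numpy as np

def fit_plane_ransac(points, n_iterations=500, inlier_threshold=0.01, rng=None):
    """points: Nx3 array of triangulated 3D corresponding points.
    Returns (unit normal, point on plane) of the best-fitting plane."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = None
    for _ in range(n_iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:              # degenerate (collinear) sample
            continue
        normal /= norm
        distances = np.abs((points - sample[0]) @ normal)
        inliers = distances < inlier_threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Least-squares refit on the inliers (smallest singular vector = normal).
    inlier_pts = points[best_inliers]
    centroid = inlier_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(inlier_pts - centroid)
    refined_normal = vt[-1]
    return refined_normal, centroid

# The viewpoint camera 361 can then be placed at a fixed distance from the
# centroid along the plane normal, looking toward the plane, as described below.
```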
  • Next, a virtual viewpoint is set at a certain distance from the temporary flat screen 351 in the direction facing it (the normal direction of the temporary flat screen 351).
  • This viewpoint is a position set as a model of the position from which the projected image projected on the temporary flat screen 351 is viewed.
  • The viewpoint camera 361 is a virtual camera for obtaining a captured image showing how the projected image appears when viewed from that viewpoint.
  • The viewpoint camera 361 is used as a reference in the correction vector calculation process described later, so that the expected corrected image is obtained when viewed from the viewpoint camera 361.
  • The vertical direction of the viewpoint camera 361 is determined from the average direction of the group of straight lines obtained by linearly approximating the three-dimensional corresponding point groups in the same rows of the image projected from the projector 102.
  • As a result, the vertical direction of the viewpoint camera 361 obtained here matches the vertical direction of the world in which the actual screen 120 is installed, so the roll direction of the corrected image can be adjusted automatically.
  • Note that the temporary flat screen 351 is used only for determining the position of the viewpoint camera 361.
  • In step S104, the correction vector derivation unit 184 derives the correction vector of each projector 102 based on the positions and orientations of the cameras estimated in the process of step S102, the corresponding point information acquired in the process of step S101, and the position and posture of the viewpoint camera 361 obtained in the process of step S103.
  • When some sensing points are missing, the correction vector derivation unit 184 interpolates the missing points. For example, as shown in the drawing, a homography is obtained between the sensing point group 373 in the projector pixels around the missing sensing point and the two-dimensional point group obtained by projecting their three-dimensional points onto the plane 374, which is a plane approximation of the corresponding three-dimensional point group. By converting the missing sensing point 371 of interest in the projector pixels with this homography, the corresponding three-dimensional point 372 is obtained and used as the interpolation point for the missing three-dimensional point.
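  • A minimal sketch of this interpolation step follows; the in-plane parameterization of the local plane and the use of cv2.findHomography / cv2.perspectiveTransform are illustrative assumptions.

```python
# Hedged sketch of interpolating a missing sensing point: nearby sensing
# points define a homography between projector pixel coordinates and 2D
# coordinates on a plane approximating their 3D points; the missing projector
# pixel is mapped through that homography and lifted back to 3D.
import numpy as np
import cv2

def interpolate_missing_point(missing_px, neighbor_px, neighbor_3d):
    """missing_px: (2,) projector pixel of the missing sensing point.
    neighbor_px: Nx2 projector pixels of nearby valid sensing points (N >= 4).
    neighbor_3d: Nx3 corresponding 3D points."""
    # Plane approximation of the neighboring 3D points.
    centroid = neighbor_3d.mean(axis=0)
    _, _, vt = np.linalg.svd(neighbor_3d - centroid)
    u_axis, v_axis = vt[0], vt[1]                     # in-plane basis vectors
    plane_2d = (neighbor_3d - centroid) @ np.stack([u_axis, v_axis], axis=1)
    # Homography from projector pixels to the 2D plane coordinates.
    H, _ = cv2.findHomography(neighbor_px.astype(np.float32),
                              plane_2d.astype(np.float32))
    mapped = cv2.perspectiveTransform(
        missing_px.reshape(1, 1, 2).astype(np.float32), H)[0, 0]
    # Lift the mapped 2D point back onto the plane in 3D.
    return centroid + mapped[0] * u_axis + mapped[1] * v_axis
```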
  • Further, the correction vector derivation unit 184 obtains the positions of three-dimensional points for the pixels corresponding to the outer peripheral positions of the pattern image 301 and the pattern image 302 projected by the projectors 102, using nearby sensing points and the same processing as the interpolation of missing points described above. Then, the correction vector derivation unit 184 projects each of these three-dimensional points onto the light receiving region of the viewpoint camera 361 (that is, onto the captured image generated by the viewpoint camera 361), thereby estimating the outer peripheral region of each projector 102 (the range projected by each projector 102) on that captured image.
  • The correction vector derivation unit 184 performs such processing for each of the projector 102-1 and the projector 102-2. As a result, the correction vector derivation unit 184 can estimate how the projected images projected by the two projectors 102 appear on the captured image generated by the viewpoint camera 361, that is, how they appear on the screen 120.
  • Note that, if the projector 102 can be modeled, the projection range (outer peripheral position) of each projector 102 on the captured image generated by the viewpoint camera 361 can also be obtained using its internal variables and external variables.
  • For the internal variable estimation needed to model the projector 102, a method of calibrating in advance, a method of introducing in advance a mechanism by which the internal variables corresponding to zoom/shift can be acquired online in a projector having optical zoom/shift adjustment, a method of estimating the internal variables themselves online, and the like can be considered.
  • Next, the correction vector derivation unit 184 determines where on the captured image of the viewpoint camera 361 (that is, where on the screen 120) the input image is to be presented, as shown in the drawing.
  • The correction vector derivation unit 184 sets a rectangular region having the same aspect ratio as the input image in the region included in the outer peripheries of the images projected by both of the two projectors 102 (that is, the region where both projected images overlap).
  • For example, when the aspect ratio of the input image 402 is 16:9, the correction vector derivation unit 184 sets a maximum 16:9 rectangular area in the region where the projection range 391 of the projector 102-1 and the projection range 392 of the projector 102-2 overlap in the captured image 390 generated by the viewpoint camera 361, and sets this as the image presentation position 401.
  • This corresponds to the initial values of the projection position parameters (zoom, shift, roll) that determine the image presentation position (for example, zoom: 1.0, shift: (0.0, 0.0), roll: 0.0).
  • This projection position parameter can be changed by the projection area adjustment process described later.
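  • As a simplified illustration of choosing such an initial presentation rectangle, the following sketch finds the largest axis-aligned rectangle of the input aspect ratio, centered on the overlap region, that fits entirely inside an overlap mask on the viewpoint camera image; the centered-rectangle search is an assumption for illustration, not the patent's exact procedure.

```python
# Hedged sketch: largest centered 16:9 rectangle inside the overlap mask.
import numpy as np

def largest_centered_rect(overlap_mask, aspect=16 / 9):
    """overlap_mask: HxW boolean array, True where both projectors cover."""
    ys, xs = np.nonzero(overlap_mask)
    cy, cx = ys.mean(), xs.mean()            # center of the overlap region

    def fits(width):
        height = width / aspect
        x0, x1 = int(cx - width / 2), int(cx + width / 2)
        y0, y1 = int(cy - height / 2), int(cy + height / 2)
        if x0 < 0 or y0 < 0 or x1 >= overlap_mask.shape[1] or y1 >= overlap_mask.shape[0]:
            return False
        return overlap_mask[y0:y1 + 1, x0:x1 + 1].all()

    lo, hi = 0.0, float(overlap_mask.shape[1])
    for _ in range(40):                      # binary search on the width
        mid = (lo + hi) / 2
        if fits(mid):
            lo = mid
        else:
            hi = mid
    width = lo
    return cx, cy, width, width / aspect     # candidate image presentation position
```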
  • Next, the correction vector derivation unit 184 obtains the correction vector corresponding to each pixel of the projector 102, as shown in the drawing.
  • First, the correction vector derivation unit 184 obtains the three-dimensional point position for a certain pixel of interest of the projector 102 of interest.
  • Specifically, each of the three-dimensional point coordinate values (X, Y, Z) is obtained by bicubic interpolation from the 4x4 sensing points around the pixel of interest for which three-dimensional points have already been obtained. Note that this interpolation method is an example and is not a limitation; for example, bilinear interpolation may be applied.
  • Next, the correction vector derivation unit 184 projects the obtained three-dimensional point of the pixel of interest onto the light receiving region (pixel region) of the viewpoint camera 361.
  • Then, the position of the resulting projection point within the image presentation position 401 in the light receiving region (captured image 390) of the viewpoint camera 361 is obtained and converted into the coordinate system of the input image 402.
  • The coordinate values (u, v) obtained here become the correction vector for the pixel of interest of the projector 102. That is, the expected corrected image can be generated by storing, in the pixel of interest of the projector 102, the input pixel value at the pixel position indicated by the obtained correction vector.
  • The correction vector derivation unit 184 performs this processing for all the pixels of the two projectors 102 to obtain the correction vectors for each of the projector 102-1 and the projector 102-2.
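  • A minimal sketch of this per-pixel derivation follows: the 3D point for a projector pixel is interpolated from the surrounding sensing-point grid (cubic interpolation via map_coordinates as a stand-in for the bicubic interpolation described above), projected into the viewpoint camera with a pinhole model, and expressed in input-image coordinates via the presentation rectangle. The array layouts and parameter names are illustrative assumptions.

```python
# Hedged sketch of deriving the correction vector (u, v) for one projector pixel.
import numpy as np
from scipy.ndimage import map_coordinates

def correction_vector(pixel, grid_xyz, grid_step, K_view, R_view, t_view,
                      rect, input_size):
    """pixel: (px, py) projector pixel of interest.
    grid_xyz: (H, W, 3) 3D points of the sensing-point grid (sensing points
              spaced grid_step projector pixels apart).
    K_view, R_view, t_view: intrinsics / pose of the viewpoint camera 361.
    rect: (x0, y0, width, height) image presentation position 401 on the
          viewpoint camera image; input_size: (w, h) of the input image 402."""
    # Interpolate X, Y, Z at the (fractional) grid coordinate of the pixel.
    gx, gy = pixel[0] / grid_step, pixel[1] / grid_step
    point_3d = np.array([
        map_coordinates(grid_xyz[..., c], [[gy], [gx]], order=3)[0]
        for c in range(3)
    ])
    # Project the 3D point onto the light receiving area of the viewpoint camera.
    p_cam = R_view @ point_3d + t_view
    proj = K_view @ p_cam
    px_view, py_view = proj[0] / proj[2], proj[1] / proj[2]
    # Convert to the input image coordinate system via the presentation rectangle.
    x0, y0, w, h = rect
    u = (px_view - x0) / w * input_size[0]
    v = (py_view - y0) / h * input_size[1]
    return u, v   # correction vector for this projector pixel
```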
  • In step S105, the projection control unit 185 supplies the correction vectors to the projector 102-1 and the projector 102-2, and causes them to project corrected images generated using the correction vectors.
  • the corrected image generation unit 233 of the projector 102 corrects the input image acquired by the image acquisition unit 232 using this correction vector, and generates a corrected image. By generating the corrected image in this way in each projector 102 and projecting it from the projection unit 202 onto the screen 120, the corrected projected image (projected image of the corrected image) is presented on the screen 120.
  • For example, the corrected image generation unit 233 of the projector 102-1 generates a corrected image corresponding to the input image acquired by the image acquisition unit 232, using the correction vector for the projector 102-1 (supplied from the portable terminal device 101) acquired by the correction vector acquisition unit 231.
  • Similarly, the corrected image generation unit 233 of the projector 102-2 generates a corrected image corresponding to the input image acquired by the image acquisition unit 232, using the correction vector for the projector 102-2 (supplied from the portable terminal device 101) acquired by the correction vector acquisition unit 231. Note that these processes can be executed in parallel with each other.
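  • A minimal sketch of such corrected image generation follows: the per-pixel correction vectors, which point into the input image, are used as a remap table so that each projector pixel receives the input pixel value at the position indicated by its correction vector. The use of cv2.remap with bilinear sampling is an illustrative choice.

```python
# Hedged sketch of corrected image generation in each projector.
import numpy as np
import cv2

def generate_corrected_image(input_image, correction_u, correction_v):
    """input_image: the input image 402.
    correction_u, correction_v: maps of the projector panel size holding the
    input-image coordinates derived as correction vectors."""
    corrected = cv2.remap(input_image,
                          correction_u.astype(np.float32),
                          correction_v.astype(np.float32),
                          interpolation=cv2.INTER_LINEAR,
                          borderMode=cv2.BORDER_CONSTANT,
                          borderValue=0)   # black outside the presentation area
    return corrected
```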
  • the projection unit 202 of the projector 102-1 projects the corrected image thus generated on the screen 120.
  • the projection unit 202 of the projector 102-2 projects the corrected image thus generated on the screen 120. It should be noted that these processes can be executed in parallel with each other.
  • In step S106, the projection area adjustment unit 186 determines whether or not to adjust the position, size, inclination, and the like of the projection area (the image projection position on the screen 120). For example, the user visually observes the projected image of the corrected image projected on the screen 120, determines whether or not the projected image has the expected position, size, and inclination, and inputs a user instruction based on that determination to the input unit 161. The projection area adjustment unit 186 determines whether to adjust the position, size, inclination, and the like of the projection area based on the received user instruction.
  • When it is determined to perform adjustment, in step S107 the projection area adjustment unit 186 controls the output unit 162 to display a UI image and controls the input unit 161 to accept a user instruction.
  • For example, as shown in the drawing, the projection area adjustment unit 186 displays on the monitor, so that the user can operate it, a UI image 501 in which various keys 502 are arranged, such as keys corresponding to enlargement/reduction of the zoom parameter, keys corresponding to up/down/left/right of the shift parameter, and keys corresponding to the tilt angle of the roll parameter.
  • the projection area adjustment unit 186 controls the input unit 161 to receive an operation on the UI image 501 by the user.
  • The projection area adjustment unit 186 supplies the projection position parameter corresponding to the received user instruction to the correction vector derivation unit 184, causes a correction vector to be generated, causes a corrected image to be generated using that correction vector, and causes the corrected image to be projected.
  • As a result, the corrected image on the screen 120 is enlarged or reduced, moved up, down, left, or right, and rotated left or right in conjunction with the instruction input by the user.
  • Since three-dimensional estimation is performed, this adjustment can be carried out easily while guaranteeing that the corrected image itself remains rectangular (keystone corrected).
  • Alternatively, for example, a UI image 511 may be displayed on the monitor in which an area 512 imitating the screen 120 and an image 513 imitating the projected image are displayed within the area 512, and the user can shift, zoom, roll, and so on by operating the image 513 with a finger or the like. With such a UI image, the projection size, projection position, and projection angle can be adjusted more intuitively.
  • The projection area adjustment unit 186 adjusts the projection position parameter based on the received user instruction and supplies it to the correction vector derivation unit 184. For example, by updating the zoom parameter according to the user's instruction, the projection area adjustment unit 186 reduces the projection area from the projected image 523 to the projected image 524, or enlarges it from the projected image 524 to the projected image 523, as shown in A of FIG. 20. That is, the projection area adjustment unit 186 sets the projection position parameter so that the projection area has the size instructed by the user, and supplies it to the correction vector derivation unit 184.
  • Similarly, by updating the shift parameter according to the user's instruction, the projection area adjustment unit 186 moves the projection area from the projected image 525 to the projected image 526 or vice versa, as shown in B of FIG. 20. That is, the projection area adjustment unit 186 sets the projection position parameter so that the projection area is at the position instructed by the user, and supplies it to the correction vector derivation unit 184. Further, by updating the roll parameter according to the user's instruction, the projection area adjustment unit 186 rotates the projection area from the projected image 527 to the projected image 528 or vice versa, as shown in C of FIG. 20. That is, the projection area adjustment unit 186 sets the projection position parameter so that the projection area has the inclination instructed by the user, and supplies it to the correction vector derivation unit 184.
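  • A minimal sketch of how such zoom, shift, and roll parameters can act on the presentation rectangle is shown below; the parameter conventions (zoom about the rectangle center, shift in image pixels, roll in degrees) are illustrative assumptions.

```python
# Hedged sketch of applying the projection position parameters to the
# corners of the image presentation rectangle on the viewpoint camera image;
# the correction vectors are then re-derived with the transformed rectangle.
import numpy as np

def adjust_presentation_rect(rect_corners, zoom=1.0, shift=(0.0, 0.0), roll_deg=0.0):
    """rect_corners: 4x2 corners of the presentation rectangle (viewpoint
    camera image coordinates)."""
    center = rect_corners.mean(axis=0)
    theta = np.deg2rad(roll_deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    # Zoom and roll about the rectangle center, then shift.
    adjusted = (rect_corners - center) * zoom @ R.T + center + np.asarray(shift)
    return adjusted
```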
  • Then, in step S104, the correction vector derivation unit 184 derives the correction vector using the updated projection position parameter. Note that, assuming the projector is installed horizontally, the roll direction is automatically adjusted by the process of step S103, so the roll direction does not need to be adjusted manually.
  • When the user's desired projection area has been obtained and it is determined in step S106 that the position, size, inclination, and the like of the projection area are not to be adjusted, the projection correction process ends.
  • As described above, projection correction can be performed without modeling the projector 102 (projection unit 202), so the portable terminal device 101 (projection imaging system 100) can suppress a decrease in the accuracy of projection correction. Further, since the user does not need to consider keystone correction when adjusting the projection area, the portable terminal device 101 (projection imaging system 100) can suppress a decrease in the accuracy of projection correction.
  • The projection surface is not limited to a flat surface and may be a curved surface.
  • For example, it may be a cylindrical surface; that is, the screen 120 may be a cylindrical surface screen instead of a flat screen.
  • the corresponding point detection by the corresponding point detection unit 181 and the camera posture estimation by the camera posture estimation unit 182 are the same as in the case of the flat screen described in the first embodiment.
  • the aspect ratio of the presented image is 16: 9, that is, the format of the image is the same as that of the flat screen.
  • In step S103, as shown in A of FIG. 21, the screen reconstruction unit 183 obtains three-dimensional points from the positions and orientations of the cameras estimated in step S102 and the corresponding point information detected in step S101.
  • The cylindrical surface that best fits these points is then obtained, as shown in B of FIG. 21, and is referred to as the temporary cylindrical surface screen 601.
  • the RANSAC method is applied in order to suppress the influence of outliers mixed in the three-dimensional point cloud.
  • Next, as shown in the drawing, the screen reconstruction unit 183 sets the viewpoint at the position of the cylindrical surface screen radius on the extension of the perpendicular line from the estimated center of gravity of the three-dimensional point cloud to the temporary cylindrical surface screen 601, and sets the viewpoint camera 611 at that viewpoint.
  • The direction of the viewpoint camera 611 is toward the center of gravity of the estimated three-dimensional point cloud.
  • The roll direction is determined using the same method as in the case of the flat screen. In this way, the viewpoint camera 611 corresponding to the temporary cylindrical surface screen 601 is set.
  • In step S104, the correction vector derivation unit 184 obtains the correction vector of each projector 102 based on the positions and orientations of the cameras estimated in the process of step S102, the corresponding point information detected in the process of step S101, and the temporary cylindrical surface screen 601 and the position and orientation of the viewpoint camera 611 obtained in the process of step S103.
  • First, the correction vector derivation unit 184 estimates how the projection of the projector 102 appears on the cylindrical coordinate system of the temporary cylindrical surface screen 601, as shown in FIG. 23.
  • Specifically, for the pixels corresponding to the outer peripheral positions of the projector 102, the positions of three-dimensional points are obtained using nearby sensing points by the same processing as the interpolation of missing points in the correction vector calculation for the flat screen, and those three-dimensional points are then used to estimate the outer peripheral region of the projector 102 on the cylindrical coordinate system (the outer peripheral region 631 and the outer peripheral region 632 in the captured image 630 of the viewpoint camera 611).
  • This coordinate system is expressed in units of distance [mm] in both the vertical and horizontal directions.
  • The correction vector derivation unit 184 performs this processing for each projector 102, which makes it possible to estimate how each projection will appear on the cylindrical surface screen. Note that this process is performed when the projector 102 is not modeled or cannot be modeled; if the projector 102 can be modeled, the outer peripheral position of the projector 102 on the captured image of the viewpoint camera 611 can be obtained using its internal variables and external variables, as in the case of the flat screen.
  • Next, the correction vector derivation unit 184 sets a rectangular region having the same aspect ratio as the input image in the region included in both of the projector outer peripheries obtained above in the cylindrical coordinate system. For example, when the aspect ratio of the input image is 16:9, the correction vector derivation unit 184 sets a maximum 16:9 rectangular area in the region where the projection range of the projector 102-1 and the projection range of the projector 102-2 overlap on the captured image of the viewpoint camera 611, and sets this as the image presentation position 641.
  • the correction vector derivation unit 184 obtains the correction vector corresponding to each pixel of the projector 102 as shown in FIG. 25.
  • the correction vector derivation unit 184 obtains a three-dimensional point position for a certain pixel of interest of the projector 102 of interest.
  • the three-dimensional point coordinate values X, Y, and Z of the sensing points around the pixel of interest for which the three-dimensional points have already been obtained are obtained by Bicubic interpolation.
Next, the correction vector derivation unit 184 converts the three-dimensional point of the pixel of interest obtained here into the cylindrical coordinate system centered on the viewpoint camera 611, finds where the converted point is located inside the image presentation position on the cylindrical coordinate system, and converts it into the coordinate system of the input image.
The coordinate values obtained here are the correction vector (u, v) for the pixel of interest of the projector. By performing this process on all the pixels of the two projectors 102, the correction vector on the cylindrical screen of each projector 102 is obtained.
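A hedged sketch of this per-pixel conversion is shown below. The parameterisation of the cylindrical coordinate system (arc length horizontally, height vertically, both in millimetres) and of the presentation rectangle are assumptions made for illustration; they are not spelled out in this form in the embodiment.

```python
import numpy as np

def correction_vector_cylindrical(p_world, R_view, t_view, radius, rect, input_size):
    """p_world: interpolated 3D point of the projector pixel of interest.
    R_view, t_view: world -> viewpoint-camera transform (camera on the cylinder axis).
    radius: radius of the temporary cylindrical surface screen, so the horizontal
            coordinate is an arc length in the same unit (e.g. mm) as the height.
    rect: assumed image presentation position (s_min, s_max, y_min, y_max).
    input_size: (width, height) of the input image in pixels."""
    p = R_view @ p_world + t_view
    s = radius * np.arctan2(p[0], p[2])     # horizontal arc-length coordinate
    y = p[1]                                # vertical coordinate along the cylinder axis
    s_min, s_max, y_min, y_max = rect
    u = (s - s_min) / (s_max - s_min) * input_size[0]
    v = (y - y_min) / (y_max - y_min) * input_size[1]
    return np.array([u, v])                 # correction vector for this projector pixel
```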
As described above, the portable terminal device 101 can perform automatic projection geometry correction on the cylindrical surface screen.
The process of adjusting the projection area can be realized by adjusting the image presentation position in the same manner as in the case of a flat screen. Therefore, also in the case of this cylindrical screen, the portable terminal device 101 (projection imaging system 100) can perform projection correction without modeling the projection unit, as in the case of the above-mentioned flat screen, and can therefore suppress a decrease in the accuracy of projection correction. Further, since the user does not need to consider keystone correction in this adjustment, the portable terminal device 101 (projection imaging system 100) can suppress a decrease in the accuracy of the projection correction.
The projection surface may also be a spherical surface. That is, the screen 120 may be a spherical screen instead of a flat screen.
In this case as well, the corresponding point detection by the corresponding point detection unit 181 and the camera posture estimation by the camera posture estimation unit 182 are the same as in the case of the flat screen described in the first embodiment.
The image to be presented is assumed to be in an equirectangular format that is supposed to be projected on a spherical screen or the like.
The screen reconstruction unit 183 obtains, from the three-dimensional point cloud obtained from the position and orientation of the camera estimated in step S102 and the corresponding point information detected in step S101 (shown in A of FIG. 26), the best-fitting spherical surface as shown in B of FIG. 26, and this is referred to as the pseudo-spherical surface screen 701.
At that time, the RANSAC method is applied in order to suppress the influence of outliers mixed in the three-dimensional point cloud.
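As a hedged illustration of this fitting step, the sketch below combines a linear least-squares sphere fit with a simple RANSAC loop over the triangulated point cloud; the function names, iteration count, and inlier threshold are assumptions and are not taken from the embodiment.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere through N >= 4 points, using the linearisation
    |p|^2 = 2 p.c + (r^2 - |c|^2)."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

def ransac_sphere(points, iters=500, thresh=5.0, seed=0):
    """Sphere fit that tolerates outliers in the 3D point cloud.
    thresh is a distance tolerance in the same unit as the points (e.g. mm)."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 4, replace=False)]
        center, radius = fit_sphere(sample)
        d = np.abs(np.linalg.norm(points - center, axis=1) - radius)
        inliers = d < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_sphere(points[best_inliers])   # refit on the inlier set
```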
Next, the screen reconstruction unit 183 sets the center of the viewpoint camera 711 at the center of this spherical surface, with its orientation looking from that center toward the center of gravity of the estimated three-dimensional point cloud.
The screen reconstruction unit 183 determines the roll direction by using the same method as in the case of the flat screen. The roll adjustment can be performed automatically even if the projector is installed on a horizontal plane, and the value determined here is used only as the initial value in the roll direction. The vertical direction of the posture of the front camera can also be set as the initial roll value here. In this way, the viewpoint camera 711 corresponding to the pseudo-spherical surface screen 701 is set.
In step S104, the correction vector derivation unit 184 obtains the correction vector of each projector 102 based on the position and orientation of the camera estimated in the process of step S102, the corresponding point information detected in the process of step S101, and the position and orientation of the pseudo-spherical surface screen 701 and the viewpoint camera 711 obtained in the process of step S103.
First, the correction vector derivation unit 184 estimates how the projection of the projector appears on the regular-distance cylindrical coordinate system of the pseudo-spherical surface screen 701.
For the pixels corresponding to the outer peripheral positions of the projector 102, three-dimensional point positions are obtained by using the nearby sensing points, with the same processing as the interpolation of missing points in the correction vector calculation for the flat screen. The three-dimensional points are then converted into the regular-distance cylindrical coordinate system based on the viewpoint camera position, and the outer peripheral region of the projector on the regular-distance cylindrical coordinate system is estimated.
The coordinates are expressed in units of angle (radians) in both the vertical and horizontal directions.
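Interpreting this "regular-distance cylindrical" coordinate system as an equidistant-cylindrical (equirectangular) mapping, which is an assumption about the naming, the conversion of a 3D point into these two angles could look like the following sketch.

```python
import numpy as np

def to_regular_distance_cylindrical(p_cam):
    """p_cam: point in viewpoint-camera coordinates (the camera sits at the
    center of the pseudo-spherical surface screen). Returns a horizontal angle
    (longitude) and a vertical angle (latitude), both in radians, i.e. the two
    axes of an equirectangular-style coordinate system."""
    x, y, z = p_cam
    lon = np.arctan2(x, z)
    lat = np.arctan2(y, np.hypot(x, z))
    return lon, lat
```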
The correction vector derivation unit 184 performs this processing for each projector 102, which makes it possible to estimate how each projection will appear on the spherical screen. It should be noted that this processing is performed when the projector 102 is not modeled or cannot be modeled. If the projector can be modeled, the outer peripheral position of the projector 102 on the viewpoint camera image can be obtained by using the internal variables and the external variables, as in the case of the flat screen.
Next, the correction vector derivation unit 184 sets, as the presentation range of the input image, a region included in the outer peripheries of the two projectors 102 obtained above in the regular-distance cylindrical coordinate system. Which region of the equirectangular format is presented depends on the projection range on the spherical screen.
Next, the correction vector derivation unit 184 obtains the correction vector corresponding to each pixel of the projector, as shown in FIG.
First, the correction vector derivation unit 184 obtains a three-dimensional point position for a certain pixel of interest of the projector of interest.
The three-dimensional point coordinate values X, Y, and Z of the pixel of interest are obtained by Bicubic interpolation from the surrounding sensing points for which the three-dimensional points have already been obtained.
Next, the correction vector derivation unit 184 converts the three-dimensional point of the pixel of interest obtained here into the regular-distance cylindrical coordinate system centered on the viewpoint camera 711, finds where the converted point is located within the presentation range on the regular-distance cylindrical coordinate system, and converts it into the coordinate system of the input image.
The coordinate values obtained here are the correction vector (u, v) for the pixel of interest of the projector. By performing this process on all the pixels of the two projectors 102, the correction vector on the spherical screen of each projector 102 is obtained.
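Putting the pieces above together, a hedged sketch of building the whole per-pixel correction map for one projector might look as follows; xyz_of_pixel, the lon/lat presentation ranges, and the map layout are interfaces assumed only for this illustration.

```python
import numpy as np

def build_correction_map(projector_size, xyz_of_pixel, R_view, t_view,
                         lon_range, lat_range, input_size):
    """Returns an (H, W, 2) array: for every projector pixel, the (u, v) position
    in the equirectangular input image that it should display.
    xyz_of_pixel(u, v) is assumed to return the interpolated 3D point of pixel (u, v)."""
    W, H = projector_size
    corr = np.zeros((H, W, 2), dtype=np.float32)
    for v in range(H):
        for u in range(W):
            p = R_view @ xyz_of_pixel(u, v) + t_view
            lon = np.arctan2(p[0], p[2])
            lat = np.arctan2(p[1], np.hypot(p[0], p[2]))
            corr[v, u, 0] = (lon - lon_range[0]) / (lon_range[1] - lon_range[0]) * input_size[0]
            corr[v, u, 1] = (lat - lat_range[0]) / (lat_range[1] - lat_range[0]) * input_size[1]
    return corr
```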
As described above, also in the case of this spherical screen, the portable terminal device 101 can perform projection correction without modeling the projection unit, as in the case of the above-mentioned flat screen, and can therefore suppress a decrease in the accuracy of projection correction. Further, since the user does not need to consider keystone correction in this adjustment, the portable terminal device 101 (projection imaging system 100) can suppress a decrease in the accuracy of the projection correction.
The present technology can also be implemented as any configuration mounted on an arbitrary device or a device constituting a system, for example, a processor (for example, a video processor) as a system LSI (Large Scale Integration), a module using a plurality of processors (for example, a video module), a unit using a plurality of modules (for example, a video unit), a set in which other functions are added to the unit (for example, a video set), or the like (that is, a partial configuration of a device).
This technology can also be applied to a network system composed of a plurality of devices.
For example, it can be applied to cloud services that provide services related to images (moving images) to arbitrary terminals such as computers, AV (Audio Visual) devices, portable information processing terminals, and IoT (Internet of Things) devices.
Systems, devices, processing units, and the like to which the present technology is applied can be used in any field, such as transportation, medical care, crime prevention, agriculture, livestock industry, mining, beauty, factories, home appliances, weather, and nature monitoring. The application is also arbitrary.
This technology can be applied to systems and devices used for providing ornamental contents and the like.
The present technology can be applied to systems and devices used for traffic, such as traffic condition supervision and automatic driving control.
The present technology can be applied to systems and devices used for security purposes.
The present technology can be applied to systems and devices used for automatic control of machines and the like.
The present technology can be applied to systems and devices used for agriculture and the livestock industry.
The present technology can also be applied to systems and devices for monitoring natural conditions such as volcanoes, forests and oceans, and wildlife. Further, for example, the present technology can be applied to systems and devices used for sports.
Further, the present technology can be implemented as any configuration that constitutes a device or system, for example, a processor (for example, a video processor) as a system LSI (Large Scale Integration), a module using a plurality of processors (for example, a video module), a unit using a plurality of modules (for example, a video unit), a set in which other functions are added to the unit (for example, a video set), or the like (that is, a partial configuration of the device).
In the present specification, the system means a set of a plurality of components (devices, modules (parts), etc.), and it does not matter whether all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and one device in which a plurality of modules are housed in one housing, are both systems.
For example, the configuration described as one device (or processing unit) may be divided and configured as a plurality of devices (or processing units). On the contrary, the configurations described above as a plurality of devices (or processing units) may be collectively configured as one device (or processing unit). Further, a configuration other than the above may be added to the configuration of each device (or each processing unit). Further, a part of the configuration of one device (or processing unit) may be included in the configuration of another device (or another processing unit).
Further, for example, the present technology can have a cloud computing configuration in which one function is shared and jointly processed by a plurality of devices via a network.
Further, for example, the above-mentioned program can be executed in an arbitrary device. In that case, the device may have the necessary functions (functional blocks, etc.) so that the necessary information can be obtained.
Further, for example, each step described in the above flowchart can be executed by one device, or can be shared and executed by a plurality of devices. Further, when a plurality of processes are included in one step, the plurality of processes included in that one step can be executed by one device, or shared and executed by a plurality of devices. In other words, a plurality of processes included in one step can also be executed as processes of a plurality of steps. On the contrary, the processes described as a plurality of steps can also be collectively executed as one step.
The processes of the steps for describing the program may be executed in chronological order in the order described in the present specification, or may be executed in parallel, or individually at a required timing, such as when a call is made. That is, as long as there is no contradiction, the processes of each step may be executed in an order different from the above-mentioned order. Further, the processes of the steps for describing this program may be executed in parallel with the processes of another program, or may be executed in combination with the processes of another program.
The information processing apparatus obtains the three-dimensional point position of the pixel of interest based on the known three-dimensional point positions corresponding to the peripheral pixels of the pixel of interest.
(3) The information processing apparatus according to (1) or (2), wherein the correction vector derivation unit derives the correction vector for all the pixels of the projection unit.
(4) The information processing apparatus according to any one of (1) to (3), wherein the correction vector derivation unit sets the presentation position of the input image in the light receiving region, and converts the projection point into the coordinate system of the input image by using the presentation position.
The correction vector derivation unit estimates a range corresponding to the projected image projected by each of the plurality of projection units in the light receiving region, and the range corresponding to each of the plurality of projection units is superimposed.
(7) The information processing apparatus according to any one of (1) to (6), further including a projection control unit that causes the projection unit to project a corrected image, which is an image generated by correcting the input image using the correction vector derived by the correction vector derivation unit.
The information processing apparatus according to any one of (1) to (7), further including a projection area adjustment unit for adjusting the projection area of the input image, wherein the correction vector derivation unit derives the correction vector using a parameter indicating the adjustment result of the projection area by the projection area adjustment unit.
The projection area adjustment unit adjusts the projection area based on a user instruction input via a user interface, and derives the parameter.
The parameter includes a parameter related to the zoom of the projection area, a parameter related to the shift of the projection area, and a parameter related to the roll of the projection area.
The information processing apparatus according to any one of (1) to (10), further including a projection plane viewpoint setting unit that sets the projection plane based on the three-dimensional point position and sets the viewpoint corresponding to the derived projection plane, wherein the correction vector derivation unit projects the three-dimensional point position of the pixel of interest onto the light receiving region of the virtual imaging unit corresponding to the viewpoint set by the projection plane viewpoint setting unit.
(12) The information processing apparatus according to (11), wherein the projection surface is a flat surface.
The information processing apparatus according to (11), wherein the projection surface is a cylindrical surface.
The information processing apparatus according to (11), wherein the projection surface is a spherical surface.
The information processing apparatus according to any one of (11) to (14), further including a position / posture estimation unit that estimates the position and the posture for each of a plurality of captured images obtained by capturing the projected images at different positions and postures, and derives the three-dimensional point position based on the estimated positions and postures, wherein the projection plane viewpoint setting unit sets the projection surface and the viewpoint based on the three-dimensional point position derived by the position / posture estimation unit.
(16) The information processing apparatus according to (15), wherein the position / posture estimation unit estimates the relative position and the relative posture of a captured image of interest with respect to a captured image serving as a reference.
The information processing apparatus adjusts the scale of the three-dimensional point position based on the plurality of relative positions and relative postures.
The information processing apparatus according to any one of (15) to (17), further including a corresponding point detection unit that detects corresponding points included in the plurality of captured images, wherein the position / posture estimation unit estimates the position and the posture based on the corresponding points detected by the corresponding point detection unit, and derives the three-dimensional point position.
The captured image includes a plurality of pattern images of different colors projected by the plurality of projection units.
The information processing apparatus according to (18), wherein the corresponding point detection unit separates the plurality of pattern images and detects the corresponding points.
The projection of the three-dimensional point position, which is the projection position on the three-dimensional space corresponding to the pixel of interest of the projection unit that projects the input image, at the position and orientation as the viewpoint of the projection image projected by the projection unit.
100 projection imaging system, 101 portable terminal device, 102 projector, 151 information processing unit, 152 imaging unit, 181 corresponding point detection unit, 182 camera posture estimation unit, 183 screen reconstruction unit, 184 correction vector derivation unit, 185 projection control unit, 186 projection area adjustment unit, 201 information processing unit, 202 projection unit, 231 correction vector acquisition unit, 232 image acquisition unit, 233 correction image generation unit

Abstract

The present disclosure relates to an information processing device and method which make it possible to suppress a reduction in projection correction accuracy. A three-dimensional point position, which is projection position, in a three-dimensional space, corresponding to a pixel of interest of a projecting unit that projects an input image, is projected onto a light receiving region of a virtual imaging unit, which is a virtual imaging unit for imaging a projected image projected by the projecting unit, using a position and attitude such that the three-dimensional point position is the viewpoint of the projected image, and a correction vector corresponding to the pixel of interest is derived by converting a projection point, indicating the position at which the three-dimensional point position has been projected onto the light receiving region, into the coordinate system of the input image. The present disclosure is applicable, for example, to information processing devices, projecting devices, imaging devices, projection/imaging devices, projection/imaging control devices, and image projection/imaging systems.

Description

Information processing device and method
The present disclosure relates to an information processing device and a method, and more particularly to an information processing device and a method capable of suppressing a decrease in the accuracy of projection correction.
Conventionally, there was a system that projects an image using multiple projectors. In such a system, one image can be projected by appropriately connecting the projected images of the respective projectors. Therefore, the projected image is sensed by a camera or the like, and the image projected by each projector is corrected by using the sensing result (see, for example, Patent Document 1).
Japanese Unexamined Patent Publication No. 2016-14720
However, in this method, the projector, camera, and screen are modeled and estimated, so if the accuracy of the model estimation is low, the accuracy of the final corrected image may decrease.
This disclosure is made in view of such a situation, and makes it possible to suppress a decrease in the accuracy of projection correction.
 本技術の一側面の情報処理装置は、入力画像を投影する投影部の着目画素に対応する3次元空間上の投影位置である3次元点位置を、前記投影部により投影された投影画像の視点とする位置および姿勢で前記投影画像を撮像する仮想の撮像部である仮想撮像部の受光領域に射影し、前記受光領域における前記3次元点位置が射影された位置を示す射影点を、前記入力画像の座標系に変換することで、前記着目画素に対応する補正ベクトルを導出する補正ベクトル導出部を備える情報処理装置である。 The information processing apparatus on one aspect of the present technology sets the three-dimensional point position, which is the projection position on the three-dimensional space corresponding to the pixel of interest of the projection unit that projects the input image, to the viewpoint of the projection image projected by the projection unit. The input points are projected onto the light receiving area of the virtual imaging unit, which is a virtual imaging unit that captures the projected image at the position and orientation, and the projection points indicating the positions where the three-dimensional point positions in the light receiving region are projected are input. It is an information processing apparatus including a correction vector derivation unit that derives a correction vector corresponding to the pixel of interest by converting it into an image coordinate system.
 本技術の一側面の情報処理方法は、入力画像を投影する投影部の着目画素に対応する3次元空間上の投影位置である3次元点位置を、前記投影部により投影された投影画像の視点とする位置および姿勢で前記投影画像を撮像する仮想の撮像部である仮想撮像部の受光領域に射影し、前記受光領域における前記3次元点位置が射影された位置を示す射影点を、前記入力画像の座標系に変換することで、前記着目画素に対応する補正ベクトルを導出する情報処理方法である。 In the information processing method of one aspect of the present technology, the three-dimensional point position, which is the projection position in the three-dimensional space corresponding to the pixel of interest of the projection unit that projects the input image, is the viewpoint of the projection image projected by the projection unit. The input points are projected onto the light receiving area of the virtual imaging unit, which is a virtual imaging unit that captures the projected image at the position and orientation, and the projection points indicating the positions where the three-dimensional point positions in the light receiving region are projected are input. This is an information processing method for deriving a correction vector corresponding to the pixel of interest by converting to the coordinate system of an image.
 本技術の一側面の情報処理装置および方法においては、入力画像を投影する投影部の着目画素に対応する3次元空間上の投影位置である3次元点位置が、その投影部により投影された投影画像の視点とする位置および姿勢でその投影画像を撮像する仮想の撮像部である仮想撮像部の受光領域に射影され、その受光領域におけるその3次元点位置が射影された位置を示す射影点が、その入力画像の座標系に変換されることで、その着目画素に対応する補正ベクトルが導出される。 In the information processing apparatus and method of one aspect of the present technology, the 3D point position, which is the projection position in the 3D space corresponding to the pixel of interest of the projection unit that projects the input image, is the projection projected by the projection unit. A projection point that is projected onto the light receiving area of the virtual imaging unit, which is a virtual imaging unit that captures the projected image at the position and orientation as the viewpoint of the image, and the three-dimensional point position in the light receiving region is projected. , The correction vector corresponding to the pixel of interest is derived by converting to the coordinate system of the input image.
Fig. 1 is a diagram explaining an example of projection correction.
Fig. 2 is a block diagram showing a main configuration example of a projection imaging system.
Fig. 3 is a block diagram showing a main configuration example of a portable terminal device.
Fig. 4 is a functional block diagram showing an example of the main functions realized by an information processing unit.
Fig. 5 is a block diagram showing a main configuration example of a projector.
Fig. 6 is a functional block diagram showing an example of the main functions realized by an information processing unit.
Fig. 7 is a flowchart explaining an example of the flow of projection correction processing.
Figs. 8 and 9 are diagrams explaining corresponding point detection.
Figs. 10 to 12 are diagrams explaining camera posture estimation.
Figs. 13 and 14 are diagrams explaining reconstruction of a flat screen.
Figs. 15 to 18 are diagrams explaining correction vector derivation.
Figs. 19 and 20 are diagrams explaining projection area adjustment.
Figs. 21 and 22 are diagrams explaining reconstruction of a cylindrical surface screen.
Figs. 23 to 25 are diagrams explaining correction vector derivation.
Figs. 26 and 27 are diagrams explaining reconstruction of a spherical screen.
Figs. 28 to 30 are diagrams explaining correction vector derivation.
Hereinafter, embodiments for carrying out the present disclosure (hereinafter referred to as embodiments) will be described. The explanation will be given in the following order.
1. Projection correction
2. First Embodiment (in the case of a flat screen)
3. Second embodiment (in the case of a cylindrical screen)
4. Third embodiment (in the case of a spherical screen)
5. Addendum
<1. Projection correction>
<Correction accuracy>
Conventional general projection imaging systems (for example, Scalable Display Technologies) do not perform three-dimensional position / orientation / shape estimation, and in order to correct the projection automatically, the camera or the projector had to be fixedly placed at a position facing the screen. Further, when adjusting the size, position, inclination, etc. of the image after projection correction, the adjuster has to correct each of the positions of the four corners of the projected image as shown in A of FIG. 1, and the keystone adjustment is also performed visually at the same time, so that not only is complicated work required, but the correction accuracy may also depend on the ability of the adjuster.
 また、特許文献1に記載の方法の場合、プロジェクタ、カメラ、スクリーンをそれぞれモデル化して推定を行うため、その補正精度がそのモデルの精度に依存するおそれがあった。そのため、モデル推定の精度が低いと最終的な補正画像の精度が低減するおそれがあった。 Further, in the case of the method described in Patent Document 1, since the projector, the camera, and the screen are modeled and estimated, the correction accuracy may depend on the accuracy of the model. Therefore, if the accuracy of the model estimation is low, the accuracy of the final corrected image may be reduced.
<Derivation of correction vector using 3D point position>
Therefore, the projected image is the position and orientation of the three-dimensional point position, which is the projection position on the three-dimensional space corresponding to the pixel of interest of the projection unit that projects the input image, as the viewpoint of the projection image projected by the projection unit. By projecting onto the light receiving area of the virtual imaging unit, which is a virtual imaging unit that captures images, and converting the projection point indicating the position where the three-dimensional point position in the light receiving area is projected into the coordinate system of the input image. , A correction vector corresponding to the target pixel is derived.
 例えば、情報処理装置において、入力画像を投影する投影部の着目画素に対応する3次元空間上の投影位置である3次元点位置を、その投影部により投影された投影画像の視点とする位置および姿勢でその投影画像を撮像する仮想の撮像部である仮想撮像部の受光領域に射影し、その受光領域におけるその3次元点位置が射影された位置を示す射影点を、その入力画像の座標系に変換することで、その着目画素に対応する補正ベクトルを導出する補正ベクトル導出部を備えるようにする。 For example, in an information processing apparatus, a position in which a three-dimensional point position, which is a projection position in a three-dimensional space corresponding to a pixel of interest of a projection unit that projects an input image, is used as a viewpoint of a projection image projected by the projection unit and a position The coordinate system of the input image is a projection point that is projected onto the light receiving area of the virtual imaging unit, which is a virtual imaging unit that captures the projected image in the posture, and the projection point indicating the position where the three-dimensional point position in the light receiving region is projected. By converting to, a correction vector derivation unit for deriving a correction vector corresponding to the pixel of interest is provided.
 このようにすることにより、投影部のモデル化を必要とせずに投影補正が可能になる。つまり、モデルの精度に依存しない精度での投影補正が可能になる。すなわち、投影補正の精度の低減の抑制が可能になる。また、例えば、図1のBに示されるように、補正画像を保持したままの、ズーム・シフト・ロールのみによる調整が可能になる。つまり、ユーザがキーストーン補正を考慮して調整する必要がない。すなわち、投影補正の精度の低減の抑制が可能になる。 By doing so, projection correction becomes possible without the need for modeling the projection unit. That is, it is possible to perform projection correction with an accuracy that does not depend on the accuracy of the model. That is, it is possible to suppress a decrease in the accuracy of projection correction. Further, for example, as shown in B of FIG. 1, adjustment can be made only by the zoom shift roll while holding the corrected image. That is, the user does not need to consider and adjust the keystone correction. That is, it is possible to suppress a decrease in the accuracy of projection correction.
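As a hedged sketch of this derivation for the flat-screen case: the 3D point of a projector pixel is projected through an assumed pinhole model of the virtual viewpoint camera and then normalised by the image presentation position. K_virt, the rectangle parameterisation, and the function names are illustrative assumptions, not the patent's notation.

```python
import numpy as np

def project_to_viewpoint(p_world, K_virt, R_view, t_view):
    """Pinhole projection of a 3D point onto the light receiving area of the
    virtual viewpoint camera. K_virt is an intrinsic matrix chosen freely for
    that virtual camera; no real projector or camera is being modeled."""
    p = R_view @ p_world + t_view
    q = K_virt @ p
    return q[:2] / q[2]                     # projection point on the light receiving area

def to_input_coordinates(q, presentation_rect, input_size):
    """Convert the projection point into the input image coordinate system using
    the rectangular image presentation position (left, top, width, height)."""
    left, top, width, height = presentation_rect
    u = (q[0] - left) / width * input_size[0]
    v = (q[1] - top) / height * input_size[1]
    return np.array([u, v])                 # correction vector (u, v) for the pixel
```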
<2. First Embodiment>
<Projection imaging system>
FIG. 2 is a block diagram showing a main configuration example of a projection imaging system which is an embodiment of an information processing system to which the present technology is applied. In FIG. 2, the projection imaging system 100 includes a portable terminal device 101, a projector 102-1, and a projector 102-2, and is a system that projects an image on a screen 120 or images a screen 120.
 携帯型端末装置101、プロジェクタ102-1、およびプロジェクタ102-2は、通信路110を介して互いに通信可能に接続される。この通信路110は、任意であり、有線であってもよいし、無線であってもよい。例えば、携帯型端末装置101、プロジェクタ102-1、およびプロジェクタ102-2は、この通信路110を介して、制御信号や画像データ等を授受することができる。 The portable terminal device 101, the projector 102-1, and the projector 102-2 are connected to each other so as to be able to communicate with each other via the communication path 110. The communication path 110 is arbitrary and may be wired or wireless. For example, the portable terminal device 101, the projector 102-1, and the projector 102-2 can send and receive control signals, image data, and the like via the communication path 110.
 携帯型端末装置101は、例えば、スマートフォン、タブレット端末、ノート型パーソナルコンピュータ等といった、ユーザが携帯可能なデバイスである。携帯型端末装置101は、通信機能と情報処理機能と撮像機能を有する。例えば、携帯型端末装置101は、プロジェクタ102-1およびプロジェクタ102-2による画像投影を制御しうる。また、携帯型端末装置101は、プロジェクタ102-1およびプロジェクタ102-2の投影補正を行うことができる。さらに、携帯型端末装置101は、プロジェクタ102-1やプロジェクタ102-2がスクリーン120に投影した投影画像を撮像しうる。 The portable terminal device 101 is a user-portable device such as a smartphone, a tablet terminal, a notebook personal computer, or the like. The portable terminal device 101 has a communication function, an information processing function, and an image pickup function. For example, the portable terminal device 101 can control the image projection by the projector 102-1 and the projector 102-2. Further, the portable terminal device 101 can perform projection correction of the projector 102-1 and the projector 102-2. Further, the portable terminal device 101 can capture a projected image projected on the screen 120 by the projector 102-1 or the projector 102-2.
 プロジェクタ102-1およびプロジェクタ102-2は、画像を投影する投影装置である。プロジェクタ102-1およびプロジェクタ102-2は、互いに同様の装置である。以下において、プロジェクタ102-1およびプロジェクタ102-2を互いに区別して説明する必要が無い場合、プロジェクタ102と称する。例えば、プロジェクタ102は、携帯型端末装置101の制御に従って、入力画像をスクリーン120に投影することができる。 Projector 102-1 and projector 102-2 are projection devices that project images. The projector 102-1 and the projector 102-2 are similar devices to each other. In the following, when it is not necessary to distinguish the projector 102-1 and the projector 102-2 from each other, the projector 102 is referred to as a projector 102. For example, the projector 102 can project an input image onto the screen 120 under the control of the portable terminal device 101.
 プロジェクタ102-1およびプロジェクタ102-2は、互いに協働して画像を投影しうる。例えば、プロジェクタ102-1およびプロジェクタ102-2は、画像を互いに同位置に投影し、投影画像の高輝度化を実現することができる。また、プロジェクタ102-1およびプロジェクタ102-2は、互いの投影画像が隣接して並ぶように画像を投影し、2つの投影画像で1つの画像を形成し、投影画像の大画面化(高解像度化)を実現することができる。また、プロジェクタ102-1およびプロジェクタ102-2は、互いの投影画像の一部を重畳させたり、一方の投影画像内に他方の投影画像を内包させたりするように画像を投影することもできる。このように協働して投影を行うことにより、プロジェクタ102-1およびプロジェクタ102-2は、高輝度化や大画面化だけでなく、例えば、投影画像のハイダイナミックレンジ化や、高フレームレート化等も実現することもできる。 Projector 102-1 and projector 102-2 can project images in cooperation with each other. For example, the projector 102-1 and the projector 102-2 can project an image at the same position with each other to realize high brightness of the projected image. Further, the projector 102-1 and the projector 102-2 project an image so that the projected images of each other are arranged side by side, and the two projected images form one image, and the projected image is enlarged (high resolution). Can be realized. Further, the projector 102-1 and the projector 102-2 can project an image so as to superimpose a part of each other's projected images or to include the other projected image in one projected image. By collaborating in this way to project, the projector 102-1 and projector 102-2 not only have higher brightness and larger screens, but also have, for example, higher dynamic range and higher frame rate of the projected image. Etc. can also be realized.
 このような画像投影において、プロジェクタ102は、携帯型端末装置101の制御の下、投影する画像を幾何補正し、投影画像が正しい位置で重畳されるようにすることができる。 In such image projection, the projector 102 can geometrically correct the projected image under the control of the portable terminal device 101 so that the projected image is superimposed at the correct position.
 例えば、図2に示されるように、プロジェクタ102-1は、入力画像を、画素領域121-1において補正画像122-1のように幾何補正して投影する。また、プロジェクタ102-2は、入力画像を、画素領域121-2において補正画像122-2のように幾何補正して投影する。 For example, as shown in FIG. 2, the projector 102-1 projects the input image in the pixel region 121-1 with geometric correction like the corrected image 122-1. Further, the projector 102-2 projects the input image in the pixel region 121-2 with geometric correction like the corrected image 122-2.
 スクリーン120においては、画素領域121-1の画像がプロジェクタ102-1により投影画像123-1のように投影される。また、画素領域121-2の画像がプロジェクタ102-2により投影画像123-2のように投影される。この投影画像123-1と投影画像123-2とが重畳する部分において、補正画像122-1と補正画像122-2とが、投影画像124のように、互いに同位置に、歪まないように(矩形の状態で)投影される。 On the screen 120, the image of the pixel area 121-1 is projected by the projector 102-1 like the projected image 123-1. Further, the image of the pixel area 121-2 is projected by the projector 102-2 like the projected image 123-2. In the portion where the projected image 123-1 and the projected image 123-2 overlap, the corrected image 122-1 and the corrected image 122-2 are not distorted at the same position as the projected image 124 (as in the projected image 124). It is projected (in a rectangular shape).
 スクリーン120は、例えば、投影面が平面により形成される平面スクリーンである。 The screen 120 is, for example, a flat screen in which the projection surface is formed by a flat surface.
 このような投影撮像システム100において、携帯型端末装置101は、プロジェクタ102の投影補正を3次元的に行うことができる。 In such a projection imaging system 100, the portable terminal device 101 can perform projection correction of the projector 102 three-dimensionally.
 なお、図2においては、投影撮像システム100は、携帯型端末装置101が1台と、プロジェクタ102が2台とにより構成されるが、各装置の数は任意であり、この例に限定されない。例えば、投影撮像システム100が、携帯型端末装置101を複数有していてもよいし、プロジェクタ102を3台以上有していてもよい。また、携帯型端末装置101がいずれかのプロジェクタ102と一体的に構成されてもよい。 Note that, in FIG. 2, the projection imaging system 100 is composed of one portable terminal device 101 and two projectors 102, but the number of each device is arbitrary and is not limited to this example. For example, the projection imaging system 100 may have a plurality of portable terminal devices 101, or may have three or more projectors 102. Further, the portable terminal device 101 may be integrally configured with any of the projectors 102.
<Portable terminal device>
FIG. 3 is a diagram showing a main configuration example of a portable terminal device 101, which is an embodiment of an information processing device to which the present technology is applied. As shown in FIG. 3, the portable terminal device 101 includes an information processing unit 151, an image pickup unit 152, an input unit 161, an output unit 162, a storage unit 163, a communication unit 164, and a drive 165.
 情報処理部151は、例えば、CPU(Central Processing Unit)、ROM(Read Only Memory)、RAM(Random Access Memory)等を有し、それらを用いてアプリケーションプログラム(ソフトウエア)を実行することにより、各種機能を実現しうるコンピュータである。例えば、情報処理部151は、投影補正に関する処理を行うアプリケーションプログラム(ソフトウエア)をインストールし、実行しうる。ここでコンピュータには、専用のハードウエアに組み込まれているコンピュータや、各種のプログラムをインストールすることで、各種の機能を実行することが可能な、例えば汎用のパーソナルコンピュータ等が含まれる。 The information processing unit 151 has, for example, a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), and the like, and can be used to execute various application programs (software). It is a computer that can realize the function. For example, the information processing unit 151 can install and execute an application program (software) that performs processing related to projection correction. Here, the computer includes a computer embedded in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
 撮像部152は、光学系やイメージセンサ等を有し、被写体を撮像して撮像画像を生成しうる。撮像部152は、生成した撮像画像を情報処理部151に供給しうる。 The image pickup unit 152 has an optical system, an image sensor, and the like, and can capture an image of a subject to generate an image. The image pickup unit 152 can supply the generated captured image to the information processing unit 151.
 入力部161は、例えば、キーボード、マウス、マイクロホン、タッチパネル、入力端子等の入力デバイスを有し、それらの入力デバイスを介して入力された情報を情報処理部151に供給しうる。 The input unit 161 has, for example, input devices such as a keyboard, a mouse, a microphone, a touch panel, and an input terminal, and can supply information input via those input devices to the information processing unit 151.
 出力部162は、例えば、ディスプレイ(表示部)、スピーカ等(音声出力部)、出力端子等の出力デバイスを有し、情報処理部151から供給された情報を、それらの出デバイスを介して出力しうる。 The output unit 162 has, for example, an output device such as a display (display unit), a speaker (audio output unit), and an output terminal, and outputs information supplied from the information processing unit 151 via those output devices. Can be done.
 記憶部163は、例えば、ハードディスク、RAMディスク、不揮発性のメモリ等の記憶媒体を有し、情報処理部151から供給された情報を、その記憶媒体に記憶しうる。記憶部163は、その記憶媒体に記憶されている情報を読み出し、情報処理部151に供給しうる。 The storage unit 163 has, for example, a storage medium such as a hard disk, a RAM disk, or a non-volatile memory, and can store the information supplied from the information processing unit 151 in the storage medium. The storage unit 163 can read out the information stored in the storage medium and supply it to the information processing unit 151.
 通信部164は、例えば、ネットワークインタフェースを有し、他の装置から送信された情報を受信し、その受信した情報を情報処理部151に供給しうる。通信部164は、情報処理部151から供給された情報を他の装置宛てに送信しうる。 The communication unit 164 has, for example, a network interface, can receive information transmitted from another device, and can supply the received information to the information processing unit 151. The communication unit 164 can transmit the information supplied from the information processing unit 151 to another device.
 ドライブ165は、磁気ディスク、光ディスク、光磁気ディスク、または半導体メモリ等のリムーバブル記録媒体171のインタフェースを有し、自身に装着されたリムーバブル記録媒体171に記録されている情報を読み出し、情報処理部151に供給しうる。ドライブ165は、情報処理部151から供給された情報を、自身に装着された書き込み可能なリムーバブル記録媒体171に記録しうる。 The drive 165 has an interface of a removable recording medium 171 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, reads information recorded on the removable recording medium 171 mounted on the drive 165, and reads information from the information processing unit 151. Can be supplied to. The drive 165 can record the information supplied from the information processing unit 151 on the writable removable recording medium 171 attached to the drive 165.
 情報処理部151は、例えば、記憶部163に記憶されているアプリケーションプログラムをロードして実行する。その際、情報処理部151は、各種の処理を実行する上において必要なデータ等も適宜記憶することができる。このアプリケーションプログラムやデータ等は、例えば、パッケージメディア等としてのリムーバブル記録媒体171に記録して提供することができる。その場合、このアプリケーションプログラムやデータ等は、リムーバブル記録媒体171が装着されたドライブ165により読み出され、情報処理部151を介して記憶部163にインストールされる。また、このアプリケーションプログラムは、ローカルエリアネットワーク、インターネット、デジタル衛星放送といった、有線または無線の伝送媒体を介して提供することもできる。その場合、このアプリケーションプログラムやデータ等は、通信部164により受信され、情報処理部151を介して記憶部163にインストールされる。また、このアプリケーションプログラムやデータ等は、情報処理部151内のROMや記憶部163に、あらかじめインストールしておくこともできる。 The information processing unit 151 loads and executes, for example, the application program stored in the storage unit 163. At that time, the information processing unit 151 can appropriately store data and the like necessary for executing various processes. The application program, data, and the like can be recorded and provided on a removable recording medium 171 as a package media or the like, for example. In that case, the application program, data, and the like are read out by the drive 165 equipped with the removable recording medium 171 and installed in the storage unit 163 via the information processing unit 151. The application program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting. In that case, the application program, data, and the like are received by the communication unit 164 and installed in the storage unit 163 via the information processing unit 151. Further, the application program, data, and the like can be installed in advance in the ROM and the storage unit 163 in the information processing unit 151.
<Functional block of portable terminal device>
FIG. 4 shows a function realized by the information processing unit 151 executing an application program as a functional block. As shown in FIG. 4, the information processing unit 151 executes the application program to execute the corresponding point detection unit 181 and the camera posture estimation unit 182, the screen reconstruction unit 183, and the correction vector derivation unit 184 as functional blocks. It can have a projection control unit 185 and a projection area adjustment unit 186.
 対応点検出部181は、スクリーン120に投影された投影画像の撮像画像に基づいて、各撮像画像について対応点の検出を行う。対応点検出部181は、検出した対応点を示す対応点情報をカメラ姿勢推定部182に供給する。 The corresponding point detection unit 181 detects the corresponding point for each captured image based on the captured image of the projected image projected on the screen 120. The corresponding point detection unit 181 supplies the corresponding point information indicating the detected corresponding point to the camera posture estimation unit 182.
 カメラ姿勢推定部182は、その対応点情報に基づいて、撮像画像に対応するカメラの位置や姿勢(つまり、携帯型端末装置101(の撮像部152)が撮像を行った位置や姿勢)を推定する。また、カメラ姿勢推定部182は、その推定したカメラの位置および姿勢に基づいて、プロジェクタ102の着目画素に対応する3次元空間上の投影位置である3次元点位置を導出する。カメラ姿勢推定部182は、その推定した位置や姿勢、3次元点位置等を示すカメラ姿勢情報を、対応点情報とともにスクリーン再構成部183に供給する。 The camera posture estimation unit 182 estimates the position and posture of the camera corresponding to the captured image (that is, the position and posture taken by the portable terminal device 101 (imaging unit 152)) based on the corresponding point information. do. Further, the camera posture estimation unit 182 derives a three-dimensional point position, which is a projection position in the three-dimensional space corresponding to the pixel of interest of the projector 102, based on the estimated position and posture of the camera. The camera posture estimation unit 182 supplies the camera posture information indicating the estimated position and posture, the three-dimensional point position, and the like to the screen reconstruction unit 183 together with the corresponding point information.
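A minimal sketch of this kind of pose estimation and triangulation is shown below, assuming OpenCV is available and the handheld camera intrinsics K are already calibrated (as the embodiment states). The function name and the use of the essential matrix are illustrative choices, and the recovered translation is only defined up to scale.

```python
import cv2
import numpy as np

def estimate_pose_and_points(pts_a, pts_b, K):
    """pts_a, pts_b: (N, 2) float arrays of corresponding points in two captured
    images taken from different positions. Returns the relative pose of capture B
    with respect to capture A and the triangulated 3D points (up to scale)."""
    E, _ = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K)
    P_a = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_b = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P_a, P_b, pts_a.T, pts_b.T)
    pts3d = (pts4d[:3] / pts4d[3]).T        # homogeneous -> Euclidean
    return R, t, pts3d
```

With three or more capture positions, the relative poses and the scale of the triangulated point cloud can then be reconciled across views.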
 スクリーン再構成部183は、その対応点情報およびカメラ姿勢情報に基づいて、プロジェクタ102が画像を投影する投影面(仮想のスクリーン)を設定する。この場合、スクリーン120が平面スクリーンであるため、スクリーン再構成部183は、平面の投影面を設定する。また、スクリーン再構成部183は、その投影面に投影された投影画像を見るための視点(つまり、投影面に対応する視点)を設定し、その視点において投影画像を見る際の視界を示す仮想のカメラである視点カメラ(つまり、投影面に対応する視点カメラ)を設定する。スクリーン再構成部183は、このように設定した投影面に関する情報を含む仮スクリーン情報と、このように設定した視点カメラに関する情報である視点カメラ情報とを生成し、カメラ姿勢情報等とともに、補正ベクトル導出部184に供給する。 The screen reconstruction unit 183 sets a projection surface (virtual screen) on which the projector 102 projects an image based on the corresponding point information and the camera posture information. In this case, since the screen 120 is a flat screen, the screen reconstruction unit 183 sets a flat projection plane. Further, the screen reconstruction unit 183 sets a viewpoint for viewing the projected image projected on the projection surface (that is, a viewpoint corresponding to the projection surface), and a virtual view showing the view when the projected image is viewed at that viewpoint. Set the viewpoint camera (that is, the viewpoint camera corresponding to the projection plane). The screen reconstruction unit 183 generates temporary screen information including information about the projection surface set in this way and viewpoint camera information which is information about the viewpoint camera set in this way, and together with camera attitude information and the like, a correction vector. It is supplied to the out-licensing unit 184.
 補正ベクトル導出部184は、それらの情報等に基づいて、入力画像の各画素をどのように補正するかを示す補正ベクトルを導出する。つまり、補正ベクトル導出部184は、入力画像を投影する投影部の着目画素に対応する3次元空間上の投影位置である3次元点位置を、その投影部により投影された投影画像の視点とする位置および姿勢でその投影画像を撮像する仮想の撮像部である仮想撮像部の受光領域に射影し、その受光領域におけるその3次元点位置が射影された位置を示す射影点を、その入力画像の座標系に変換することで、その着目画素に対応する補正ベクトルを導出する。 The correction vector derivation unit 184 derives a correction vector indicating how to correct each pixel of the input image based on the information and the like. That is, the correction vector derivation unit 184 sets the three-dimensional point position, which is the projection position on the three-dimensional space corresponding to the pixel of interest of the projection unit that projects the input image, as the viewpoint of the projection image projected by the projection unit. The projection point of the input image is projected onto the light receiving area of the virtual imaging unit, which is a virtual imaging unit that captures the projected image at the position and orientation, and the projection point indicating the position where the three-dimensional point position in the light receiving region is projected. By converting to a coordinate system, a correction vector corresponding to the pixel of interest is derived.
 また、補正ベクトル導出部184は、あらかじめ用意された、投影補正の仕方を示す投影補正情報を取得し、その投影補正情報に基づいて補正ベクトルを導出することもできる。補正ベクトル導出部184は、投影領域調整部186から供給される、投影面における投影画像が投影される領域である投影領域(の位置・大きさ・傾き等)の制御に関するパラメータである投影位置パラメータを取得し、その投影位置パラメータに基づいて、補正ベクトルを導出することもできる。補正ベクトル導出部184は、導出した補正ベクトルを投影制御部185に供給する。 Further, the correction vector derivation unit 184 can also acquire the projection correction information prepared in advance indicating the method of projection correction, and derive the correction vector based on the projection correction information. The correction vector derivation unit 184 is a projection position parameter which is a parameter related to the control of the projection area (position, size, inclination, etc.) which is the area where the projection image is projected on the projection surface, which is supplied from the projection area adjustment unit 186. Can also be obtained and a correction vector can be derived based on the projection position parameter. The correction vector derivation unit 184 supplies the derived correction vector to the projection control unit 185.
 投影制御部185は、その補正ベクトルを制御対象のプロジェクタ102に供給する。また、投影制御部185は、補正画像の投影指示をそのプロジェクタ102に供給し、補正画像を投影させる。 The projection control unit 185 supplies the correction vector to the projector 102 to be controlled. Further, the projection control unit 185 supplies a projection instruction of the corrected image to the projector 102 to project the corrected image.
 投影領域調整部186は、例えば、ユーザインタフェース(UI(User Interface))画像を生成して出力部162に供給し、そのUI画像をモニタに表示させる。また、投影領域調整部186は、入力部161において受け付けられた、そのUI画像に対するユーザ指示を取得する。投影領域調整部186は、そのユーザ指示に基づいて、投影面における投影画像が投影される領域である投影領域の位置、大きさ、傾き等を制御する。例えば、投影領域調整部186は、投影領域のシフト、ズーム、ロール等の指示を受け付けるUI画像をモニタに表示させ、入力された、投影領域のシフト、ズーム、ロール等に関するユーザ指示を取得する。投影領域調整部186は、そのユーザ指示に対応する投影位置パラメータ、すなわち、指示されたとおりに投影領域をシフト、ズーム、ロールさせる投影位置パラメータを、補正ベクトル導出部184に供給する。 The projection area adjustment unit 186 generates, for example, a user interface (UI (User Interface)) image, supplies it to the output unit 162, and displays the UI image on the monitor. Further, the projection area adjustment unit 186 acquires the user instruction for the UI image received by the input unit 161. The projection area adjustment unit 186 controls the position, size, inclination, and the like of the projection area, which is the area on the projection surface on which the projection image is projected, based on the user's instruction. For example, the projection area adjustment unit 186 displays a UI image that accepts instructions such as shift, zoom, and roll of the projection area on the monitor, and acquires the input user instructions regarding shift, zoom, roll, and the like of the projection area. The projection area adjusting unit 186 supplies the correction vector deriving unit 184 with projection position parameters corresponding to the user's instructions, that is, projection position parameters for shifting, zooming, and rolling the projection area as instructed.
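A small sketch of how such zoom / shift / roll parameters could be applied to the image presentation position (represented here by its four corner points on the viewpoint-camera plane); the parameterisation is an assumption for illustration, and the corrected image itself is untouched by this adjustment.

```python
import numpy as np

def adjust_presentation_position(corners, zoom=1.0, shift=(0.0, 0.0), roll_deg=0.0):
    """corners: (4, 2) array with the corners of the image presentation position.
    Scales about the center, rotates by the roll angle, then shifts."""
    center = corners.mean(axis=0)
    th = np.deg2rad(roll_deg)
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return (corners - center) * zoom @ R.T + center + np.asarray(shift)
```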
<Projector>
FIG. 5 is a diagram showing a main configuration example of the projector 102, which is an embodiment of an information processing apparatus to which the present technology is applied. As shown in FIG. 5, the projector 102 has an information processing unit 201, a projection unit 202, an input unit 211, an output unit 212, a storage unit 213, a communication unit 214, and a drive 215.
 情報処理部201は、例えば、CPU、ROM、RAM等を有し、それらを用いてアプリケーションプログラム(ソフトウエア)を実行することにより、各種機能を実現しうるコンピュータである。例えば、情報処理部201は、画像投影に関する処理を行うアプリケーションプログラム(ソフトウエア)をインストールし、実行しうる。ここでコンピュータには、専用のハードウエアに組み込まれているコンピュータや、各種のプログラムをインストールすることで、各種の機能を実行することが可能な、例えば汎用のパーソナルコンピュータ等が含まれる。 The information processing unit 201 is a computer that has, for example, a CPU, ROM, RAM, etc., and can realize various functions by executing an application program (software) using them. For example, the information processing unit 201 may install and execute an application program (software) that performs processing related to image projection. Here, the computer includes a computer embedded in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
 投影部202は、光学デバイスや光源等を有し、情報処理部201により制御されて、所望の画像を投影しうる。例えば、投影部202は、情報処理部201から供給される画像を投影しうる。 The projection unit 202 has an optical device, a light source, and the like, and can be controlled by the information processing unit 201 to project a desired image. For example, the projection unit 202 can project an image supplied from the information processing unit 201.
 入力部211は、例えば、キーボード、マウス、マイクロホン、タッチパネル、入力端子等の入力デバイスを有し、それらの入力デバイスを介して入力された情報を情報処理部201に供給しうる。 The input unit 211 has, for example, input devices such as a keyboard, mouse, microphone, touch panel, and input terminal, and can supply information input via those input devices to the information processing unit 201.
 出力部212は、例えば、ディスプレイ(表示部)、スピーカ等(音声出力部)、出力端子等の出力デバイスを有し、情報処理部201から供給された情報を、それらの出デバイスを介して出力しうる。 The output unit 212 has, for example, an output device such as a display (display unit), a speaker (audio output unit), and an output terminal, and outputs information supplied from the information processing unit 201 via those output devices. Can be done.
 記憶部213は、例えば、ハードディスク、RAMディスク、不揮発性のメモリ等の記憶媒体を有し、情報処理部201から供給された情報を、その記憶媒体に記憶しうる。記憶部213は、その記憶媒体に記憶されている情報を読み出し、情報処理部201に供給しうる。 The storage unit 213 has a storage medium such as a hard disk, a RAM disk, or a non-volatile memory, and can store the information supplied from the information processing unit 201 in the storage medium. The storage unit 213 can read out the information stored in the storage medium and supply it to the information processing unit 201.
 通信部214は、例えば、ネットワークインタフェースを有し、他の装置から送信された情報を受信し、その受信した情報を情報処理部201に供給しうる。通信部214は、情報処理部201から供給された情報を他の装置宛てに送信しうる。 The communication unit 214 has, for example, a network interface, can receive information transmitted from another device, and can supply the received information to the information processing unit 201. The communication unit 214 may transmit the information supplied from the information processing unit 201 to another device.
 ドライブ215は、磁気ディスク、光ディスク、光磁気ディスク、または半導体メモリ等のリムーバブル記録媒体221のインタフェースを有し、自身に装着されたリムーバブル記録媒体221に記録されている情報を読み出し、情報処理部201に供給しうる。ドライブ215は、情報処理部201から供給された情報を、自身に装着された書き込み可能なリムーバブル記録媒体221に記録しうる。 The drive 215 has an interface of a removable recording medium 221 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, reads information recorded on the removable recording medium 221 mounted on the drive 215, and reads out information processing unit 201. Can be supplied to. The drive 215 can record the information supplied from the information processing unit 201 on the writable removable recording medium 221 attached to the drive 215.
 情報処理部201は、例えば、記憶部213に記憶されているアプリケーションプログラムをロードして実行する。その際、情報処理部201は、各種の処理を実行する上において必要なデータ等も適宜記憶することができる。このアプリケーションプログラムやデータ等は、例えば、パッケージメディア等としてのリムーバブル記録媒体221に記録して提供することができる。その場合、このアプリケーションプログラムやデータ等は、リムーバブル記録媒体221が装着されたドライブ215により読み出され、情報処理部201を介して記憶部213にインストールされる。また、このアプリケーションプログラムは、ローカルエリアネットワーク、インターネット、デジタル衛星放送といった、有線または無線の伝送媒体を介して提供することもできる。その場合、このアプリケーションプログラムやデータ等は、通信部214により受信され、情報処理部201を介して記憶部213にインストールされる。また、このアプリケーションプログラムやデータ等は、情報処理部201内のROMや記憶部213に、あらかじめインストールしておくこともできる。 The information processing unit 201 loads and executes, for example, the application program stored in the storage unit 213. At that time, the information processing unit 201 can appropriately store data and the like necessary for executing various processes. The application program, data, and the like can be recorded and provided on a removable recording medium 221 as a package media or the like, for example. In that case, the application program, data, and the like are read out by the drive 215 equipped with the removable recording medium 221 and installed in the storage unit 213 via the information processing unit 201. The application program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting. In that case, the application program, data, and the like are received by the communication unit 214 and installed in the storage unit 213 via the information processing unit 201. Further, the application program, data, and the like can be installed in advance in the ROM or the storage unit 213 in the information processing unit 201.
  <Projector function blocks>
 FIG. 6 shows, as functional blocks, the functions realized by the information processing unit 201 executing an application program. As shown in FIG. 6, by executing the application program, the information processing unit 201 can have a correction vector acquisition unit 231, an image acquisition unit 232, and a corrected image generation unit 233 as functional blocks.
 The correction vector acquisition unit 231 acquires the correction vector supplied from the portable terminal device 101 and supplies it to the corrected image generation unit 233.
 The image acquisition unit 232 acquires the input image and supplies it to the corrected image generation unit 233.
 The corrected image generation unit 233 corrects the input image using the correction vector to generate a corrected image. The corrected image generation unit 233 supplies the corrected image to the projection unit 202 and causes it to be projected.
 As described above, the portable terminal device 101 can perform projection correction without modeling the projector 102 (or its projection unit 202). That is, the portable terminal device 101 can perform projection correction with an accuracy that does not depend on the accuracy of such a model, and can therefore suppress a decrease in the accuracy of projection correction. Further, the portable terminal device 101 can adjust the position, size, tilt, and the like of the projection area only by zoom, shift, and roll operations while keeping the corrected image, so the user does not need to consider keystone correction in this adjustment. In this respect as well, the portable terminal device 101, in other words the projection imaging system 100, can suppress a decrease in the accuracy of projection correction.
  <Flow of projection correction processing>
 An example of the flow of the projection correction process executed by the information processing unit 151 of the portable terminal device 101 will be described with reference to the flowchart of FIG. 7.
 When the projection correction process is started, the corresponding point detection unit 181 detects corresponding points in step S101. The camera intrinsic parameters of the imaging unit 152 (for example, focal length, principal point position, and lens distortion) are assumed to have been calibrated.
 For example, as shown in FIG. 8, the projector 102-1 and the projector 102-2 each project a sensing pattern, which is a predetermined pattern image. A handheld camera (the imaging unit 152 of the portable terminal device 101) captures the projected images from three locations. The portable terminal device 101 decodes those captured images to obtain the corresponding points of the projector pixels on the captured images.
 The pattern images used for this sensing are composed of identical Structured Light patterns in mutually different colors. For example, the pattern image 301 projected by the projector 102-1 may have a pattern in which black dots are arranged at equal intervals on a red background, and the pattern image 302 projected by the projector 102-2 may have a pattern in which black dots are arranged at equal intervals on a blue background.
 The projector 102-1 and the projector 102-2 project these pattern images onto the screen 120 at the same time. At that time, the projected images of the pattern images may overlap each other.
 The handheld camera (the imaging unit 152 of the portable terminal device 101) is moved by the user and captures the projected images from three locations. For example, with its position controlled by the user, the imaging unit 152 captures images from three locations, to the left of, in front of, and to the right of the projection area on the screen 120 (the left viewpoint camera 311, the front camera 312, and the right viewpoint camera 313 in FIG. 8). Note that this imaging only needs to be performed at a plurality of mutually different positions, so the number of imaging positions is arbitrary as long as it is two or more. In general, the larger the number of imaging positions (the number of shots), the more accurately the screen can be reconstructed; conversely, the smaller the number of imaging positions, the more an increase in the processing load can be suppressed. Each imaging position is also arbitrary. In general, as in the above example, the more widely the imaging positions are spread (the larger the difference between their imaging directions), the more accurately the screen can be reconstructed. The position of the front camera 312 in the above example only needs to be between the left viewpoint camera 311 and the right viewpoint camera 313; it may be directly in front of the screen 120 (a position squarely facing the screen 120), but it does not have to be. For example, the position of the front camera 312 may be approximately in front of the screen 120 (near the directly facing position) as positioned manually by the user.
 Therefore, each of these captured images can include both the projected image projected by the projector 102-1 and the projected image projected by the projector 102-2.
 The corresponding point detection unit 181 decodes such captured images and separates the plurality of pattern images, projected by the different projectors 102, contained in each captured image. Any method of separating the patterns may be used. For example, a separated image for each color may be generated on the basis of the color information of a captured image obtained by imaging the mixture of projected images projected with mutually different colors from the plurality of projectors 102, and a color model representing the relationship between the color information of the captured image and the color information of the projected images and the background.
 The color model uses, as parameters, the color information of the projected images changed according to the spectral characteristics of the projection unit 202 and of the imaging unit 152 that acquires the captured image, an attenuation coefficient representing the attenuation occurring in the mixed image captured by the imaging unit 152, and the color information of the background. A separated image for each color is then generated on the basis of the color model, using the parameters that minimize the difference between the color information of the captured image and the color information estimated by the color model.
 The Structured Light pattern used here may be any pattern that allows color separation and decoding from a single captured image. When the camera is fixedly installed on a tripod or the like instead of being handheld, a pattern such as Gray Code, which is decoded using the information of a plurality of pattern images over time, may also be used. When the camera is fixed, the color separation process is unnecessary, and the projector 102-1 and the projector 102-2 may project their pattern images at temporally different timings.
 The corresponding point detection unit 181 detects corresponding points on the basis of the plurality of captured images described above. For example, in FIG. 9, the projected image 321 on the screen 120 is the projected image of the pattern image 301 projected by the projector 102-1, and the projected image 322 is the projected image of the pattern image 302 projected by the projector 102-2. The captured image 331 is a captured image generated by the left viewpoint camera 311 imaging the screen 120 on which the projected image 321 and the projected image 322 are projected. The captured image 332 is a captured image generated by the front camera 312 imaging that screen, and the captured image 333 is a captured image generated by the right viewpoint camera 313 imaging that screen.
 The corresponding point detection unit 181 detects, as corresponding points, the points contained in each of the captured image 331, the captured image 332, and the captured image 333 that correspond to one another, that is, the pixels that display a predetermined position of the pattern image 301 or the pattern image 302 (for example, the white circles in the figure).
 In step S102, the camera posture estimation unit 182 estimates the positions and postures of the left viewpoint camera 311 through the right viewpoint camera 313 at the three locations so that they are three-dimensionally consistent, on the basis of the two-dimensional corresponding point information between the three captured images obtained by the corresponding point detection process described above.
 First, attention is paid to the corresponding point information of two captured images. For example, when focusing on the captured image 331 generated by the left viewpoint camera 311 and the captured image 332 generated by the front camera 312, a homography matrix (H12) that transforms the corresponding points of the left-viewpoint captured image 331 into the corresponding points of the front-viewpoint captured image 332 is obtained, as shown in A of FIG. 10. This homography is obtained by RANSAC (Random Sample Consensus), a robust estimation algorithm, so that the estimate is not significantly affected even if outliers exist among the corresponding points.
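 As an illustrative sketch (not the disclosed implementation), this step could be written in Python with OpenCV as follows, assuming the corresponding points of the two captured images are given as N x 2 arrays with hypothetical names pts_left and pts_front:

 import numpy as np
 import cv2

 def estimate_homography(pts_left, pts_front, reproj_thresh=3.0):
     # pts_left, pts_front: (N, 2) arrays of corresponding points.
     # RANSAC keeps the estimate robust against outliers among the correspondences.
     H12, inlier_mask = cv2.findHomography(
         np.asarray(pts_left, dtype=np.float32),
         np.asarray(pts_front, dtype=np.float32),
         cv2.RANSAC, reproj_thresh)
     return H12, inlier_mask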
 By RT-decomposing this homography matrix, the relative position and relative posture of the front camera 312 with respect to the left viewpoint camera 311 are derived. As the RT decomposition method, for example, the method described in "The Journal of the Institute of Image Electronics Engineers of Japan, Vol. 40 (2011), No. 3, pp. 421-427" is used. Since the scale is indefinite at this point, the scale is determined by some rule.
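 A hedged sketch of this decomposition, using OpenCV's planar-homography decomposition rather than the method cited above, might look like the following; K is the calibrated camera intrinsic matrix, the returned candidate solutions still have to be disambiguated (for example by requiring positive depth), and the translation is only known up to the scale determined by the rule mentioned above.

 import cv2

 def decompose_homography(H12, K):
     # Returns candidate (R, t, n) triples for the relative pose; t is up to scale.
     num, rotations, translations, normals = cv2.decomposeHomographyMat(H12, K)
     return list(zip(rotations, translations, normals))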
 As shown in B of FIG. 10, the three-dimensional points of the corresponding points are obtained by triangulation using the position and posture of the front camera 312 relative to the left viewpoint camera 311 obtained here and the corresponding point information. When obtaining a three-dimensional point by triangulation, the corresponding rays may not intersect; in that case, the midpoint of the line segment connecting the points at which the corresponding rays come closest to each other is taken as the three-dimensional point.
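 The midpoint rule mentioned here can be written compactly. The sketch below, assuming each ray is given by an origin (camera center) and a direction in a common coordinate frame, solves for the closest points on the two rays and returns the midpoint of the segment joining them:

 import numpy as np

 def triangulate_midpoint(o1, d1, o2, d2):
     # o1, o2: ray origins (camera centers); d1, d2: ray directions.
     d1 = d1 / np.linalg.norm(d1)
     d2 = d2 / np.linalg.norm(d2)
     w0 = o1 - o2
     a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
     d, e = d1 @ w0, d2 @ w0
     denom = a * c - b * b
     if abs(denom) < 1e-12:            # nearly parallel rays
         s, t = 0.0, e / c
     else:
         s = (b * e - c * d) / denom
         t = (a * e - b * d) / denom
     p1 = o1 + s * d1                  # closest point on ray 1
     p2 = o2 + t * d2                  # closest point on ray 2
     return 0.5 * (p1 + p2)            # midpoint used as the 3D point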
 Next, as shown in A of FIG. 11, the same processing is performed with attention to the front camera 312 and the right viewpoint camera 313, thereby obtaining the relative position and relative posture of the right viewpoint camera 313 with respect to the front camera 312. Here, too, the scale between the front camera 312 and the right viewpoint camera 313 is indefinite, so the scale is determined by some rule. Further, the three-dimensional points of the corresponding points are obtained by triangulation using the positions and postures of the front camera 312 and the right viewpoint camera 313 and their corresponding point information.
 Next, as shown in B of FIG. 11, the scale between the front camera 312 and the right viewpoint camera 313 is corrected so that the average distance from the camera (the left viewpoint camera 311 or the front camera 312) of the three-dimensional points obtained from the left viewpoint camera 311 and the front camera 312 matches the average distance from the camera (the front camera 312 or the right viewpoint camera 313) of the three-dimensional points obtained from the front camera 312 and the right viewpoint camera 313. The scale is corrected by changing the length of the translation vector between the front camera 312 and the right viewpoint camera 313.
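 A short sketch of this scale alignment, assuming pts_12 and pts_23 hold the triangulated 3D points of the two camera pairs, c_12 and c_23 are the respective reference camera centers, and t_23 is the translation of the right viewpoint camera relative to the front camera (all names illustrative):

 import numpy as np

 def align_scale(pts_12, pts_23, c_12, c_23, t_23):
     # Mean distance of each triangulated point set from its reference camera.
     mean_12 = np.mean(np.linalg.norm(pts_12 - c_12, axis=1))
     mean_23 = np.mean(np.linalg.norm(pts_23 - c_23, axis=1))
     scale = mean_12 / mean_23
     # Rescale only the translation; the rotation is unaffected by scale.
     return scale * t_23, scale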
 Finally, as shown in FIG. 12, with the left viewpoint camera 311 fixed as the reference, the positions and postures of the front camera 312 and the right viewpoint camera 313 are optimized by bundle adjustment, which optimizes the intrinsic parameters, the extrinsic parameters, and the world coordinate point group. The evaluation value at this time is the sum of squares of the distances from the three-dimensional point of each corresponding point to the three corresponding rays, and the optimization is performed so that this value becomes minimal. The three-dimensional point for the three rays is taken to be the centroid of the triangulation point of the corresponding rays of the left viewpoint camera 311 and the front camera 312, the triangulation point of the corresponding rays of the front camera 312 and the right viewpoint camera 313, and the triangulation point of the corresponding rays of the right viewpoint camera 313 and the left viewpoint camera 311. As a result, the positions and postures of the three cameras (that is, the left viewpoint camera 311, the front camera 312, and the right viewpoint camera 313) are estimated.
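 The cost described here (a sum of squared point-to-ray distances) could be fed to a generic least-squares solver such as scipy.optimize.least_squares; the sketch below shows only the point-to-ray distance that forms one residual term, with the packing of camera poses and 3D points into the parameter vector left out as an exercise:

 import numpy as np

 def point_to_ray_distance(point, ray_origin, ray_dir):
     # Distance from a 3D point to a ray; the bundle adjustment minimizes the
     # sum of squares of these distances over camera poses and 3D points.
     d = ray_dir / np.linalg.norm(ray_dir)
     v = point - ray_origin
     return np.linalg.norm(v - (v @ d) * d)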
 In step S103, the screen reconstruction unit 183 reconstructs the screen on the basis of the positions and postures of the cameras estimated as described above.
 First, for the three-dimensional point group (the set of three-dimensional points 341) obtained from the positions and postures of the cameras (the left viewpoint camera 311, the front camera 312, and the right viewpoint camera 313) estimated in the process of step S102 as shown in A of FIG. 13 and the corresponding point information acquired in the process of step S101, the best-fitting plane as shown in B of FIG. 13 is obtained, and this plane is taken as the temporary plane screen 351. When obtaining the plane, the RANSAC method is used in order to suppress the influence of outliers mixed in the three-dimensional point group.
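 A minimal RANSAC plane fit over the reconstructed point cloud might look like the following sketch (the threshold and iteration count are illustrative, not values used by the device):

 import numpy as np

 def fit_plane_ransac(points, n_iters=1000, thresh=5.0, rng=None):
     # points: (N, 3) array of reconstructed 3D points.
     rng = np.random.default_rng() if rng is None else rng
     best_inliers, best_plane = None, None
     for _ in range(n_iters):
         p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
         normal = np.cross(p1 - p0, p2 - p0)
         norm = np.linalg.norm(normal)
         if norm < 1e-9:               # degenerate (collinear) sample
             continue
         normal /= norm
         dist = np.abs((points - p0) @ normal)
         inliers = dist < thresh
         if best_inliers is None or inliers.sum() > best_inliers.sum():
             best_inliers, best_plane = inliers, (p0, normal)
     return best_plane, best_inliers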
 Next, as shown in A and B of FIG. 14, a virtual viewpoint camera 361 is set at a viewpoint located a certain distance away in the direction squarely facing the temporary plane screen 351 (the normal direction of the temporary plane screen 351). The viewpoint is a position set as a model of the position from which the projected image projected onto the temporary plane screen 351 is viewed. The viewpoint camera 361 is a virtual camera for obtaining a captured image showing how the projected image looks when viewed from that viewpoint. The viewpoint camera 361 is used as a reference in the correction vector calculation process described later, so that the correction vectors are obtained in such a way that the expected corrected image is seen from this viewpoint camera 361.
 Further, as shown in B of FIG. 14, the vertical direction of the viewpoint camera 361 is taken to be the average direction of the straight lines obtained by linearly approximating the groups of three-dimensional corresponding points that lie in the same column of the image projected from the projector 102. When the projector 102 is installed on a horizontal surface, the vertical direction of the viewpoint camera 361 obtained here coincides with the vertical direction of the world in which the actual screen 120 is installed, so the roll direction of the corrected image can be adjusted automatically. Note that the temporary plane screen 351 is used only for determining the position of the viewpoint camera 361.
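 Placing the virtual viewpoint camera on the normal of the fitted plane could be sketched as follows; the viewing distance and the choice of anchor point (here the point-cloud centroid projected onto the plane) are illustrative assumptions, and the sign of the normal is assumed to point toward the viewer side.

 import numpy as np

 def place_viewpoint_camera(plane_point, plane_normal, cloud_centroid, distance):
     n = plane_normal / np.linalg.norm(plane_normal)
     # Anchor: the point cloud centroid projected onto the temporary plane screen.
     anchor = cloud_centroid - ((cloud_centroid - plane_point) @ n) * n
     position = anchor + distance * n       # viewpoint a fixed distance along the normal
     forward = anchor - position            # camera looks squarely at the plane
     forward /= np.linalg.norm(forward)
     return position, forward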
 In step S104, the correction vector derivation unit 184 derives the correction vector of each projector 102 on the basis of the camera positions and postures estimated in the process of step S102, the corresponding point information acquired in the process of step S101, and the position and posture of the viewpoint camera 361 obtained in the process of step S103.
 When a sensing point is missing, the correction vector derivation unit 184 interpolates the missing point, for example as shown in FIG. 15. A homography is obtained between the group of sensing points 373 in the projector pixels around the missing sensing point and the group of two-dimensional points obtained by projecting the corresponding three-dimensional points onto a plane 374 that approximates those three-dimensional points. By transforming the missing sensing point 371 of interest in the projector pixels with this homography, the corresponding three-dimensional point 372 is obtained, and this is used as the interpolated point for the missing three-dimensional point.
 Next, as shown in FIG. 16, it is estimated how the projection by the projector 102 appears on the captured image 390 generated by the viewpoint camera 361. For the pixels corresponding to the outer peripheral positions of the pattern image 301 and the pattern image 302 projected by the projector 102, the correction vector derivation unit 184 obtains the three-dimensional point positions using the nearby sensing points, by the same processing as the interpolation of missing points described above. Then, by projecting those three-dimensional points onto the light receiving region of the viewpoint camera 361 (that is, onto the captured image generated by the viewpoint camera 361), the correction vector derivation unit 184 estimates the outer peripheral region of each projector 102 (the range onto which each projector 102 projects) on the captured image generated by the viewpoint camera 361.
 The correction vector derivation unit 184 performs such processing for each of the projector 102-1 and the projector 102-2. Thereby, the correction vector derivation unit 184 can estimate how the projected images projected by the two projectors 102 appear on the captured image generated by the viewpoint camera 361, that is, how they appear on the screen 120.
 Note that this processing is performed when the projector 102 is not modeled or cannot be modeled. When the projector 102 can be modeled, the projection range (outer peripheral position) of each projector 102 on the captured image generated by the viewpoint camera 361 can be obtained using its intrinsic and extrinsic parameters. As for estimating the intrinsic parameters for modeling the projector 102, conceivable methods include calibrating in advance, introducing in advance a mechanism by which a projector having optical zoom and shift adjustment can provide online the intrinsic parameters corresponding to that zoom and shift, and estimating the intrinsic parameters themselves online.
 Next, as shown in FIG. 17, the correction vector derivation unit 184 determines where on the captured image of the viewpoint camera 361 (that is, on the screen 120) the input image is to be presented. The correction vector derivation unit 184 sets a rectangular region having the same aspect ratio as the input image within the region contained simultaneously inside the outer peripheries of the images projected by the two projectors 102 (that is, the region where both images overlap). For example, when the aspect ratio of the input image 402 is 16:9, the correction vector derivation unit 184 sets the largest 16:9 rectangular region within the region where the projection range 391 of the projector 102-1 and the projection range 392 of the projector 102-2 overlap each other in the captured image 390 generated by the viewpoint camera 361, and takes this as the image presentation position 401.
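 One simple, brute-force way to realize this search is to test axis-aligned 16:9 rectangles of decreasing size around candidate centers and keep the first (largest) one whose four corners fall inside both projector outlines. The sketch below uses OpenCV's point-in-polygon test, assumes the outlines are nearly convex so that the corner test suffices, and is only an illustration of the idea, not the disclosed algorithm:

 import numpy as np
 import cv2

 def largest_169_rect(outline1, outline2, img_w, img_h, steps=50):
     # outline1, outline2: (M, 2) arrays of the two projector outlines in
     # viewpoint-camera pixel coordinates.
     polys = [np.asarray(o, dtype=np.float32).reshape(-1, 1, 2)
              for o in (outline1, outline2)]

     def inside(pt):
         return all(cv2.pointPolygonTest(p, (float(pt[0]), float(pt[1])), False) >= 0
                    for p in polys)

     for w in np.linspace(img_w, img_w / steps, steps):   # widths, largest first
         h = w * 9.0 / 16.0
         for cx in np.linspace(w / 2, img_w - w / 2, steps):
             for cy in np.linspace(h / 2, img_h - h / 2, steps):
                 corners = [(cx - w/2, cy - h/2), (cx + w/2, cy - h/2),
                            (cx + w/2, cy + h/2), (cx - w/2, cy + h/2)]
                 if all(inside(c) for c in corners):
                     return (cx - w/2, cy - h/2, w, h)     # x, y, width, height
     return None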
 The projection position parameters (zoom, shift, roll) that determine this image presentation position are set to initial values (for example, zoom: 1.0, shift: (0.0, 0.0), roll: 0.0). By changing these projection position parameters, the size, position, and tilt of the final corrected image on the screen also change accordingly. These projection position parameters can be changed by the projection area adjustment process described later.
 Next, as shown in FIG. 18, the correction vector derivation unit 184 obtains the correction vector corresponding to each pixel of the projector. First, the correction vector derivation unit 184 obtains the three-dimensional point position for a pixel of interest of the projector 102 of interest. Here, it is obtained by bicubic interpolation of each of the three-dimensional point coordinate values (X, Y, Z) of the 4x4 sensing points around the pixel of interest for which three-dimensional points have already been obtained. Note that this interpolation method is an example and is not limited to this example; for example, bilinear interpolation may be applied instead.
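 As a sketch of the interpolation, assuming the 3D coordinates of the 4x4 neighbouring sensing points are arranged in a 4x4x3 array and (fx, fy) is the fractional position of the pixel of interest inside the central cell, a Catmull-Rom style bicubic kernel applied per coordinate could look like this (the exact kernel is not specified in the text, so this is one common choice):

 import numpy as np

 def _cubic(p0, p1, p2, p3, t):
     # Catmull-Rom interpolation of four samples at parameter t in [0, 1].
     return p1 + 0.5 * t * (p2 - p0 + t * (2*p0 - 5*p1 + 4*p2 - p3
                                           + t * (3*(p1 - p2) + p3 - p0)))

 def bicubic_3d_point(grid, fx, fy):
     # grid[j][i]: (X, Y, Z) of the sensing point at column i, row j of the
     # 4x4 neighbourhood around the pixel of interest.
     rows = [_cubic(*grid[j], fx) for j in range(4)]   # interpolate along x
     return _cubic(*rows, fy)                          # then along y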
 Next, the correction vector derivation unit 184 projects the obtained three-dimensional point of the pixel of interest onto the light receiving region (pixel region) of the viewpoint camera 361. It then determines where that projected point is located within the image presentation position 401 in the light receiving region (captured image 390) of the viewpoint camera 361, and converts that position into the coordinate system of the input image 402. The coordinate values (u, v) obtained here become the correction vector for the pixel of interest of the projector 102. That is, the expected corrected image can be generated by storing, in this pixel of interest of the projector 102, the input pixel value at the pixel position indicated by the obtained correction vector. The correction vector derivation unit 184 performs this processing for all pixels of the two projectors 102, thereby obtaining the correction vectors of the projector 102-1 and of the projector 102-2.
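 A sketch of how one projector pixel's correction vector could be computed from its interpolated 3D point, assuming a pinhole model for the virtual viewpoint camera (intrinsics K, rotation R, center C) and an axis-aligned presentation rectangle (x0, y0, w, h) in viewpoint-camera pixels; input_w and input_h are the input image dimensions, and all names are illustrative:

 import numpy as np

 def correction_vector(point_3d, K, R, C, rect, input_w, input_h):
     # Project the 3D point into the viewpoint camera's pixel coordinates.
     p_cam = R @ (point_3d - C)                 # world -> camera coordinates
     x = K @ p_cam
     px, py = x[0] / x[2], x[1] / x[2]

     # Locate the projection inside the image presentation rectangle and map it
     # to input-image coordinates; (u, v) is the correction vector stored for
     # this projector pixel.
     x0, y0, w, h = rect
     u = (px - x0) / w * input_w
     v = (py - y0) / h * input_h
     return u, v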
 In step S105, the projection control unit 185 supplies the correction vectors to the projector 102-1 and the projector 102-2, and causes them to project corrected images generated using those correction vectors.
 The corrected image generation unit 233 of each projector 102 corrects the input image acquired by the image acquisition unit 232 using this correction vector to generate a corrected image. By generating the corrected image in this way in each projector 102 and projecting it from the projection unit 202 onto the screen 120, a corrected projected image (the projected image of the corrected image) is presented on the screen 120.
 For example, the corrected image generation unit 233 of the projector 102-1 uses the correction vector for the projector 102-1 (supplied from the portable terminal device 101) acquired by its correction vector acquisition unit 231 to generate a corrected image corresponding to the input image acquired by its image acquisition unit 232. Similarly, the corrected image generation unit 233 of the projector 102-2 uses the correction vector for the projector 102-2 (supplied from the portable terminal device 101) acquired by its correction vector acquisition unit 231 to generate a corrected image corresponding to the input image acquired by its image acquisition unit 232. Note that these processes can be executed in parallel with each other.
 The projection unit 202 of the projector 102-1 projects the corrected image thus generated onto the screen 120, and the projection unit 202 of the projector 102-2 likewise projects its corrected image onto the screen 120. These processes can also be executed in parallel with each other.
 In step S106, the projection area adjustment unit 186 determines whether or not to adjust the position, size, tilt, and the like of the projection area (the image projection position on the screen 120). For example, the user visually checks the projected image of the corrected image projected on the screen 120, determines whether the projected image has the expected position, size, and tilt, and inputs a user instruction based on that determination to the input unit 161. The projection area adjustment unit 186 determines whether to adjust the position, size, tilt, and the like of the projection area on the basis of the received user instruction.
 If it is determined that the position, size, tilt, or the like of the projection area is to be adjusted, the process proceeds to step S107. In step S107, the projection area adjustment unit 186 controls the output unit 162 to display a UI image, and controls the input unit 161 to accept user instructions.
 For example, the projection area adjustment unit 186 displays on the monitor a UI image 501 in which various keys 502 are arranged, such as keys corresponding to enlargement and reduction of the zoom parameter, keys corresponding to up, down, left, and right of the shift parameter, and keys corresponding to the tilt angle of the roll parameter, as shown in A of FIG. 19, so that the user can operate them. The projection area adjustment unit 186 also controls the input unit 161 to accept the user's operations on this UI image 501. The projection area adjustment unit 186 supplies the projection position parameters corresponding to the received user instruction to the correction vector derivation unit 184, has the correction vectors generated, has a corrected image generated using those correction vectors, and has that corrected image projected. In this way, in conjunction with the instructions input by the user, the corrected image on the screen 120 is enlarged or reduced, moved up, down, left, or right, and rotated left or right. Since the estimation is performed three-dimensionally, the adjustment can be made easily while it remains guaranteed that the corrected image itself is rectangular (keystone correction).
 Alternatively, as shown in B of FIG. 19, a UI image 511 may be displayed on the monitor in which an area 512 imitating the screen 120 and an image 513 imitating the projected image displayed within that area 512 are shown, and the user can perform shift, zoom, roll, and the like by operating this image 513 with a finger or the like. With such a UI image, the projection size, projection position, and projection angle can be adjusted more intuitively.
 In step S108, the projection area adjustment unit 186 adjusts the projection position parameters on the basis of the received user instruction and supplies them to the correction vector derivation unit 184. For example, when the zoom parameter is updated by a user instruction, the projection area adjustment unit 186 reduces the projection area, as from the projected image 523 to the projected image 524 shown in A of FIG. 20, or enlarges it, as from the projected image 524 to the projected image 523; that is, it sets the projection position parameters so that the projection area has the size corresponding to the user instruction, and supplies them to the correction vector derivation unit 184. When the shift parameter is updated by a user instruction, the projection area adjustment unit 186 moves the projection area, as from the projected image 525 to the projected image 526 shown in B of FIG. 20 or vice versa; that is, it sets the projection position parameters so that the projection area is at the position corresponding to the user instruction, and supplies them to the correction vector derivation unit 184. Further, when the roll parameter is updated by a user instruction, the projection area adjustment unit 186 rotates the projection area, as from the projected image 527 to the projected image 528 shown in C of FIG. 20 or vice versa; that is, it sets the projection position parameters so that the projection area has the tilt corresponding to the user instruction, and supplies them to the correction vector derivation unit 184.
 When the process of step S108 is completed, the process returns to step S104. In step S104, the correction vector derivation unit 184 derives the correction vectors using the updated projection position parameters. Note that, as long as the projector is installed horizontally, manual adjustment of the roll direction is unnecessary because the roll direction is adjusted automatically by the process of step S103.
 In step S106, when the projection area desired by the user has been set and it is determined that the position, size, tilt, and the like of the projection area are not to be adjusted, the projection correction process ends.
 By performing the projection correction in this way, the projection correction can be performed without modeling the projector 102 (or its projection unit 202), so the portable terminal device 101 (the projection imaging system 100) can suppress a decrease in the accuracy of the projection correction. Further, since the user does not need to consider keystone correction when adjusting the projection area, the portable terminal device 101 (the projection imaging system 100) can suppress a decrease in the accuracy of the projection correction in this respect as well.
 <3. Second Embodiment>
  <Cylindrical surface screen>
 The projection surface may be a curved surface and is not limited to a flat surface. For example, it may be a cylindrical surface. That is, the screen 120 may be a cylindrical surface screen instead of a flat screen.
 In this case as well, the corresponding point detection by the corresponding point detection unit 181 and the camera posture estimation by the camera posture estimation unit 182 are the same as in the case of the flat screen described in the first embodiment. In the following, the aspect ratio of the presented image is 16:9; that is, the image format is the same as in the case of the flat screen.
  <Screen reconstruction>
 In step S103, for the three-dimensional point group obtained from the positions and postures of the cameras estimated in step S102 as shown in A of FIG. 21 and the corresponding point information detected in step S101, the screen reconstruction unit 183 obtains the best-fitting cylindrical surface as shown in B of FIG. 21, and takes this as the temporary cylindrical surface screen 601. When obtaining this cylindrical surface, the RANSAC method is applied in order to suppress the influence of outliers mixed in the three-dimensional point group.
 Next, as shown in A of FIG. 22, the screen reconstruction unit 183 takes as the viewpoint the position, at a distance equal to the cylindrical surface screen radius, on the extension of the perpendicular from the centroid of the estimated three-dimensional point group to the temporary cylindrical surface screen 601, and sets the viewpoint camera 611 at that viewpoint. The direction of the viewpoint camera 611 is the direction toward the centroid of the estimated three-dimensional point group. Further, as shown in B of FIG. 22, the roll direction is obtained using the same method as in the case of the flat screen. In this way, the viewpoint camera 611 corresponding to the temporary cylindrical surface screen 601 is set.
  <Derivation of correction vectors>
 In step S104, the correction vector derivation unit 184 obtains the correction vector of each projector 102 on the basis of the camera positions and postures estimated in the process of step S102, the corresponding point information detected in the process of step S101, and the temporary cylindrical surface screen 601 and the position and posture of the viewpoint camera 611 obtained in the process of step S103.
 First, as shown in FIG. 23, the correction vector derivation unit 184 estimates how the projection of the projector 102 appears on the cylindrical coordinate system of the temporary cylindrical surface screen 601. Here, for the pixels corresponding to the outer peripheral positions of the projector 102, the three-dimensional point positions are obtained using nearby sensing points by the same processing as the interpolation of missing points in the correction vector calculation for the flat screen, and then those three-dimensional points are converted into a cylindrical coordinate system based on the position of the viewpoint camera 611, thereby estimating the outer peripheral region of the projector 102 on the cylindrical coordinate system (the outer peripheral region 631 and the outer peripheral region 632 in the captured image 630 of the viewpoint camera 611). Note that in this cylindrical coordinate system, both the vertical and horizontal directions are expressed in a coordinate system whose unit is a distance in [mm].
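 A sketch of converting a 3D point into such a cylindrical coordinate system, assuming the viewpoint camera 611 sits on the cylinder axis, the axis coincides with the camera's vertical (y) axis, and the point is already expressed in the viewpoint camera's frame; the horizontal coordinate is taken as arc length (angle times radius) so that both axes are in millimetres, as described above:

 import numpy as np

 def to_cylindrical_mm(p_cam, radius_mm):
     # p_cam: 3D point in the viewpoint camera frame (x right, y down, z forward).
     theta = np.arctan2(p_cam[0], p_cam[2])   # azimuth around the cylinder axis
     s = theta * radius_mm                    # horizontal coordinate: arc length [mm]
     y = p_cam[1]                             # vertical coordinate [mm]
     return s, y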
 The correction vector derivation unit 184 performs this processing for each projector 102. This makes it possible to estimate how the projection appears on the cylindrical surface screen. Note that this processing is performed when the projector 102 is not modeled or cannot be modeled; when it can be modeled, the outer peripheral position of the projector 102 on the captured image of the viewpoint camera 611 can be obtained using its intrinsic and extrinsic parameters, as in the case of the flat screen.
 Next, as shown in FIG. 24, the correction vector derivation unit 184 determines where on the cylindrical coordinate system (that is, on the cylindrical surface screen) the input image 642 is to be presented. The correction vector derivation unit 184 sets a rectangular region having the same aspect ratio as the input image within the region contained simultaneously inside the outer peripheries of the two projectors obtained above in the cylindrical coordinate system. For example, when the aspect ratio of the input image is 16:9, the correction vector derivation unit 184 sets the largest 16:9 rectangular region within the region where the projection range of the projector 102-1 and the projection range of the projector 102-2 overlap on the captured image of the viewpoint camera 611, and takes this as the image presentation position 641.
 Next, as shown in FIG. 25, the correction vector derivation unit 184 obtains the correction vector corresponding to each pixel of the projector 102. First, the correction vector derivation unit 184 obtains the three-dimensional point position for a pixel of interest of the projector 102 of interest. Here, as in the case of the flat screen, it is obtained by bicubic interpolation of each of the three-dimensional point coordinate values X, Y, and Z of the 4x4 sensing points around the pixel of interest for which three-dimensional points have already been obtained.
 Next, the correction vector derivation unit 184 converts the three-dimensional point of the pixel of interest obtained here into the cylindrical coordinate system centered on the viewpoint camera 611, determines where that converted point is located within the image presentation position on the cylindrical coordinate system, and converts that position into the coordinate system of the input image. The coordinate values obtained here become the correction vector (u, v) for the pixel of interest of the projector. By performing this processing for all pixels of the two projectors 102, the correction vector of each projector 102 for the cylindrical surface screen is obtained.
 Through the above processing, the portable terminal device 101 can perform automatic geometric correction of the projection on a cylindrical surface screen. The projection area adjustment process can be realized by adjusting the image presentation position in the same way as for the flat screen. Therefore, in the case of this cylindrical surface screen as well, the portable terminal device 101 (the projection imaging system 100) can perform projection correction without modeling the projection unit, as in the case of the flat screen described above, and can therefore suppress a decrease in the accuracy of projection correction. Further, since the user does not need to consider keystone correction in this correction, the portable terminal device 101 (the projection imaging system 100) can suppress a decrease in the accuracy of projection correction in this respect as well.
 <4. Third Embodiment>
  <Spherical screen>
 The projection surface may also be a spherical surface. That is, the screen 120 may be a spherical screen instead of a flat screen.
 In this case as well, the corresponding point detection by the corresponding point detection unit 181 and the camera posture estimation by the camera posture estimation unit 182 are the same as in the case of the flat screen described in the first embodiment. In the following, the presented image is assumed to be in the equirectangular format, which is intended to be projected onto a spherical screen or the like.
  <Screen reconstruction>
 In step S103, for the three-dimensional point group obtained from the positions and postures of the cameras estimated in step S102 as shown in A of FIG. 26 and the corresponding point information detected in step S101, the screen reconstruction unit 183 obtains the best-fitting spherical surface as shown in B of FIG. 26, and takes this as the temporary spherical screen 701. When obtaining this spherical surface, the RANSAC method is applied in order to suppress the influence of outliers mixed in the three-dimensional point group.
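 The sphere fitted here could be obtained, for example, by the standard linear least-squares formulation below, used as the model-fitting step inside the RANSAC loop (the inlier test and resampling being the same as for the plane):

 import numpy as np

 def fit_sphere(points):
     # Solve |p - c|^2 = r^2  ->  2 p.c + (r^2 - |c|^2) = |p|^2 as a linear system.
     A = np.hstack([2.0 * points, np.ones((len(points), 1))])
     b = np.sum(points ** 2, axis=1)
     sol, *_ = np.linalg.lstsq(A, b, rcond=None)
     center = sol[:3]
     radius = np.sqrt(sol[3] + center @ center)
     return center, radius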
 Next, as shown in A of FIG. 27, the screen reconstruction unit 183 sets the center of the viewpoint camera 711 at the center of this spherical surface, and takes the direction from that center toward the centroid of the estimated three-dimensional point group as the direction of the viewpoint camera 711. Further, as shown in B of FIG. 27, the screen reconstruction unit 183 obtains the roll direction using the same method as in the case of the flat screen. However, unlike the cases of the flat screen and the cylindrical surface screen, there is no guarantee that the roll adjustment can be performed automatically even if the projector is installed on a horizontal surface, so this value is used only as the initial value of the roll direction. Alternatively, the vertical direction of the posture of the front camera can be set as the initial roll value here. In this way, the viewpoint camera 711 corresponding to the temporary spherical screen 701 is set.
  <Derivation of correction vectors>
 In step S104, the correction vector derivation unit 184 obtains the correction vector of each projector 102 on the basis of the camera positions and postures estimated in the process of step S102, the corresponding point information detected in the process of step S101, and the temporary spherical screen 701 and the position and posture of the viewpoint camera 711 obtained in the process of step S103.
 First, as shown in FIG. 28, the correction vector derivation unit 184 estimates how the projection of the projector appears on the equirectangular coordinate system of the temporary spherical screen 701. Here, for the pixels corresponding to the outer peripheral positions of the projector 102, the three-dimensional point positions are obtained using nearby sensing points by the same processing as the interpolation of missing points in the correction vector calculation for the flat screen, and then those three-dimensional points are converted into an equirectangular coordinate system based on the viewpoint camera position, thereby estimating the outer peripheral region of the projector on the equirectangular coordinate system. Note that in this equirectangular coordinate system, both the vertical and horizontal directions are expressed in a coordinate system whose unit is an angle in radians.
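 A sketch of converting a 3D point into the equirectangular coordinate system centered on the viewpoint camera 711, with both coordinates in radians as stated above; the point is assumed to be expressed in the viewpoint camera's frame (x right, y down, z forward), and the longitude/latitude convention is one common choice:

 import numpy as np

 def to_equirectangular_rad(p_cam):
     x, y, z = p_cam
     lon = np.arctan2(x, z)                        # horizontal angle [rad]
     lat = np.arctan2(-y, np.sqrt(x * x + z * z))  # vertical angle [rad], up positive
     return lon, lat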
 The correction vector derivation unit 184 performs this processing for each projector 102. This makes it possible to estimate how the projection appears on the spherical screen. Note that this processing is performed when the projector 102 is not modeled or cannot be modeled; when it can be modeled, the outer peripheral position of the projector 102 on the viewpoint camera image can be obtained using its intrinsic and extrinsic parameters, as in the case of the flat screen.
 Next, as shown in FIG. 29, the correction vector derivation unit 184 determines where on the equirectangular coordinate system (that is, on the spherical screen) the input image is to be presented. The correction vector derivation unit 184 takes, as the presentation range of the input image, the region contained simultaneously inside the outer peripheries of the two projectors 102 obtained above in the equirectangular coordinate system. Unlike the case where the input image is in the 16:9 planar image format, which region of the equirectangular format is presented depends on the projection range on the spherical screen. Of course, it is also possible to intentionally project in a way that ignores the physical meaning of the equirectangular format. It is also possible to project onto the regions where the two projectors do not overlap, and even if a 16:9 planar image format is input, the corrected image can be projected if some rule is determined.
 Next, as shown in FIG. 30, the correction vector derivation unit 184 obtains the correction vector corresponding to each pixel of the projector. First, the correction vector derivation unit 184 obtains the three-dimensional point position for a pixel of interest of the projector of interest. Here, as in the case of the flat screen, it is obtained by bicubic interpolation of each of the three-dimensional point coordinate values X, Y, and Z of the 4x4 sensing points around the pixel of interest for which three-dimensional points have already been obtained.
 Next, the correction vector derivation unit 184 converts the three-dimensional point of the pixel of interest obtained here into the equirectangular coordinate system centered on the viewpoint camera 711, determines where that converted point is located on the equirectangular coordinate system, and converts that position into the coordinate system of the input image. The coordinate values obtained here become the correction vector (u, v) for the pixel of interest of the projector. By performing this processing for all pixels of the two projectors 102, the correction vector of each projector 102 for the spherical screen is obtained.
 Through the above processing, automatic geometric correction of the projection on a spherical screen can be performed. The projection area adjustment process can be realized by adjusting the image presentation position in the same way as for the flat screen. Therefore, in the case of this spherical screen as well, the portable terminal device 101 (the projection imaging system 100) can perform projection correction without modeling the projection unit, as in the case of the flat screen described above, and can therefore suppress a decrease in the accuracy of projection correction. Further, since the user does not need to consider keystone correction in this correction, the portable terminal device 101 (the projection imaging system 100) can suppress a decrease in the accuracy of projection correction in this respect as well.
 <5. Addendum>
  <Hardware>
 The series of processes described above can be executed by software (an application program) or by hardware.
  <本技術の適用対象>
 また、本技術は、任意の装置またはシステムを構成する装置に搭載するあらゆる構成、例えば、システムLSI(Large Scale Integration)等としてのプロセッサ(例えばビデオプロセッサ)、複数のプロセッサ等を用いるモジュール(例えばビデオモジュール)、複数のモジュール等を用いるユニット(例えばビデオユニット)、ユニットにさらにその他の機能を付加したセット(例えばビデオセット)等(すなわち、装置の一部の構成)として実施することもできる。
<Applicable target of this technology>
In addition, the present technology includes any configuration or a module using a processor (for example, a video processor) as a system LSI (Large Scale Integration), a module using a plurality of processors, or the like (for example, video) mounted on an arbitrary device or a device constituting the system. It can also be implemented as a module), a unit using a plurality of modules (for example, a video unit), a set in which other functions are added to the unit (for example, a video set), or the like (that is, a partial configuration of a device).
 Furthermore, the present technology can also be applied to a network system including a plurality of devices. For example, it can be applied to a cloud service that provides services related to images (moving images) to arbitrary terminals such as computers, AV (Audio Visual) devices, portable information processing terminals, and IoT (Internet of Things) devices.
 Note that systems, devices, processing units, and the like to which the present technology is applied can be used in any field, for example, transportation, medical care, crime prevention, agriculture, the livestock industry, mining, beauty, factories, home appliances, weather, and nature monitoring. Their applications are also arbitrary.
 For example, the present technology can be applied to systems and devices used for providing content for viewing and the like. Further, for example, the present technology can also be applied to systems and devices used for traffic, such as traffic condition monitoring and automatic driving control. Furthermore, for example, the present technology can also be applied to systems and devices used for security. Further, for example, the present technology can be applied to systems and devices used for automatic control of machines and the like. Furthermore, for example, the present technology can also be applied to systems and devices used for agriculture and the livestock industry. The present technology can also be applied to systems and devices for monitoring natural conditions such as volcanoes, forests, and oceans, and for monitoring wildlife. Furthermore, for example, the present technology can also be applied to systems and devices used for sports.
  <Others>
 The embodiments of the present technology are not limited to the embodiments described above, and various modifications are possible without departing from the gist of the present technology.
 For example, the present technology can also be implemented as any configuration constituting a device or a system, for example, as a processor (for example, a video processor) such as a system LSI (Large Scale Integration), a module using a plurality of processors (for example, a video module), a unit using a plurality of modules (for example, a video unit), a set in which other functions are further added to a unit (for example, a video set), or the like (that is, as a partial configuration of a device).
 Note that, in this specification, a system means a set of a plurality of components (devices, modules (parts), and the like), regardless of whether or not all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
 Further, for example, a configuration described as one device (or processing unit) may be divided and configured as a plurality of devices (or processing units). Conversely, configurations described above as a plurality of devices (or processing units) may be collectively configured as one device (or processing unit). Further, a configuration other than those described above may of course be added to the configuration of each device (or each processing unit). Furthermore, as long as the configuration and operation of the system as a whole are substantially the same, a part of the configuration of one device (or processing unit) may be included in the configuration of another device (or another processing unit).
 Further, for example, the present technology can have a cloud computing configuration in which one function is shared and processed jointly by a plurality of devices via a network.
 Further, for example, the above-described program can be executed in an arbitrary device. In that case, the device only needs to have the necessary functions (functional blocks and the like) so that the necessary information can be obtained.
 Further, for example, each step described in the above flowcharts can be executed by one device or can be shared and executed by a plurality of devices. Furthermore, when a plurality of processes are included in one step, the plurality of processes included in that one step can be executed by one device or can be shared and executed by a plurality of devices. In other words, a plurality of processes included in one step can also be executed as processes of a plurality of steps. Conversely, processes described as a plurality of steps can also be collectively executed as one step.
 Note that, in the program executed by the computer, the processes of the steps describing the program may be executed in chronological order in the order described in this specification, or may be executed in parallel or individually at necessary timing, such as when a call is made. That is, as long as no contradiction arises, the processes of the steps may be executed in an order different from the order described above. Furthermore, the processes of the steps describing this program may be executed in parallel with the processes of another program, or may be executed in combination with the processes of another program.
 Note that the plurality of aspects of the present technology described in this specification can each be implemented independently and alone as long as no contradiction arises. Of course, any plurality of aspects of the present technology can also be implemented in combination. For example, a part or all of the present technology described in any of the embodiments can be implemented in combination with a part or all of the present technology described in another embodiment. Further, a part or all of any of the present technology described above can also be implemented in combination with another technology not described above.
 Note that the effects described in this specification are merely examples and are not limiting, and other effects may be obtained.
 The present technology can also have the following configurations.
 (1) An information processing device including a correction vector derivation unit that projects a three-dimensional point position, which is a projection position in a three-dimensional space corresponding to a pixel of interest of a projection unit that projects an input image, onto a light receiving region of a virtual imaging unit, which is a virtual imaging unit that captures the projection image projected by the projection unit at a position and orientation serving as a viewpoint of the projection image, and converts a projection point indicating a position at which the three-dimensional point position is projected in the light receiving region into a coordinate system of the input image, thereby deriving a correction vector corresponding to the pixel of interest.
 (2) The information processing device according to (1), in which the correction vector derivation unit obtains the three-dimensional point position of the pixel of interest on the basis of known three-dimensional point positions corresponding to peripheral pixels of the pixel of interest.
 (3) The information processing device according to (1) or (2), in which the correction vector derivation unit derives the correction vector for all pixels of the projection unit.
 (4) The information processing device according to any one of (1) to (3), in which the correction vector derivation unit sets a presentation position of the input image in the light receiving region and converts the projection point into the coordinate system of the input image by using the presentation position.
 (5) The information processing device according to (4), in which the correction vector derivation unit estimates, in the light receiving region, ranges corresponding to projection images projected by each of a plurality of the projection units, and sets the presentation position in a region where the ranges corresponding to each of the plurality of projection units overlap.
 (6) The information processing device according to (5), in which the correction vector derivation unit interpolates the three-dimensional point position for a pixel of the projection unit for which the three-dimensional point position is missing.
 (7) The information processing device according to any one of (1) to (6), further including a projection control unit that causes the projection unit to project a corrected image, which is an image generated by correcting the input image using the correction vector derived by the correction vector derivation unit.
 (8) The information processing device according to any one of (1) to (7), further including a projection area adjustment unit that adjusts a projection area of the input image, in which the correction vector derivation unit derives the correction vector using a parameter indicating a result of the adjustment of the projection area by the projection area adjustment unit.
 (9) The information processing device according to (8), in which the projection area adjustment unit adjusts the projection area on the basis of a user instruction input via a user interface and derives the parameter.
 (10) The information processing device according to (9), in which the parameter includes a parameter relating to zoom of the projection area, a parameter relating to shift of the projection area, and a parameter relating to roll of the projection area.
 (11) The information processing device according to any one of (1) to (10), further including a projection plane viewpoint setting unit that sets a projection plane on the basis of the three-dimensional point positions and sets the viewpoint corresponding to the derived projection plane, in which the correction vector derivation unit projects the three-dimensional point position of the pixel of interest onto the light receiving region of the virtual imaging unit corresponding to the viewpoint set by the projection plane viewpoint setting unit.
 (12) The information processing device according to (11), in which the projection plane is a flat plane.
 (13) The information processing device according to (11), in which the projection plane is a cylindrical surface.
 (14) The information processing device according to (11), in which the projection plane is a spherical surface.
 (15) The information processing device according to any one of (11) to (14), further including a position and orientation estimation unit that estimates the position and the orientation for each of a plurality of captured images obtained by capturing the projection image at mutually different positions and orientations, and derives the three-dimensional point positions on the basis of the estimated positions and orientations, in which the projection plane viewpoint setting unit sets the projection plane and the viewpoint on the basis of the three-dimensional point positions derived by the position and orientation estimation unit.
 (16) The information processing device according to (15), in which the position and orientation estimation unit estimates a relative position and a relative orientation of a captured image of interest with respect to a captured image serving as a reference.
 (17) The information processing device according to (16), in which the position and orientation estimation unit performs scale adjustment of the three-dimensional point positions on the basis of a plurality of the relative positions and the relative orientations.
 (18) The information processing device according to any one of (15) to (17), further including a corresponding point detection unit that detects corresponding points included in the plurality of captured images, in which the position and orientation estimation unit estimates the position and the orientation on the basis of the corresponding points detected by the corresponding point detection unit and derives the three-dimensional point positions.
 (19) The information processing device according to (18), in which the captured images include a plurality of pattern images of mutually different colors projected by the plurality of projection units, and the corresponding point detection unit separates the plurality of pattern images and detects the corresponding points.
 (20) An information processing method including projecting a three-dimensional point position, which is a projection position in a three-dimensional space corresponding to a pixel of interest of a projection unit that projects an input image, onto a light receiving region of a virtual imaging unit, which is a virtual imaging unit that captures the projection image projected by the projection unit at a position and orientation serving as a viewpoint of the projection image, and converting a projection point indicating a position at which the three-dimensional point position is projected in the light receiving region into a coordinate system of the input image, thereby deriving a correction vector corresponding to the pixel of interest.
 100 projection imaging system, 101 portable terminal device, 102 projector, 151 information processing unit, 152 imaging unit, 181 corresponding point detection unit, 182 camera orientation estimation unit, 183 screen reconstruction unit, 184 correction vector derivation unit, 185 projection control unit, 186 projection area adjustment unit, 201 information processing unit, 202 projection unit, 231 correction vector acquisition unit, 232 image acquisition unit, 233 corrected image generation unit

Claims (20)

  1.  An information processing device comprising: a correction vector derivation unit that projects a three-dimensional point position, which is a projection position in a three-dimensional space corresponding to a pixel of interest of a projection unit that projects an input image, onto a light receiving region of a virtual imaging unit, which is a virtual imaging unit that captures a projection image projected by the projection unit at a position and orientation serving as a viewpoint of the projection image, and converts a projection point indicating a position at which the three-dimensional point position is projected in the light receiving region into a coordinate system of the input image, thereby deriving a correction vector corresponding to the pixel of interest.
  2.  The information processing device according to claim 1, wherein the correction vector derivation unit obtains the three-dimensional point position of the pixel of interest on the basis of known three-dimensional point positions corresponding to peripheral pixels of the pixel of interest.
  3.  The information processing device according to claim 1, wherein the correction vector derivation unit derives the correction vector for all pixels of the projection unit.
  4.  The information processing device according to claim 1, wherein the correction vector derivation unit sets a presentation position of the input image in the light receiving region and converts the projection point into the coordinate system of the input image by using the presentation position.
  5.  The information processing device according to claim 4, wherein the correction vector derivation unit estimates, in the light receiving region, ranges corresponding to projection images projected by each of a plurality of the projection units, and sets the presentation position in a region where the ranges corresponding to each of the plurality of projection units overlap.
  6.  The information processing device according to claim 5, wherein the correction vector derivation unit interpolates the three-dimensional point position for a pixel of the projection unit for which the three-dimensional point position is missing.
  7.  The information processing device according to claim 1, further comprising a projection control unit that causes the projection unit to project a corrected image, which is an image generated by correcting the input image using the correction vector derived by the correction vector derivation unit.
  8.  The information processing device according to claim 1, further comprising a projection area adjustment unit that adjusts a projection area of the input image, wherein the correction vector derivation unit derives the correction vector using a parameter indicating a result of the adjustment of the projection area by the projection area adjustment unit.
  9.  The information processing device according to claim 8, wherein the projection area adjustment unit adjusts the projection area on the basis of a user instruction input via a user interface and derives the parameter.
  10.  The information processing device according to claim 9, wherein the parameter includes a parameter relating to zoom of the projection area, a parameter relating to shift of the projection area, and a parameter relating to roll of the projection area.
  11.  The information processing device according to claim 1, further comprising a projection plane viewpoint setting unit that sets a projection plane on the basis of the three-dimensional point positions and sets the viewpoint corresponding to the derived projection plane, wherein the correction vector derivation unit projects the three-dimensional point position of the pixel of interest onto the light receiving region of the virtual imaging unit corresponding to the viewpoint set by the projection plane viewpoint setting unit.
  12.  The information processing device according to claim 11, wherein the projection plane is a flat plane.
  13.  The information processing device according to claim 11, wherein the projection plane is a cylindrical surface.
  14.  The information processing device according to claim 11, wherein the projection plane is a spherical surface.
  15.  The information processing device according to claim 11, further comprising a position and orientation estimation unit that estimates the position and the orientation for each of a plurality of captured images obtained by capturing the projection image at mutually different positions and orientations, and derives the three-dimensional point positions on the basis of the estimated positions and orientations, wherein the projection plane viewpoint setting unit sets the projection plane and the viewpoint on the basis of the three-dimensional point positions derived by the position and orientation estimation unit.
  16.  The information processing device according to claim 15, wherein the position and orientation estimation unit estimates a relative position and a relative orientation of a captured image of interest with respect to a captured image serving as a reference.
  17.  The information processing device according to claim 16, wherein the position and orientation estimation unit performs scale adjustment of the three-dimensional point positions on the basis of a plurality of the relative positions and the relative orientations.
  18.  The information processing device according to claim 15, further comprising a corresponding point detection unit that detects corresponding points included in the plurality of captured images, wherein the position and orientation estimation unit estimates the position and the orientation on the basis of the corresponding points detected by the corresponding point detection unit and derives the three-dimensional point positions.
  19.  The information processing device according to claim 18, wherein the captured images include a plurality of pattern images of mutually different colors projected by the plurality of projection units, and the corresponding point detection unit separates the plurality of pattern images and detects the corresponding points.
  20.  An information processing method comprising: projecting a three-dimensional point position, which is a projection position in a three-dimensional space corresponding to a pixel of interest of a projection unit that projects an input image, onto a light receiving region of a virtual imaging unit, which is a virtual imaging unit that captures a projection image projected by the projection unit at a position and orientation serving as a viewpoint of the projection image; and converting a projection point indicating a position at which the three-dimensional point position is projected in the light receiving region into a coordinate system of the input image, thereby deriving a correction vector corresponding to the pixel of interest.
PCT/JP2021/029625 2020-08-24 2021-08-11 Information processing device and method WO2022044806A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020140608 2020-08-24
JP2020-140608 2020-08-24

Publications (1)

Publication Number Publication Date
WO2022044806A1 true WO2022044806A1 (en) 2022-03-03

Family

ID=80352361

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/029625 WO2022044806A1 (en) 2020-08-24 2021-08-11 Information processing device and method

Country Status (1)

Country Link
WO (1) WO2022044806A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002062842A (en) * 2000-08-11 2002-02-28 Nec Corp Projection video correction system and its method
JP2005092363A (en) * 2003-09-12 2005-04-07 Nippon Hoso Kyokai <Nhk> Image generation device and image generation program
JP2014134611A (en) * 2013-01-09 2014-07-24 Ricoh Co Ltd Geometric distortion correction device, projector, and geometric distortion correction method
JP2015154219A (en) * 2014-02-13 2015-08-24 株式会社バンダイナムコエンターテインメント image generation system and program
JP2019220887A (en) * 2018-06-21 2019-12-26 キヤノン株式会社 Image processing system, image processing method, and program


Similar Documents

Publication Publication Date Title
US10602126B2 (en) Digital camera device for 3D imaging
TWI253006B (en) Image processing system, projector, information storage medium, and image processing method
CN110809786B (en) Calibration device, calibration chart, chart pattern generation device, and calibration method
WO2017217411A1 (en) Image processing device, image processing method, and recording medium
TWI393072B (en) Multi-sensor array module with wide viewing angle; image calibration method, operating method and application for the same
JPWO2019049421A1 (en) CALIBRATION DEVICE, CALIBRATION SYSTEM, AND CALIBRATION METHOD
US9436973B2 (en) Coordinate computation device and method, and an image processing device and method
US10893259B2 (en) Apparatus and method for generating a tiled three-dimensional image representation of a scene
US11074715B2 (en) Image processing apparatus and method
CN110580718A (en) image device correction method, and related image device and arithmetic device
KR102176963B1 (en) System and method for capturing horizontal parallax stereo panorama
US11483528B2 (en) Information processing apparatus and information processing method
JP2017092756A (en) Image processing system, image processing method, image projecting system and program
US20210124174A1 (en) Head mounted display, control method for head mounted display, information processor, display device, and program
TW201824178A (en) Image processing method for immediately producing panoramic images
JP2010217984A (en) Image detector and image detection method
JP2019029721A (en) Image processing apparatus, image processing method, and program
WO2022044806A1 (en) Information processing device and method
JP2005275789A (en) Three-dimensional structure extraction method
WO2020255766A1 (en) Information processing device, information processing method, program, projection device, and information processing system
WO2022044807A1 (en) Information processing device and method
CN115004683A (en) Imaging apparatus, imaging method, and program
JP2015220662A (en) Information processing apparatus, method for the same, and program
CN111480335B (en) Image processing device, image processing method, program, and projection system
JP6071364B2 (en) Image processing apparatus, control method thereof, and control program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21861234

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21861234

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP