WO2005084017A1 - Multiprojection system - Google Patents

Multiprojection system

Info

Publication number
WO2005084017A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
screen
correction data
marker
projected
Prior art date
Application number
PCT/JP2005/001337
Other languages
French (fr)
Japanese (ja)
Inventor
Takeyuki Ajito
Kazuo Yamaguchi
Takahiro Toyama
Takafumi Kumano
Original Assignee
Olympus Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olympus Corporation filed Critical Olympus Corporation
Publication of WO2005084017A1 publication Critical patent/WO2005084017A1/en

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B21/00Projectors or projection-type viewers; Accessories therefor
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B21/00Projectors or projection-type viewers; Accessories therefor
    • G03B21/14Details
    • G03B21/26Projecting separately subsidiary matter simultaneously with main image
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B37/00Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
    • G03B37/04Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe with cameras or projectors providing touching or overlapping fields of view
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/74Projection arrangements for image reproduction, e.g. using eidophor

Definitions

  • The present invention relates to a multi-projection system that forms one large image by pasting together images projected on a screen by a plurality of image projection devices such as projectors, and in particular to a multi-projection system in which positional shifts and distortions of the image projected on the screen, as well as color shifts, are detected by a digital camera and corrected automatically.
  • Patent Document 1: Japanese Patent Application Laid-Open No. 2002-72359
  • Patent Document 2: Japanese Patent Application Laid-Open No. 2003-18503
  • In the image correction method of Patent Document 1, an image taken by a digital camera is subjected to cylindrical or spherical transformation according to the screen shape; for the distortion to be corrected accurately, there are the constraints that the three-dimensional shape of the screen must be known in advance and that the digital camera must be installed at the same position as the observation position.
  • Therefore, an object of the present invention, made in view of these points, is to provide a multi-projection system capable of correcting positional shifts and distortions of the projected image, as well as color shifts, in real time at an arbitrary observation position even if the shape of the screen is not known. Disclosure of the invention
  • The invention according to claim 1, which achieves the above object, provides a multi-projection system that forms one large image by pasting together images projected on a screen by a plurality of image projection devices, comprising:
  • image acquisition means for photographing the images projected on the screen by the image projection devices from different positions whose relative positional relationship is known, to acquire parallax image data;
  • image correction data calculation means for estimating the three-dimensional position of each point of the image projected on the screen based on the parallax image data acquired by the image acquisition means and information on the relative positional relationship, and calculating correction data for correcting the images input to the image projection devices; and
  • image correction means for correcting the images input to the image projection devices based on the correction data calculated by the image correction data calculation means.
  • The invention according to claim 2 is the multi-projection system according to claim 1, characterized in that the image acquisition means has two digital cameras installed at different positions whose relative positional relationship is known, and parallax image data is acquired by photographing the images projected on the screen with these two digital cameras.
  • The invention according to claim 3 is the multi-projection system according to claim 1, characterized in that the image acquisition means has one digital camera and a moving mechanism that translates the digital camera, and parallax image data is acquired by translating the digital camera with the moving mechanism and photographing the images projected on the screen from different positions whose relative positional relationship is known.
  • The invention according to claim 4 provides a multi-projection system that forms one large image by pasting together images projected on a screen by a plurality of image projection devices, comprising:
  • marker projection means for projecting markers onto the screen at predetermined angles;
  • image acquisition means for photographing the images projected on the screen by the image projection devices and the markers projected on the screen by the marker projection means, respectively, to acquire image data;
  • support means for fixing the relative positional relationship between the marker projection means and the image acquisition means;
  • image correction data calculation means for estimating the three-dimensional position of each point of the image projected on the screen based on the image data acquired by the image acquisition means and information on the relative positional relationship, and calculating correction data for correcting the images input to the image projection devices; and
  • image correction means for correcting the images input to the image projection devices based on the correction data calculated by the image correction data calculation means.
  • The invention according to claim 5 provides a multi-projection system that forms one large image by pasting together images projected on a screen by a plurality of image projection devices, comprising:
  • a plurality of image acquisition means for photographing the images projected on the screen by the image projection devices from positions whose relative positional relationship with the image projection devices is known, to acquire image data;
  • image correction data calculation means for estimating the three-dimensional position of each point of the image projected on the screen based on the image data acquired by the plurality of image acquisition means and information on the relative positional relationships, and calculating correction data for correcting the images input to the image projection devices; and
  • image correction means for correcting the images input to the image projection devices based on the correction data calculated by the image correction data calculation means.
  • The invention according to claim 6 provides a multi-projection system that forms one large image by pasting together images projected on a screen by a plurality of image projection devices, comprising:
  • marker projection means for projecting markers onto the screen;
  • image acquisition means for photographing the images projected on the screen by the image projection devices and the markers projected on the screen by the marker projection means, to acquire image data;
  • image correction data calculation means for estimating the three-dimensional position of each point of the image projected on the screen based on the image data acquired by the image acquisition means and information on the relative positional relationship, and calculating correction data for correcting the images input to the image projection devices; and
  • image correction means for correcting the images input to the image projection devices based on the correction data calculated by the image correction data calculation means.
  • The invention according to claim 7 is the multi-projection system according to claim 6, characterized in that the marker projection means has two laser pointers installed at different positions whose relative positional relationship is known.
  • The invention according to claim 8 is the multi-projection system according to claim 6, characterized in that the marker projection means has one laser pointer and a moving mechanism that translates the laser pointer, and the laser pointer is translated by the moving mechanism to project markers onto the screen from different positions whose relative positional relationship is known.
  • The invention according to claim 9 is characterized in that the correction data calculating means calculates the correction data based on position information of an observer with respect to the screen.
  • The invention according to claim 10 is the multi-projection system according to claim 9, characterized by further comprising a viewpoint detection sensor for detecting the viewpoint position of the observer to obtain the position information of the observer.
  • FIG. 1 is a diagram showing an overall configuration of a multi-projection system according to a first embodiment of the present invention.
  • FIG. 2 is a view showing a test pattern image input to the projector shown in FIG. 1.
  • FIG. 3 is a block diagram showing a configuration of a correction data calculation unit shown in FIG. 1.
  • FIG. 4 is a block diagram showing a configuration of an image conversion unit shown in FIG. 1.
  • FIG. 5 is a flowchart for explaining a geometric correction data calculation process in the first embodiment.
  • FIG. 6 is a view showing marker projection by the projector shown in FIG. 1 and marker imaging by two cameras.
  • FIG. 7 is a conceptual diagram showing a screen three-dimensional shape estimation method according to the first embodiment.
  • FIG. 8 is a conceptual diagram showing a method for estimating a screen shape with a small number of markers.
  • FIG. 9 is a diagram illustrating a state in which a marker projection image centering on the observation viewpoint is created from the estimated screen shape.
  • FIG. 10 is a diagram showing a state in which a marker projection image is created on a projection plane having a wide viewing angle centered on an observation viewpoint from an estimated screen shape.
  • FIG. 11 is a diagram showing an overall configuration of a multi-projection system according to a second embodiment of the present invention.
  • FIG. 12 is a block diagram showing a configuration of a correction data calculation unit shown in FIG. 11.
  • FIG. 13 is a diagram showing an overall configuration of a multi-projection system according to a third embodiment of the present invention.
  • FIG. 14 is a block diagram showing a configuration of a correction data calculation unit shown in FIG. 13.
  • FIG. 15 is a flowchart for explaining a geometric correction data calculation process in the third embodiment.
  • FIG. 16 is a conceptual diagram showing a screen shape estimation method in the third embodiment.
  • FIG. 17 is a diagram showing an overall configuration of a multi-projection system according to a fourth embodiment of the present invention.
  • FIG. 18 is a block diagram showing a configuration of a correction data calculation unit shown in FIG. 17.
  • FIG. 19 is a diagram showing an overall configuration of a multi-projection system according to a fifth embodiment of the present invention.
  • FIG. 20 is a block diagram showing a configuration of a correction data calculation unit shown in FIG. 19.
  • FIG. 21 is a diagram illustrating an overall configuration of a multi-projection system according to a sixth embodiment of the present invention.
  • FIG. 22 is a diagram for explaining an example of a screen shape calculation method according to the sixth embodiment.
  • FIG. 23 is a diagram showing an overall configuration of a multi-projection system according to a seventh embodiment of the present invention.
  • FIG. 24 is a diagram showing an overall configuration of a multi-projection system according to an eighth embodiment.
  • FIG. 25 is a diagram showing an overall configuration of a multi-projection system according to a ninth embodiment.
  • FIG. 26 is a diagram for describing the screen shape estimation processing in the ninth embodiment.
  • FIG. 27 is a flowchart for explaining the same screen shape estimation process.
  • FIG. 28 is a flowchart for explaining the same geometric correction data calculation process.
  • FIG. 29 is a block diagram illustrating a configuration of a correction data calculation unit illustrated in FIG. 25.
  • FIG. 30 is a diagram showing an overall configuration of a multi-projection system according to a tenth embodiment of the present invention.
  • FIG. 31 is a perspective view showing observation glasses used in the tenth embodiment.
  • FIG. 32 is a view showing a modification of the present invention.
  • FIG. 33 is a diagram showing details of each marker shown in FIG. 2 (a).
  • FIG. 1 shows the overall configuration of the multi-projection system according to the first embodiment of the present invention.
  • FIGS. 2 (a) and (b) show test patterns input to the projectors.
  • FIG. 3 is a block diagram showing the configuration of the correction data calculation unit shown in FIG. 1.
  • FIG. 4 is a block diagram showing the configuration of the image conversion unit shown in FIG. 1.
  • FIG. 5 is a flowchart for explaining the geometric correction data calculation process.
  • FIG. 6 shows marker projection by the projector and marker shooting by the two cameras.
  • FIG. 7 is a conceptual diagram showing the method of estimating the three-dimensional shape of the screen.
  • FIG. 8 is a conceptual diagram showing the method of estimating the screen shape with a small number of markers.
  • FIG. 9 shows how a marker projection image centered on the observation viewpoint is created from the estimated screen shape (marker positions).
  • FIGS. 10 (a) and (b) show how a marker projection image is created from the estimated screen shape (marker positions) on a projection plane with a wide viewing angle centered on the observation viewpoint.
  • FIGS. 33 (a) and (b) are diagrams showing details of each marker shown in Fig. 2 (a).
  • In the first embodiment, images are projected with partial overlap onto a dome-shaped or arch-shaped screen 2 by projectors 1A and 1B, which are image projection devices.
  • As the projectors serving as the image projection devices, a transmissive liquid crystal projector, a reflective liquid crystal projector, a DLP projector using a DMD (digital micromirror device), a CRT projection tube display, a laser scan projector, or the like can be used.
  • However, the images projected by projector 1A and projector 1B cannot simply be pasted together as they are, owing to differences in the color characteristics of projectors 1A and 1B and deviations in their installation positions.
  • Therefore, test pattern image data is input to projector 1A and projector 1B, the test pattern images are projected onto the screen 2, and the projected test pattern images are photographed by cameras (digital cameras) 3A and 3B serving as image acquisition means.
  • The test pattern images projected here include marker images regularly arranged on the screen as shown in Fig. 2 (a), and color signal images with different signal levels for each of the colors R (red), G (green), and B (blue), as shown in Fig. 2 (b), for acquiring the color characteristics of projectors 1A and 1B.
  • Each marker shown in Fig. 2 (a) has a shape that is point-symmetric on the XY space coordinates, as shown in Fig. 33.
  • As the digital cameras serving as image acquisition means, monochrome or multi-band digital cameras can be used, with CCD or CMOS type image sensors.
  • The acquired test pattern images are sent to a correction data calculation unit 4, where correction data for correcting the images input to the projectors 1A and 1B is calculated based on the captured test pattern images.
  • The calculated correction data is sent to the image conversion unit 5, where it is used to correct the content image data input to the projectors 1A and 1B.
  • The camera 3A and the camera 3B are fixed to the support member 6 at a predetermined distance d, as shown in FIG. 1.
  • Cameras 3A and 3B each use a wide-angle lens capable of capturing the entire area of the screen, or a fisheye lens to provide a wider angle of view.
  • Thereby, parallax images of the test patterns can be obtained by photographing the test pattern images projected on the screen 2 with the cameras 3A and 3B from different viewpoints. Further, since the relative positional relationship between the viewpoints is known, the three-dimensional position of each point of the image projected on the screen 2 can be estimated from the parallax images.
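  • As an illustration of this estimation step, the following minimal sketch (not taken from the patent) triangulates one marker from its disparity between two rectified pinhole cameras sharing a focal length; all names and numeric values are hypothetical.

```python
import numpy as np

def triangulate_marker(pt_a, pt_b, baseline, focal_px):
    """Depth-from-disparity for one marker seen by two rectified cameras.

    pt_a, pt_b : (x, y) pixel coordinates of the same marker in the images
                 of cameras 3A and 3B, measured from each image centre.
    baseline   : distance d between the cameras (fixed by the support member).
    focal_px   : focal length in pixels, assumed identical for both cameras.
    """
    disparity = pt_a[0] - pt_b[0]
    z = focal_px * baseline / disparity   # depth along the optical axis
    x = pt_a[0] * z / focal_px            # lateral offset in camera-3A frame
    y = pt_a[1] * z / focal_px            # vertical offset
    return np.array([x, y, z])

# Triangulating every detected marker correspondence yields a point cloud
# that samples the screen surface (hypothetical pixel values below).
screen_points = [triangulate_marker(a, b, baseline=0.5, focal_px=1200.0)
                 for a, b in [((210.0, 35.0), (150.0, 35.0)),
                              ((-90.0, 40.0), (-148.0, 40.0))]]
```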
  • The correction data calculation unit 4 in the present embodiment includes a camera captured image data storage unit 11A and a camera captured image data storage unit 11B, a marker position detection storage unit 12, a screen shape / camera position estimation storage unit 13, an observation position setting unit 14, a marker position coordinate conversion unit 15, a projector geometric correction data calculation unit 16, a projector gamma correction data calculation unit 17, and a projector color correction matrix calculation unit 18.
  • the camera photographed image data storage unit 11A and the camera photographed image data storage unit 11B store photographed images of various test patterns photographed by the camera 3A and the camera 3B.
  • The marker position detection storage unit 12 detects, on each captured image, the positions of the markers projected by each projector in the test patterns captured by the cameras 3A and 3B, and stores the detected position information.
  • The screen shape / camera position estimation storage unit 13 estimates the three-dimensional position of each marker from the position information of each marker on the captured images corresponding to the cameras 3A and 3B, and calculates the three-dimensional shape of the screen 2 and the camera positions with respect to the screen 2.
  • the observation position setting unit 14 stores the three-dimensional position information of the observation position set in advance or the observation position arbitrarily set by the user, and sends the information to the marker position coordinate conversion unit 15 in the subsequent stage.
  • The marker position coordinate conversion unit 15 calculates the two-dimensional coordinate position of each marker when projected onto a two-dimensional plane with the observation position as the viewpoint, based on the three-dimensional marker positions calculated in the screen shape / camera position estimation storage unit 13 and the three-dimensional position information of the observation position set in the observation position setting unit 14.
  • The projector geometric correction data calculation unit 16 uses the two-dimensional marker coordinates with the observation position as the viewpoint, created by the marker position coordinate conversion unit 15, to derive the geometric coordinate relationship between the projection plane centered on the observation position and the image planes of the projectors 1A and 1B; based on the derived geometric coordinate relationship, it calculates geometric correction data for correcting the displacement and distortion of the projector images, and outputs the calculated geometric correction data to the image conversion unit 5 at the subsequent stage.
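  • The re-projection onto the observation viewpoint can be pictured as a simple pinhole projection; the sketch below assumes the viewpoint looks along the +z axis of the estimated coordinate frame, and the names are hypothetical.

```python
import numpy as np

def project_to_viewpoint(marker_3d, obs_pos):
    """Pinhole projection of an estimated 3D marker position onto a
    two-dimensional plane whose viewpoint is the observation position
    (assumed to look along the +z axis of the estimated frame)."""
    p = np.asarray(marker_3d, float) - np.asarray(obs_pos, float)
    return np.array([p[0] / p[2], p[1] / p[2]])  # normalized plane coords

# Example: a marker triangulated at (0.4, 0.1, 2.5) viewed from the dome centre.
uv = project_to_viewpoint([0.4, 0.1, 2.5], obs_pos=[0.0, 0.0, 0.0])
```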
  • The projector gamma correction data calculation unit 17 calculates gamma correction data for correcting color unevenness and gamma characteristic unevenness within the screens of the projectors 1A and 1B, based on the various color signal images captured by the camera 3A (or camera 3B), and outputs the calculated gamma correction data to the image conversion unit 5 at the subsequent stage.
  • The projector color correction matrix calculation unit 18 calculates a color correction matrix for correcting the color difference between the projectors 1A and 1B, based on the various color signal images captured by the camera 3A (or the camera 3B), and outputs the calculated color correction matrix to the image conversion unit 5 at the subsequent stage.
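  • For illustration, applying such a 3 x 3 color correction matrix to a frame might look like the sketch below; the matrix values are hypothetical placeholders, and in practice the matrix would be fitted from the photographed color signal images.

```python
import numpy as np

# Hypothetical 3x3 matrix mapping projector-1B RGB toward projector-1A's
# color response; real values would be fitted from the color signal images.
M = np.array([[0.97, 0.02, 0.01],
              [0.03, 0.95, 0.02],
              [0.00, 0.01, 0.99]])

def apply_color_matrix(rgb_image, matrix):
    """Apply a 3x3 color correction matrix to every pixel of an
    (H, W, 3) float image with values in [0, 1]."""
    return np.clip(rgb_image @ matrix.T, 0.0, 1.0)

frame = np.random.rand(4, 4, 3)          # stand-in for a content frame
corrected = apply_color_matrix(frame, M)
```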
  • the image conversion unit 5 is roughly divided into a correction data storage unit 21 and a correction data operation unit 22.
  • The correction data storage unit 21 includes a geometric correction data storage unit 23, a gamma correction data storage unit 24, and a color correction matrix storage unit 25. The geometric correction data calculated by the projector geometric correction data calculation unit 16 of the correction data calculation unit 4 is stored in the geometric correction data storage unit 23, the projector gamma correction data calculated by the projector gamma correction data calculation unit 17 is stored in the gamma correction data storage unit 24, and the color correction matrix calculated by the projector color correction matrix calculation unit 18 is stored in the color correction matrix storage unit 25.
  • the correction data operation unit 22 includes a gamma conversion unit 26, a geometric correction data operation unit 27, a color correction matrix operation unit 28, a gamma correction unit 29, and a gamma correction data operation unit 30.
  • First, the gamma conversion unit 26 corrects the nonlinear gamma characteristic of the input image (content image data); then the geometric correction data operation unit 27 performs geometric correction of the input image using the geometric correction data for each projector input from the geometric correction data storage unit 23.
  • Next, the color correction matrix operation unit 28 performs color matrix conversion of the RGB signals of the input image using the color correction matrix for each projector input from the color correction matrix storage unit 25.
  • Then the gamma correction unit 29 applies a gamma correction that is uniform over the entire projector screen, after which the gamma correction data operation unit 30, based on the gamma correction data stored in the gamma correction data storage unit 24, corrects the deviation (difference) from this uniform gamma for each pixel of the projector screen.
  • In this way, the gamma correction unit 29 first performs a rough gamma correction over the entire screen; if the per-pixel gamma correction data used in the gamma correction data operation unit 30 can then be provided as differences, the amount of correction data and the amount of memory can be compressed, so costs can be reduced.
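  • The order of operations in the correction data operation unit 22 can be summarized in the following sketch; it is an interpretation rather than the patent's implementation, and it assumes the per-pixel gamma data acts as an additive difference on the encoded signal (all function and parameter names are hypothetical).

```python
import numpy as np

def correct_frame(img, warp, color_matrix, gamma_in=2.2, gamma_out=2.2,
                  gamma_delta=None):
    """Sketch of the pipeline: linearize -> geometric warp -> color matrix
    -> uniform gamma encode -> per-pixel gamma difference."""
    linear = img ** gamma_in                          # gamma conversion unit 26
    warped = warp(linear)                             # geometric correction (unit 27)
    colored = np.clip(warped @ color_matrix.T, 0, 1)  # color matrix (unit 28)
    encoded = colored ** (1.0 / gamma_out)            # uniform gamma (unit 29)
    if gamma_delta is not None:                       # per-pixel difference (unit 30)
        encoded = np.clip(encoded + gamma_delta, 0, 1)
    return encoded

# Identity warp and identity matrix as stand-ins for real correction data.
out = correct_frame(np.random.rand(4, 4, 3), warp=lambda a: a,
                    color_matrix=np.eye(3))
```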
  • the content image data corrected by the image conversion unit 5 is output to the subsequent projector 1A and projector 1B.
  • First, the image conversion unit 5 is set to a state (through state) in which the test pattern image is output to projector 1A and projector 1B without correction (step S1), and the marker image shown in Fig. 2 (a) is input and displayed on projectors 1A and 1B (step S2).
  • Next, the images projected on the screen 2 by the projectors 1A and 1B are photographed by the cameras 3A and 3B, respectively (steps S3 and S4), and the photographed parallax images are stored in the camera captured image data storage units 11A and 11B of the correction data calculation unit 4.
  • FIG. 6 shows this state, in which a marker is projected by one of the projectors 1B and a marker is photographed by the two cameras 3A and 3B.
  • Next, the marker position detection storage unit 12 detects the marker positions. Thereafter, the screen shape / camera position calculation storage unit 13 detects the positions of marker points (corresponding points) on the captured image planes that correspond to the same marker between the parallax images (step S5), and from the detected positions the overall screen shape is estimated by interpolation, together with the camera positions (step S6).
  • Figure 7 illustrates this situation, conceptually showing the method of estimating the three-dimensional shape of the screen 2 from the marker images captured by the two cameras 3A and 3B. As shown in Fig. 8, even when the number of marker points is somewhat small, the screen shape can be estimated with accuracy comparable to estimation from many marker points by providing rough prior information on the screen shape.
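  • One plausible reading of this interpolation step is sketched below: the sparse triangulated marker depths are densified over the screen, optionally as residuals from a rough prior shape; this is an assumed formulation with hypothetical names, not the patent's own.

```python
import numpy as np
from scipy.interpolate import griddata

def densify_screen(marker_xy, marker_z, grid_xy, prior_z=None):
    """Interpolate sparse marker depths into a dense screen surface.
    With few markers, interpolating the residuals from a rough prior
    shape (e.g. a nominal dome) behaves better than raw interpolation."""
    marker_xy = np.asarray(marker_xy, float)
    marker_z = np.asarray(marker_z, float)
    base = prior_z(marker_xy) if prior_z else np.zeros(len(marker_z))
    resid = griddata(marker_xy, marker_z - base, grid_xy, method='cubic')
    return (prior_z(np.asarray(grid_xy, float)) if prior_z else 0.0) + resid
```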
  • Next, in step S7, the user sets the position from which the projected image is actually observed.
  • The observation position need not be specified by the user; it may be, for example, the center position of the dome, or it may be determined in advance by default.
  • Once the observation position is set, the marker position coordinate conversion unit 15 first calculates the marker position coordinates on the two-dimensional projection plane centered on the viewpoint at the observation position (step S8).
  • Figure 9 illustrates this situation.
  • the angle of view of the projection plane is the same as the angle of view of the content image input to the multi-projection system.
  • When the content image is a wide-field image captured by a fisheye lens, as shown in FIG. 10 (a), it is sufficient to calculate the marker position coordinates on a two-dimensional projection plane represented by a coordinate system with a correspondingly wide viewing angle (for example, 110 to 360 degrees).
  • As shown in FIG. 10 (b), when the observation position changes arbitrarily, the projection onto the plane at the new observation position may simply be recalculated.
  • Next, the projector geometric correction data calculation unit 16 calculates geometric correction data for each projector (step S9). Specifically, from the correspondence between the marker position coordinates on the viewpoint image plane at the observation position and the marker position coordinates on the test pattern images input to the projectors 1A and 1B, the geometric relationship between coordinates on the viewpoint image plane and coordinates on the projector image planes is obtained, and geometric correction data for correcting the input image is calculated based on this geometric relationship so that the image is output without displacement or distortion on the viewpoint image plane. After that, the calculated geometric correction data for each projector is output to the image conversion unit 5 (step S10), and the geometric correction data calculation process ends.
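  • A common way to realize such correction data is a per-pixel lookup table built from the marker correspondences; the sketch below interpolates the scattered marker pairs into a dense map from projector pixels to viewpoint-plane coordinates (an assumed realization, with hypothetical names).

```python
import numpy as np
from scipy.interpolate import griddata

def build_geometric_lut(view_pts, proj_pts, proj_w, proj_h):
    """Interpolate the scattered marker correspondences into a dense map
    telling each projector pixel which viewpoint-plane coordinate of the
    source image it should display (the geometric correction data)."""
    view_pts = np.asarray(view_pts, float)   # marker coords on viewpoint plane
    proj_pts = np.asarray(proj_pts, float)   # same markers on projector plane
    gy, gx = np.mgrid[0:proj_h, 0:proj_w]
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1).astype(float)
    u = griddata(proj_pts, view_pts[:, 0], grid, method='linear')
    v = griddata(proj_pts, view_pts[:, 1], grid, method='linear')
    return np.stack([u, v], axis=1).reshape(proj_h, proj_w, 2)
```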
  • For the shooting of the various color signal images, either camera 3A or camera 3B can be used; it is sufficient to take the pictures with one of them.
  • The color signal images are not limited to the single-color signal data of R (red), G (green), and B (blue) shown in Fig. 2 (b) used as test patterns.
  • For example, camera 3A can capture the color signal image of the blue component while camera 3B simultaneously captures the color signal image of the red component. If the color signal images are shared between the cameras 3A and 3B in this way, the shooting time can be reduced.
  • FIG. 11 and 12 show the second embodiment.
  • FIG. 11 is a diagram showing the overall configuration of the multi-projection system
  • FIG. 12 is a block diagram showing the configuration of the correction data calculation unit shown in FIG. 11.
  • In this embodiment, one camera (digital camera) 3 is supported on a moving stage 31, which is a moving mechanism, so as to be able to translate; the camera 3 is translated and photographs are taken sequentially at both ends of a distance d whose relative positional relationship is known, thereby obtaining parallax images as in the first embodiment.
  • For this purpose, the correction data calculation unit 4 is provided with a switching switch 32 for switching the images captured by the camera 3 and supplying them to the camera captured image data storage units 11A and 11B.
  • With such a configuration, the shooting time increases because of the sequential shooting while moving the camera 3, but the same correction can be realized with less equipment, so costs can be reduced.
  • FIG. 13 to 16 show the third embodiment.
  • FIG. 13 is a diagram showing the overall configuration of the multi-projection system
  • FIG. 14 is a block diagram showing the configuration of the correction data calculation unit shown in FIG. 13.
  • FIG. 15 is a flowchart for explaining the geometric correction data calculation processing
  • FIG. 16 is a conceptual diagram showing a method for estimating a screen shape (marker position).
  • In this embodiment, the screen shape is estimated using one camera 3 and a laser pointer 35 serving as marker projection means, which projects markers at equal angular intervals over the entire screen.
  • The camera 3 and the laser pointer 35 are fixed to a support member 36 serving as support means, so that their relative positional relationship is fixed.
  • The correction data calculation unit 4 includes a camera captured image data storage unit 11, a marker position detection storage unit 12, a screen shape / camera position estimation storage unit 13, an observation position setting unit 14, a marker position coordinate conversion unit 15, a projector geometric correction data calculation unit 16, a projector gamma correction data calculation unit 17, and a projector color correction matrix calculation unit 18, as in FIG. 3.
  • The functions of the camera captured image data storage unit 11, the marker position detection storage unit 12, the screen shape / camera position calculation storage unit 13, and the marker position coordinate conversion unit 15 differ from those in the first embodiment; the other functions are the same as in the first embodiment.
  • The camera captured image data storage unit 11 stores the captured images of the markers projected on the screen 2 by the laser pointer 35 and the captured images of the test patterns (the marker images and the color signal images) projected by the projectors 1A and 1B.
  • The marker position detection storage unit 12 detects, from the captured images, the positions of the markers projected on the screen 2 by the laser pointer 35 and the positions of the markers projected by the projectors 1A and 1B, respectively.
  • The screen shape / camera position calculation storage unit 13 estimates the screen shape and the camera position from the detected positions of the markers by the laser pointer 35. Further, the marker position coordinate conversion unit 15 converts the marker position coordinates by the projectors 1A and 1B detected in the marker position detection storage unit 12 into marker position coordinates viewed from the observation viewpoint, using the estimated screen shape and camera position and the observation position set in the observation position setting unit 14.
  • projector geometric correction data calculation unit 16 calculates geometric correction data in the same manner as in the first embodiment.
  • a marker is projected on the screen 2 by the laser pointer 35 (step S11), the projected marker image is photographed by the camera 3 (step S12), and stored in the camera photographed image data storage unit 11.
  • the marker images are projected onto the screen 2 by the projectors 1A and 1B (step S13), and the projected marker images are similarly photographed by the camera 3 (step S14).
  • Next, based on each of the captured marker images, the marker positions by the laser pointer 35 and the marker positions by the projectors 1A and 1B are detected in the marker position detection storage unit 12 (steps S15 and S16).
  • Next, based on the detected marker positions by the laser pointer 35, the shape of the screen 2 and the position of the camera 3 are estimated in the screen shape / camera position calculation storage unit 13 (step S17). Specifically, as shown in FIG. 16, the three-dimensional position of each marker is calculated from the predetermined projection angle of the marker by the laser pointer 35 and the position of the marker on the captured image, and the screen shape and the camera position are estimated.
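  • Geometrically, each marker lies at the intersection of the known laser-pointer ray and the camera viewing ray, with the support member 36 fixing the baseline between them. The sketch below shows a 2D slice of this triangulation under assumed conventions (parallel axes, angles measured from the common forward direction); names and values are hypothetical.

```python
import math

def marker_from_angle(theta_laser, x_pix, focal_px, baseline):
    """Intersect the laser-pointer ray (pointer at the origin, known
    projection angle) with the camera viewing ray (camera at (baseline, 0));
    a 2D slice of the Fig. 16 geometry with parallel optical axes."""
    theta_cam = math.atan2(x_pix, focal_px)   # marker direction seen by camera 3
    z = baseline / (math.tan(theta_laser) - math.tan(theta_cam))
    return z * math.tan(theta_laser), z       # (x, z) position of the marker

x, z = marker_from_angle(math.radians(20.0), x_pix=-85.0,
                         focal_px=1200.0, baseline=0.4)
```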
  • Next, the marker position coordinate conversion unit 15 first calculates the coordinate positions of the markers by the laser pointer 35 on the projection plane with the observation position as the viewpoint (steps S18, S19). Then, from the marker coordinate positions at the observation position and the marker coordinate positions on the camera image, the coordinate relationship between the observation position and the camera image is calculated (step S20). Using the calculated coordinate relationship, the coordinate positions of the markers projected by the projectors 1A and 1B on the captured image are converted into the marker coordinate positions of the projectors 1A and 1B on the projection plane with the observation position as the viewpoint (step S21).
  • Thereafter, the projector geometric correction data calculation unit 16 calculates the geometric correction data for each projector as in the first embodiment (step S22) and outputs the calculated geometric correction data to the image conversion unit 5 (step S23); image conversion of the input image is performed based on this geometric correction data. This allows the projectors 1A and 1B to display the input image on the screen 2 without distortion as viewed from the observation position.
  • FIG. 17 and 18 show a fourth embodiment of the present invention.
  • FIG. 17 is a diagram showing the overall configuration of the multi-projection system
  • FIG. 18 is a block diagram showing the configuration of the correction data calculation unit shown in FIG. 17.
  • The present embodiment differs from the first embodiment in that, in addition to the cameras 3A and 3B for acquiring parallax images of the entire screen, cameras (digital cameras) 3C and 3D are provided. The geometric correction data for correcting the displacement and distortion of the projectors 1A and 1B is calculated using the parallax images taken by the cameras 3A and 3B, as in the first embodiment, and the color correction matrices and gamma correction data of the projectors 1A and 1B are also calculated.
  • In addition, by capturing color signal images for detecting color unevenness in small areas of the screen 2 with the cameras 3C and 3D, color unevenness is corrected with finer accuracy.
  • For this purpose, the correction data calculation unit 4 includes, in addition to the camera captured image data storage units 11A and 11B, the marker position detection storage unit 12, the screen shape / camera position estimation storage unit 13, the observation position setting unit 14, the marker position coordinate conversion unit 15, and the projector geometric correction data calculation unit 16 shown in Fig. 3, camera captured image data storage units 11C and 11D and a projector color correction matrix / gamma correction data calculation unit 19.
  • Camera photographed image data storage units 11C and 11D store photographed images of test patterns of marker images and color signal images photographed by cameras 3C and 3D.
  • The marker position detection storage unit 12 detects the marker coordinate positions in the marker images captured by the cameras 3A and 3B, and also detects the marker coordinate positions in the marker images captured by the cameras 3C and 3D.
  • The marker positions detected in the captured images of the cameras 3C and 3D are supplied to the projector color correction matrix / gamma correction data calculation unit 19.
  • The projector color correction matrix / gamma correction data calculation unit 19 detects the color unevenness corresponding to each pixel of the projectors 1A and 1B, using the marker positions in the captured images of the cameras 3C and 3D detected by the marker position detection storage unit 12 and the color signal images stored in the camera captured image data storage units 11C and 11D, and calculates a color correction matrix and gamma correction data for correcting the color unevenness.
  • The calculated color correction matrix and gamma correction data are output to the image conversion unit 5 to convert the input image. As a result, color unevenness can be corrected with finer accuracy, and an image free of color unevenness can be displayed on the screen 2.
  • The geometric correction data of the projectors 1A and 1B are calculated in the same manner as in the first embodiment.
  • FIG. 19 and 20 show a fifth embodiment of the present invention.
  • FIG. 19 shows the overall configuration of a multi-projection system
  • FIG. 20 is a block diagram showing the configuration of the correction data calculation unit shown in FIG. 19.
  • This embodiment differs from the third embodiment in that the camera 3 is rotatably supported on the support 36 and is rotated by a rotation control unit 41 so that the test pattern images are photographed sequentially for each small area of the screen 2, and the correction data is calculated using the test pattern images photographed separately for each small area.
  • For this purpose, the correction data calculation unit 4 includes a rotation angle storage unit 20 for storing rotation angle information of the camera 3 received from the rotation control unit 41.
  • The rotation angle information corresponding to each captured image stored in the rotation angle storage unit 20 is supplied to the screen shape / camera position calculation storage unit 13 and the marker position coordinate conversion unit 15, and is used together with each piece of captured image data when estimating the screen shape and camera position and when converting to the marker position coordinates at the observation position; the respective correction data are thereby calculated in the same manner as in the third embodiment.
  • FIGS. 21 and 22 show a sixth embodiment of the present invention.
  • FIG. 21 is a diagram showing the overall configuration of a multi-projection system.
  • FIG. 22 is a diagram for explaining an example of a screen shape calculation method according to the present embodiment.
  • This embodiment differs from the fifth embodiment in that the laser pointer 35 and the camera 3 are fixed to the support 36, and the support 36 is rotated by the rotation control unit 41.
  • The marker projection by the laser pointer 35 and the imaging by the camera 3 are performed for each small area, and the correction data is calculated using the test pattern images photographed separately for each small area, as in the fifth embodiment. With such a configuration, the entire screen can be covered even when the marker projection angle of the laser pointer 35 is narrow in range, so the configuration of the laser pointer 35 can be simplified.
  • In this case, the shape of the entire screen can be estimated from each image obtained as in the fifth embodiment and the relative positional relationship between the rotation angles, output from the rotation control unit 41, at which the images were captured.
  • Alternatively, the projectors 1A and 1B may project markers onto the overlapping portions of the photographing areas and these markers may be photographed, and the screen shapes estimated from the respective captured images may be synthesized based on the marker positions (same points) of the projectors 1A and 1B in the images. In this way, even if there is some error in the rotation angle information, the positional relationship can be synthesized accurately.
  • FIG. 23 is a diagram showing the overall configuration of the multi-projection system according to the seventh embodiment of the present invention.
  • In this embodiment, a plurality of sets each consisting of a camera and a laser pointer whose positions are fixed relative to each other are used; test pattern projection and its photographing are performed so that each small area of the screen 2 overlaps the adjacent small area, and correction data is calculated using the test pattern images photographed separately for each small area.
  • The other configurations are the same as in the fifth and sixth embodiments. Fig. 23 shows the case where two sets are used: one with camera 3A and laser pointer 35A fixed to support 36A at a distance d1, and the other with camera 3B and laser pointer 35B fixed to support 36B at a distance d2.
  • The distances d1 and d2 may be equal or different and can be set arbitrarily.
  • In the screen shape estimation according to the present embodiment, the shape of each small area of the screen 2 is estimated from each marker captured image as described in the sixth embodiment, and the estimates are then synthesized as shown in FIG. 22 to estimate the overall screen shape.
  • FIG. 24 is a diagram showing the overall configuration of the multi-projection system according to the eighth embodiment of the present invention.
  • The present embodiment uses the same number of cameras 3A and 3B as projectors 1A and 1B, without providing a plurality of sets of cameras and laser pointers as in the seventh embodiment, to estimate the screen shape. For this purpose, the projector 1A and the camera 3A are fixed to the support 36A at a distance d1, and the projector 1B and the camera 3B are fixed to the support 36B at a distance d2, so that the camera 3A can capture the projection area of the screen 2 by the projector 1A, that is, the image projected by the projector 1A and the part of the image projected by the projector 1B that overlaps it, and the camera 3B can capture the projection area of the screen 2 by the projector 1B, that is, the image projected by the projector 1B and the part of the image projected by the projector 1A that overlaps it. Note that the distances d1 and d2 may be equal or different and can be set arbitrarily, as long as the cameras 3A and 3B can photograph the projection areas of the corresponding projectors 1A and 1B.
  • In this embodiment, the marker images are projected by the projectors 1A and 1B and photographed by the cameras 3A and 3B, and the screen shape is estimated from the captured images using information indicating the relative positional relationships between the projectors 1A and 1B and the cameras 3A and 3B.
  • The other configurations and operations are the same as those of the seventh embodiment.
  • FIGS. 25 to 29 show a ninth embodiment of the present invention.
  • FIG. 25 is a diagram showing the overall configuration of a multi-projection system
  • FIGS. 26 (a) to (c) explain screen shape estimation processing.
  • FIG. 27 is a flowchart for explaining the screen shape estimation process
  • FIG. 28 is a flowchart for explaining the geometric correction data calculation process
  • FIG. 29 is a block diagram showing the configuration of the correction data calculation unit shown in FIG. 25.
  • In this embodiment, two laser pointers 35A and 35B are fixed to the support 36 at a distance d, and markers are projected onto the entire screen 2 from different positions by the laser pointers 35A and 35B, whose relative positional relationship is thus fixed; each projected marker is photographed separately for each small area of the screen 2 by the cameras 3A and 3B, and the shape of the screen 2 is estimated from these captured images.
  • The projection angle θ'A at which a marker would be projected from the laser pointer 35A onto the same position as the marker projected by the laser pointer 35B is calculated by interpolation. With θA and θB denoting the projection angles of the two markers of the laser pointer 35A adjacent to that position, and x / X the corresponding ratio of distances on the captured image, the linear interpolation is
  • θ'A = (x / X) · (θB − θA) + θA
  • When the screen shape is a curved surface, it can be obtained with higher accuracy by a higher-order interpolation formula.
  • Then, from the interpolated projection angle θ'A of the laser pointer 35A and the projection angle of the laser pointer 35B, together with the distance d, the three-dimensional position of the i-th marker by the laser pointer 35B is calculated.
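  • The following sketch illustrates this two-pointer computation under assumed conventions (both pointers in a common plane, angles measured from the forward direction); it is an interpretation for illustration, and all names and values are hypothetical.

```python
import math

def interpolated_angle(theta1, theta2, x, big_x):
    """Linear interpolation of the text: the 35B marker lies a fraction
    x / X of the way between two adjacent 35A markers on the image."""
    return (x / big_x) * (theta2 - theta1) + theta1

def marker_from_two_pointers(theta_a, theta_b, d):
    """Intersect the ray of pointer 35A (at the origin) with the ray of
    pointer 35B (at (d, 0)); a 2D slice of the geometry."""
    z = d / (math.tan(theta_a) - math.tan(theta_b))
    return z * math.tan(theta_a), z

theta_prime_a = interpolated_angle(math.radians(18), math.radians(22),
                                   x=7.0, big_x=12.0)
x, z = marker_from_two_pointers(theta_prime_a, math.radians(-15), d=0.6)
```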
  • FIG. 27 summarizes the above processing; a detailed description is omitted because it would duplicate the description of the drawing.
  • When higher accuracy is required for the projection angle θ'A at which the marker is projected from the laser pointer 35A to the same position as the marker projected by the laser pointer 35B, the marker may actually be projected by the laser pointer 35A at the calculated angle and photographed again by the camera; if the result is misaligned with the marker by the laser pointer 35B, the angle is corrected again, yielding a more accurate projection angle for the laser pointer 35A.
  • FIG. 28 shows the process of calculating geometric correction data in the present embodiment; a detailed description is omitted because it would duplicate the description of the drawing. Note that, in FIG. 28, the process indicated by reference symbol S corresponds to the process in FIG. 27.
  • FIG. 29 is a diagram showing detailed blocks of the correction data calculation unit 4 in the present embodiment.
  • This correction data calculation unit 4 is obtained by providing, in place of the camera captured image data storage unit 11 of the correction data calculation unit of the third embodiment shown in FIG. 14, a camera captured image data storage unit 11A for storing captured image data from the camera 3A and a camera captured image data storage unit 11B for storing captured image data from the camera 3B; the other configuration is the same as in the third embodiment.
  • The correction data calculation unit 4 uses the images of each small area of the screen 2 stored in the camera captured image data storage unit 11A and the camera captured image data storage unit 11B to calculate the geometric correction data by performing the processing shown in FIGS. 27 and 28; in addition, the color characteristic unevenness and the gamma characteristic unevenness of the projectors 1A and 1B are calculated in the same manner as described in the first embodiment, a color correction matrix and gamma correction data that make these uniform over the entire screen are calculated, and the correction data are output to the image conversion unit 5.
  • As described above, in the present embodiment, markers are projected onto the entire screen 2 from different positions by the laser pointers 35A and 35B, which have a fixed relative positional relationship, each projected marker is photographed for each small area of the screen 2 by the cameras 3A and 3B, and the shape of the screen 2 is estimated from the captured images. Therefore, the three-dimensional position of the screen 2 can be estimated even if the positions of the cameras 3A and 3B are neither known nor fixed, which increases the degree of freedom in installing the cameras 3A and 3B.
  • Instead of using the two laser pointers 35A and 35B, one laser pointer may be supported on a moving stage so as to be able to translate; by translating this one laser pointer and projecting markers onto the entire screen 2 at both ends of a distance d whose relative positional relationship is known, the correction data can be calculated in the same manner.
  • FIGS. 30 and 31 show a tenth embodiment of the present invention.
  • FIG. 30 is a diagram showing the overall configuration of a multi-projection system
  • FIG. 31 is a perspective view showing an observation scope used in the present embodiment.
  • In this embodiment, observation position detection sensors 45 are provided at a plurality of positions on the screen 2, the viewpoint position of the observer 46 is detected by the observation position detection sensors 45, and the detected position is used as the observation position in the observation position setting unit 14 of the correction data calculation unit 4.
  • Thereby, distortion is automatically corrected at the observation position based on the set observation position, and an image without distortion is displayed on the screen 2.
  • To this end, the observer 46 wears observation glasses 48 equipped with an infrared LED 47 as shown in FIG. 31, for example, and an infrared detection sensor or the like is used as the observation position detection sensor 45.
  • The observation position detection sensor 45 detects the infrared rays from the infrared LED 47, thereby detecting the viewpoint position of the observer 46.
  • The infrared rays emitted from the infrared LED 47 have directivity in the direction of the line of sight of the observer 46.
  • In this case, the shape of the screen 2 and the projection positional relationship between the projectors 1A and 1B may be calculated in advance using the cameras according to any one of the first to ninth embodiments, so that distortion correction at an arbitrary observation position can always be performed.
  • In this way, the observation position can be detected in real time following the movement of the observer 46 and the image automatically corrected to be free of distortion, so that even in such a display system, an image can always be observed without distortion due to the screen shape.
  • the present invention is not limited to the above-described embodiment, but can be variously modified or changed.
  • For example, in the above embodiments, an image is projected and displayed on the hemispherical dome screen 2, but a curved screen such as an arch screen, or a spherical screen in which all 360-degree directions are covered by the screen, may also be used.
  • The present invention can also be applied effectively when an image is projected and displayed on a flat screen, whether by front projection onto a flat screen 2a as shown in Fig. 32 (a), or by rear projection onto the flat screen 2a as shown in Fig. 32 (b).
  • the number of projectors is not limited to two, and the present invention can be applied to a case where there are three or more projectors.
  • As described above, according to the present invention, the three-dimensional position of each point of the image projected on the screen, that is, the position and shape of the screen, is estimated, and the displacement and distortion of the projected image as well as the color shift are corrected; therefore, an image can be displayed well even if the shape of the screen is not known.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Projection Apparatus (AREA)
  • Stereoscopic And Panoramic Photography (AREA)
  • Transforming Electric Information Into Light Information (AREA)

Abstract

A multiprojection system for displaying a single large image by pasting together images projected on a screen (2) by image projectors (1A, 1B). The multiprojection system comprises image acquiring means (3A, 3B) for capturing the images projected on the screen (2) by the image projectors (1A, 1B) from different positions whose relative positional relationship is known and for acquiring parallax image data; image correction data calculating means (4) for estimating the three-dimensional position of each point of the image projected on the screen (2) on the basis of the acquired parallax image data and the information on the relative positional relationship, and for calculating correction data for correcting the images input to the image projectors (1A, 1B); and image correcting means (5) for correcting the images input to the image projectors (1A, 1B) according to the calculated correction data.

Description

明 細 書  Specification
マノレチプロジェクシヨンシステム  Manorechi Projection System
技術分野  Technical field
[0001] 本発明は、複数台のプロジェクタ等の画像投射装置によりスクリーン上に投射され た画像を貼りあわせて一つの大きな画像を形成するマルチプロジェクシヨンシステム 、特にスクリーンに投射した画像の位置ズレゃ歪み、さらには色ズレをデジタルカメラ により検出して自動的に補正するようにしたマルチプロジェクシヨンシステムに関する ものである。  The present invention relates to a multi-projection system for forming one large image by pasting images projected on a screen by an image projection device such as a plurality of projectors, and in particular, a position shift of an image projected on a screen. The present invention relates to a multi-projection system in which distortion and color shift are detected by a digital camera and automatically corrected.
背景技術  Background art
[0002] 近年、博物館 ·展示会等におけるショールーム、シアターやプラネタリウム、さらには VRシステム等において、大画面 '高精細の画像表示システムを構築するために、複 数台のプロジェクタによりスクリーン上に画像を貼り合わせて大画面高精細な画像表 示を実現するマルチプロジェクシヨンシステムが適用されている。  [0002] In recent years, in projectors, theaters and planetariums, and VR systems in museums and exhibitions, in order to construct large-screen, high-definition image display systems, multiple projectors are used to display images on the screen. A multi-projection system that realizes high-resolution image display on a large screen by pasting together is applied.
[0003] このようなマルチプロジェクシヨンシステムにおいては、個々のプロジェクタによる画 像の位置ずれや色ずれを調整してスクリーン上にきれいに貼り合わせることが重要で あることから、プロジェクタによりスクリーンに投影された画像の位置ずれや歪み、さら には色ずれを、デジタルカメラにより撮影した画像から自動的に検出して補正するよ うにしている。  [0003] In such a multi-projection system, it is important to adjust the positional shift and color shift of the images by the individual projectors and to bond the images neatly on the screen. The system automatically detects and corrects image misregistration, distortion, and color misregistration from images captured by digital cameras.
[0004] その従来の画像補正方法として、例えば、ドームやアーチ型といった曲面形状をも つスクリーンを用いた場合においても、スクリーン形状による曲面的な歪みを、画像の 円筒変換や球面変換を用いることで、所定の観察位置において歪みのない画像に 補正する方法が知られている (例えば、特許文献 1参照)。  [0004] As a conventional image correction method, for example, even when a screen having a curved surface such as a dome or an arch is used, a curved distortion due to the screen shape can be corrected by using a cylindrical transformation or a spherical transformation of an image. A method of correcting an image without distortion at a predetermined observation position is known (for example, see Patent Document 1).
[0005] また、所望の観察位置にマーカー投射器を配置してスクリーン上にマーカーを投影 し、投影されたマーカーを基準位置として画像補正を行うようにしたものも知られて ヽ る (例えば、特許文献 2参照)。  [0005] Further, there is also known an arrangement in which a marker projector is arranged at a desired observation position, a marker is projected on a screen, and image correction is performed using the projected marker as a reference position (for example, Patent Document 2).
[0006] (特許文献 1)特開 2002-72359公報  (Patent Document 1) Japanese Patent Application Laid-Open No. 2002-72359
(特許文献 2)特開 2003— 18503公報 [0007] し力しながら、上記特許文献 1に開示の画像補正方法にあっては、デジタルカメラ で撮影した画像をスクリーン形状に合わせて円筒変換や球面変換するため、歪みを 正確に補正するためには、スクリーンの立体形状が予め既知であり、さらには、観察 位置と同 Cf立置にデジタルカメラを設置しなければならないと言う制約がある。 (Patent Document 2) JP 2003-18503 [0007] In the image correction method disclosed in Patent Document 1 described above, an image taken by a digital camera is converted into a cylinder or a sphere according to the screen shape, so that distortion is accurately corrected. However, there is a restriction that the three-dimensional shape of the screen is known in advance, and that the digital camera must be installed at the same Cf position as the observation position.
[0008] これに対して、上記特許文献 2に開示の画像補正方法では、マーカー投射器から スクリーン上に投影されたマーカーを基準位置として画像補正を行うので、スクリーン の形状が既知でなぐさらにはデジタルカメラが観察位置に設置されてなくても、観察 位置にお 、て歪みのな 、画像に補正することが可能である。  [0008] On the other hand, in the image correction method disclosed in Patent Document 2, image correction is performed using a marker projected from the marker projector onto the screen as a reference position, so that the shape of the screen is not known. Even if the digital camera is not installed at the observation position, it is possible to correct the image without distortion at the observation position.
[0009] しかし、他方ではマーカー投射器を観察位置に置くことが必須条件であるため、シ ステム構成上どうしても観察位置に機材が置けない場合には、歪みを正確に補正す ることができなくなる。また、観察位置が任意に変化した場合には、その都度、観察位 置にマーカー投射器を設置してスクリーン上にマーカーを投射する必要があるため、 観察位置の変化に対してリアルタイムに画像歪みを補正できないと言う問題がある。  However, on the other hand, it is an essential condition to place the marker projector at the observation position. Therefore, if the equipment cannot be placed at the observation position due to the system configuration, the distortion cannot be accurately corrected. . In addition, when the observation position changes arbitrarily, it is necessary to install a marker projector at the observation position and project the marker on the screen each time. There is a problem that cannot be corrected.
[0010] Accordingly, an object of the present invention, made in view of the above points, is to provide a multi-projection system capable of correcting positional shifts and distortion, as well as color shifts, of the projected image in real time at an arbitrary observation position even when the shape of the screen is not known.
Disclosure of the Invention
[0011] To achieve the above object, the invention according to claim 1 is a multi-projection system that forms one large image by stitching together images projected on a screen by a plurality of image projection devices, the system comprising:
image acquisition means for acquiring parallax image data by photographing the image projected on the screen by the image projection devices from different positions whose relative positional relationship is known;
image correction data calculation means for estimating the three-dimensional position of each point of the image projected on the screen on the basis of the parallax image data acquired by the image acquisition means and the information on the relative positional relationship, and for calculating correction data for correcting the images input to the image projection devices; and
image correction means for correcting the images input to the image projection devices on the basis of the correction data calculated by the image correction data calculation means.
[0012] The invention according to claim 2 is the multi-projection system according to claim 1, wherein the image acquisition means has two digital cameras installed at different positions whose relative positional relationship is known, and acquires the parallax image data by photographing the image projected on the screen with these two digital cameras.
[0013] The invention according to claim 3 is the multi-projection system according to claim 1, wherein the image acquisition means has one digital camera and a moving mechanism for translating the digital camera, and acquires the parallax image data by translating the digital camera with the moving mechanism and photographing the image projected on the screen from different positions whose relative positional relationship is known.
[0014] The invention according to claim 4 is a multi-projection system that forms one large image by stitching together images projected on a screen by a plurality of image projection devices, the system comprising:
marker projection means for projecting markers onto the screen at predetermined angles;
image acquisition means for acquiring image data by photographing the image projected on the screen by the image projection devices and the markers projected on the screen by the marker projection means;
support means for fixing the relative positional relationship between the marker projection means and the image acquisition means;
image correction data calculation means for estimating the three-dimensional position of each point of the image projected on the screen on the basis of the image data acquired by the image acquisition means and the information on the relative positional relationship, and for calculating correction data for correcting the images input to the image projection devices; and
image correction means for correcting the images input to the image projection devices on the basis of the correction data calculated by the image correction data calculation means.
[0015] The invention according to claim 5 is a multi-projection system that forms one large image by stitching together images projected on a screen by a plurality of image projection devices, the system comprising:
a plurality of image acquisition means for acquiring image data by photographing the image projected on the screen by the image projection devices from locations whose relative positional relationship to the image projection devices is known;
image correction data calculation means for estimating the three-dimensional position of each point of the image projected on the screen on the basis of the image data acquired by the plurality of image acquisition means and the information on the relative positional relationship, and for calculating correction data for correcting the images input to the image projection devices; and
image correction means for correcting the images input to the image projection devices on the basis of the correction data calculated by the image correction data calculation means.
[0016] The invention according to claim 6 is a multi-projection system that forms one large image by stitching together images projected on a screen by a plurality of image projection devices, the system comprising:
marker projection means for projecting markers onto the screen from different positions whose relative positional relationship is known;
image acquisition means for acquiring image data by photographing the image projected on the screen by the image projection devices and the markers projected on the screen by the marker projection means;
image correction data calculation means for estimating the three-dimensional position of each point of the image projected on the screen on the basis of the image data acquired by the image acquisition means and the information on the relative positional relationship, and for calculating correction data for correcting the images input to the image projection devices; and
image correction means for correcting the images input to the image projection devices on the basis of the correction data calculated by the image correction data calculation means.
[0017] The invention according to claim 7 is the multi-projection system according to claim 6, wherein the marker projection means has two laser pointers installed at different positions whose relative positional relationship is known.
[0018] The invention according to claim 8 is the multi-projection system according to claim 6, wherein the marker projection means has one laser pointer and a moving mechanism for translating the laser pointer, and projects the markers onto the screen from different positions whose relative positional relationship is known by translating the laser pointer with the moving mechanism.
[0019] The invention according to claim 9 is the multi-projection system according to any one of claims 1 to 8, wherein the correction data calculation means calculates the correction data on the basis of position information of an observer relative to the screen.
[0020] The invention according to claim 10 is the multi-projection system according to claim 9, further comprising a viewpoint detection sensor for detecting the viewpoint position of the observer to obtain the position information of the observer.
Brief Description of Drawings
[0021] FIG. 1 is a diagram showing the overall configuration of a multi-projection system according to a first embodiment of the present invention.
FIG. 2 is a diagram showing test pattern images input to the projectors shown in FIG. 1.
FIG. 3 is a block diagram showing the configuration of the correction data calculation unit shown in FIG. 1.
FIG. 4 is a block diagram showing the configuration of the image conversion unit shown in FIG. 1.
FIG. 5 is a flowchart for explaining the geometric correction data calculation process in the first embodiment.
FIG. 6 is a diagram showing marker projection by a projector shown in FIG. 1 and marker imaging by two cameras.
FIG. 7 is a conceptual diagram showing the screen three-dimensional shape estimation method in the first embodiment.
FIG. 8 is a conceptual diagram showing a method of estimating the screen shape with a small number of markers.
FIG. 9 is a diagram showing how a marker projection image centered on the observation viewpoint is created from the estimated screen shape.
FIG. 10 is a diagram showing how a marker projection image is created on a wide-viewing-angle projection plane centered on the observation viewpoint from the estimated screen shape.
FIG. 11 is a diagram showing the overall configuration of a multi-projection system according to a second embodiment of the present invention.
FIG. 12 is a block diagram showing the configuration of the correction data calculation unit shown in FIG. 11.
FIG. 13 is a diagram showing the overall configuration of a multi-projection system according to a third embodiment of the present invention.
FIG. 14 is a block diagram showing the configuration of the correction data calculation unit shown in FIG. 13.
FIG. 15 is a flowchart for explaining the geometric correction data calculation process in the third embodiment.
FIG. 16 is a conceptual diagram showing the screen shape estimation method in the third embodiment.
FIG. 17 is a diagram showing the overall configuration of a multi-projection system according to a fourth embodiment of the present invention.
FIG. 18 is a block diagram showing the configuration of the correction data calculation unit shown in FIG. 17.
FIG. 19 is a diagram showing the overall configuration of a multi-projection system according to a fifth embodiment of the present invention.
FIG. 20 is a block diagram showing the configuration of the correction data calculation unit shown in FIG. 19.
FIG. 21 is a diagram showing the overall configuration of a multi-projection system according to a sixth embodiment of the present invention.
FIG. 22 is a diagram for explaining an example of the screen shape calculation method in the sixth embodiment.
FIG. 23 is a diagram showing the overall configuration of a multi-projection system according to a seventh embodiment of the present invention.
FIG. 24 is a diagram showing the overall configuration of a multi-projection system according to an eighth embodiment.
FIG. 25 is a diagram showing the overall configuration of a multi-projection system according to a ninth embodiment.
FIG. 26 is a diagram for explaining the screen shape estimation process in the ninth embodiment.
FIG. 27 is a flowchart for explaining the same screen shape estimation process.
FIG. 28 is a flowchart for explaining the geometric correction data calculation process in the ninth embodiment.
FIG. 29 is a block diagram showing the configuration of the correction data calculation unit shown in FIG. 25.
FIG. 30 is a diagram showing the overall configuration of a multi-projection system according to a tenth embodiment of the present invention.
FIG. 31 is a perspective view showing observation glasses used in the tenth embodiment.
FIG. 32 is a diagram showing a modification of the present invention.
FIG. 33 is a diagram showing details of each marker shown in FIG. 2(a).
BEST MODE FOR CARRYING OUT THE INVENTION
[0022] Embodiments of the multi-projection system according to the present invention will now be described with reference to the drawings.
[0023] (First Embodiment)
FIGS. 1 to 10 and FIG. 33 show the first embodiment. FIG. 1 shows the overall configuration of the multi-projection system; FIGS. 2(a) and 2(b) show test pattern images input to the projectors; FIG. 3 is a block diagram of the correction data calculation unit shown in FIG. 1; FIG. 4 is a block diagram of the image conversion unit shown in FIG. 1; FIG. 5 is a flowchart for explaining the geometric correction data calculation process; FIG. 6 shows marker projection by a projector and marker imaging by two cameras; FIG. 7 is a conceptual diagram of the method of estimating the three-dimensional shape of the screen; FIG. 8 is a conceptual diagram of a method of estimating the screen shape with a small number of markers; FIG. 9 shows how a marker projection image centered on the observation viewpoint is created from the estimated screen shape (marker positions); FIGS. 10(a) and 10(b) show how a marker projection image is created on a wide-viewing-angle projection plane centered on the observation viewpoint from the estimated screen shape (marker positions); and FIGS. 33(a) and 33(b) show details of each marker shown in FIG. 2(a).
[0024] In the present embodiment, as shown in FIG. 1, projector 1A and projector 1B, each of which is an image projection device, project images onto a dome-shaped or arch-shaped screen 2 so that the images partially overlap, and the projected images are stitched together to display a single large image on the screen 2. As the projectors serving as image projection devices, transmissive liquid crystal projectors, reflective liquid crystal projectors, DLP projectors using a DMD (digital micromirror device), CRT projection tube displays, laser scan projectors, and the like can be used. Here, the images projected by projector 1A and projector 1B will not, as they are, join cleanly, because of differences in the color characteristics of projectors 1A and 1B and deviations in their installation positions.
[0025] Therefore, in the present embodiment, test pattern image data are first input to projector 1A and projector 1B to project test pattern images onto the screen 2, and the projected test pattern images are photographed by cameras (digital cameras) 3A and 3B, which serve as image acquisition means, to obtain captured images of the test patterns. The test pattern images projected here include a marker image with markers arranged regularly on the screen as shown in FIG. 2(a), and color signal images of different signal levels for each of the colors R (red), G (green), and B (blue), as shown in FIG. 2(b), for obtaining the color characteristics of projectors 1A and 1B. Each marker shown in FIG. 2(a) is point-symmetric in XY space coordinates as shown in FIG. 33(a), and its signal value is set so as to increase toward the marker center as shown in FIG. 33(b). With such markers, the position of the marker center can always be detected with constant accuracy even if the marker image is significantly tilted or rotated by the influence of the screen shape. The same effect is obtained when the camera is tilted or rotated. As the digital cameras serving as image acquisition means, monochrome or multiband cameras can be used, and their image sensors may be of the CCD or CMOS type.
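Because each marker is point-symmetric with its intensity peaking at the center, its sub-pixel center can be recovered as an intensity-weighted centroid, which is insensitive to rotation and, to first order, to tilt of the projected marker. The following is a minimal sketch of such a detector, not the patent's own implementation; the grayscale patch input and the background threshold value are illustrative assumptions.

```python
import numpy as np

def marker_center(patch: np.ndarray, threshold: float = 0.2) -> tuple[float, float]:
    """Estimate the sub-pixel center of one marker from an image patch.

    Because the marker is point-symmetric with intensity peaking at its
    center, the intensity-weighted centroid stays at the true center even
    when the projected marker is tilted or rotated by the screen shape.
    """
    w = patch.astype(float) - patch.min()
    w[w < threshold * w.max()] = 0.0          # suppress background
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total
```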
[0026] The acquired test pattern images are sent to the correction data calculation unit 4, which calculates, on the basis of the captured test pattern images, correction data for correcting the images input to projectors 1A and 1B. The calculated data are sent to the image conversion unit 5, which corrects the content image data input to projectors 1A and 1B using the correction data calculated by the correction data calculation unit 4.
[0027] By inputting the content image data corrected as described above to projector 1A and projector 1B, a single image that is cleanly joined without visible seams is displayed on the screen 2.
[0028] Here, camera 3A and camera 3B are each fixed to a support 6, separated by a predetermined distance d as shown in FIG. 1. Furthermore, both camera 3A and camera 3B use wide-angle lenses so that the entire range of the screen can be photographed, or fisheye lenses to obtain an even wider angle of view.
[0029] By photographing the test pattern image projected on the screen 2 with cameras 3A and 3B from different viewpoint positions in this way, parallax images of the test pattern are obtained. Moreover, since the relative positional relationship of the viewpoints is known, the three-dimensional position of each point of the image projected on the screen 2 can be estimated from these parallax images.
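The estimation principle here is two-view triangulation: each camera observation of a marker defines a viewing ray, and with the baseline d known the two rays for the same marker can be intersected. Below is a minimal sketch under simplifying assumptions (both rays already expressed in a common coordinate frame, rays not parallel); the midpoint of the shortest segment between the rays is returned because, with measurement noise, the rays rarely intersect exactly.

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Intersect two viewing rays (origin o, unit direction d) approximately.

    Solves for the points p1 = o1 + t1*d1 and p2 = o2 + t2*d2 that are
    closest to each other, and returns their midpoint as the estimated
    3D marker position on the screen surface.
    """
    o1, d1, o2, d2 = map(np.asarray, (o1, d1, o2, d2))
    # Normal equations for (t1, t2) minimizing |p1 - p2|^2
    a = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    t1, t2 = np.linalg.solve(a, b)
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```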
[0030] Next, the detailed blocks of the correction data calculation unit 4 in the present embodiment will be described with reference to FIG. 3.
[0031] The correction data calculation unit 4 in the present embodiment has camera image data storage units 11A and 11B, a marker position detection storage unit 12, a screen shape and camera position estimation storage unit 13, an observation position setting unit 14, a marker position coordinate conversion unit 15, a projector geometric correction data calculation unit 16, a projector gamma correction data calculation unit 17, and a projector color correction matrix calculation unit 18.
[0032] The camera image data storage units 11A and 11B store the images of the various test patterns photographed by camera 3A and camera 3B.
[0033] The marker position detection storage unit 12 receives, among the test patterns photographed by cameras 3A and 3B, the marker images, detects the positions on the captured images of the markers projected by each projector, and stores the position information.
[0034] The screen shape and camera position estimation storage unit 13 estimates the three-dimensional position of each marker from the position information of each marker on the captured images corresponding to cameras 3A and 3B, and calculates the three-dimensional shape of the screen 2 and the camera positions relative to the screen 2.
[0035] The observation position setting unit 14 stores the three-dimensional position information of a preset observation position or an observation position arbitrarily set by the user, and sends it to the marker position coordinate conversion unit 15 in the subsequent stage.
[0036] The marker position coordinate conversion unit 15 calculates the two-dimensional coordinate positions that the markers take when projected onto a two-dimensional plane with the observation position as the viewpoint, on the basis of the three-dimensional marker positions calculated in the screen shape and camera position estimation storage unit 13 and the three-dimensional position information of the observation position set in the observation position setting unit 14.
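Conceptually, this conversion is a perspective projection of the estimated three-dimensional marker positions onto an image plane attached to the observation viewpoint. A minimal sketch follows, in which the viewing direction, up vector, and focal length of the virtual viewpoint are illustrative assumptions not fixed by the text.

```python
import numpy as np

def project_to_view(points_3d, eye, forward, up, focal=1.0):
    """Project 3D marker positions onto a 2D plane seen from the viewpoint.

    points_3d : (N, 3) estimated marker positions on the screen
    eye       : observation position; forward/up : viewing orientation
    Returns (N, 2) marker coordinates on the virtual viewpoint image plane.
    """
    f = forward / np.linalg.norm(forward)
    r = np.cross(f, up); r /= np.linalg.norm(r)
    u = np.cross(r, f)
    rel = np.asarray(points_3d) - eye          # viewpoint-centered coordinates
    x, y, z = rel @ r, rel @ u, rel @ f        # z = depth along view axis
    return np.stack([focal * x / z, focal * y / z], axis=1)
```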
[0037] The projector geometric correction data calculation unit 16 uses the two-dimensional marker coordinates, created by the marker position coordinate conversion unit 15 with the observation position as the viewpoint, to derive the geometric coordinate relationship between the projection plane centered on the observation position and the image planes of projectors 1A and 1B. On the basis of the derived geometric coordinate relationship, it calculates geometric correction data for correcting the positional shift and distortion of the projector images, and outputs the calculated geometric correction data to the image conversion unit 5 in the subsequent stage.
[0038] The projector gamma correction data calculation unit 17 calculates, on the basis of the various color signal images photographed by camera 3A (or camera 3B), gamma correction data for correcting color unevenness and gamma characteristic unevenness within the screens of projectors 1A and 1B, and outputs the calculated gamma correction data to the image conversion unit 5 in the subsequent stage.
[0039] The projector color correction matrix calculation unit 18 calculates, on the basis of the various color signal images photographed by camera 3A (or camera 3B), a color correction matrix for correcting the color differences between projectors 1A and 1B, and outputs the calculated color correction matrix to the image conversion unit 5 in the subsequent stage.
[0040] Next, the detailed blocks of the image conversion unit 5 in the present embodiment will be described with reference to FIG. 4.
[0041] The image conversion unit 5 is broadly divided into a correction data storage unit 21 and a correction data application unit 22. The correction data storage unit 21 is provided with a geometric correction data storage unit 23, a gamma correction data storage unit 24, and a color correction matrix storage unit 25; the geometric correction data calculated by the projector geometric correction data calculation unit 16 of the correction data calculation unit 4 are stored in the geometric correction data storage unit 23, the projector gamma correction data calculated by the projector gamma correction data calculation unit 17 in the gamma correction data storage unit 24, and the color correction matrix calculated by the projector color correction matrix calculation unit 18 in the color correction matrix storage unit 25.
[0042] The correction data application unit 22 is provided with a gamma conversion unit 26, a geometric correction data application unit 27, a color correction matrix application unit 28, a gamma correction unit 29, and a gamma correction data application unit 30. In the correction data application unit 22, the gamma conversion unit 26 first corrects the nonlinear gamma characteristic of the input image (content image data), after which the geometric correction data application unit 27 geometrically corrects the input image using the per-projector geometric correction data supplied from the geometric correction data storage unit 23. The color correction matrix application unit 28 then performs a color matrix conversion of the RGB signals of the input image using the per-projector color correction matrix supplied from the color correction matrix storage unit 25. Next, the gamma correction unit 29 applies a gamma correction that is uniform over the entire projector screen, after which the gamma correction data application unit 30 corrects the deviation (difference) from that uniform gamma for each individual pixel of the projector screen, on the basis of the gamma correction data stored in the gamma correction data storage unit 24. By having the gamma correction unit 29 perform a roughly uniform gamma correction over the whole screen in this way, and holding the per-pixel gamma correction data used by the gamma correction data application unit 30 as differences, the data volume and memory required for the correction data can be compressed, which allows a cost reduction.
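The order of these operations, and the memory saving from storing the per-pixel gamma data as differences, can be made concrete with a short sketch. This is a schematic rendering only, not the actual implementation; the data layouts (an integer warp map, a 3x3 matrix, one global gamma value, and a small per-pixel gamma delta) and the way the delta is applied are assumptions chosen to mirror the description.

```python
import numpy as np

def convert_frame(img, warp_map, color_mat, gamma_in, gamma_out, gamma_delta):
    """Apply the correction pipeline of the correction data application unit.

    img         : (H, W, 3) content frame, values in [0, 1]
    warp_map    : (H, W, 2) integer source coordinates (geometric correction)
    color_mat   : (3, 3) per-projector color correction matrix
    gamma_in    : gamma of the input signal (linearization step)
    gamma_out   : uniform gamma applied over the whole projector screen
    gamma_delta : (H, W) per-pixel deviation stored as a small difference,
                  which keeps the stored correction data compact
    """
    linear = img ** gamma_in                            # undo input gamma
    warped = linear[warp_map[..., 1], warp_map[..., 0]] # geometric correction
    colored = np.clip(warped @ color_mat.T, 0.0, 1.0)   # color matrix
    uniform = colored ** (1.0 / gamma_out)              # screen-wide gamma
    return uniform ** (1.0 / (1.0 + gamma_delta[..., None]))  # per-pixel delta
```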
[0043] The content image data corrected by the image conversion unit 5 as described above are output to projector 1A and projector 1B in the subsequent stage. Test pattern images are output to projector 1A and projector 1B as they are, without correction.
[0044] Next, the geometric correction data calculation process in the present embodiment will be described with reference to FIG. 5.
[0045] First, the image conversion unit 5 is set to a state in which no correction is applied to the input image (through state) (step S1), and the marker image shown in FIG. 2(a) among the test pattern images is input and displayed by projectors 1A and 1B (step S2). The images projected onto the screen 2 by projectors 1A and 1B are then photographed by cameras 3A and 3B (steps S3 and S4), and the photographed parallax images are stored in the camera image data storage units 11A and 11B of the correction data calculation unit 4. FIG. 6 illustrates this, showing marker projection by one projector 1B and marker imaging by the two cameras 3A and 3B.
[0046] Next, on the basis of the photographed parallax images, marker positions are detected in the marker position detection storage unit 12. The screen shape and camera position estimation storage unit 13 then detects the positions of the marker points (corresponding points) on the captured image planes that correspond to the same marker in the two parallax images (step S5), and from the three-dimensional positions of the detected markers estimates the overall screen shape by interpolation, together with the camera positions (step S6). FIG. 7 illustrates this, conceptually showing the method of estimating the three-dimensional shape of the screen 2 from the marker images photographed by the two cameras 3A and 3B. As shown in FIG. 8, when the number of marker points is rather small, giving rough prior information about the screen shape makes it possible to estimate the screen shape with accuracy equivalent to estimation from many marker points.
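One way to realize the interpolation of step S6 is to triangulate each corresponding marker pair into a sparse three-dimensional point set and interpolate a continuous surface between those points. A minimal sketch follows, assuming the screen depth can be treated as a function of the viewing direction (a simplification that is reasonable for dome- or arch-shaped screens observed from inside).

```python
import numpy as np
from scipy.interpolate import griddata

def interpolate_screen(marker_xy, marker_depth, query_xy):
    """Interpolate a dense screen surface from sparse triangulated markers.

    marker_xy    : (N, 2) marker directions (e.g., angular coordinates)
    marker_depth : (N,)   triangulated distances to the screen
    query_xy     : (M, 2) directions at which the surface is evaluated
    """
    dense = griddata(marker_xy, marker_depth, query_xy, method="cubic")
    # Fall back to nearest-neighbor values outside the convex hull
    holes = np.isnan(dense)
    if holes.any():
        dense[holes] = griddata(marker_xy, marker_depth,
                                query_xy[holes], method="nearest")
    return dense
```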
[0047] Next, the user sets the position from which the projected image will actually be observed (step S7). This observation position need not be specified by the user; it may be determined in advance by default, for example as the dome center position. Then, on the basis of the estimated three-dimensional marker positions and the set observation position, the marker position coordinate conversion unit 15 calculates the marker position coordinates on the two-dimensional projection plane centered on the viewpoint at the observation position (step S8). FIG. 9 illustrates this. At this time, the angle of view of the projection plane is made the same as the angle of view of the content image input to the multi-projection system. For example, if the content image is a wide-field image such as one photographed with a fisheye lens, the marker position coordinates are calculated on a two-dimensional projection plane represented in a wide-viewing-angle coordinate system (for example, 110 to 360 degrees), as shown in FIG. 10(a). Also, as shown in FIG. 10(b), when the observation position changes arbitrarily, the projection points at that observation position are calculated.
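For such a wide-viewing-angle projection plane, an ordinary perspective projection cannot represent fields of view at or beyond 180 degrees, so an angular mapping is needed. The sketch below uses the equidistant fisheye model (image radius proportional to the angle off the viewing axis); the text does not prescribe a particular model, so this choice is an assumption for illustration.

```python
import numpy as np

def project_equidistant(points_3d, eye, forward, up, fov_deg=360.0):
    """Map 3D marker positions to a wide-angle (fisheye-like) image plane.

    Uses the equidistant model r = theta / (fov/2): the distance from the
    image center is proportional to the angle between the viewing axis and
    the direction to the marker, so fields of view beyond 180 degrees can
    be represented.
    """
    f = forward / np.linalg.norm(forward)
    r_axis = np.cross(f, up); r_axis /= np.linalg.norm(r_axis)
    u = np.cross(r_axis, f)
    d = np.asarray(points_3d) - eye
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    theta = np.arccos(np.clip(d @ f, -1.0, 1.0))        # angle off axis
    phi = np.arctan2(d @ u, d @ r_axis)                 # azimuth in plane
    r = theta / np.radians(fov_deg / 2.0)               # normalized radius
    return np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)
```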
[0048] Next, on the basis of the calculated marker position coordinates, the projector geometric correction data calculation unit 16 calculates geometric correction data for each projector (step S9). Specifically, from the correspondence between the marker position coordinates on the viewpoint image plane at the observation position and the marker position coordinates in the test pattern images input to projectors 1A and 1B, the geometric relationship between the coordinates on the viewpoint image plane and the coordinates on the projector image planes is obtained, and on the basis of that geometric relationship, geometric correction data for correcting the input image are calculated such that the image is output without positional shift or distortion on the viewpoint image plane. The calculated per-projector geometric correction data are then output to the image conversion unit 5 (step S10), and the geometric correction data calculation process ends.
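Since the correspondence is known only at the marker points, a dense inverse mapping has to be interpolated from them: for every projector pixel, one looks up the content (viewpoint-plane) coordinate to be displayed there. Below is a minimal sketch of building such a warp map by scattered-data interpolation; the cubic interpolant is an illustrative choice, and pixels outside the marker grid's convex hull are left as NaN for simplicity.

```python
import numpy as np
from scipy.interpolate import griddata

def build_warp_map(marker_proj_xy, marker_view_xy, proj_w, proj_h):
    """Derive per-pixel geometric correction data from marker correspondences.

    marker_proj_xy : (N, 2) marker coordinates in the projector test pattern
    marker_view_xy : (N, 2) the same markers on the observation-viewpoint plane
    Returns a (proj_h, proj_w, 2) map giving, for each projector pixel, the
    content-image coordinate to sample, so that the projected result appears
    undistorted from the observation position.
    """
    gy, gx = np.mgrid[0:proj_h, 0:proj_w]
    pix = np.stack([gx.ravel(), gy.ravel()], axis=1).astype(float)
    map_x = griddata(marker_proj_xy, marker_view_xy[:, 0], pix, method="cubic")
    map_y = griddata(marker_proj_xy, marker_view_xy[:, 1], pix, method="cubic")
    return np.stack([map_x, map_y], axis=1).reshape(proj_h, proj_w, 2)
```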
[0049] If the content image is geometrically corrected in the image conversion unit 5 using the correction data calculated as described above, an image free of distortion at the set observation position can be displayed.
[0050] As for the gamma correction data and the color correction matrix, the various color signal images shown in FIG. 2(b) are displayed by the projectors as test pattern images and photographed by cameras 3A and 3B in the same way as the marker images. From the photographed image data, the unevenness of the color characteristics and of the gamma characteristics of projectors 1A and 1B is calculated, and correction data for making these uniform over the entire screen are obtained.
[0051] Since there is no need to photograph parallax images when calculating the gamma correction data and the color correction matrix, it suffices to have the various color signal images photographed by either camera 3A or camera 3B. The color signal images are not limited to the single-color R (red), G (green), and B (blue) signal data shown in FIG. 2(b); color signal data of mixed colors from white to black (gray scale) may also be used. Moreover, if filters with different spectral characteristics are inserted into camera 3A and camera 3B and the images are photographed simultaneously, the blue-component color signal image can be acquired from camera 3A and the red-component color signal image from camera 3B at the same time, for example. Sharing the acquisition of the color signal images between cameras 3A and 3B in this way shortens the photographing time.
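A common formulation for such a color correction matrix is to measure both projectors' responses to the same set of color signal inputs and solve, in the least-squares sense, for the 3x3 matrix mapping one projector's measured colors onto the reference. The sketch below shows this formulation; it is one plausible realization under the assumption of linearized (gamma-corrected) measurements, not the patent's specific procedure.

```python
import numpy as np

def fit_color_matrix(measured_rgb, target_rgb):
    """Fit a 3x3 color correction matrix M with target ~= measured @ M.T.

    measured_rgb : (N, 3) colors of one projector measured by the camera
                   for N test signal levels (linearized values)
    target_rgb   : (N, 3) corresponding colors of the reference projector
    """
    m, _, _, _ = np.linalg.lstsq(measured_rgb, target_rgb, rcond=None)
    return m.T  # apply as corrected = M @ rgb for a column vector rgb
```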
[0052] (Second Embodiment)
FIGS. 11 and 12 show the second embodiment: FIG. 11 shows the overall configuration of the multi-projection system, and FIG. 12 is a block diagram showing the configuration of the correction data calculation unit shown in FIG. 11.
[0053] In the present embodiment, in place of the two cameras 3A and 3B used in the first embodiment, a single camera (digital camera) 3 is supported so as to be translatable on a moving stage 31 serving as a moving mechanism. By translating the camera 3 and photographing sequentially at the two ends of a distance d whose relative positions are known, parallax images are obtained in the same manner as in the first embodiment.
[0054] For this purpose, the correction data calculation unit 4 is provided with a switching switch 32 for supplying the images photographed by the camera 3 selectively to the camera image data storage units 11A and 11B. When the camera 3 is positioned at the left end of the distance d for photographing, the captured image is stored via the switching switch 32 in the camera image data storage unit 11A; when the camera 3 is positioned at the right end of the distance d for photographing, the captured image is stored via the switching switch 32 in the camera image data storage unit 11B. The rest of the configuration and operation is the same as in the first embodiment.
[0055] According to the present embodiment, compared with the first embodiment, the photographing time becomes longer by the amount needed to photograph sequentially while moving the camera 3, but the same correction can be realized with less equipment, allowing a cost reduction.
[0056] (Third Embodiment)
FIGS. 13 to 16 show the third embodiment: FIG. 13 shows the overall configuration of the multi-projection system, FIG. 14 is a block diagram showing the configuration of the correction data calculation unit shown in FIG. 13, FIG. 15 is a flowchart for explaining the geometric correction data calculation process, and FIG. 16 is a conceptual diagram showing the method of estimating the screen shape (marker positions).
[0057] In the present embodiment, as shown in FIG. 13, the screen shape is estimated using, in place of cameras 3A and 3B of the first embodiment, a single camera 3 and a laser pointer 35 serving as marker projection means that projects markers at equal angular intervals over the entire screen. The camera 3 and the laser pointer 35 are each fixed to a support 36 serving as support means, so that their relative positional relationship is fixed.
[0058] As shown in FIG. 14, the correction data calculation unit 4 has, in addition to a camera image data storage unit 11, the same blocks as in FIG. 3: the marker position detection storage unit 12, the screen shape and camera position estimation storage unit 13, the observation position setting unit 14, the marker position coordinate conversion unit 15, the projector geometric correction data calculation unit 16, the projector gamma correction data calculation unit 17, and the projector color correction matrix calculation unit 18. In the present embodiment, however, the functions of the camera image data storage unit 11, the marker position detection storage unit 12, the screen shape and camera position estimation storage unit 13, and the marker position coordinate conversion unit 15 differ from those in the first embodiment; the other functions are the same as in the first embodiment.
[0059] That is, the camera image data storage unit 11 stores the captured image of the markers projected onto the screen 2 by the laser pointer 35, and the captured images of the test patterns (the marker images and color signal images) projected by projectors 1A and 1B. The marker position detection storage unit 12 detects, from the respective captured images, the positions of the markers projected onto the screen 2 by the laser pointer 35 and the positions of the markers projected by projectors 1A and 1B.
[0060] Further, the screen shape and camera position estimation storage unit 13 estimates the screen shape and the camera position from the detected positions of the markers projected by the laser pointer 35. The marker position coordinate conversion unit 15 then converts the position coordinates of the markers projected by projectors 1A and 1B, as detected in the marker position detection storage unit 12, into marker position coordinates as seen from the observation viewpoint, using the estimated screen shape and camera position together with the observation position set in the observation position setting unit 14.
[0061] Based on the marker position coordinates of projectors 1A and 1B converted as described above, the projector geometric correction data calculation unit 16 calculates the geometric correction data in the same manner as in the first embodiment.
[0062] The geometric correction data calculation process according to the present embodiment will now be described with reference to FIG. 15. First, markers are projected onto the screen 2 by the laser pointer 35 (step S11), and the projected marker image is photographed by the camera 3 (step S12) and stored in the camera image data storage unit 11. Next, marker images are projected onto the screen 2 by projectors 1A and 1B (step S13), and the projected marker images are likewise photographed by the camera 3 (step S14) and stored in the camera image data storage unit 11. Then, on the basis of each of the photographed marker images, the marker position detection storage unit 12 detects the positions of the markers projected by the laser pointer 35 and the positions of the markers projected by projectors 1A and 1B (steps S15 and S16).
[0063] Next, on the basis of the detected positions of the markers projected by the laser pointer 35, the screen shape and camera position estimation storage unit 13 estimates the shape of the screen 2 and the position of the camera 3 (step S17). Specifically, as shown in FIG. 16, the three-dimensional position of each marker is calculated from the predetermined projection angle of that marker from the laser pointer 35 and the position of the marker in the captured image, and the screen shape and camera position are estimated from these.
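Geometrically, step S17 is again a two-ray intersection, except that one ray now originates at the laser pointer, whose emission angles are predetermined, while the other is the camera's viewing ray; the offset between the two origins is fixed by the support 36. A minimal sketch follows, reusing the triangulate_midpoint helper from the sketch given after paragraph [0029]; the pan/tilt angle convention is an assumption for illustration.

```python
import numpy as np

def laser_marker_3d(cam_origin, cam_ray, pointer_origin, pan, tilt):
    """Estimate a marker's 3D position from one camera ray and the known
    emission angles (pan, tilt, in radians) of the laser pointer.

    cam_origin and pointer_origin are fixed by the support 36, so their
    relative positional relationship is known by construction.
    """
    laser_ray = np.array([np.cos(tilt) * np.sin(pan),
                          np.sin(tilt),
                          np.cos(tilt) * np.cos(pan)])
    # triangulate_midpoint: the two-ray midpoint helper defined earlier
    return triangulate_midpoint(cam_origin, cam_ray,
                                pointer_origin, laser_ray)
```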
[0064] Next, once the observation position has been set (step S18), the marker position coordinate conversion unit 15 first calculates the coordinate positions of the laser pointer markers on the projection image with the observation position as the viewpoint (step S19). Then, from the marker coordinate positions at the observation position and the marker coordinate positions in the camera image, the coordinate relationship between the observation position and the camera image is obtained (step S20). Using the calculated coordinate relationship, the coordinate positions in the captured image of the markers projected by projectors 1A and 1B are converted into marker coordinate positions of projectors 1A and 1B on the projection plane with the observation position as the viewpoint (step S21).
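Because the laser pointer markers are available both in the camera image and, via step S19, on the observation-viewpoint plane, they supply point correspondences from which the coordinate relationship of step S20 can be interpolated; the projector markers are then pushed through that mapping in step S21. A minimal sketch using scattered-data interpolation follows (on a curved screen the relationship is not a single homography, so a flexible interpolant is assumed here).

```python
import numpy as np
from scipy.interpolate import griddata

def camera_to_view(laser_cam_xy, laser_view_xy, projector_cam_xy):
    """Convert projector marker positions from camera-image coordinates to
    observation-viewpoint plane coordinates.

    laser_cam_xy     : (N, 2) laser markers detected in the camera image
    laser_view_xy    : (N, 2) the same markers on the viewpoint plane (S19)
    projector_cam_xy : (M, 2) projector markers detected in the camera image
    """
    vx = griddata(laser_cam_xy, laser_view_xy[:, 0], projector_cam_xy,
                  method="cubic")
    vy = griddata(laser_cam_xy, laser_view_xy[:, 1], projector_cam_xy,
                  method="cubic")
    return np.stack([vx, vy], axis=1)
```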
[0065] Using the projector marker position information at the observation position converted as described above, the projector geometric correction data calculation unit 16 calculates the geometric correction data for each projector in the same manner as in the first embodiment (step S22), outputs the calculated geometric correction data to the image conversion unit 5 (step S23), and the input image is converted on the basis of the geometric correction data. As a result, projectors 1A and 1B can display the input image without distortion as seen from the observation position.
[0066] (Fourth Embodiment)
FIGS. 17 and 18 show the fourth embodiment of the present invention: FIG. 17 shows the overall configuration of the multi-projection system, and FIG. 18 is a block diagram showing the configuration of the correction data calculation unit shown in FIG. 17.
[0067] In the present embodiment, in addition to cameras 3A and 3B of the first embodiment, which acquire parallax images of the entire screen, cameras (digital cameras) 3C and 3D are provided that photograph the screen 2 divided into small regions. The geometric correction data for correcting the positional shift and distortion of projectors 1A and 1B are calculated from the parallax images photographed by cameras 3A and 3B, as in the first embodiment, while the color correction matrices and gamma correction data of projectors 1A and 1B are calculated from the color signal images photographed by cameras 3C and 3D.
[0068] In general, even if the camera resolution is somewhat low, the positional shift and distortion of the projectors can be determined in the subsequent marker position detection processing with a precision finer than the camera's resolution; color unevenness arising at the level of individual projector pixels, however, is difficult to detect with a precision finer than the camera's resolution.
[0069] Therefore, in the present embodiment, the color signal images for detecting color unevenness are photographed over small regions of the screen 2 by cameras 3C and 3D, so that the color unevenness is corrected with finer precision.
[0070] For this purpose, as shown in FIG. 18, the correction data calculation unit 4 is provided with camera image data storage units 11C and 11D and a projector color correction matrix and gamma correction data calculation unit 19, in addition to the camera image data storage units 11A and 11B, the marker position detection storage unit 12, the screen shape and camera position estimation storage unit 13, the observation position setting unit 14, the marker position coordinate conversion unit 15, and the projector geometric correction data calculation unit 16 shown in FIG. 3.
[0071] The camera image data storage units 11C and 11D store the captured images of the marker image and color signal image test patterns photographed by cameras 3C and 3D. The marker position detection storage unit 12 detects the marker coordinate positions in the marker images photographed by cameras 3A and 3B, and also detects the marker coordinate positions in the marker images photographed by cameras 3C and 3D; the detected marker positions in the images captured by cameras 3C and 3D are supplied to the projector color correction matrix and gamma correction data calculation unit 19.
[0072] The projector color correction matrix and gamma correction data calculation unit 19 uses the marker positions in the images captured by cameras 3C and 3D, as detected by the marker position detection storage unit 12, and the color signal images stored in the camera image data storage units 11C and 11D to detect the color unevenness corresponding to each pixel of projectors 1A and 1B, calculates a color correction matrix and gamma correction data for correcting it, and outputs these to the image conversion unit 5, where the input image is converted. Since the color unevenness can thereby be corrected with finer precision, an image free of color unevenness can be displayed on the screen 2. The geometric correction data for projectors 1A and 1B are handled as in the first embodiment.
[0073] (Fifth Embodiment)
FIGS. 19 and 20 show the fifth embodiment of the present invention: FIG. 19 shows the overall configuration of the multi-projection system, and FIG. 20 is a block diagram showing the configuration of the correction data calculation unit shown in FIG. 19.
[0074] In the present embodiment, in the configuration of the third embodiment, the camera 3 is rotatably supported on the support 36, and while the camera 3 is rotated by a rotation control unit 41, test pattern images are photographed sequentially for each small region of the screen 2; the correction data are then calculated using the test pattern images photographed separately for each small region.
[0075] For this purpose, in the present embodiment, rotation angle information corresponding to each image captured by the camera 3 is supplied from the rotation control unit 41 to the correction data calculation unit 4. A rotation angle storage unit 20 for storing the rotation angle information of the camera 3 from the rotation control unit 41 is added to the correction data calculation unit 4, and the rotation angle information corresponding to each captured image stored in the rotation angle storage unit 20 is supplied to the screen shape and camera position estimation storage unit 13 and the marker position coordinate conversion unit 15. It is used together with the corresponding captured image data when estimating the screen shape and camera position and when converting to the marker position coordinates at the observation position, and the respective correction data are thereby calculated in the same manner as in the third embodiment.
[0076] Calculating the correction data using test pattern images photographed separately for each small region of the screen 2 in this way enables correction with finer spatial precision, as in the fourth embodiment.
[0077] (Sixth Embodiment)
FIGS. 21 and 22 show a sixth embodiment of the present invention. FIG. 21 shows the overall configuration of the multi-projection system, and FIG. 22 is a diagram for explaining an example of the screen shape calculation method according to this embodiment.
[0078] In this embodiment, the fifth embodiment is modified so that the laser pointer 35 and the camera 3 are fixed to the support 36, and marker projection by the laser pointer 35 and image capture by the camera 3 are performed for each small area on the screen 2 while the support 36 is rotated by the rotation control unit 41; the correction data is calculated using the test pattern images captured separately for each small area. The rest of the configuration is the same as in the fifth embodiment. With this configuration, the entire screen can be covered even if the marker projection angle range of the laser pointer 35 is narrow, so the configuration of the laser pointer 35 can be simplified.
[0079] Here, when the overall shape of the screen 2 is obtained from the marker images captured for the individual small areas of the screen 2, the shape of the entire screen can be estimated, as in the fifth embodiment, from the relative positional relationships between the captured images given by their rotation angles, using the rotation angle information output from the rotation control unit 41 for each captured image.
[0080] Alternatively, without using the rotation angle information, the projectors 1A and 1B may each project a marker onto the portion where the imaging areas overlap, as shown for example in FIG. 22, and the screen shapes estimated from the respective captured images may be combined based on the marker positions (the same point) of the projectors 1A and 1B in each captured marker image. In this way, even if the rotation angle information contains some error, the mutual positional relationships can be combined accurately.
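The merging step just described amounts to aligning two partial 3-D reconstructions through points known to be identical. One standard way to do this, assuming at least three shared marker points, is a least-squares rigid alignment (the Kabsch algorithm); the Python sketch below is illustrative and is not taken from the patent.

    import numpy as np

    def align_rigid(src, dst):
        # src, dst: (N, 3) corresponding marker positions, N >= 3
        # Returns rot, t such that rot @ src[i] + t approximates dst[i].
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        h = (src - src_c).T @ (dst - dst_c)        # cross-covariance
        u, _, vt = np.linalg.svd(h)
        d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflection
        rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        t = dst_c - rot @ src_c
        return rot, t

Applying (rot, t) to one partial reconstruction places it in the coordinate frame of the other, after which the two point sets can simply be concatenated.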
[0081] (Seventh Embodiment)
FIG. 23 is a diagram showing the overall configuration of the multi-projection system according to the seventh embodiment of the present invention.
[0082] In this embodiment, a plurality of sets each consisting of a camera and a laser pointer fixed in position relative to each other are used, and for each small area of the screen 2, test pattern images are projected and captured so that adjacent small areas overlap; the correction data is calculated using the test pattern images captured separately for each small area. The rest of the configuration is the same as in the fifth and sixth embodiments. FIG. 23 shows a case in which two sets are used: one in which the camera 3A and the laser pointer 35A are fixed to the support 36A at a distance d1 apart, and one in which the camera 3B and the laser pointer 35B are fixed to the support 36B at a distance d2 apart. The distances d1 and d2 may be equal or different, and can be set arbitrarily.
[0083] With this configuration, the test pattern images for the individual small areas of the screen 2 can be captured simultaneously, so the capture time can be shortened and the correction data can be calculated in a short time. For the screen shape estimation in this embodiment, the shape of each small area of the screen 2 is estimated from the corresponding captured marker image, as described in the sixth embodiment, and the estimates are then combined as shown in FIG. 22 to estimate the overall screen shape.
[0084] (Eighth Embodiment)
FIG. 24 is a diagram showing the overall configuration of the multi-projection system according to the eighth embodiment of the present invention.
[0085] In this embodiment, the three-dimensional shape of the screen 2 is estimated using the same number of cameras 3A, 3B as projectors 1A, 1B, without providing a plurality of sets of cameras and laser pointers as in the seventh embodiment. To this end, the projector 1A and the camera 3A are fixed to the support 36A at a distance d1 apart, and the projector 1B and the camera 3B are fixed to the support 36B at a distance d2 apart, so that the camera 3A can capture the projection area of the projector 1A on the screen 2, that is, the image projected by the projector 1A together with the overlapping part of the image projected by the projector 1B, and the camera 3B can capture the projection area of the projector 1B on the screen 2, that is, the image projected by the projector 1B together with the overlapping part of the image projected by the projector 1A. The distances d1 and d2 may be equal or different, and can be set arbitrarily, as long as the cameras 3A and 3B can capture the projection areas of the corresponding projectors 1A and 1B.
[0086] To estimate the screen shape in this arrangement, marker images are projected by the projectors 1A and 1B and captured by the cameras 3A and 3B, and the screen shape is estimated using these captured images together with the information indicating the relative positional relationships of the projectors 1A, 1B and the cameras 3A, 3B. The other configurations and operations are the same as in the seventh embodiment.
[0087] According to this embodiment, the screen shape can be estimated and the image corrected with a simple configuration that does not use a laser pointer.
[0088] (Ninth Embodiment)
FIGS. 25 to 29 show a ninth embodiment of the present invention. FIG. 25 shows the overall configuration of the multi-projection system, FIGS. 26(a) to (c) are diagrams for explaining the screen shape estimation process, FIG. 27 is a flowchart for explaining the screen shape estimation process, FIG. 28 is a flowchart for explaining the geometric correction data calculation process, and FIG. 29 is a block diagram showing the configuration of the correction data calculation unit shown in FIG. 25.
[0089] In this embodiment, two laser pointers 35A, 35B are fixed to the support 36 at a distance d apart, markers are projected over the entire screen 2 from different positions by these laser pointers 35A, 35B, whose relative positional relationship is thus fixed, each projected marker is captured by the cameras 3A, 3B separately for each small area of the screen 2, and the shape of the screen 2 is estimated from the captured images.
[0090] A specific method of estimating the screen shape using the two laser pointers 35A, 35B and the cameras 3A, 3B is described below with reference to FIGS. 26(a) to (c).
[0091] First, as shown in FIG. 26(a), the two laser pointers 35A, 35B simultaneously project markers at predetermined angles ΘA_k, ΘB_k (k = 1 to K, where K is the number of angle samples), and these markers are captured by the camera 3A.

[0092] Next, attention is paid, for example, to the i-th marker point projected by the laser pointer 35B, and the adjacent marker points projected by the laser pointer 35A (k = j and j+1) are extracted from the captured camera image.
[0093] Next, the ratio β/α of the distances on the captured image shown in FIG. 26(b) between the i-th marker point B_i of the laser pointer 35B and the j-th and (j+1)-th marker points A_j, A_(j+1) of the laser pointer 35A is computed. Using this ratio together with the j-th and (j+1)-th marker projection angles ΘA_j and ΘA_(j+1) of the laser pointer 35A, the projection angle Θ'A_i at which the laser pointer 35A would have to project a marker to hit the same position as the marker of the laser pointer 35B, shown in FIG. 26(c), is computed by interpolation. For linear interpolation, Θ'A_i = (β/α) · (ΘA_(j+1) − ΘA_j) + ΘA_j. Furthermore, when the screen is curved, the angle can be obtained more accurately by a higher-order interpolation formula.
[0094] Thereafter, the computed Θ'A_i, the preset projection angle ΘB_i of the i-th marker of the laser pointer 35B, and the distance d between the laser pointers 35A and 35B are used to compute the three-dimensional position of the i-th marker projected by the laser pointer 35B. The screen shape is estimated by performing this processing for all points i = 1 to K.
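Paragraphs [0093] and [0094] reduce to a one-dimensional interpolation followed by a planar triangulation over the baseline d. The Python sketch below illustrates both steps; the coordinate convention (pointer 35A at the origin, pointer 35B offset by d along the x axis, both angles measured from a shared forward z axis) and all names are assumptions made for illustration.

    import numpy as np

    def interp_angle(beta_over_alpha, theta_a_j, theta_a_j1):
        # Linear interpolation of paragraph [0093]:
        # Theta'A_i = (beta/alpha) * (ThetaA_(j+1) - ThetaA_j) + ThetaA_j
        return beta_over_alpha * (theta_a_j1 - theta_a_j) + theta_a_j

    def triangulate_marker(theta_a, theta_b, d):
        # Intersect the ray from pointer A (at x = 0) with the ray from
        # pointer B (at x = d); x = z * tan(theta) along each ray, so the
        # rays meet where z * tan(theta_a) - z * tan(theta_b) = d.
        # Assumes the two rays are not parallel.
        z = d / (np.tan(theta_a) - np.tan(theta_b))
        x = z * np.tan(theta_a)
        return np.array([x, z])

For each marker i, the position then follows from triangulate_marker(interp_angle(beta_over_alpha_i, theta_a[j], theta_a[j + 1]), theta_b[i], d).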
[0095] FIG. 27 summarizes the above processing; its detailed description is omitted because it would duplicate the description given with the drawing. To estimate the screen shape even more accurately, after the projection angle Θ'A_i at which the laser pointer 35A would project a marker to the same position as the marker of the laser pointer 35B has been computed, the laser pointer 35A may actually project a marker at that angle and the camera capture may be repeated; if the result deviates from the marker of the laser pointer 35B, the correction may be applied again to estimate a more accurate projection angle for the laser pointer 35A.
[0096] FIG. 28 shows the geometric correction data calculation process in this embodiment; its detailed description is omitted because it would duplicate the description given with the drawing. In FIG. 28, the process indicated by reference symbol S corresponds to the process of FIG. 27.
[0097] FIG. 29 is a diagram showing the detailed blocks of the correction data calculation unit 4 in this embodiment. This correction data calculation unit 4 corresponds to the correction data calculation unit 4 of the third embodiment shown in FIG. 14, with the camera captured image data storage unit 11 replaced by a camera captured image data storage unit 11A that stores the image data captured by the camera 3A and a camera captured image data storage unit 11B that stores the image data captured by the camera 3B; the rest of the configuration is the same as in the third embodiment.
[0098] In this embodiment, the correction data calculation unit 4 uses the images for the individual small areas of the screen 2 stored in the camera captured image data storage units 11A and 11B to perform the processing shown in FIGS. 27 and 28 and calculate the geometric correction data; in addition, in the same manner as described in the first embodiment, it calculates the unevenness of the color characteristics and of the gamma characteristics of the projectors 1A, 1B, calculates a color correction matrix and gamma correction data that make these uniform over the entire screen, and outputs the correction data to the image conversion unit 5.
[0099] According to this embodiment, markers are projected over the entire screen 2 from different positions by the laser pointers 35A, 35B, whose relative positional relationship is fixed, each projected marker is captured by the cameras 3A, 3B separately for each small area of the screen 2, and the shape of the screen 2 is estimated from the captured images. The three-dimensional position of the screen 2 can therefore be estimated even if the positions of the cameras 3A, 3B are not known, that is, even if they are not fixed, which increases the freedom in installing the cameras 3A, 3B.
[0100] Although two laser pointers 35A, 35B are used in this embodiment, a single laser pointer may instead be supported on a translation stage so as to be movable in parallel; by translating this single laser pointer and projecting markers over the entire screen 2 from both ends of a distance d whose relative position is known, the correction data can be calculated in the same manner.
[0101] (Tenth Embodiment)
FIGS. 30 and 31 show a tenth embodiment of the present invention. FIG. 30 shows the overall configuration of the multi-projection system, and FIG. 31 is a perspective view showing the observation glasses used in this embodiment.
[0102] In this embodiment, observation position detection sensors 45 are provided at a plurality of positions on the screen 2, the viewpoint position of an observer 46 is detected by these observation position detection sensors 45, the detected viewpoint position is set as the observation position in the observation position setting unit 14 of the correction data calculation unit 4, and, based on the set observation position, distortion correction is performed automatically for that observation point so that a distortion-free image is displayed on the screen 2.
[0103] For this purpose, in this embodiment, the observer 46 wears observation glasses 48 equipped with an infrared LED 47, for example as shown in FIG. 31, and the observation position detection sensors 45 are configured as infrared detection sensors or the like; the infrared light from the infrared LED 47 is detected by the observation position detection sensors 45 to detect the viewpoint position of the observer 46. The infrared light emitted from the infrared LED 47 is given directivity in the direction of the viewpoint of the observer 46.
[0104] In the correction data calculation unit 4 and the image conversion unit 5, the shape of the screen 2 and the projection positional relationships of the projectors 1A, 1B are calculated in advance using a camera according to any one of the first to ninth embodiments described above, so that distortion correction for an arbitrary observation position can be performed at any time.
[0105] With this configuration, the observation position is detected in real time in accordance with the movement of the observer 46 and the image is automatically corrected so as to be free of distortion; even in a conventional viewpoint-following stereoscopic display system or the like, for example, the video can therefore always be observed without distortion caused by the screen shape.
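A minimal sketch of the viewpoint-following loop this enables is given below in Python; the sensor and warper objects, the polling period, and the movement threshold are hypothetical stand-ins, since the patent does not specify this control flow.

    import math
    import time

    MOVE_THRESHOLD = 0.02  # meters; assumed re-warp trigger

    def follow_observer(sensor, warper, period_s=0.033):
        # sensor.read() -> (x, y, z) observer viewpoint (hypothetical API)
        # warper.update(pos) recomputes the distortion correction for pos
        last = None
        while True:
            pos = sensor.read()
            if last is None or math.dist(pos, last) > MOVE_THRESHOLD:
                warper.update(pos)  # re-derive the warp from the stored
                last = pos          # screen shape for the new viewpoint
            time.sleep(period_s)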
[0106] The present invention is not limited to the above embodiments, and many modifications and changes are possible. For example, although the above embodiments project and display images on the hemispherical dome screen 2, the present invention can also be effectively applied when images are projected and displayed on a curved screen such as an arch-shaped screen or a full-sphere screen that covers all directions over 360°, when images are displayed by front projection onto a flat screen 2a as shown in FIG. 32(a), and when images are displayed by rear projection onto a flat screen 2a as shown in FIG. 32(b). The number of projectors is likewise not limited to two, and the present invention can be applied to three or more projectors.
Industrial Applicability
[0107] According to the present invention, the three-dimensional position of each point of the image projected on the screen, that is, the screen position and shape, is estimated, and the positional deviation and distortion of the projected image, and furthermore its color deviation, can be corrected, so an image can be displayed well even if the shape of the screen is not known. Moreover, once the image data has been acquired by the image acquisition means, the observation position can thereafter be set freely and the distortion correction changed accordingly, so changes in the observation position can be corrected in real time.

Claims

[1] A multi-projection system for forming one large image by joining together images projected onto a screen by a plurality of image projection devices, the system comprising:
image acquisition means for acquiring parallax image data by capturing the image projected onto the screen by the image projection devices from different positions whose relative positional relationship is known; image correction data calculation means for estimating the three-dimensional position of each point of the image projected onto the screen, based on the parallax image data acquired by the image acquisition means and the information on the relative positional relationship, and calculating correction data for correcting the images input to the image projection devices; and
image correction means for correcting the images input to the image projection devices based on the correction data calculated by the image correction data calculation means.
[2] The multi-projection system according to claim 1, wherein the image acquisition means has two digital cameras installed at different positions whose relative positional relationship is known, and acquires the parallax image data by capturing the image projected onto the screen with these two digital cameras.
[3] The multi-projection system according to claim 1, wherein the image acquisition means has one digital camera and a moving mechanism for translating the digital camera, and acquires the parallax image data by translating the digital camera with the moving mechanism and capturing the image projected onto the screen from different positions whose relative positional relationship is known.
[4] A multi-projection system for forming one large image by joining together images projected onto a screen by a plurality of image projection devices, the system comprising:
marker projection means for projecting a marker onto the screen at a predetermined angle; image acquisition means for acquiring image data by capturing the image projected onto the screen by the image projection devices and the marker projected onto the screen by the marker projection means;
support means for fixing the relative positional relationship between the marker projection means and the image acquisition means;
image correction data calculation means for estimating the three-dimensional position of each point of the image projected onto the screen, based on the image data acquired by the image acquisition means and the information on the relative positional relationship, and calculating correction data for correcting the images input to the image projection devices; and
image correction means for correcting the images input to the image projection devices based on the correction data calculated by the image correction data calculation means.
[5] A multi-projection system for forming one large image by joining together images projected onto a screen by a plurality of image projection devices, the system comprising:
a plurality of image acquisition means for acquiring image data by capturing the image projected onto the screen by the image projection devices from locations whose relative positional relationships with the image projection devices are known;
image correction data calculation means for estimating the three-dimensional position of each point of the image projected onto the screen, based on the image data acquired by the plurality of image acquisition means and the information on the relative positional relationships, and calculating correction data for correcting the images input to the image projection devices; and
image correction means for correcting the images input to the image projection devices based on the correction data calculated by the image correction data calculation means.
[6] A multi-projection system for forming one large image by joining together images projected onto a screen by a plurality of image projection devices, the system comprising:
marker projection means for projecting markers onto the screen from different positions whose relative positional relationship is known;
image acquisition means for acquiring image data by capturing the image projected onto the screen by the image projection devices and the markers projected onto the screen by the marker projection means;
image correction data calculation means for estimating the three-dimensional position of each point of the image projected onto the screen, based on the image data acquired by the image acquisition means and the information on the relative positional relationship, and calculating correction data for correcting the images input to the image projection devices; and
image correction means for correcting the images input to the image projection devices based on the correction data calculated by the image correction data calculation means.
[7] The multi-projection system according to claim 6, wherein the marker projection means has two laser pointers installed at different positions whose relative positional relationship is known.
[8] The multi-projection system according to claim 6, wherein the marker projection means has one laser pointer and a moving mechanism for translating the laser pointer, and projects the markers onto the screen from different positions whose relative positional relationship is known by translating the laser pointer with the moving mechanism.
[9] The multi-projection system according to any one of claims 1 to 8, wherein the correction data calculation means calculates the correction data based on position information of an observer with respect to the screen.
[10] The multi-projection system according to claim 9, further comprising a viewpoint detection sensor for detecting the viewpoint position of the observer to obtain the position information of the observer.
PCT/JP2005/001337 2004-02-27 2005-01-31 Multiprojection system WO2005084017A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2004-054717 2004-02-27
JP2004054717A JP2005244835A (en) 2004-02-27 2004-02-27 Multiprojection system

Publications (1)

Publication Number Publication Date
WO2005084017A1 (en) 2005-09-09

Family

ID=34908803

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2005/001337 WO2005084017A1 (en) 2004-02-27 2005-01-31 Multiprojection system

Country Status (2)

Country Link
JP (1) JP2005244835A (en)
WO (1) WO2005084017A1 (en)


Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4857732B2 (en) * 2005-11-24 2012-01-18 パナソニック電工株式会社 Virtual reality generation system
JP4696018B2 (en) * 2006-04-13 2011-06-08 日本電信電話株式会社 Observation position following video presentation device, observation position following video presentation program, video presentation device, and video presentation program
JP4973009B2 (en) * 2006-05-29 2012-07-11 セイコーエプソン株式会社 Projector and image projection method
JP2008017348A (en) * 2006-07-07 2008-01-24 Matsushita Electric Works Ltd Video display apparatus, and distortion correction processing method of video signal
JP2008017347A (en) * 2006-07-07 2008-01-24 Matsushita Electric Works Ltd Video display apparatus, and distortion correction processing method of video signal
JP2008015381A (en) * 2006-07-07 2008-01-24 Matsushita Electric Works Ltd Video display apparatus, distortion correction processing method for video signal
JP4965967B2 (en) * 2006-10-30 2012-07-04 株式会社日立製作所 Image display system adjustment system
US8994757B2 (en) * 2007-03-15 2015-03-31 Scalable Display Technologies, Inc. System and method for providing improved display quality by display adjustment and image processing using optical feedback
JP5298545B2 (en) * 2008-01-31 2013-09-25 セイコーエプソン株式会社 Image forming apparatus
JP5955003B2 (en) * 2012-01-26 2016-07-20 キヤノン株式会社 Image processing apparatus, image processing method, and program
JP2016019194A (en) 2014-07-09 2016-02-01 株式会社東芝 Image processing apparatus, image processing method, and image projection device
CN104457616A (en) * 2014-12-31 2015-03-25 苏州江奥光电科技有限公司 360-degree three-dimensional imaging projection device
JP6543067B2 (en) * 2015-03-31 2019-07-10 株式会社メガチップス Projection system, projector device, imaging device, and program
JP6390032B2 (en) * 2015-09-02 2018-09-19 カルソニックカンセイ株式会社 Head-up display distortion correction method and head-up display distortion correction apparatus using the same
WO2019155903A1 (en) 2018-02-08 2019-08-15 ソニー株式会社 Information processing device and method
WO2020218028A1 (en) * 2019-04-25 2020-10-29 ソニー株式会社 Image processing device, image processing method, program, and image processing system
JP2020187227A (en) * 2019-05-13 2020-11-19 株式会社アーキジオ Omnidirectional image capturing device
JP7099497B2 (en) * 2020-07-28 2022-07-12 セイコーエプソン株式会社 Image generation method, image generation system, and program


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002532795A (en) * 1998-12-07 2002-10-02 ユニバーサル シティ スタジオズ インコーポレイテッド Image correction method for compensating viewpoint image distortion
JP2001061121A (en) * 1999-08-23 2001-03-06 Nec Corp Projector
JP2001083949A (en) * 1999-09-16 2001-03-30 Japan Science & Technology Corp Image projecting device
JP2004015205A (en) * 2002-06-04 2004-01-15 Olympus Corp Multi-projection system and correction data acquisition method in multi-projection system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012249009A (en) * 2011-05-26 2012-12-13 Nippon Telegr & Teleph Corp <Ntt> Optical projection control apparatus, optical projection control method, and program
CN111684793A (en) * 2018-02-08 2020-09-18 索尼公司 Image processing device, image processing method, program, and projection system
US11218662B2 (en) 2018-02-08 2022-01-04 Sony Corporation Image processing device, image processing method, and projection system
US11676241B2 (en) 2020-01-31 2023-06-13 Seiko Epson Corporation Control method for image projection system, and image projection system

Also Published As

Publication number Publication date
JP2005244835A (en) 2005-09-08

Similar Documents

Publication Publication Date Title
WO2005084017A1 (en) Multiprojection system
JP4108609B2 (en) How to calibrate a projector with a camera
US9892488B1 (en) Multi-camera frame stitching
JP6369810B2 (en) Projection image display system, projection image display method, and projection display device
US9195121B2 (en) Markerless geometric registration of multiple projectors on extruded surfaces using an uncalibrated camera
WO2018076154A1 (en) Spatial positioning calibration of fisheye camera-based panoramic video generating method
JP6037375B2 (en) Image projection apparatus and image processing method
KR20160034847A (en) System and method for calibrating a display system using a short throw camera
US10063792B1 (en) Formatting stitched panoramic frames for transmission
KR20160118868A (en) System and method for displaying panorama image using single look-up table
JP2015056834A (en) Projection system, image processing system, projection method and program
CN103685917A (en) Image processor, image processing method and program, and imaging system
WO2006025191A1 (en) Geometrical correcting method for multiprojection system
US20040169827A1 (en) Projection display apparatus
KR100790887B1 (en) Apparatus and method for processing image
CN110505468B (en) Test calibration and deviation correction method for augmented reality display equipment
JP2004015205A (en) Multi-projection system and correction data acquisition method in multi-projection system
JP2004228824A (en) Stack projection device and its adjusting method
CN113259642B (en) Film visual angle adjusting method and system
JP2004228619A (en) Method of adjusting distortion in video image of projector
JP2012078490A (en) Projection image display device, and image adjusting method
KR20140121345A (en) Surveillance Camera Unit And Method of Operating The Same
JP4230839B2 (en) Multi-camera system and adjustment device thereof
Johnson et al. A distributed cooperative framework for continuous multi-projector pose estimation
CN114339179B (en) Projection correction method, apparatus, storage medium and projection device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase