WO1998030029A1 - Device and method for image synthesis (Dispositif et procédé pour synthèse d'images) - Google Patents
- Publication number
- WO1998030029A1 PCT/JP1997/004896
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- light source
- processing target
- background
- target area
- Prior art date
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/2224—Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/74—Circuits for processing colour signals for obtaining special effects
- H04N9/75—Chroma key
Definitions
- The present invention relates to an image synthesizing apparatus, and is preferably applied where a live camera image and a computer graphics image are synthesized in real time, for example in a broadcasting studio, using a chroma key technique.
- Background art
- A person is made to stand in front of a background of a predetermined color (for example, blue or green) that is not contained in the person's image, and the person is imaged.
- A key signal is generated from the imaging signal based on the color of the background, and an image based on computer graphics is inserted in place of the background based on the key signal.
- By combining the computer graphics video with the person in this way, an image can be generated as if the person actually existed in the virtual space generated by the computer graphics.
- The present invention has been made in view of the above points, and proposes an image synthesizing apparatus that can easily change the background image to be synthesized in response to the movement of the imaging means, with a simple configuration.
- The present invention applies to an image synthesizing apparatus that detects, from video data obtained via an imaging means, the area obtained by imaging a processing target area of a predetermined color, and inserts another image into that area to generate a synthesized video.
- In such an apparatus there are provided:
- an illumination means that forms a light source image of the same color as the processing target area within the processing target area; a position detection means that detects position information of the imaging means relative to the processing target area based on the light source image;
- and an image generation means that changes the other image in response to a change in the position of the imaging means, based on the position information.
- The position of the imaging means relative to the processing target area is thus detected from the light source image, and the other image is changed in accordance with the detected change in position of the imaging means.
- As a result, the video to be inserted can be changed according to the movement of the imaging means, and a composite image without a sense of incongruity can be generated.
- Further, in an image synthesizing apparatus that detects, from input video data, the area obtained by imaging a processing target area of a predetermined color and inserts another video into that area to generate a synthesized video, there are provided:
- an illuminating means for forming a plurality of light source images of the same color as the processing target area within the processing target area;
- a position detection means for detecting the processing target area from the input video data and detecting the positions of four light source images serving as references among the plurality of light source images formed in the processing target area;
- an image conversion means for three-dimensionally converting the background source video data to be inserted into the processing target area, based on the position information of the light source images detected by the position detection means;
- and a synthesizing means for synthesizing the background source video data image-converted by the image conversion means into the area corresponding to the processing target area of the input video data.
- The position information of the four reference light source images is thus detected, and the background source video data to be inserted is three-dimensionally converted based on that position information. Consequently, based on the positions of the four light source images, background source video data that changes in accordance with the movement of the imaging means can be generated even when the imaging means moves, and a composite video without a sense of incongruity can be generated.
- Thus an image synthesizing apparatus is realized that can change the synthesized background image naturally according to the movement of the imaging means, with a simpler configuration.
- FIG. 1 is a block diagram showing the configuration of the image synthesizing apparatus according to the first embodiment.
- FIG. 2 is a chart for explaining the identification code of the point light source.
- FIG. 3 is a schematic diagram for explaining the key signals K1 and K2.
- FIG. 4 is a video image diagram for explaining the image conversion of the background source video signal.
- FIG. 5 is a video image diagram for explaining a synthesized video signal generated by the image synthesis device.
- FIG. 6 is a schematic diagram for explaining a method of detecting the position information (distance, inclination, and position) of the video camera according to the first embodiment.
- FIG. 7 is a schematic diagram for explaining a method of detecting the position information (distance, inclination, and position) of the video camera according to the first embodiment.
- FIG. 8 is a schematic diagram for explaining a method for detecting the position information (distance, inclination, and position) of the video camera according to the first embodiment.
- FIG. 9 is a schematic diagram for explaining a method for detecting the position information (distance, inclination, and position) of the video camera according to the first embodiment.
- FIG. 10 is a schematic diagram for explaining a method of detecting the position information (distance, inclination, and position) of the video camera according to the first embodiment.
- FIG. 11 is a schematic diagram used to describe a method for detecting position information (distance, inclination, and position) of a video camera according to the first embodiment.
- FIG. 12 is a schematic diagram used to describe a method for detecting position information (distance, inclination, and position) of a video camera according to the first embodiment.
- FIG. 13 is a schematic diagram for explaining detection of a point light source.
- FIG. 14 is a block diagram showing the configuration of the image composition device according to the second embodiment.
- FIG. 15 is a schematic diagram used to explain a reference point light source.
- FIG. 16 is a schematic diagram for explanation when the background plate is imaged from the front.
- FIG. 17 is a schematic diagram used to explain detection of a reference point light source.
- FIG. 18 is a schematic diagram used for explanation when the reference point light source deviates from the angle of view.
- FIG. 19 is a schematic diagram used to explain the operation of the frame memory.
- FIG. 20 is a schematic diagram used for describing three-dimensional coordinates.
- FIG. 21 is a schematic diagram for explaining the correspondence between the frame memory and the monitor screen.
- FIG. 22 is a flowchart showing a method of calculating a read address according to the second embodiment.
- FIG. 23 is a schematic diagram for explaining the coordinates of the detected point light source.
- Reference numeral 1 denotes an image synthesizing apparatus according to the first embodiment as a whole, comprising a video camera 2 for imaging the subject, a projector device 4 that projects point light sources onto a background plate 3, and an image processing unit 5 that generates a composite image from the captured video signal V1.
- The image synthesizing apparatus 1 captures an image of an announcer standing in front of the background plate 3 together with the background plate 3, and synthesizes at the background plate 3, for example, an image of a virtual space generated by computer graphics or a landscape image taken at a different location, thereby generating a composite image as if the announcer were actually present in the virtual space or in that other location.
- The background plate 3 used in the image synthesizing apparatus 1 is dyed in a single color (for example, blue or green) which the subject, such as the announcer standing in front of it, does not have.
- The subject and the background can therefore be distinguished by color.
- The projector device 4 is a device for projecting onto the background plate 3 a plurality of point light sources P11 to Pnn arranged in a matrix at a predetermined pitch in the horizontal and vertical directions.
- The emission colors of the point light sources P11 to Pnn are set to the same color as the background plate 3, for example blue or green.
- The point light source P0 projected at the approximate center of the background plate 3, among the plurality of point light sources P11 to Pnn, is set as the reference point light source.
- The position of the reference point light source P0 is set as the origin of a three-dimensional coordinate system, with the horizontal direction toward the reference point light source P0 as the x-axis, the vertical direction as the y-axis, and the depth direction as the z-axis, and the position of the video camera 2 is detected in this coordinate system. Further, in the image synthesizing device 1, the video synthesized into the background is deformed based on the detected position of the video camera 2, so that when the video camera 2 is moved, the background image changes naturally following the movement.
- The point light sources P11 to Pnn blink in synchronization with the frame timing of the video camera 2. In the following, a lit state is represented by "1" and an unlit state by "0". As shown in FIG. 2, instead of turning all of the point light sources P11 to Pnn on or off at once, each point light source is turned on and off in its own pattern over a 5-frame cycle, so that the point light sources P11 to Pnn can be identified by the way they blink. For example, as shown in FIG. 2, a point light source that blinks off-off-on-off-on over the 5 frames can be identified as the point light source P11.
- The projector device 4 is supplied with a reference pulse signal SP, whose signal level rises at the timing of the first frame, from the system controller 6 of the image processing unit 5.
- Based on the reference pulse signal SP, the projector device 4 causes the point light sources P11 to Pnn to blink independently with a period of 5 frames as shown in FIG. 2.
- Since the blinking pattern of each point light source shown in FIG. 2 represents identification information, this blinking pattern is referred to as an identification code in the following description.
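The identification-code scheme can be sketched as follows. This is an illustrative reconstruction, not the actual code assignment of FIG. 2; `make_codes` and `identify` are hypothetical names introduced here for illustration.

```python
# Sketch (not from the patent text): identifying point light sources by a
# 5-frame blink pattern, where "1" = lit and "0" = unlit in a given frame.
# Each source is assigned a unique 5-bit pattern; observing a source over 5
# consecutive frames recovers its identity.

def make_codes(n_sources, period=5):
    """Assign each of n_sources a distinct blink pattern of `period` frames."""
    assert n_sources <= 2 ** period - 1, "not enough distinct patterns"
    codes = {}
    for i in range(n_sources):
        # Offset by 1 so no source is assigned the all-off pattern (0,0,0,0,0).
        bits = tuple((i + 1) >> b & 1 for b in reversed(range(period)))
        codes[bits] = i
    return codes

def identify(observed_bits, codes):
    """Map an observed on/off sequence back to a source index (or None)."""
    return codes.get(tuple(observed_bits))

codes = make_codes(31)
# A source observed as off-off-on-off-on over 5 frames:
idx = identify([0, 0, 1, 0, 1], codes)  # -> source index 4 in this scheme
```

In practice the camera position / optical axis detection circuit 7 would hold such a code table in its internal memory and match each tracked light source against it.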
- Since the camera position / optical axis detection circuit 7, which will be described later, detects the reference point light source P0 from the blinking of the point light sources, the identification codes are stored in advance in a memory or the like.
- The reference pulse signal SP is also supplied to this circuit from the system controller 6.
- The video signal V1, consisting of the person and the background imaged by the video camera 2, is output to the chroma key circuit 8 and the mixer circuit 9 of the image processing section 5.
- The video camera 2 also outputs the magnification information at the time of capturing the subject, as zoom information ZM, to the system controller 6 of the image processing unit 5.
- The system controller 6 is a control means for controlling the operation of the image synthesizing apparatus 1; as described above, it supplies the reference pulse signal SP to the projector device 4, and sends the reference pulse signal SP, together with the zoom information ZM, to the camera position / optical axis detection circuit 7.
- The system controller 6 also outputs the zoom information ZM to a three-dimensional conversion address calculation circuit 10 described later.
- The chroma key circuit 8, to which the video signal V1 is supplied, extracts a color signal corresponding to the hue of the background plate 3 from the video signal V1 and compares the color signal with a predetermined threshold.
- A key signal K1 indicating the background portion in the video signal V1 is thereby generated, and the key signal K1 is output to the camera position / optical axis detection circuit 7.
- The chroma key circuit 8 also generates a key signal K2 indicating the person portion in the video signal V1 by inverting the key signal K1, and outputs the key signal K2 to the mixer circuit 9.
- FIG. 3 shows an image diagram of the video signal V1 and the key signals K1 and K2.
- In the video signal V1, an image portion a of the person and an image portion b of the background plate 3 exist.
- Since the background plate 3 is set to a distinctive color such as blue or green, a color signal corresponding to the hue of the background plate 3 can be extracted from the video signal V1 and compared with a threshold value.
- As shown in FIG. 3 (B), it is thus possible to easily generate a key signal K1 whose signal level is, for example, "1" in the background part b of the video signal V1. If this key signal K1 is inverted, a key signal K2 whose signal level becomes, for example, "1" in the person part a can be easily generated, as shown in FIG. 3 (C).
- The camera position / optical axis detection circuit 7 generates a video signal consisting only of the background portion by extracting, from the video signal V1, the region where the signal level of the key signal K1 is "1". Then, based on the identification codes stored in its internal memory and the reference pulse signal SP, the camera position / optical axis detection circuit 7 detects the positions of the point light sources P11 to Pnn from this background-only video signal. Subsequently, the camera position / optical axis detection circuit 7 extracts, from the detected point light sources P11 to Pnn, four point light sources adjacent to the vicinity of the center of the screen, and from the positions of these four point light sources
- calculates the tilt of the optical axis of the video camera 2 in the vertical and horizontal directions with respect to the background plate 3, that is, the angles θx and θy around the x- and y-axes of the three-dimensional coordinate system; the distance L from the video camera 2 to the background plate 3; and the position of the video camera 2 with respect to the reference point light source P0, that is,
- the position coordinates (X, Y, Z) in the three-dimensional coordinate system with the reference point light source P0 as the origin. The calculated parameter information is output as detection data S1 to the three-dimensional conversion address calculation circuit 10.
- Based on the detection data S1 and the zoom information ZM supplied from the system controller 6, the three-dimensional conversion address calculation circuit 10 generates a read address for deforming the background image to be synthesized three-dimensionally in accordance with the position of the video camera 2, and outputs the read address S2 to the frame memory 11. That is, since the three-dimensional conversion address calculation circuit 10 must produce a video image as if the background image were viewed from the position of the video camera 2, it computes the read address necessary to realize such a three-dimensional image conversion and outputs it to the frame memory 11.
- A video tape on which the video to be used as the background image (for example, video generated by computer graphics or landscape video shot at another location) is recorded is mounted in the video tape recorder (VTR) 12, and the video tape recorder 12 reproduces the video tape and outputs a video signal of the background image used for synthesis (hereinafter referred to as the background source video signal) V2.
- The background source video signal V2 is supplied to the above-mentioned frame memory 11, and is sequentially written to a storage area in the frame memory 11.
- The frame memory 11 sequentially reads out the background source video signal V2 written in the storage area, based on the read address S2 supplied from the three-dimensional conversion address calculation circuit 10. A background source video signal V3, deformed three-dimensionally in accordance with the position of the video camera 2, is thereby generated and output to the mixer circuit 9.
- For example, suppose the background source video signal V2 is a video signal of a painting hung on a museum wall, and the position of the video camera 2 is diagonally to the left of the background plate 3.
- In that case, by the read process based on the read address S2, the frame memory 11 transforms the painting, stored as imaged from the front, into a video signal V3 as if it were viewed diagonally from the left, as shown in FIG. 4 (B).
- The mixer circuit 9 selects the video signal V1 when the signal level of the key signal K2 is "1", and selects the background source video signal V3 when the signal level of the key signal K2 is "0".
- In other words, the video signal V1 and the background source video signal V3 are synthesized according to
- V4 = K2 · V1 + (1 − K2) · V3 … (1)
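Equation (1) can be applied per pixel as follows. This is an illustrative sketch; the mixer circuit 9 operates on video signals, not arrays.

```python
# The mixing rule of equation (1), V4 = K2*V1 + (1 - K2)*V3, applied per
# pixel: where K2 = 1 the camera signal V1 (the person) is kept, and where
# K2 = 0 the transformed background V3 is inserted.

def mix(v1, v3, k2):
    return [[k * a + (1 - k) * b for a, b, k in zip(r1, r3, rk)]
            for r1, r3, rk in zip(v1, v3, k2)]

v1 = [[10, 10], [10, 10]]   # camera image
v3 = [[99, 99], [99, 99]]   # deformed background
k2 = [[1, 0], [0, 1]]       # 1 = person, 0 = background
v4 = mix(v1, v3, k2)
```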
- Let f be the focal length of the zoom lens 13 in the video camera 2.
- The optical path length S from the background plate 3 to the imaging surface of the video camera 2 is almost equal to the distance L from the background plate 3 to the video camera 2 (the distance to the point where the optical axis O of the video camera 2 intersects the background plate 3). Let M be the pitch between adjacent point light sources on the background plate 3 and m the spacing between their images on the imaging surface; then, from the geometry of similar triangles, m : f = M : L, so that
- L = (M / m) · f … (3)
- can be obtained.
- Therefore, using the focal length f obtained from the zoom information ZM of the zoom lens 13 and the known pitch M of the point light sources, the distance L from the background plate 3 to the video camera 2 can be obtained by performing the calculation of equation (3).
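A minimal sketch of the equation (3) calculation, with all quantities in millimetres; the function name is illustrative.

```python
# Equation (3): L = (M / m) * f, where M is the pitch of the point light
# sources on the background plate, m is the spacing of their images on the
# imaging surface, and f is the focal length from the zoom information ZM.

def distance_to_backdrop(pitch, image_spacing, focal_length):
    """All arguments in the same unit (here millimetres); returns L."""
    return pitch / image_spacing * focal_length

# e.g. sources 500 mm apart, imaged 5 mm apart, with a 50 mm lens:
L = distance_to_backdrop(500, 5, 50)  # -> 5000.0 mm, i.e. 5 m
```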
- On the other hand, when the video camera 2 is tilted with respect to the background plate 3 as shown in FIG. 8, the images of the four point light sources are observed as deformed from the rectangular shape, as shown in FIG. 9.
- In this case, the spacing m between the images at the point where the point light source images would cross the optical axis O of the video camera 2 perpendicularly is calculated, and this spacing m is substituted into equation (3),
- whereby the distance L from the background plate 3 to the video camera 2 can be calculated.
- The spacing m is obtained from the coordinate values of the four point light source images used as the basis for calculating the distance: if the optical axis O lies inside the four points, by interpolation along the straight lines connecting adjacent point light source images, and if it lies outside, by extrapolation of those straight lines.
- Even when the camera is inclined in both the horizontal and vertical directions, as shown in FIG. 10, the distance L can be calculated by computing the spacing m between the images at the point where the rectangle formed by the four point light source images would cross the optical axis O perpendicularly.
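The interpolation of the spacing m at the optical-axis crossing can be sketched in one dimension as follows. This is illustrative only; the circuit works from the coordinate values of the four point light source images.

```python
# Sketch: when the camera is tilted, the imaged spacing of the point light
# sources varies across the screen, so the spacing m at the point where the
# optical axis O crosses the backdrop is obtained by linear interpolation
# between neighbouring spacings (extrapolation if O lies outside the four
# detected sources -- the same formula with t outside [0, 1]).

def spacing_at(x, x_left, m_left, x_right, m_right):
    """Linearly interpolate/extrapolate the local spacing at position x
    from spacings measured at x_left and x_right."""
    t = (x - x_left) / (x_right - x_left)
    return m_left + t * (m_right - m_left)

# spacing 10 px at x=100, 8 px at x=200; optical axis crosses at x=150:
m = spacing_at(150, 100, 10.0, 200, 8.0)  # -> 9.0
```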
- Next, the horizontal inclination θy of the optical axis O is obtained. When the optical axis O lies inside the four point light source images and internally divides their horizontal interval at a ratio a : b,
- an expression relating the vertical line segments m1 and m2 at the two sides to the inclination can be derived; this is equation (4). Therefore, the horizontal inclination θy of the optical axis O can be obtained by using equation (4).
- Whichever of the rectangular regions formed by the four point images is used, if the vertical line segments m1 and m2 are detected and the arithmetic processing of equation (4) is executed based on these vertical line segments m1 and m2, the horizontal inclination θy of the optical axis can be obtained.
- As shown in FIG. 13, the camera position / optical axis detection circuit 7 processes the image V1' formed by the point light sources extracted from the video signal V1.
- The camera position / optical axis detection circuit 7 identifies each point light source image based on the identification codes stored in its internal memory, so that even if some point light sources are partially shielded by, for example, a person standing in front of the background plate 3, four point light source images adjacent in the horizontal and vertical directions are reliably detected.
- A point light source image to be processed is extracted for each frame.
- In frames where one of the four detected point light sources is assigned to be off, the camera position / optical axis detection circuit 7 interpolates its position using, for example, the preceding and following frames, or, if necessary, selects another set of four point light source images, thereby extracting four adjacent point light source images on the background plate 3 for each frame.
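The frame-to-frame interpolation of an unlit or occluded point light source can be sketched as follows. The bookkeeping (a per-frame position track with `None` for missing detections) is an assumption introduced for illustration.

```python
# Sketch (assumed bookkeeping, not the patent circuit): if one of the four
# chosen point light sources is off (or hidden) in the current frame, its
# position can be interpolated from the preceding and following frames in
# which it was visible.

def fill_missing(track):
    """track: list of (x, y) positions per frame, with None where the source
    was off or occluded. Interior gaps are filled by linear interpolation
    between the nearest visible neighbours."""
    out = list(track)
    for i, p in enumerate(out):
        if p is None:
            before = next((j for j in range(i - 1, -1, -1)
                           if out[j] is not None), None)
            after = next((j for j in range(i + 1, len(out))
                          if out[j] is not None), None)
            if before is not None and after is not None:
                t = (i - before) / (after - before)
                bx, by = out[before]
                ax, ay = out[after]
                out[i] = (bx + t * (ax - bx), by + t * (ay - by))
    return out

track = [(0.0, 0.0), None, (2.0, 2.0)]   # source dark in the middle frame
filled = fill_missing(track)
```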
- With reference to the zoom information ZM obtained from the video camera 2, the camera position / optical axis detection circuit 7 executes the above-mentioned interpolation and extrapolation calculation processing based on the reference position Q, and detects the distance L to the point at which the optical axis O intersects the background plate 3, as well as the inclinations θx and θy in the vertical and horizontal directions.
- Further, the coordinate values of the point light sources used for the detection, relative to the reference point light source P0, are obtained from their identification codes, and the coordinate values (X, Y, Z) of the video camera 2 are calculated from these values, the detected distance L, and the inclinations θx and θy.
- The camera position / optical axis detection circuit 7 repeats this coordinate value detection processing and the distance and inclination detection processing for each field of the video signal V1, and outputs the detection data S1 (X, Y, Z, θx, θy, L) obtained as a result of these processings to the three-dimensional conversion address calculation circuit 10.
- In the above configuration, the image synthesizing apparatus 1 first drives the projector device 4 so as to project the point light sources P11 to Pnn onto the background plate 3 installed behind the person who is the subject.
- At this time, the point light sources P11 to Pnn are blinked in a fixed pattern so that they can be identified later.
- Then, the person who is the subject is imaged by the video camera 2, and the resulting video signal V1 is input to the image processing unit 5.
- The zoom information ZM used at the time of imaging by the video camera 2 is likewise input to the image processing unit 5.
- The video signal V1 is input to the chroma key circuit 8.
- The chroma key circuit 8 extracts a color signal corresponding to the hue of the background plate 3 from the video signal V1 and compares the color signal with a predetermined threshold, thereby generating a key signal K1 indicating the background portion in the video signal V1 and a key signal K2 indicating the subject portion in the video signal V1.
- The camera position / optical axis detection circuit 7 first receives the key signal K1 and the video signal V1, and generates a video signal consisting only of the background portion from the video signal V1 based on the key signal K1. Then, the camera position / optical axis detection circuit 7 reads out the identification codes containing the blinking information of the point light sources stored in its internal memory and, based on the identification codes and the reference pulse signal SP, detects the positions of the point light sources P11 to Pnn from this background-only video signal.
- Next, from the detected point light sources P11 to Pnn, the camera position / optical axis detection circuit 7 extracts the four point light sources that are closest to the reference point Q in the display screen corresponding to the optical axis O of the video camera 2, and that are adjacent on the background plate 3 in the horizontal and vertical directions.
- The camera position / optical axis detection circuit 7 then executes the above-mentioned equations (3) and (4) based on the extracted coordinate values of the four point light sources and the zoom information ZM, and calculates the distance L to the point where the optical axis O of the video camera 2 intersects the background plate 3, the horizontal inclination θy of the optical axis O, and the vertical inclination θx of the optical axis O.
- In this case, since the camera position / optical axis detection circuit 7 identifies the point light source images based on the identification codes, the four point light source images required for calculating the distance L and the like can be reliably detected even if, for example, a person standing in front of the background plate 3 partially shields some of the point light sources.
- Furthermore, the distance L and the inclinations θx and θy can easily be calculated by the interpolation and extrapolation calculation processing based on the reference position Q, with reference to the zoom information ZM obtained from the video camera 2.
- The camera position / optical axis detection circuit 7 also calculates, based on the positions of the point light source images used for detecting the distance and the like, the position coordinates (X, Y, Z) of the video camera 2 with respect to the reference point light source P0, that is, the origin of the three-dimensional coordinate system.
- Based on the parameters (X, Y, Z, θx, θy, L) calculated by the camera position / optical axis detection circuit 7 and the zoom information ZM supplied from the system controller 6, the three-dimensional conversion address calculation circuit 10 generates the read address S2 for the three-dimensional conversion that produces an image as if the background image (V2) were seen from the position of the video camera 2.
- The frame memory 11 reads out the stored background source video signal V2 based on the read address S2, thereby generating a background source video signal V3 as if the background image were viewed from the position of the video camera 2.
- By synthesizing the background source video signal V3 and the video signal V1 based on the key signal K2, an image can be generated, for example as shown in FIG. 5, as if the person were in the virtual space of the computer graphics or in another place instead of the studio.
- In this case, since the background source video signal V3 to be synthesized into the background is three-dimensionally converted as if viewed from the position of the video camera 2, a synthesized image without a sense of incongruity can be obtained.
- Further, the image synthesizing apparatus 1 sequentially generates read addresses S2 based on the detected position of the video camera 2, and image-converts the background source video signal V2 based on the sequentially generated read addresses S2; therefore, even if the video camera 2 moves, the background image changes in accordance with that movement, and a more realistic composite image without discomfort can be obtained.
- In addition, since the zoom information ZM of the video camera 2 is also used for calculating the read address S2, even when the person who is the subject is zoomed, the background image is enlarged or reduced without discomfort, in conjunction with the enlargement or reduction of the person.
- According to the above configuration, the position information of the video camera 2 is detected based on the point light source images projected in matrix form onto the background plate 3, and the viewpoint of the background image is set based on this position information, so that the background image to be combined can be changed according to the movement of the video camera 2 with a simple configuration.
- In the first embodiment, the parameters of the image conversion, that is, the read addresses of the frame memory, are determined by detecting the position of the reference point light source and thereby detecting where the video camera 2 has moved.
- Reference numeral 20 denotes the image synthesizing apparatus according to the second embodiment as a whole.
- This image synthesizing device 20 is also basically composed of a video camera 2, a projector device 4 and an image processing unit 5.
- The point greatly different from the image synthesizing apparatus 1 of the first embodiment is that a point light source coordinate detection circuit 21, a three-dimensional conversion address calculation circuit 22 and a screen address generation circuit 23 are newly provided.
- The point light source coordinate detection circuit 21 in the image synthesizing device 20 is a circuit that detects, from the video signal V1, the positions of the four point light sources serving as references among the plurality of point light sources P11 to Pnn projected onto the background plate 3.
- As shown in FIG. 15, this image synthesizing device 20 uses the four point light sources P43, P44, P53 and P54 surrounding the center BO of the background plate 3 as the reference point light sources, and the point light source coordinate detection circuit 21 detects the position coordinates of these four point light sources P43, P44, P53 and P54 from the video signal V1.
- When the background plate 3 is imaged from the front, the four reference point light sources P43, P44, P53 and P54 are seen at the center of the angle of view, as shown in FIG. 16, but their positions on the monitor screen shift with the movement of the video camera 2. For example, if the video signal V1 obtained by the imaging of the video camera 2 is in the state shown in FIG. 17, the point light source coordinate detection circuit 21 detects the positions of the four reference point light sources P43, P44, P53 and P54 on the monitor screen.
- Since the point light sources are arranged in a matrix at a predetermined pitch, the original positions of the reference point light sources P43, P44, P53 and P54 are known in advance. By comparing these known positions with the positions detected on the monitor screen, the transformation matrix describing the apparent movement of the reference point light sources can be obtained.
- This transformation matrix is the transformation matrix required to convert the background image into a video viewed from the viewpoint of the video camera 2.
- the point light source coordinate detection circuit 21 generates a video signal consisting only of the background portion by extracting the area where the signal level of the key signal K1 is "1" from the video signal V1. Then, the point light source coordinate detection circuit 21 reads out the identification code stored in the internal memory and, based on the identification code and the reference pulse signal SP, detects the reference point light sources from the video signal consisting only of the background portion.
- When the reference point light sources cannot be detected directly, the point light source coordinate detection circuit 21 detects the position coordinates of the other point light sources present in the video signal and, based on the detected position coordinates, obtains the position coordinates of the reference point light sources P43, P44, P53 and P54 by interpolation.
- the point light source coordinate detection circuit 21 outputs the position coordinates of the reference point light sources P43, P44, P53 and P54 detected in this way to the three-dimensional conversion address calculation circuit 22 as detection data S10.
- the three-dimensional conversion address calculation circuit 22 calculates, from the original position coordinates of the reference point light sources P43, P44, P53 and P54 formed on the background plate 3 in a matrix at a predetermined pitch, the position coordinates supplied as detection data S10 from the point light source coordinate detection circuit 21, and the zoom information ZM supplied from the system controller 6, the transformation matrix needed to move the reference point light sources P43, P44, P53 and P54 to their detected locations on the monitor screen.
- This conversion matrix is exactly the same as the conversion matrix for converting the background image to be synthesized with the background of the video signal V 1 into an image viewed from the viewpoint of the video camera 2.
- the three-dimensional conversion address calculation circuit 22 calculates the inverse matrix of this conversion matrix and multiplies it by the screen addresses (Xs, Ys) supplied in raster scan order from the screen address generation circuit 23, thereby calculating the read addresses (XM, YM) of the frame memory 11, which it outputs to the frame memory 11.
- the reason the inverse matrix of the conversion matrix is obtained is that this image synthesizing device 20 does not perform three-dimensional image conversion by multiplying the background source video signal V2 itself by the conversion matrix; instead, the background source video signal V2 is written into the memory as it is, and the three-dimensional image conversion indicated by the conversion matrix is applied by controlling the addresses from which the background source video signal V2 is read.
- the frame memory 11 writes the background source video signal V2 supplied from the video recorder 12 into its internal memory area in order, and reads the background source video signal V2 based on the read addresses (XM, YM) supplied from the three-dimensional conversion address calculation circuit 22, whereby a background source video signal V3 as if the background image were viewed from the position of the video camera 2 is generated and supplied to the mixer circuit 9.
- the mixer circuit 9 synthesizes, based on the key signal K2 indicating the area of the subject, the video signal V1 captured by the video camera 2 and the three-dimensionally converted background source video signal V3, whereby a synthesized video signal V4 in which the background image is deformed according to the position of the video camera 2, as shown in Fig. 5, can be generated.
- To insert the background source video signal V2 at the position of the background plate 3 in the video signal V1, the background source video signal V2 must first be mapped into the three-dimensional space in which the background plate 3 exists and then projected onto the monitor screen from the operator's viewpoint. This is because the background plate 3 in the video signal V1 is itself the projection, onto the monitor screen, of the background plate 3 existing in three-dimensional space as seen from the operator's viewpoint.
- the three-dimensional conversion address calculation circuit 22 must therefore calculate a conversion matrix comprising a mapping into three-dimensional space and a projection from the three-dimensional space onto the two-dimensional plane, calculate the inverse matrix of that conversion matrix, and generate the read addresses from it. This will be specifically described below.
- the three-dimensional coordinate system used in the second embodiment is an xyz rectangular coordinate system defined with the origin at the center of the monitor screen, the horizontal direction of the monitor screen as the x-axis, the vertical direction of the monitor screen as the y-axis, and the direction perpendicular to the monitor screen as the z-axis.
- on the x-axis, the right direction of the monitor screen is the positive direction and the left direction is the negative direction; on the y-axis, the upward direction of the monitor screen is the positive direction and the downward direction is the negative direction; on the z-axis, the depth direction of the monitor screen is the positive direction and the front side of the screen, that is, the operator's viewpoint side, is the negative direction.
- In the x-axis direction within the screen area, virtual coordinate values between -4 and +4 are set, and in the y-axis direction within the screen area, virtual coordinate values between -3 and +3 are set. Of course, virtual coordinate values are also set outside the screen area.
- the operator's viewpoint position PZ is virtually set on the z-axis at the z coordinate "-16".
- Next, the three-dimensional image conversion processing, that is, the mapping into three-dimensional space and the projection from three-dimensional space onto the monitor screen plane, by which the background source video signal V3 to be inserted at the position of the background plate 3 in the video signal V1 is generated from the background source video signal V2, will be described.
- First, the background source video signal V2 is stored in the frame memory 11 as it is, without being subjected to any three-dimensional processing. Since the background source video signal V2 is a two-dimensional video signal, it is a video signal existing at the position M on the monitor screen in three-dimensional space, as shown in FIG. 20(A).
- This background source video signal V2 must be coordinate-transformed to the position of the background plate 3 existing in three-dimensional space, as described above.
- Here, as shown in FIG. 20(A), the background plate 3 is assumed to exist at a position M2, inclined approximately 45° to the screen surface in the positive direction of the z-axis.
- To bring the background source video signal V2 to the position M2 where the background plate 3 exists, a parallel translation in the positive direction of the z-axis and a rotation of approximately 45° about the y-axis must be applied to the background source video signal V2.
- Such a coordinate conversion process can be performed using a three-dimensional transformation matrix T0. That is, by multiplying each pixel of the background source video signal V2 by the three-dimensional transformation matrix T0, a video signal V2′ existing in three-dimensional space can be generated.
- This three-dimensional transformation matrix T0 is generally expressed as

      T0 = | r11 r12 r13 0 |
           | r21 r22 r23 0 |
           | r31 r32 r33 0 |
           | lx  ly  lz  s |
- The transformation parameters r11 to r33 are parameters including elements for rotating the background source video signal V2 about the x-axis, the y-axis and the z-axis, elements for enlarging and reducing its scale in the x-axis, y-axis and z-axis directions, and elements for skewing the background source video signal V2 in the x-axis, y-axis and z-axis directions, respectively.
- the transformation parameters lx, ly and lz are parameters including elements for translating the background source video signal V2 in the x-axis, y-axis and z-axis directions, respectively.
- the transformation parameter s is a parameter determined by the zoom information ZM of the video camera 2, and includes an element for uniformly enlarging or reducing the entire background source video signal V2 in the respective three-dimensional axis directions.
- Since this transformation matrix expresses rotation transformation, translation transformation and scaling transformation in the same single coordinate system, it is a four-row, four-column matrix. Such a coordinate system is generally called a homogeneous coordinate system.
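As an illustration of this homogeneous-coordinate convention, the following sketch builds a 4-by-4 matrix combining a rotation about the y-axis with a translation along z (written here as T0, with parameter names following the description above) and multiplies a homogeneous row vector by it. The exact matrix layout and the row-vector convention are assumptions consistent with the parameter descriptions, not a verbatim reproduction of the patent's equation.

```python
import math

def make_t0(angle_deg, lz, s=1.0):
    """Sketch of a 4x4 homogeneous matrix T0: rotation about the y-axis
    followed by a translation lz along z, with overall parameter s.
    Row-vector convention: [x y z 1] multiplies from the left."""
    c = math.cos(math.radians(angle_deg))
    si = math.sin(math.radians(angle_deg))
    return [[c,   0.0, -si, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [si,  0.0,  c,  0.0],
            [0.0, 0.0,  lz,  s]]

def apply_row(vec, mat):
    """Multiply a homogeneous row vector by a matrix: vec @ mat."""
    return [sum(vec[i] * mat[i][j] for i in range(4)) for j in range(4)]
```

For example, a 90° rotation with lz = 2 carries the point (1, 0, 0) to (0, 0, 1) in homogeneous coordinates, matching the rotate-then-translate reading of the text.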
- Next, the background source video signal V2′, which has been coordinate-transformed into three-dimensional space using the three-dimensional transformation matrix, must be projected onto the monitor screen from the operator's viewpoint in order to be inserted into the video signal V1 at the position of the background plate 3. In other words, as shown in FIG. 20(A), the background source video signal V3 seen on the xy plane when the background source video signal V2′ existing at the position M2 in three-dimensional space is viewed from the virtual viewpoint PZ on the z-axis must be obtained.
- This projection processing can be performed using a perspective transformation matrix P0. That is, by multiplying each pixel of the background source video signal V2′ by the perspective transformation matrix P0, the background source video signal V3, which is the background source video signal V2′ existing in three-dimensional space seen through onto the xy plane, can be obtained.
- The parameter Pz of this perspective transformation matrix P0 is a perspective value for applying perspective when the background source video signal V2′ is seen through onto the xy plane. Normally, the perspective value Pz is set to "1/16" as a reference value; this means that the z coordinate value of the virtual viewpoint PZ is "-16". It should be noted that the perspective value Pz can be changed to a desired value by operator setting.
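One common reading of such a perspective transformation (an assumed reconstruction, since the matrix itself is not reproduced legibly in this text) is that normalizing by the homogeneous weight scales a point's screen coordinates by 1/(z·Pz + 1), placing the viewpoint at z = -1/Pz:

```python
def project(x, y, z, pz=1.0 / 16):
    """Perspectively project a 3-D point onto the xy (monitor screen)
    plane; pz is the perspective value, so the virtual viewpoint PZ
    sits at z = -1/pz (z = -16 for the reference value 1/16)."""
    h = z * pz + 1.0          # homogeneous weight produced by P0
    return x / h, y / h       # normalized screen coordinates
```

A point on the screen plane (z = 0) is unchanged, while a point 16 units behind the screen appears at half scale, consistent with a viewpoint at z = -16.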
- By subjecting the background source video signal V2 to such coordinate transformation into three-dimensional space and projection processing from three-dimensional space onto the xy plane, the background source video signal V3 that can be inserted into the video signal V1 at the position of the background plate 3 is obtained.
- In summary, this conversion process consists of a spatial image conversion step in which the three-dimensional background source video signal V2′ is obtained from the background source video signal V2 using the three-dimensional transformation matrix T0, and a perspective transformation step in which the background source video signal V3 is obtained from the background source video signal V2′ using the perspective transformation matrix P0.
- Accordingly, the image synthesizing apparatus 20 writes the background source video signal V2 into the frame memory 11 in order and reads it out using read addresses based on the transformation matrix T0 and the perspective transformation matrix P0, so that a background source video signal V3 that can be inserted into the background plate 3 of the video signal V1 is generated.
- Here, the background source video signal V2 written to the frame memory 11 and the background source video signal V3 read from the frame memory 11 are both two-dimensional video data, and the frame memory 11 is a memory for storing two-dimensional data. Therefore, in the calculation of the read addresses used for the read operation from the frame memory 11, parameters for calculating data in the z-axis direction of three-dimensional space are substantially unused. Accordingly, of the transformation matrix T shown in equation (7), the parameters in the third column and third row for calculating data in the z-axis direction are unnecessary, and a three-row, three-column matrix T33 with these parameters removed suffices for the address calculation.
- First, let the two-dimensional address on the frame memory 11 be (XM, YM) and its position vector be [XM YM], and let the address on the monitor screen be (Xs, Ys) and its position vector be [Xs Ys]. When the two-dimensional position vector [XM YM] on the frame memory 11 is expressed in the homogeneous coordinate system, it can be expressed as the vector [xm ym H0]; when the position vector [Xs Ys] on the monitor screen is expressed in the homogeneous coordinate system, it can be expressed as the vector [xs ys 1].
- the parameter "H.” in this homogeneous coordinate system is a parameter that indicates the size of the vector.
- To obtain the background source video signal V3, it is necessary to find the point on the frame memory 11 corresponding to each point on the monitor screen. That is, as shown in the following equation obtained by modifying equation (9),

      [xm ym H0] = [xs ys 1] T33⁻¹   ... (10)

  the position vector [xm ym H0] on the frame memory 11 must be calculated from the position vector [xs ys 1] on the monitor screen using the inverse matrix T33⁻¹ of the transformation matrix T33.
- Based on the same concept, the position vector [xs ys 1] in the homogeneous coordinate system on the monitor screen can be converted into the two-dimensional position vector [Xs Ys] by normalizing the parameters "xs" and "ys", which represent the direction of the position vector in the homogeneous coordinate system, by the parameter "1", which represents the size of the position vector in the homogeneous coordinate system. Therefore, the parameters "Xs" and "Ys" of the two-dimensional position vector on the monitor screen are expressed as Xs = xs/1 and Ys = ys/1.
- Similarly, from equation (15), the parameters "XM" and "YM" of the two-dimensional position vector on the frame memory 11 are obtained by normalizing "xm" and "ym" by "H0":

      XM = xm / H0,  YM = ym / H0   ... (15)

  Writing the parameters of the inverse matrix T33⁻¹ as b11 to b33, this yields

      XM = (b11 Xs + b21 Ys + b31) / (b13 Xs + b23 Ys + b33)
      YM = (b12 Xs + b22 Ys + b32) / (b13 Xs + b23 Ys + b33)

  where each parameter b11 to b33 is expressed, through the determinant W1 of T33 (for example, W1 = -a22 a31 a13 + a21 a32 a13 + a12 a31 a23 + …), in terms of the parameters a11 to a33 of the transformation matrix T33 and the perspective value Pz.
- In this way, the read addresses (XM, YM) of the frame memory 11 can be obtained using equations (42) to (44).
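The read-address computation can thus be sketched in a few lines: invert the 3-by-3 matrix (written here as T33), multiply the homogeneous screen vector by the inverse, and normalize by the homogeneous parameter H0. A pure-Python sketch under the row-vector convention used in the text:

```python
def invert3(m):
    """Inverse of a 3x3 matrix via the adjugate divided by the determinant."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[adj[r][col] / det for col in range(3)] for r in range(3)]

def read_address(t33, xs, ys):
    """Map a screen address (Xs, Ys) to a frame-memory read address
    (XM, YM): [xm ym H0] = [xs ys 1] @ T33^-1, then divide by H0."""
    b = invert3(t33)
    xm = xs * b[0][0] + ys * b[1][0] + b[2][0]
    ym = xs * b[0][1] + ys * b[1][1] + b[2][1]
    h0 = xs * b[0][2] + ys * b[1][2] + b[2][2]
    return xm / h0, ym / h0
```

With the identity matrix the read address equals the screen address; with a matrix that doubles memory coordinates on the way to the screen, each screen address reads from the half-scale memory location, which is exactly the inverse-mapping behavior the text describes.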
- Next, a calculation method for each parameter of the transformation matrix T33 will be described. As described above, the relationship between the position vector on the frame memory 11 and the position vector on the monitor screen is as shown in equation (9). Therefore, each parameter of the transformation matrix T33 can be obtained by substituting actual values of the position vectors into equation (9).
- As the position vectors on the monitor screen, the position vectors of the four point light sources P43, P44, P53 and P54 detected by the point light source coordinate detection circuit 21 are used.
- As the position vectors on the frame memory 11, the entire background plate 3 is regarded as the entire memory area of the frame memory, and the position vectors on the frame memory 11 corresponding to the positions of the point light sources P43, P44, P53 and P54 in that case are used.
- If the position vectors of the point light sources P43, P44, P53 and P54 detected by the point light source coordinate detection circuit 21 are denoted in order by [X1 Y1], [X2 Y2], [X3 Y3] and [X4 Y4], and the position vectors on the frame memory 11 corresponding to the original positions of the point light sources P43, P44, P53 and P54 are denoted in order by [X′1 Y′1], [X′2 Y′2], [X′3 Y′3] and [X′4 Y′4], then equation (47) becomes, for each point i,

      [Ki Xi  Ki Yi  Ki] = [X′i Y′i 1] T33

  where Ki is the homogeneous parameter. Substituting equation (53) into equations (51) and (52) to eliminate Ki gives, for the parameters "Xi" and "Yi",

      Xi = (a11 X′i + a21 Y′i + a31) / (a13 X′i + a23 Y′i + a33)
      Yi = (a12 X′i + a22 Y′i + a32) / (a13 X′i + a23 Y′i + a33)

  which can be rearranged into simultaneous linear equations in the parameters of T33 — equations (62), (63) and following.
- the three-dimensional conversion address calculation circuit 22 generates the read addresses to supply to the frame memory 11 in the same manner as described above.
- That is, the three-dimensional conversion address calculation circuit 22 sets up simultaneous linear equations for the parameters of the above-mentioned transformation matrix T33 from the position coordinates of the reference point light sources P43, P44, P53 and P54 supplied as detection data S10 from the point light source coordinate detection circuit 21 and from the positions of the reference point light sources P43, P44, P53 and P54 when the entire background plate 3 is made to correspond to the memory area of the frame memory, and obtains the transformation matrix T33 by solving these simultaneous linear equations.
- Next, the three-dimensional conversion address calculation circuit 22 calculates each parameter of the inverse matrix T33⁻¹ using the obtained transformation matrix T33, obtains the read addresses (XM, YM) based on the parameters of the inverse matrix T33⁻¹ and the screen addresses (Xs, Ys) supplied from the screen address generation circuit 23, and supplies them to the frame memory 11.
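The procedure just described — substituting the four detected screen positions and the four corresponding frame-memory positions into equation (9) and solving the resulting simultaneous linear equations — is a four-point fit of a 3-by-3 homogeneous matrix. A minimal sketch, assuming the row-vector convention used in the text and fixing the lower-right parameter to 1 as a normalization (a choice the text does not spell out):

```python
def gauss_solve(m, b):
    """Solve m @ x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    aug = [row[:] + [bv] for row, bv in zip(m, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= f * aug[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][c] * x[c]
                                for c in range(r + 1, n))) / aug[r][r]
    return x

def solve_t33(mem_pts, scr_pts):
    """Fit the 3x3 matrix T33 mapping frame-memory points to screen
    points ([xm ym 1] @ T33 ~ [K*Xs K*Ys K]), with a33 fixed to 1.
    mem_pts / scr_pts: four (x, y) pairs each."""
    rows, rhs = [], []
    for (xm, ym), (xs, ys) in zip(mem_pts, scr_pts):
        # Xs = (a11 xm + a21 ym + a31) / (a13 xm + a23 ym + 1), same for Ys
        rows.append([xm, ym, 1, 0, 0, 0, -xs * xm, -xs * ym]); rhs.append(xs)
        rows.append([0, 0, 0, xm, ym, 1, -ys * xm, -ys * ym]); rhs.append(ys)
    a = gauss_solve(rows, rhs)   # a11 a21 a31 a12 a22 a32 a13 a23
    return [[a[0], a[3], a[6]],
            [a[1], a[4], a[7]],
            [a[2], a[5], 1.0]]
```

Four correspondences give eight equations in eight unknowns, which is why exactly four reference point light sources are needed; the fit is unique as long as no three of the points are collinear.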
- In step SP2, entered from step SP1, the system controller 6 controls the drive of the projector device 4 so that the plurality of point light sources P11 to Pnn are projected onto the background plate 3 while repeating a blinking pattern with a 5-frame cycle, and a reference pulse signal SP indicating the first frame of the 5-frame cycle is sent to the point light source coordinate detection circuit 21.
- In the next step, the point light source coordinate detection circuit 21 sets the value of the internal counter n to "1" in response to reception of the reference pulse signal SP and, in step SP4, detects the coordinates of the point light sources from the background video of the video signal V1 corresponding to the first frame of the 5-frame cycle.
- the coordinates of the point light sources are, as shown in FIG. 23, coordinates on the monitor screen plane with the center of the monitor screen, that is, the optical axis, as the origin.
- In step SP5, the point light source coordinate detection circuit 21 determines whether the value of the counter n set earlier has reached "5", that is, whether the end of the 5-frame cycle has been reached. In this case, since the value of the counter n is "1", the process proceeds to step SP6, where the point light source coordinate detection circuit 21 adds "1" to the value of the counter n and returns to step SP4.
- In step SP4, the coordinates of the point light sources are similarly detected from the background image of the video signal V1 corresponding to the second frame of the 5-frame cycle.
- By repeating the processing of steps SP4 to SP6 in this way, the coordinates of the point light sources are detected from the background image of the video signal V1 corresponding to each frame of the 5-frame cycle. When this is completed, the value of the counter n is "5", and the point light source coordinate detection circuit 21 proceeds from step SP5 to step SP7.
- In step SP7, the point light source coordinate detection circuit 21 determines whether all the point light sources corresponding to the reference point light sources P43, P44, P53 and P54 exist among the point light sources whose coordinates have been detected from the five frames of the video signal V1. If all the point light sources corresponding to the reference point light sources P43, P44, P53 and P54 exist, the coordinates of the reference point light sources P43, P44, P53 and P54 are output to the three-dimensional conversion address calculation circuit 22 as detection data S10.
- If any of them is missing, the point light source coordinate detection circuit 21 proceeds to step SP8, where the coordinates of each point light source are detected based on the coordinates of the point light sources of each detected frame, and in the next step SP9 the coordinates of the four reference point light sources P43, P44, P53 and P54 are obtained by interpolation processing based on the detected coordinates of each point light source and output to the three-dimensional conversion address calculation circuit 22 as detection data S10.
- Note that all of the reference point light sources P43, P44, P53 and P54 may not be present in the five frames: a reference point light source may be hidden by a person standing in front of the background plate 3, or may deviate from the angle of view of the video camera 2.
- In step SP10, the three-dimensional conversion address calculation circuit 22, having received the coordinates of the reference point light sources P43, P44, P53 and P54 from the point light source coordinate detection circuit 21, calculates each parameter of the transformation matrix T33 based on the coordinates of the point light sources P43, P44, P53 and P54.
- In the next step, the three-dimensional conversion address calculation circuit 22 calculates the read addresses (XM, YM) based on the calculated parameters of the transformation matrix T33, and supplies the read addresses (XM, YM) to the frame memory 11.
- In this way, each parameter of the transformation matrix T33 for image-converting the background source video signal V2 is calculated based on the detected coordinates of the four points, and the read addresses (XM, YM) of the frame memory 11 are calculated based on the obtained parameters of the transformation matrix T33.
- With the above configuration, the image synthesizing device 20 first drives the projector device 4 to project the point light sources P11 to Pnn onto the background plate 3 installed behind the person as the subject. At this time, the point light sources P11 to Pnn are blinked in a constant pattern with a repetition period of 5 frames so that each of the point light sources P11 to Pnn can be identified later.
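The 5-frame blinking scheme amounts to giving each light source a 5-bit on/off identification code that repeats every cycle, aligned to the reference pulse signal SP. A minimal sketch of the matching step; the particular codes below are invented for illustration and are not taken from the patent:

```python
# Hypothetical 5-frame identification codes: 1 = lit, 0 = dark,
# one entry per reference point light source.
ID_CODES = {
    "P43": (1, 0, 1, 1, 0),
    "P44": (1, 1, 0, 1, 0),
    "P53": (1, 0, 0, 1, 1),
    "P54": (1, 1, 1, 0, 0),
}

def identify(observed):
    """Match an observed 5-frame on/off sequence (aligned to the
    reference pulse SP marking frame 1) to a light-source name,
    or return None if no stored code matches."""
    for name, code in ID_CODES.items():
        if tuple(observed) == code:
            return name
    return None
```

In the apparatus, the stored identification codes play the role of the blinking information held in the internal memory of the point light source coordinate detection circuit 21.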
- the video signal V1 is input to the chroma key circuit 8.
- the chroma key circuit 8 extracts a color signal corresponding to the hue of the background plate 3 from the video signal V1 and compares the color signal with a predetermined threshold value, thereby generating a key signal K1 indicating the background portion in the video signal V1 and a key signal K2 indicating the subject portion in the video signal V1.
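The key generation just described can be sketched as a per-pixel hue comparison: pixels whose hue lies within a tolerance of the background hue form K1, and the complement forms K2. The hue-frame representation (degrees per pixel) and the tolerance value are illustrative assumptions, not the circuit's actual signal format:

```python
def make_keys(hue_frame, bg_hue, tol=10.0):
    """Generate key signals from a frame of per-pixel hue values
    (degrees): K1 marks the background (hue within tol of bg_hue,
    with wrap-around at 360), K2 marks the subject.
    Returns two frames of 0/1 values."""
    k1 = [[1 if min(abs(h - bg_hue), 360.0 - abs(h - bg_hue)) <= tol else 0
           for h in row] for row in hue_frame]
    k2 = [[1 - v for v in row] for row in k1]   # subject = complement
    return k1, k2
```

K2 is by construction the complement of K1, matching the text's use of K1 for the background portion and K2 for the subject portion.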
- The point light source coordinate detection circuit 21 first receives the key signal K1 and the video signal V1 and generates a video signal consisting only of the background portion from the video signal V1 based on the key signal K1. Then, the point light source coordinate detection circuit 21 reads out the identification code composed of the blinking information of the point light sources stored in the internal memory and, based on the identification code, detects the four reference point light sources P43, P44, P53 and P54 from the video signal consisting only of the background portion and detects their position coordinates on the monitor screen.
- If the reference point light sources P43, P44, P53 and P54 were not all included in the video signal consisting only of the background, the point light source coordinate detection circuit 21 detects the position coordinates of the other point light sources present in the video signal and, based on the detected position coordinates, obtains the position coordinates of the reference point light sources P43, P44, P53 and P54 by interpolation.
- Next, the three-dimensional conversion address calculation circuit 22 calculates the three-dimensional transformation matrix T33 for image-converting the background source video signal V2, and generates the read addresses (XM, YM) supplied to the frame memory 11 based on each parameter of the transformation matrix T33.
- In the frame memory 11, the background source video signal V2 stored in the memory area is read out based on the read addresses (XM, YM), whereby a background source video signal V3 as if the background image were viewed from the position of the video camera 2 is generated. Therefore, by combining the background source video signal V3 and the video signal V1 based on the key signal K2, it is possible to generate, as shown in FIG. 5, a synthesized video signal V4 in which the background image is deformed according to the position of the video camera 2 and which gives no uncomfortable feeling.
- According to the above configuration, in the image synthesizing apparatus 20 of the second embodiment, four of the plurality of point light sources P11 to Pnn are set in advance as reference point light sources, the position coordinates of the four reference point light sources on the monitor screen are detected from the video signal V1, and a read address for image conversion of the background image is generated based on the position coordinates of the four points. As a result, the background image can be deformed according to the position of the video camera with a simpler configuration than in the first embodiment, and a composite image without a sense of incongruity can be easily generated.
- According to the above configuration, point light sources are projected in a matrix onto the background plate 3, the position coordinates of four of the point light sources on the monitor screen are detected from the video signal V1, and a read address for image conversion is generated based on the detected position coordinates of the four points to deform the background source video signal V2, so that a background image that deforms according to the position of the video camera 2 can be generated with a simpler configuration. In this way, it is possible to realize an image synthesizing device 20 that can naturally change the image of the background to be synthesized in response to the movement of the imaging means with a simpler configuration.
- Alternatively, light emitting diodes may be provided in a matrix on the background plate in advance, and the same effect as described above can be obtained by turning the light emitting diodes on and off in a predetermined pattern.
- the present invention is not limited to this, and the same effect as in the above case can also be obtained by forming point light sources on the background plate by scanning with a single laser beam.
- In short, the same effect as in the above case can be obtained by providing illuminating means for forming light sources serving as a reference for the image synthesis processing on the background plate.
- a case has been described in which the point light sources arranged in a matrix can be identified by blinking in a predetermined pattern, but the present invention is not limited to this.
- the same effect as in the above case can be obtained even if the point light sources are identified by varying their light emission intensity in a predetermined pattern.
- In the first embodiment described above, the case where the distance, the inclination and the like are detected based on the interval between the point light source images has been described; however, the present invention is not limited to this, and the light source image may instead be formed in a rod shape, or formed in a frame shape whose size is determined. In short, the point is to use a light source image formed on the background to be processed; the present invention can be widely applied when detecting the position information of the imaging means in this way.
- the present invention is not limited to this, and can be widely applied to cases where props are used as processing targets instead of a background plate to synthesize various images.
- This can be used in a broadcast station or the like when a composite video is generated by inserting another video into a predetermined area of video data.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Circuits (AREA)
- Image Processing (AREA)
- Processing Of Color Television Signals (AREA)
- Processing Or Creating Images (AREA)
Description
Claims
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-1998-0702733A KR100495191B1 (ko) | 1996-12-26 | 1997-12-26 | 영상합성기및영상합성방법 |
EP97950431A EP0895429A4 (en) | 1996-12-26 | 1997-12-26 | DEVICE AND METHOD FOR SYNTHESIZING IMAGES |
US09/051,001 US6104438A (en) | 1996-12-26 | 1997-12-26 | Image synthesizer and image synthesizing method for synthesizing according to movement |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP34682396 | 1996-12-26 | ||
JP8/346823 | 1996-12-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1998030029A1 true WO1998030029A1 (fr) | 1998-07-09 |
Family
ID=18386054
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP1997/004896 WO1998030029A1 (fr) | 1996-12-26 | 1997-12-26 | Dispositif et procede pour synthese d'images |
Country Status (4)
Country | Link |
---|---|
US (1) | US6104438A (ja) |
EP (1) | EP0895429A4 (ja) |
KR (1) | KR100495191B1 (ja) |
WO (1) | WO1998030029A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10145673A (ja) * | 1996-11-12 | 1998-05-29 | Sony Corp | ビデオ信号処理装置及びビデオ信号処理方法 |
JP2008046103A (ja) * | 2006-07-19 | 2008-02-28 | Shimatec:Kk | 表面検査装置 |
JP2020081756A (ja) * | 2018-11-30 | 2020-06-04 | 国立大学法人静岡大学 | 顔画像処理装置、画像観察システム、及び瞳孔検出システム |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6191812B1 (en) * | 1997-04-01 | 2001-02-20 | Rt-Set Ltd. | Method of providing background patterns for camera tracking |
US6211913B1 (en) * | 1998-03-23 | 2001-04-03 | Sarnoff Corporation | Apparatus and method for removing blank areas from real-time stabilized images by inserting background information |
DE69937298T2 (de) * | 1998-06-22 | 2008-02-07 | Fujifilm Corp. | Bilderzeugungsgerät und verfahren |
JP3241327B2 (ja) * | 1998-08-22 | 2001-12-25 | 大聖電機有限会社 | クロマキーシステム |
JP2000090277A (ja) * | 1998-09-10 | 2000-03-31 | Hitachi Denshi Ltd | 基準背景画像更新方法及び侵入物体検出方法並びに侵入物体検出装置 |
US6570612B1 (en) * | 1998-09-21 | 2003-05-27 | Bank One, Na, As Administrative Agent | System and method for color normalization of board images |
US6674917B1 (en) * | 1998-09-29 | 2004-01-06 | Hitachi, Ltd. | Method of synthesizing an image for any light source position and apparatus therefor |
US6504625B1 (en) * | 1998-12-24 | 2003-01-07 | Champion International Company | System and method for print analysis |
US7230628B1 (en) * | 2000-10-05 | 2007-06-12 | Shutterfly, Inc. | Previewing a framed image print |
US7177483B2 (en) * | 2002-08-29 | 2007-02-13 | Palo Alto Research Center Incorporated. | System and method for enhancement of document images |
US7675540B2 (en) * | 2003-08-19 | 2010-03-09 | Kddi Corporation | Concealed regions complementing system of free viewpoint video images |
FI20065063A0 (fi) * | 2006-01-30 | 2006-01-30 | Visicamet Oy | Menetelmä ja mittalaite mitata pinnan siirtymä |
US8045060B2 (en) * | 2006-10-04 | 2011-10-25 | Hewlett-Packard Development Company, L.P. | Asynchronous camera/projector system for video segmentation |
FR2913749B1 (fr) * | 2007-03-13 | 2009-04-24 | Commissariat Energie Atomique | Matrix lighting system, in particular of the scialytic (surgical-light) type, method for controlling such lighting, and method for calibrating a camera fitted to such a system |
JP4794510B2 (ja) * | 2007-07-04 | 2011-10-19 | ソニー株式会社 | Camera system and method for correcting camera mounting error |
US9055226B2 (en) * | 2010-08-31 | 2015-06-09 | Cast Group Of Companies Inc. | System and method for controlling fixtures based on tracking data |
US9350923B2 (en) | 2010-08-31 | 2016-05-24 | Cast Group Of Companies Inc. | System and method for tracking |
US9448067B2 (en) * | 2011-09-23 | 2016-09-20 | Creatz Inc. | System and method for photographing moving subject by means of multiple cameras, and acquiring actual movement trajectory of subject based on photographed images |
KR20130084720A (ko) * | 2012-01-18 | 2013-07-26 | 삼성전기주식회사 | Image processing apparatus and method |
US20180020167A1 (en) * | 2016-05-10 | 2018-01-18 | Production Resource Group, Llc | Multi Background Image Capturing Video System |
DE102021106488A1 (de) | 2020-12-23 | 2022-06-23 | Arnold & Richter Cine Technik Gmbh & Co. Betriebs Kg | Background display device, background display system, recording system, camera system, digital camera, and method for controlling a background display device |
GB202112327D0 (en) * | 2021-08-27 | 2021-10-13 | Mo Sys Engineering Ltd | Rendering image content |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4200890A (en) * | 1977-07-11 | 1980-04-29 | Nippon Electric Company, Ltd. | Digital video effects system employing a chroma-key tracking technique |
JPS5793788A (en) * | 1980-12-03 | 1982-06-10 | Nippon Hoso Kyokai <Nhk> | Chroma-key device |
JPS5992678A (ja) * | 1982-11-19 | 1984-05-28 | Nec Corp | Key signal detection device |
JPH0342788A (ja) * | 1989-07-11 | 1991-02-22 | Nec Corp | Data collection device |
US5056928A (en) * | 1989-09-12 | 1991-10-15 | Snow Brand Milk Products Co., Ltd. | Method and apparatus for measuring a change in state of a subject fluid |
US5886747A (en) * | 1996-02-01 | 1999-03-23 | Rt-Set | Prompting guide for chroma keying |
1997
- 1997-12-26 US US09/051,001 patent/US6104438A not_active Expired - Fee Related
- 1997-12-26 EP EP97950431A patent/EP0895429A4 not_active Withdrawn
- 1997-12-26 WO PCT/JP1997/004896 patent/WO1998030029A1 active IP Right Grant
- 1997-12-26 KR KR10-1998-0702733A patent/KR100495191B1 not_active IP Right Cessation
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0342788Y2 (ja) * | 1984-12-26 | 1991-09-06 | ||
JPS63271664A (ja) * | 1987-03-17 | 1988-11-09 | Quantel Limited | Image processing apparatus |
JPH02199971A (ja) * | 1988-09-20 | 1990-08-08 | Quantel Ltd | Video processing apparatus |
JPH02292986A (ja) * | 1989-05-08 | 1990-12-04 | Nippon Hoso Kyokai <Nhk> | Key signal generating method for image synthesis |
JPH02306782A (ja) * | 1989-05-22 | 1990-12-20 | Asutoro Design Kk | Image synthesizing apparatus |
JPH05207502A (ja) * | 1992-01-29 | 1993-08-13 | Nippon Hoso Kyokai <Nhk> | Video synthesis system |
WO1994005118A1 (en) | 1992-08-12 | 1994-03-03 | British Broadcasting Corporation | Derivation of studio camera position and motion from the camera image |
JPH07500470A (ja) * | 1992-08-12 | 1995-01-12 | British Broadcasting Corporation | Derivation of studio camera position and movement from camera images |
JPH06105232A (ja) * | 1992-09-24 | 1994-04-15 | Fujita Corp | Image synthesizing apparatus |
Non-Patent Citations (1)
Title |
---|
See also references of EP0895429A4 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10145673A (ja) * | 1996-11-12 | 1998-05-29 | Sony Corp | Video signal processing apparatus and video signal processing method |
JP2008046103A (ja) * | 2006-07-19 | 2008-02-28 | Shimatec:Kk | Surface inspection apparatus |
JP2020081756A (ja) * | 2018-11-30 | 2020-06-04 | 国立大学法人静岡大学 | Face image processing device, image observation system, and pupil detection system |
Also Published As
Publication number | Publication date |
---|---|
KR100495191B1 (ko) | 2005-11-21 |
KR19990064245A (ko) | 1999-07-26 |
EP0895429A4 (en) | 2002-05-02 |
US6104438A (en) | 2000-08-15 |
EP0895429A1 (en) | 1999-02-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO1998030029A1 (fr) | Device and method for image synthesis | |
JP4153146B2 (ja) | Camera array image control method, and camera array | |
US7855752B2 (en) | Method and system for producing seamless composite images having non-uniform resolution from a multi-imager system | |
US6031941A (en) | Three-dimensional model data forming apparatus | |
JP4115117B2 (ja) | Information processing apparatus and method | |
US9756277B2 (en) | System for filming a video movie | |
JP2005500721A (ja) | VTV system | |
WO2003036565A2 (en) | System and method for obtaining video of multiple moving fixation points within a dynamic scene | |
JP2000215311A (ja) | Virtual viewpoint image generation method and apparatus | |
CN110691175B (zh) | Video processing method and apparatus for simulated camera motion tracking in a studio | |
JPH10155109A (ja) | Imaging method and apparatus, and storage medium | |
KR19980041972A (ko) | Video signal processing apparatus and video signal processing method | |
JP3561446B2 (ja) | Image generation method and apparatus | |
US9143700B2 (en) | Image capturing device for capturing an image at a wide angle and image presentation system | |
JP2003179800A (ja) | Multi-viewpoint image generation apparatus, image processing apparatus, method, and computer program | |
Bartczak et al. | Integration of a time-of-flight camera into a mixed reality system for handling dynamic scenes, moving viewpoints and occlusions in real-time | |
JPH10304244A (ja) | Image processing apparatus and method | |
JP2003143477A (ja) | Video synthesis apparatus and method | |
JP3230481B2 (ja) | Television image synthesis system | |
Fukui et al. | A virtual studio system for TV program production | |
JP3577202B2 (ja) | Video production apparatus | |
JP4099013B2 (ja) | Virtual studio video generation apparatus, method, and program | |
JP4006105B2 (ja) | Image processing apparatus and method | |
Foote et al. | Enhancing distance learning with panoramic video | |
JPH1091790A (ja) | Three-dimensional shape extraction method and apparatus, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 1997950431 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 09051001 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1019980702733 Country of ref document: KR |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): JP KR US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): DE FR GB |
|
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | ||
WWP | Wipo information: published in national office |
Ref document number: 1997950431 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 1019980702733 Country of ref document: KR |
|
WWG | Wipo information: grant in national office |
Ref document number: 1019980702733 Country of ref document: KR |