US11954867B2 - Motion vector generation apparatus, projection image generation apparatus, motion vector generation method, and program - Google Patents

Motion vector generation apparatus, projection image generation apparatus, motion vector generation method, and program

Info

Publication number
US11954867B2
US11954867B2 US17/296,464 US201917296464A
Authority
US
United States
Prior art keywords
parameter
motion vector
image
projection
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US17/296,464
Other languages
English (en)
Other versions
US20210398293A1 (en)
Inventor
Taiki FUKIAGE
Shinya Nishida
Takahiro Kawabe
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp filed Critical Nippon Telegraph and Telephone Corp
Assigned to NIPPON TELEGRAPH AND TELEPHONE CORPORATION reassignment NIPPON TELEGRAPH AND TELEPHONE CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KAWABE, TAKAHIRO, FUKIAGE, Taiki, NISHIDA, SHINYA
Publication of US20210398293A1 publication Critical patent/US20210398293A1/en
Application granted granted Critical
Publication of US11954867B2 publication Critical patent/US11954867B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3191Testing thereof
    • H04N9/3194Testing thereof including sensor feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/215Motion-based segmentation
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B21/00Projectors or projection-type viewers; Accessories therefor
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B21/00Projectors or projection-type viewers; Accessories therefor
    • G03B21/14Details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/08Projecting images onto non-planar surfaces, e.g. geodetic screens
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/18Image warping, e.g. rearranging pixels individually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/207Analysis of motion for motion estimation over a hierarchy of resolutions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • H04N5/145Movement estimation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor
    • H04N9/3185Geometric adjustment, e.g. keystone or convergence
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor
    • H04N9/3188Scale or resolution adjustment

Definitions

  • The present invention relates to a technique for making a target that is not actually moving appear as if it is moving.
  • Projection mapping has begun to be widely used as a technique for changing the appearance of a target which is a real object.
  • In projection mapping, the appearance of an object (a projection target) is manipulated by projecting an image (or picture) onto the surface of the object using a projector.
  • Patent Literature 1 proposes a method of giving an impression of motion to a stationary projection target by applying this technique.
  • In this method, a picture is generated by adding motion to a grayscale image of the projection target on a computer, and a picture corresponding to the difference between each frame of the generated picture and the original grayscale image is obtained as the projection image.
  • By setting the projection image in grayscale, it is possible to selectively stimulate the motion information detection mechanism of the human visual system, because the human visual system perceives motion information mainly based on luminance information. On the other hand, because the projection maintains the shape, texture, and color information of the original appearance, it is possible to give only an impression of motion to the projection target while maintaining its natural appearance. Thus, it is possible to make the viewer feel as if the projection target that is not actually moving is moving.
  • However, there is a discrepancy between the projection image containing motion information and the original shape, texture, and color information (the projection target that is not actually moving). If the discrepancy is not so large, it is acceptable to the human visual system and causes no problem in appearance. However, if the discrepancy is large, the projection image does not look fit for the projection target that is not actually moving, giving an unnatural impression. In general, it is known that the degree of discrepancy between the projection image and the projection target tends to increase as the magnitude of the given motion increases.
  • In Patent Literature 1, regarding the projection mapping technique that gives an impression of motion to a real object, the magnitude of motion is manually adjusted to eliminate the sense of discrepancy (the unnaturalness of the projection result) between the projection image and the projection target.
  • Because the magnitudes of motion optimal for the regions and frames of given motion information are generally different, it is a very difficult task to manually optimize all of them.
  • Non Patent Literature 1 proposes a perceptual model that estimates the unnaturalness of a projection result of a projection target when three elements, motion information given to the projection target, an image of the projection target before projection, and an image obtained by photographing the projection result, are given.
  • However, how to optimize the motion information based on such estimation results has not been proposed so far.
  • A motion vector generation apparatus includes: a first parameter generation unit configured to generate a first parameter, which is a parameter for scaling a motion vector, based on a perceptual difference between a projection result reproduction image, which is an image that is obtained when a projection target onto which a projection image obtained based on the motion vector has been projected is photographed, and a warped image, which is an image generated by distorting an image obtained when the projection target is photographed by a perceptual amount of motion perceived when the projection result reproduction image is viewed; and a motion vector reduction unit configured to scale the motion vector using the first parameter.
  • the present invention has an advantage of being able to automatically adjust a motion given to a projection target using a perceptual model.
  • FIG. 1 is a functional block diagram of a projection image generation apparatus according to a first embodiment.
  • FIG. 2 is a diagram illustrating an example of a processing flow of the projection image generation apparatus according to the first embodiment.
  • FIG. 3 is a functional block diagram of a first parameter generation unit according to the first embodiment.
  • FIG. 4 is a diagram showing an example of a processing flow of the first parameter generation unit according to the first embodiment.
  • FIG. 5 is a functional block diagram of an unnaturalness estimation unit according to the first embodiment.
  • FIG. 6 is a diagram illustrating an example of a processing flow of the unnaturalness estimation unit according to the first embodiment.
  • FIG. 7 is a diagram showing an example of an algorithm for three-dimensionally smoothing parameters.
  • FIG. 8 is a functional block diagram of a second parameter generation unit according to the first embodiment.
  • FIG. 9 is a diagram illustrating an example of a processing flow of the second parameter generation unit according to the first embodiment.
  • FIG. 10 is a diagram for explaining a projection method of a projector.
  • FIG. 11 is a diagram showing an example of an algorithm for two-dimensionally smoothing parameters.
  • FIG. 12 is a functional block diagram of a projection image generation apparatus according to a third embodiment.
  • FIG. 13 is a diagram illustrating an example of a processing flow of the projection image generation apparatus according to the third embodiment.
  • FIG. 14 is a functional block diagram of a projection image generation apparatus according to a fourth embodiment.
  • FIG. 15 is a diagram illustrating an example of a processing flow of the projection image generation apparatus according to the fourth embodiment.
  • FIG. 16 is a functional block diagram of an unnaturalness estimation unit according to a fifth embodiment.
  • FIG. 17 is a diagram illustrating an example of a processing flow of the unnaturalness estimation unit according to the fifth embodiment.
  • FIG. 1 is a functional block diagram of a projection image generation apparatus according to a first embodiment and FIG. 2 illustrates a processing flow thereof.
  • The projection image generation apparatus includes a projection target photographing unit 110, a camera-projector pixel correspondence acquisition unit 120, an addition unit 125, a first parameter generation unit 130, a motion vector reduction unit 140, a non-rigid vector extraction unit 150, a second parameter generation unit 160, a motion vector combining unit 170, a projection image generation unit 180, and a projection unit 190.
  • The projection image generation apparatus acquires an input image via a camera included in the projection target photographing unit 110. Apart from this, the projection image generation apparatus takes a motion vector v(x, y, t) given to the projection target as an input. However, if a projection image is generated using the input motion vector as it is, the projection result may have an appearance aberration (unnaturalness) because the magnitude of the vector is too large. In order to prevent this, the first parameter generation unit 130 generates a parameter (hereinafter also referred to as a first parameter) α(x, y, t) for scaling the motion vector v(x, y, t) such that unnaturalness does not occur.
  • The non-rigid vector extraction unit 150 extracts a non-rigid motion vector component Δv h (x, y, t) included in the motion vector v(x, y, t) and adds the extracted component to the motion vector to increase the magnitude of the motion vector.
  • The second parameter generation unit 160 generates a coefficient (hereinafter also referred to as a second parameter) α 2 (x, y, t) for scaling the non-rigid motion vector component Δv h .
  • The motion vector combining unit 170 calculates α(x, y, t)v(x, y, t) + α 2 (x, y, t)Δv h (x, y, t) as an optimal motion vector (hereinafter also referred to as a combined vector).
  • The projection image generation unit 180 generates a projection image (a projection pattern) using the optimal motion vector.
  • the projection unit 190 projects the generated projection image onto the projection target.
  • The projection target photographing unit 110 of the projection image generation apparatus includes a photographing device such as a camera and is configured to acquire an input image captured by the photographing device.
  • Alternatively, the projection target photographing unit 110 may not include a photographing device and may be configured to receive, as an input, an image captured by a photographing device which is a separate device.
  • the projection unit 190 of the projection image generation apparatus includes a projection device such as a projector and is configured to project a generated projection image onto the projection target.
  • the projection unit 190 may be configured to output the projection image to a projection device which is a separate device and this projection device may be configured to project the projection image onto the projection target.
  • The present embodiment will be described assuming that the photographing device is a camera and the projection device is a projector.
  • The projection image generation apparatus is, for example, a special apparatus formed by loading a special program into a known or dedicated computer having a central processing unit (CPU), a main storage device (a random access memory (RAM)), and the like.
  • The projection image generation apparatus executes, for example, each process under the control of the CPU.
  • Data input to the projection image generation apparatus and data obtained through each process are stored, for example, in the main storage device, and the data stored in the main storage device is read out to the central processing unit as needed and used for other processing.
  • Each processing unit of the projection image generation apparatus may be at least partially configured by hardware such as an integrated circuit.
  • Each storage unit included in the projection image generation apparatus can be configured, for example, by a main storage device such as a random access memory (RAM) or by middleware such as a relational database or a key-value store.
  • Each storage unit does not necessarily have to be provided inside the projection image generation apparatus; it may be configured by a hard disk, an optical disc, or an auxiliary storage device formed of a semiconductor memory device such as a flash memory, and may be provided outside the projection image generation apparatus.
  • the projection target photographing unit 110 takes images captured by a camera included in the projection target photographing unit 110 as inputs and uses the input images to acquire and output a minimum luminance image I Min (x, y) and a maximum luminance image I Max (x, y) which are used as inputs to the first parameter generation unit 130 and the projection image generation unit 180 .
  • Here, I Min (x, y) denotes the minimum luminance image and I Max (x, y) denotes the maximum luminance image.
  • (x, y) represents the coordinates of each pixel.
  • the minimum luminance image I Min (x, y) can be acquired from an image that the camera has obtained by photographing the projection target when the projector projects minimum luminance toward the projection target.
  • the maximum luminance image I Max (x, y) can be acquired from an image that the camera has obtained by photographing the projection target when the projector projects maximum luminance toward the projection target.
  • the projection target photographing unit 110 stores the minimum and maximum luminance images I Min (x, y) and I Max (x, y) in a storage unit (not illustrated).
  • The images are acquired in grayscale, or are acquired in color, converted to grayscale, and used in grayscale.
  • In addition, the luminance of a location in the region photographed by the camera is measured using a luminance meter or the like.
  • A ratio η obtained by dividing the luminance value at this location by the corresponding pixel value of the camera is stored in the storage unit.
  • Unnaturalness estimation units 134 and 165 in the first and second parameter generation units 130 and 160 use the ratio η when converting a pixel value of an image captured by the camera into a luminance value.
  • It is desirable that the camera be corrected such that the physical brightness (luminance) of the photographing target and the pixel value of the captured image have a linear relationship.
  • the camera-projector pixel correspondence acquisition unit 120 acquires and outputs the correspondence between a camera coordinate system and a projector coordinate system. For example, the camera-projector pixel correspondence acquisition unit 120 acquires and outputs mapping to the projector coordinates (p x , p y ) when viewed from the camera coordinates (c x , c y ) (a C2P map) and mapping to the camera coordinates (c x , c y ) when viewed from the projector coordinates (p x , p y ) (a P2C map).
  • Map acquisition methods include, for example, a method according to Reference 1 in which, while a projector projects a sequence of Gray code patterns, images that a camera has obtained by photographing the projection results are taken as inputs to decode the Gray code, thereby obtaining a C2P map.
  • the P2C map is obtained by referring back to coordinates (c x , c y ) in the C2P map to which the coordinates (p x , p y ) of the projector coordinate system are mapped.
  • A defect in the P2C map that occurs when corresponding coordinates (p x , p y ) do not exist in the C2P map can be interpolated using, for example, a median value of the values in a surrounding range of 5 pixels × 5 pixels.
  • the range of pixels used for interpolation is not limited to this and it is desirable that the range be adjusted according to the size of the defect.
  • the P2C map is used in the first parameter generation unit 130 , the second parameter generation unit 160 , and the projection image generation unit 180 .
  • The C2P map is used in the first and second parameter generation units 130 and 160.
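  • As an illustration of this inversion and hole-filling step, the following is a minimal Python sketch (an assumption-laden sketch, not the patent's implementation; the array layout and function names are hypothetical):

```python
import numpy as np

def build_p2c(c2p_x, c2p_y, proj_w, proj_h):
    """Invert a C2P map (camera -> projector) into a P2C map (projector -> camera).

    c2p_x, c2p_y: integer arrays of shape (cam_h, cam_w) giving, for each camera
    pixel, the decoded projector coordinates (-1 where undecodable).
    Returns p2c_x, p2c_y of shape (proj_h, proj_w), NaN where no camera pixel mapped.
    """
    cam_h, cam_w = c2p_x.shape
    p2c_x = np.full((proj_h, proj_w), np.nan)
    p2c_y = np.full((proj_h, proj_w), np.nan)
    cy, cx = np.mgrid[0:cam_h, 0:cam_w]
    valid = (c2p_x >= 0) & (c2p_x < proj_w) & (c2p_y >= 0) & (c2p_y < proj_h)
    # Scatter camera coordinates to the projector pixels they decode to.
    p2c_x[c2p_y[valid], c2p_x[valid]] = cx[valid]
    p2c_y[c2p_y[valid], c2p_x[valid]] = cy[valid]
    return p2c_x, p2c_y

def fill_defects(p2c, win=5):
    """Fill NaN defects with the median of valid values in a win x win neighborhood."""
    out = p2c.copy()
    r = win // 2
    ys, xs = np.where(np.isnan(p2c))
    for y, x in zip(ys, xs):
        patch = p2c[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
        vals = patch[~np.isnan(patch)]
        if vals.size:
            out[y, x] = np.median(vals)
    return out
```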
  • the addition unit 125 takes the minimum and maximum luminance images I Min (x, y) and I Max (x, y) as inputs and obtains and outputs an intermediate luminance image I 0 (x, y).
  • g has a value in a range of [0, 1].
  • A final projection image is generated to give an impression of motion while preserving the appearance in color and shape of this intermediate luminance image I 0 (x, y).
  • In other words, the final projection image gives an impression of motion while maintaining the appearance under ambient light, excluding light from the projector.
  • the contrast polarity of the pattern of the projection target can only shift in the direction of bright → dark.
  • the contrast polarity of the pattern of the projection target can only shift in the direction of dark → bright.
  • g needs to be greater than 0 and less than 1.
  • If g is set too large, the natural appearance of the projection target may be impaired. Thus, in many cases, a value of g of about 0.1 to 0.3 can be said to be appropriate. However, it may be better to set g larger than this if the ambient light is very bright.
  • the intermediate luminance image I 0 (x, y) is output to the first parameter generation unit 130 and the projection image generation unit 180 .
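  • The blending equation of the addition unit 125 is not reproduced above; assuming the common form I 0 (x, y) = g·I Max (x, y) + (1 − g)·I Min (x, y), a minimal sketch would look as follows (the function name and default g are hypothetical):

```python
import numpy as np

def intermediate_luminance(i_min, i_max, g=0.2):
    """Blend the minimum and maximum luminance images with weight g in [0, 1].

    Assumed form (the equation itself is not reproduced in the text above):
        I_0(x, y) = g * I_Max(x, y) + (1 - g) * I_Min(x, y)
    A g of about 0.1-0.3 leaves headroom to shift contrast in both directions.
    """
    g = float(np.clip(g, 0.0, 1.0))
    return g * i_max + (1.0 - g) * i_min
```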
  • The above processes of the projection target photographing unit 110, the camera-projector pixel correspondence acquisition unit 120, and the addition unit 125 are performed before the motion vector v(x, y, t) is input, to obtain the minimum luminance image I Min (x, y), the maximum luminance image I Max (x, y), the intermediate luminance image I 0 (x, y), the P2C map, the C2P map, and the ratio η.
  • The first parameter generation unit 130 takes the minimum luminance image I Min (x, y), the maximum luminance image I Max (x, y), the intermediate luminance image I 0 (x, y), and the motion vector v(x, y, t) as inputs, obtains a first parameter α(x, y, t) using these inputs (S 130), and outputs the first parameter α(x, y, t).
  • The first parameter is a parameter for scaling the magnitude of the motion vector v(x, y, t).
  • t represents the frame number.
  • The motion vector is also called a distortion map.
  • It is assumed that the ratio η, the P2C map, and the C2P map are input to and set in the first parameter generation unit 130 in advance, before the motion vector v(x, y, t) is input.
  • The first parameter generation unit 130 generates the first parameter α(x, y, t) based on a perceptual difference d i (t) between a projection result reproduction image I P i (x, y, t), which will be described later, and an ideal distorted image without unnaturalness I W(β) i (x, y, t), which will be described later.
  • FIG. 3 is a functional block diagram of the first parameter generation unit 130 and FIG. 4 illustrates an example of a processing flow thereof.
  • the first parameter generation unit 130 includes a region division unit 131 , a projection result generation unit 132 , a multiplication unit 133 , an unnaturalness estimation unit 134 , a first parameter update unit 135 , and a first parameter smoothing unit 136 .
  • Processing is performed in the following order. First, processing is executed by the region division unit 131. Then, processing of a loop starting from the first parameter update unit 135 is performed in the order of the first parameter update unit 135 → the multiplication unit 133 → the projection result generation unit 132 → the unnaturalness estimation unit 134 → the first parameter update unit 135. When a certain condition is satisfied, the loop ends and the process proceeds from the first parameter update unit 135 to the first parameter smoothing unit 136. The control of the loop is included in the processing of the first parameter update unit 135. Details will be described later.
  • The region division unit 131 takes the minimum luminance image I Min (x, y), the maximum luminance image I Max (x, y), the intermediate luminance image I 0 (x, y), and the motion vector v(x, y, t) as inputs and divides each into a predetermined number of divisions or into small regions having a predetermined size (for example, 64 pixels × 64 pixels) (S 131).
  • The size of each small region is not limited to this, but it needs to be large enough that a Laplacian pyramid, which will be described later, can be generated within one region.
  • A region-divided minimum luminance image I Min i (x, y) and a region-divided maximum luminance image I Max i (x, y) are output to the projection result generation unit 132, a region-divided intermediate luminance image I 0 i (x, y) is output to the projection result generation unit 132 and the unnaturalness estimation unit 134, and a region-divided motion vector v i (x, y, t) is output to the multiplication unit 133.
  • a set of the region-divided minimum luminance image I Min i (x, y), the region-divided maximum luminance image I Max i (x, y), and the region-divided intermediate luminance image I 0 i (x, y) is stored in a storage unit (not illustrated).
  • the region-divided minimum luminance image I Min i (x, y), the region-divided maximum luminance image I Max i (x, y), and the region-divided intermediate luminance image I 0 i (x, y) stored in the storage unit are read and used by the projection result generation unit 162 and the unnaturalness estimation unit 165 of the second parameter generation unit 160 .
  • the subsequent processing of the first parameter generation unit 130 is performed independently for each frame t of each region i.
  • One first parameter α i (t) is output for each frame t of each region i, and when first parameters α i (t) are obtained for all regions/frames, they are collectively input to the first parameter smoothing unit 136.
  • The multiplication unit 133 takes the region-divided motion vector v i (x, y, t) and the current first parameter α i (t) of the region i as inputs.
  • A value output from the first parameter update unit 135 is used as the current first parameter α i (t).
  • The multiplication unit 133 multiplies the region-divided motion vector v i (x, y, t) by the current first parameter α i (t) of the region i (S 133) and outputs the product (the vector α i (t)v i (x, y, t)) to the projection result generation unit 132 and the unnaturalness estimation unit 134.
  • The projection result generation unit 132 takes the region-divided minimum luminance image I Min i (x, y), the region-divided maximum luminance image I Max i (x, y), the region-divided intermediate luminance image I 0 i (x, y), the motion vector α i (t)v i (x, y, t) scaled by the current first parameter, the P2C map, and the C2P map as inputs and outputs a projection result reproduction image I P i (x, y, t) of the region i to which the current first parameter has been applied.
  • The projection result generation unit 132 generates the projection result reproduction image I P i (x, y, t) to which the current first parameter α i (t) has been applied as follows (S 132).
  • The projection result reproduction image is an image that is assumed to be obtained when the camera photographs the projection target onto which a projection image obtained based on the motion vector α i (t)v i (x, y, t) has been projected.
  • the projection result generation unit 132 obtains the projection result reproduction image through simulation on a computer.
  • First, the projection result generation unit 132 distorts the intermediate luminance image I 0 i (x, y) based on the motion vector α i (t)v i (x, y, t) scaled by the current first parameter α i (t) to obtain a distorted image I W i (x, y, t). Any distortion method may be applied.
  • For example, the image is divided into grid cells having a size of 4 pixels × 4 pixels, the vertices are moved by the motion vectors α i (t)v i (x, y, t) corresponding to the coordinates of the vertices, and the regions surrounded by the vertices are filled with the original images of the squares while those images are stretched (or shrunk) using a bilinear interpolation method or the like.
  • The cell size of the grid is not limited to 4 pixels × 4 pixels, and it is desirable that the image be divided at a resolution with a cell size which is smaller than the region size used in the image division of the region division unit 131 and is sufficient to express the characteristics of the motion vector v i (x, y, t).
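  • As a rough illustration of this distortion step, the sketch below uses a simplified per-pixel backward warp in place of the 4 pixels × 4 pixels mesh warp described above (an assumption for brevity, not the embodiment's method):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(img, vx, vy):
    """Distort `img` by the displacement field (vx, vy).

    img, vx, vy: float arrays of shape (H, W); (vx, vy) is the motion vector
    alpha_i(t) * v_i(x, y, t) sampled at every pixel. A backward warp with
    bilinear sampling is used as a stand-in for the mesh-based forward warp.
    """
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    # Sample the source image at positions displaced against the motion vector.
    coords = np.stack([yy - vy, xx - vx])
    return map_coordinates(img, coords, order=1, mode='nearest')
```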
  • the projection result generation unit 132 obtains an ideal projection image I M i (x, y, t) (a projection image without consideration of the physical restrictions of the projector used) for reproducing the distorted image I W i (x, y, t) using the following equation.
  • The value of I M i (x, y, t) obtained using Equation (2) is limited to the physically projectable range [0, 1] of the projector.
  • the projection result generation unit 132 maps the image obtained in the previous step to the projector coordinate system based on the P2C map and then maps it to the camera coordinate system again based on the C2P map. This makes the projection image coarse in the camera coordinate system according to the resolution of the projector. For accurate reproduction, the resolution of the camera needs to be sufficiently higher than the resolution of the projector.
  • The image obtained here is Î M i (x, y, t).
  • The projection result generation unit 132 obtains the projection result reproduction image I P i (x, y, t) based on the following equation and outputs it to the unnaturalness estimation unit 134.
  • I P i (x, y, t) = Î M i (x, y, t) I Max i (x, y) + (1 − Î M i (x, y, t)) I Min i (x, y)   [Math. 3]
  • That is, the projection result reproduction image I P i (x, y, t) represents the value of light emitted from the projector and can be obtained by linearly interpolating a pixel value of the region-divided minimum luminance image I Min i (x, y) and a pixel value of the region-divided maximum luminance image I Max i (x, y) using a pixel value of the image Î M i (x, y, t) as a weight.
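  • A direct implementation of Math. 3 might look as follows (a sketch; the array names are hypothetical):

```python
import numpy as np

def projection_result(i_m_hat, i_max, i_min):
    """Reproduce the projection result per Math. 3:
        I_P = I^_M * I_Max + (1 - I^_M) * I_Min
    where I^_M is the ideal projection image clipped to the projector's
    physically projectable range [0, 1] and resampled through the P2C/C2P maps.
    """
    i_m_hat = np.clip(i_m_hat, 0.0, 1.0)
    return i_m_hat * i_max + (1.0 - i_m_hat) * i_min
```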
  • The unnaturalness estimation unit 134 takes the ratio η, the intermediate luminance image I 0 i (x, y), the projection result reproduction image I P i (x, y, t), and the motion vector α i (t)v i (x, y, t) multiplied by the first parameter α i (t) as inputs, obtains an unnaturalness estimate d i Min (t) of the projection result using these inputs (S 134), and outputs the unnaturalness estimate d i Min (t).
  • the processing is performed independently for each region i and each frame t.
  • the unnaturalness estimation unit 134 estimates the unnaturalness of the projection based on the method proposed in Non Patent Literature 1. An overview of the process will be briefly described below.
  • The unnaturalness estimation unit 134 outputs a minimum value d i Min (t) of the perceptual difference d i (t) between the projection result reproduction image I P i (x, y, t) and the ideal distorted image without unnaturalness (also referred to as a warped image) I W(β) i (x, y, t) as the "unnaturalness of the projection result".
  • Obtaining the minimum value of the perceptual difference d i (t) corresponds to obtaining the smallest value of the distance (a smallest distance) between a feature vector representing the perceptual representation of the warped image I W(β) i (x, y, t) and a feature vector representing the perceptual representation of the projection result reproduction image I P i (x, y, t), which are obtained by applying a perceptual model that will be described later.
  • This "ideal distorted image without unnaturalness I W(β) i (x, y, t)" is generated on the computer by distorting the original intermediate luminance image I 0 i (x, y) by the "perceptual amount of motion β i α i (t)v i (x, y, t) perceived when the projection result reproduction image I P i (x, y, t) is viewed".
  • β i is a coefficient (hereinafter referred to as a third parameter) for scaling the input motion vector to make it correspond to the perceptual amount of motion.
  • The third parameter β i is estimated as a value which minimizes the perceptual difference d i (t) between the projection result reproduction image I P i (x, y, t) and the warped image I W(β) i (x, y, t). That is, the unnaturalness estimation unit 134 simultaneously estimates the third parameter β i that determines the "perceptual amount of motion perceived when the projection result reproduction image I P i (x, y, t) is viewed" and the unnaturalness estimate d i Min (t).
  • FIG. 5 is a functional block diagram of the unnaturalness estimation unit 134 and FIG. 6 illustrates an example of a processing flow thereof.
  • the unnaturalness estimation unit 134 includes a third parameter multiplication unit 134 A, a warped image generation unit 134 B, a third parameter update unit 134 C, a perceptual model application unit 134 D, and a perceptual difference calculation unit 134 E. Processing is performed in the following order.
  • Processing of a loop starting from the third parameter update unit 134 C is performed in the order of the third parameter update unit 134 C → the third parameter multiplication unit 134 A → the warped image generation unit 134 B → the perceptual model application unit 134 D → the perceptual difference calculation unit 134 E → the third parameter update unit 134 C.
  • When a certain condition is satisfied, the loop ends and the third parameter update unit 134 C outputs the unnaturalness estimate d i Min (t) to the first parameter update unit 135.
  • The control of the loop is included in the processing of the third parameter update unit 134 C.
  • the process will be described in order.
  • The third parameter multiplication unit 134 A takes the motion vector α i (t)v i (x, y, t) multiplied by the first parameter α i (t) and the current third parameter β i as inputs.
  • A value output from the third parameter update unit 134 C is used as the current third parameter β i .
  • The third parameter multiplication unit 134 A multiplies the motion vector α i (t)v i (x, y, t), which has been multiplied by the first parameter α i (t), by the current third parameter β i (S 134 A) and outputs the product (the vector β i α i (t)v i (x, y, t)) to the warped image generation unit 134 B.
  • The warped image generation unit 134 B takes the intermediate luminance image I 0 i (x, y) and the motion vector β i α i (t)v i (x, y, t) scaled by the first and third parameters as inputs, distorts the intermediate luminance image I 0 i (x, y) based on the motion vector β i α i (t)v i (x, y, t) to obtain a warped image I W(β) i (x, y, t), and outputs the warped image I W(β) i (x, y, t) (S 134 B). Any distortion method may be applied.
  • For example, the image is divided into grid cells having a size of 4 pixels × 4 pixels, the vertices are moved by the vectors β i α i (t)v i (x, y, t) corresponding to the coordinates of the vertices, and the regions surrounded by the vertices are filled with the original images of the squares while those images are stretched (or shrunk) using a bilinear interpolation method or the like.
  • The cell size of the grid is not limited to 4 pixels × 4 pixels, and it is desirable that the image be divided at a resolution with a cell size which is smaller than the region size used in the image division of the region division unit 131 and is sufficient to express the characteristics of the motion vector v i (x, y, t).
  • The perceptual model application unit 134 D takes the warped image I W(β) i (x, y, t), the projection result reproduction image I P i (x, y, t), and the ratio η as inputs and obtains and outputs a perceptual response r′(x, y, t) to the warped image I W(β) i (x, y, t) and a perceptual response r(x, y, t) to the projection result reproduction image I P i (x, y, t).
  • Each of the input images (the warped image I W(β) i (x, y, t) and the projection result reproduction image I P i (x, y, t)) will be hereinafter referred to as I(x, y) (the indices i and t indicating the region and the frame are omitted for the sake of simplicity).
  • the perceptual model application unit 134 D applies the perceptual model to the input image to obtain the perceptual response (S 134 D).
  • a model that models up to the primary visual cortex corresponding to an initial stage of the human visual system is adopted as a perceptual model.
  • This model that models up to the primary visual cortex takes an image as an input and outputs a response to the input image at spatial frequency components and orientation components of each pixel (region) of the input image (a result of simulating the response of nerve cells).
  • This model can also be said to be a model for obtaining a feature vector representing the perceptual representation of the warped image I W( ⁇ ) i (x, y, t) and a feature vector representing the perceptual representation of the projection result reproduction image I P i (x, y, t).
  • this model uses a linear filter to decompose the input image into a plurality of spatial frequency bands and orientations.
  • the model non-linearly corrects (controls the gains of) values, corresponding to each pixel, of the components obtained through decomposition and outputs the corrected values as the response described above.
  • The present embodiment, for example, omits the process of analyzing the orientation components of the image in consideration of calculation speed.
  • the model of the perceptual response is not limited to the implementation described here, and a model including the analysis of orientation components or a model that reproduces a response of the higher-order visual cortex may be used.
  • First, the pixel value of the input image I(x, y) is multiplied by the ratio η acquired by the projection target photographing unit 110 to convert the pixel value into a luminance unit.
  • Next, the input image converted into the luminance unit is converted into a just noticeable difference (JND) scale image L(x, y) using a method described in Reference 2.
  • On the JND scale, the luminance is mapped such that a luminance change corresponding to the threshold above which the change is just perceivable is defined as 1. That is, when ψ(L) is defined as a function that converts the JND scale value L into luminance, the following equation is obtained.
  • Here, tvi is a function that gives the threshold of the luminance change for the adaptive luminance.
  • The present embodiment uses the following equation for tvi, following Reference 2.
  • Here, (c 1 , c 2 , c 3 , c 4 ) = (30.162, 4.0627, 1.66596, 0.2712) and Y is the adaptive luminance.
  • ψ is obtained as a numerical solution of Equation (3) and stored in a lookup table, and a JND scale value is obtained from a luminance value by referring to the lookup table.
  • The lookup table stores values that are discrete to some extent in order to save storage space, and when intermediate values between them are needed, sufficient results can be obtained using linear interpolation.
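  • The sketch below illustrates how such a luminance-to-JND lookup table could be built and queried with linear interpolation; the tvi() stand-in is a placeholder because the exact threshold function of Reference 2 is not reproduced here:

```python
import numpy as np

def tvi(y):
    """Placeholder threshold-versus-intensity function (cd/m^2).

    The embodiment uses a four-constant function from Reference 2; any
    monotone approximation can be substituted for this hypothetical stand-in.
    """
    return 0.01 * y + 0.01

def build_jnd_table(y_min=1e-3, y_max=1e4):
    """Tabulate luminance values psi(L) spaced one just-noticeable difference apart."""
    lums = [y_min]
    while lums[-1] < y_max:
        lums.append(lums[-1] + tvi(lums[-1]))  # next level is one threshold higher
    return np.asarray(lums)                    # psi(L) for L = 0, 1, 2, ...

def luminance_to_jnd(y, table):
    """Convert luminance to the JND scale by linearly interpolated table lookup."""
    idx = np.arange(len(table), dtype=np.float64)
    return np.interp(y, table, idx)
```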
  • a Laplacian pyramid is generated from the JND scale image L(x, y) and a plurality of bandpass images b 0 (x, y), b 1 (x, y), b 2 (x, y), . . . , and b N-1 (x, y) are obtained.
  • In the present embodiment, the number of bandpass images is N = 5.
  • the value of N is not limited to this and it is considered better to increase N as the projection target is photographed at a higher resolution.
  • the resolution decreases toward a bandpass image in a lower spatial frequency band due to downsampling.
  • downsampling is not performed in order to improve the accuracy.
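  • A minimal sketch of this bandpass decomposition without downsampling (a Laplacian "stack" built from differences of Gaussian blurs; the blur scales are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_stack(jnd_img, n_bands=5, sigma0=1.0):
    """Decompose a JND-scale image into bandpass images b_0 .. b_{N-1}.

    No downsampling is performed (as in the embodiment): each band is the
    difference of Gaussian blurs at successive scales, and the last band keeps
    the low-frequency residual. sigma0 and the octave spacing are assumptions.
    """
    bands = []
    prev = jnd_img.astype(np.float64)
    for j in range(n_bands - 1):
        blurred = gaussian_filter(prev, sigma=sigma0 * (2 ** j))
        bands.append(prev - blurred)   # bandpass component b_j
        prev = blurred
    bands.append(prev)                  # residual low-frequency band b_{N-1}
    return bands
```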
  • the weight w j is represented by the following function.
  • Here, s and σ are constants that determine the shape of the weighting function.
  • The weighting function is not limited to this, and the parameters may be reset according to observation conditions or the like.
  • p and σ are constants that determine the shape of the contrast gain adjustment function.
  • The contrast gain adjustment function is not limited to this, and any function may be used as long as it can approximate the response of the visual system.
  • The above processing is performed for each of the warped image I W(β) i (x, y, t) and the projection result reproduction image I P i (x, y, t) to obtain a perceptual response r′ j i (x, y, t) to the warped image I W(β) i (x, y, t) and a perceptual response r j i (x, y, t) to the projection result reproduction image I P i (x, y, t), and the obtained perceptual responses are output to the perceptual difference calculation unit 134 E.
  • A vector having the perceptual responses r′ j i (x, y, t) as elements is the feature vector representing the perceptual representation of the warped image I W(β) i (x, y, t) described above, and a vector having the perceptual responses r j i (x, y, t) as elements is the feature vector representing the perceptual representation of the projection result reproduction image I P i (x, y, t) described above.
  • the perceptual difference calculation unit 134 E takes the perceptual response r′ j i (x, y, t) to the warped image and the perceptual response r j i (x, y, t) to the projection result reproduction image as inputs and obtains and outputs a distance d i (t) between the input perceptual responses.
  • the perceptual difference calculation unit 134 E calculates the distance d i (t) between the perceptual responses using the following equation (S 134 E).
  • N x and N y represent the horizontal and vertical sizes of the perceptual response r j i (x, y, t) or r′ j i (x, y, t), respectively.
  • the perceptual responses r j i (x, y, t) and r′ j i (x, y, t) have the same size.
  • ln is a function that calculates the natural logarithm. The distance calculation method is not limited to this, and for example, a normal Euclidean distance or a Manhattan distance may be used.
  • The perceptual responses r j i (x, y, t) and r′ j i (x, y, t) may be spatially pooled into local regions of p x pixels × p y pixels, such that their size is reduced to 1/p x and 1/p y in the horizontal and vertical directions, and then substituted into Equation (7).
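  • As an illustration of this optional pooling, the sketch below block-averages a response over p x × p y regions (block averaging is an assumed pooling operator, not necessarily the one used in the embodiment):

```python
import numpy as np

def pool_response(r, px=4, py=4):
    """Spatially pool a perceptual response into px x py blocks (block average),
    reducing its size to 1/px horizontally and 1/py vertically before the
    distance of Equation (7) is evaluated.
    """
    h, w = r.shape
    h2, w2 = (h // py) * py, (w // px) * px      # crop to a multiple of the block size
    r = r[:h2, :w2]
    return r.reshape(h2 // py, py, w2 // px, px).mean(axis=(1, 3))
```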
  • The third parameter update unit 134 C controls a process of searching for the third parameter.
  • The third parameter update unit 134 C searches for a third parameter which minimizes the perceptual difference d i (t) obtained by the perceptual difference calculation unit 134 E.
  • In other words, the third parameter update unit 134 C estimates the third parameter as a value (a coefficient for scaling the motion vector) which minimizes the distance between a feature vector representing the perceptual representation of the warped image I W(β) i (x, y, t) (a vector having the perceptual responses r′ j i (x, y, t) as elements) and a feature vector representing the perceptual representation of the projection result reproduction image I P i (x, y, t) (a vector having the perceptual responses r j i (x, y, t) as elements).
  • In the present embodiment, an example in which a golden section search method is used to search for the third parameter will be described, although another search algorithm, for example, a ternary search method, may be used.
  • The third parameter update unit 134 C takes the perceptual difference d i (t) obtained with the third parameter of the previous cycle as an input and outputs the third parameter β i of the next cycle. However, in the first cycle, the third parameter update unit 134 C performs only the output because there is no input. In the final cycle, the third parameter update unit 134 C outputs the minimum perceptual difference d i (t) as the unnaturalness estimate d i Min (t).
  • the third parameter update unit 134 C updates the third parameter such that the perceptual difference d i (t) becomes smaller (S 134 C).
  • The third parameter update unit 134 C uses, for example, the golden section search method.
  • The third parameter update unit 134 C defines L(k) and H(k) as lower and upper limits of a search section in the kth cycle.
  • The third parameter update unit 134 C divides the search section at two points into three sections, compares the outputs of the function (the perceptual differences d i (t) in this example) when the values of the division points (the values of the third parameter in this example) are taken as inputs, and shortens the search section.
  • The values of L(0), H(0), A(0), and B(0) are stored in the storage unit.
  • (A(k), B(k)) = (B(k − 1), (L(k) + φH(k))/(1 + φ)), where φ is the golden ratio.
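  • A generic golden section search of the kind described here can be sketched as follows (the interval handling and iteration count are assumptions; in the embodiment the function being minimized is the perceptual difference d i (t) as a function of the third parameter):

```python
def golden_section_search(f, lo, hi, n_iter=20):
    """Minimize a unimodal 1-D function f on [lo, hi] by golden section search."""
    phi = (1.0 + 5.0 ** 0.5) / 2.0
    a = (phi * lo + hi) / (1.0 + phi)   # lower interior division point
    b = (lo + phi * hi) / (1.0 + phi)   # upper interior division point
    fa, fb = f(a), f(b)
    for _ in range(n_iter):
        if fa < fb:                      # minimum lies in [lo, b]
            hi, b, fb = b, a, fa
            a = (phi * lo + hi) / (1.0 + phi)
            fa = f(a)
        else:                            # minimum lies in [a, hi]
            lo, a, fa = a, b, fb
            b = (lo + phi * hi) / (1.0 + phi)
            fb = f(b)
    return (a, fa) if fa < fb else (b, fb)
```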
  • This is output to the first parameter update unit 135 when the unnaturalness estimation unit is used in the first parameter generation unit 130 (as the unnaturalness estimation unit 134 ) and output to the second parameter update unit 166 when the unnaturalness estimation unit is used in the second parameter generation unit 160 (as the unnaturalness estimation unit 165 ).
  • Alternatively, a model (which is also called a perceptual model in the second example of unnaturalness estimation) that takes the warped image I W(β) i (x, y, t) and the projection result reproduction image I P i (x, y, t) as inputs and directly outputs the perceptual difference may be used to obtain the perceptual difference d i (t).
  • In this case, the perceptual difference d i (t) is obtained directly from the warped image I W(β) i (x, y, t) and the projection result reproduction image I P i (x, y, t), rather than obtaining a perceptual response r′ j i (x, y, t) to the warped image and a perceptual response r j i (x, y, t) to the projection result reproduction image and then obtaining the distance d i (t) between them as in the first example of unnaturalness estimation.
  • In this configuration, the unnaturalness estimation unit 134 does not include the perceptual difference calculation unit 134 E, and the perceptual model application unit 134 D takes the warped image I W(β) i (x, y, t) and the projection result reproduction image I P i (x, y, t) as inputs, applies the values of these images to the perceptual model to obtain the perceptual difference d i (t) (S 134 D, S 134 E), and outputs the obtained perceptual difference d i (t).
  • The processing of the other parts of the unnaturalness estimation unit 134 is similar to that of the first example of unnaturalness estimation.
  • As a result of its processing, the third parameter update unit 134 C estimates the third parameter as a value (a coefficient for scaling the motion vector) which minimizes the distance between a feature vector representing the perceptual representation of the warped image I W(β) i (x, y, t) and a feature vector representing the perceptual representation of the projection result reproduction image I P i (x, y, t).
  • That is, the smallest value (a smallest distance) of the distance between a feature vector representing the perceptual representation of the warped image I W(β) i (x, y, t) and a feature vector representing the perceptual representation of the projection result reproduction image I P i (x, y, t) is obtained as the minimum value of the perceptual difference d i (t).
  • The first parameter update unit 135 controls a process of searching for the first parameter. For example, the first parameter update unit 135 searches for a first parameter at which the unnaturalness estimate d i Min (t) obtained by the unnaturalness estimation unit 134 is closest to a predetermined threshold θ.
  • The first parameter update unit 135 takes the unnaturalness estimate d i Min (t) obtained with the first parameter of the previous cycle as an input and outputs the first parameter α i (t) of the next cycle. However, in the first cycle, the first parameter update unit 135 performs only the output because there is no input.
  • The first parameter update unit 135 updates the first parameter α i (t) such that the unnaturalness estimate d i Min (t) becomes closest to the predetermined threshold θ (S 135).
  • The first parameter update unit 135 updates α i (t) as follows based on a result of comparison between the input unnaturalness estimate d i Min (t) and the threshold θ.
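  • Assuming the search is the binary search over α i (t) in [0, 1] suggested for the analogous second parameter search, a minimal sketch is (interval bounds and iteration count are illustrative assumptions):

```python
def search_first_parameter(unnaturalness, theta, n_iter=10):
    """Binary search for the first parameter alpha_i(t) in [0, 1] such that the
    unnaturalness estimate is as close as possible to the threshold theta
    without exceeding it.

    `unnaturalness(alpha)` stands for one pass of the multiplication unit ->
    projection result generation unit -> unnaturalness estimation unit loop.
    """
    lo, hi = 0.0, 1.0
    best = 0.0
    for _ in range(n_iter):
        alpha = 0.5 * (lo + hi)
        d_min = unnaturalness(alpha)
        if d_min <= theta:
            best, lo = alpha, alpha   # still acceptable: try a larger motion
        else:
            hi = alpha                # too unnatural: shrink the motion
    return best
```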
  • The first parameter smoothing unit 136 takes the first parameter α i (t) obtained from each region/frame as an input, smooths the input first parameter α i (t), and outputs the smoothed first parameter α(x, y, t) of each pixel (S 136).
  • For example, the first parameter smoothing unit 136 spatially and temporally smooths the first parameter α i (t) obtained from each region/frame using the following: (i) first parameters obtained from regions spatially adjacent to the region i in the frame t, (ii) first parameters obtained from the region i in frames temporally adjacent to the frame t, and (iii) first parameters obtained from regions spatially adjacent to the region i in frames temporally adjacent to the frame t.
  • In the following, the first parameter of each region/frame will be referred to as α(m, n, t) for the sake of explanation.
  • m represents the horizontal position of the region
  • n represents the vertical position of the region
  • t represents the time frame to which the region belongs.
  • Constraint 1: α′(m, n, t) ≤ α(m, n, t) must be satisfied for all m, n, and t. This prevents the unnaturalness from exceeding the unnaturalness threshold as a result of the smoothing process.
  • Constraint 2: The following must be satisfied for all m, n, and t.
  • Here, (m′, n′, t′) represents the set of regions around (m, n, t), where m′ ∈ {m − 1, m, m + 1}, n′ ∈ {n − 1, n, n + 1}, and t′ ∈ {t − 1, t, t + 1}.
  • s s and s t are permissible values for the magnitude of the gradient between adjacent regions. These values need to be set sufficiently small because it is required that the first parameter not qualitatively change the input original motion vector (such that a rigid motion remains rigid).
  • In the present embodiment, (s s , s t ) = (0.06, 0.03). It is desirable that these values be adjusted according to the region size and the frame rate for projection. In other words, s s may increase as the region size increases and s t may increase as the frame rate decreases. In the present embodiment, it is assumed that the region size is 64 pixels × 64 pixels and the frame rate is 60 FPS.
  • The present embodiment uses the method described in Reference 3 as an algorithm for updating α(m, n, t) such that the above constraints are satisfied.
  • FIG. 7 shows an example of a specific processing algorithm.
  • The basic processing flow involves scanning the values of α(m, n, t) of the regions in order and updating the values of α such that the above Constraints 1 and 2 are satisfied.
  • the update method follows the following procedure.
  • Step 2: If the difference calculated in the above Step 1 is larger than the restricted value on the right side of Constraint 2, the value of the current region is reduced until the difference becomes equal to the value on the right side.
  • Finally, a process of spreading the per-region values over the pixels (x, y, t) is performed.
  • Specifically, a process of expanding the first parameter α′(m, n, t) of each region through bilinear interpolation is performed for each frame t to obtain the first parameters α(x, y, t) of the pixels.
  • the interpolation method used for expansion is not limited to this, and for example, bicubic interpolation or the like may be used.
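  • A simplified stand-in for this constraint-limited smoothing is sketched below (it iteratively clamps each region's value against its spatial and temporal neighbors; the scanning order of FIG. 7 and the pixel-wise bilinear expansion are omitted):

```python
import numpy as np

def smooth_region_params(alpha, s_s=0.06, s_t=0.03, n_sweeps=10):
    """Spatio-temporally smooth per-region first parameters alpha[m, n, t].

    Values are only ever reduced (Constraint 1), and the difference to any
    spatially or temporally adjacent region is limited to s_s or s_t
    (Constraint 2). An iterative clamp approximates the algorithm of FIG. 7.
    """
    a = np.asarray(alpha, dtype=np.float64).copy()
    m, n, t = a.shape
    offsets = [(1, 0, 0, s_s), (-1, 0, 0, s_s), (0, 1, 0, s_s),
               (0, -1, 0, s_s), (0, 0, 1, s_t), (0, 0, -1, s_t)]
    for _ in range(n_sweeps):
        padded = np.pad(a, 1, mode='constant', constant_values=np.inf)
        limit = np.full_like(a, np.inf)
        for dm, dn, dt, s in offsets:
            neighbor = padded[1 + dm:1 + dm + m, 1 + dn:1 + dn + n, 1 + dt:1 + dt + t]
            limit = np.minimum(limit, neighbor + s)   # alpha must not exceed neighbor + s
        a = np.minimum(a, limit)                      # only ever reduce values
    return a
```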
  • The obtained α(x, y, t) is output to the differential motion vector calculation unit, the second parameter generation unit 160, and the motion vector combining unit 170.
  • The non-rigid vector extraction unit 150 takes the motion vector v(x, y, t) and the reduced motion vector v s (x, y, t) as inputs, extracts a non-rigid motion vector component Δv h (x, y, t) included in the difference between the motion vector v(x, y, t) and the reduced motion vector v s (x, y, t) (S 150), and outputs the extracted non-rigid motion vector component Δv h (x, y, t) to the second parameter generation unit 160 and the motion vector combining unit 170.
  • The non-rigid vector extraction unit 150 includes a differential motion vector calculation unit and a filtering unit (not illustrated).
  • The non-rigid motion vector component Δv h (x, y, t) corresponds to a high-pass component (a high spatial frequency component) of the motion vector v(x, y, t), and the filtering unit functions as a high-pass filter.
  • The filtering unit takes the motion vector difference Δv(x, y, t) as an input and obtains and outputs the non-rigid motion vector component Δv h (x, y, t) of the motion vector difference.
  • First, the filtering unit convolves a Gaussian filter with the difference Δv(x, y, t) to obtain a low spatial frequency component Δv l (x, y, t) of the difference Δv(x, y, t).
  • In the present embodiment, the standard deviation of the Gaussian filter kernel is 8 pixels. The standard deviation is not limited to this and any value can be set. However, if the standard deviation is too small, almost no non-rigid components remain to be extracted in the next step, and if it is too large, the non-rigid components are likely to include a large amount of rigid motion components.
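  • A minimal sketch of this high-pass extraction for a two-channel displacement field, using the 8-pixel standard deviation given above:

```python
from scipy.ndimage import gaussian_filter

def nonrigid_component(dvx, dvy, sigma=8.0):
    """Extract the non-rigid (high spatial frequency) part of a motion-vector
    difference field by subtracting a Gaussian-blurred (low-pass) copy from
    each component: dv_h = dv - dv_l.
    """
    dvx_low = gaussian_filter(dvx, sigma)   # low spatial frequency component
    dvy_low = gaussian_filter(dvy, sigma)
    return dvx - dvx_low, dvy - dvy_low     # high-pass (non-rigid) component
```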
  • The second parameter generation unit 160 takes the reduced motion vector v s (x, y, t), the non-rigid motion vector component Δv h (x, y, t), the region-divided minimum luminance image I Min i (x, y), the region-divided maximum luminance image I Max i (x, y), the region-divided intermediate luminance image I 0 i (x, y), the ratio η, the P2C map, and the C2P map as inputs.
  • The second parameter generation unit 160 uses the reduced motion vector v s (x, y, t), scaled by the first parameter and output from the motion vector reduction unit 140, and the non-rigid motion vector component Δv h (x, y, t) output from the non-rigid vector extraction unit 150 to generate a second parameter α 2 (S 160) and outputs the generated second parameter α 2 .
  • The second parameter α 2 (x, y, t) is a parameter for scaling the non-rigid motion vector component Δv h (x, y, t), as in "v s (x, y, t) + α 2 (x, y, t)Δv h (x, y, t)", when the motion lost due to the reduction with the first parameter is compensated for with the non-rigid motion vector component.
  • FIG. 8 is a functional block diagram of the second parameter generation unit 160 and FIG. 9 illustrates an example of a processing flow thereof.
  • the second parameter generation unit 160 includes a second region division unit 161 , a projection result generation unit 162 , a second multiplication unit 163 , a motion vector addition unit 164 , an unnaturalness estimation unit 165 , a second parameter update unit 166 , and a second parameter smoothing unit 167 . Details of the processing of each part will be described below.
  • The second region division unit 161 takes the reduced motion vector v s (x, y, t) scaled by the first parameter and the non-rigid motion vector component Δv h (x, y, t) output from the non-rigid vector extraction unit 150 as inputs and obtains and outputs a region-divided reduced motion vector v s i (x, y, t) and a region-divided non-rigid motion vector component Δv h i (x, y, t).
  • i represents the region number.
  • the second region division unit 161 divides the input vectors (the reduced motion vector v_s(x, y, t) and the non-rigid motion vector component Δv_h(x, y, t)) into regions (S161).
  • the region-divided reduced motion vector v_s^i(x, y, t) is output to the motion vector addition unit 164 and the region-divided non-rigid motion vector component Δv_h^i(x, y, t) is output to the second multiplication unit 163.
  • the subsequent processing of the second parameter generation unit 160 is performed independently for each frame t of each region i.
  • one second parameter α_2^i(t) is output for each frame t of each region i, and when the second parameters α_2^i(t) have been obtained for all regions/frames, they are collectively input to the second parameter smoothing unit 167.
  • the second multiplication unit 163 takes the region-divided non-rigid motion vector component Δv_h^i(x, y, t) and the current second parameter α_2^i(t) of the region i as inputs, multiplies the region-divided non-rigid motion vector component Δv_h^i(x, y, t) by the current second parameter α_2^i(t) of the region i (S163), and outputs the product (α_2^i(t)Δv_h^i(x, y, t)) to the motion vector addition unit 164.
  • a value output from the second parameter update unit 166 is used as the current second parameter α_2^i(t).
  • the motion vector addition unit 164 takes the region-divided reduced motion vector v_s^i(x, y, t) and the non-rigid motion vector component α_2^i(t)Δv_h^i(x, y, t) multiplied by the current second parameter α_2^i(t) as inputs and obtains and outputs a vector v̂^i(x, y, t) that combines the reduced motion vector and the non-rigid motion vector component.
  • the projection result generation unit 162 and the unnaturalness estimation unit 165 of the second parameter generation unit 160 perform the same processing S162 and S165 as that of the projection result generation unit 132 and the unnaturalness estimation unit 134 of the first parameter generation unit 130, respectively, except that the “motion vector α_i(t)v_i(x, y, t) scaled by the current first parameter” taken as an input motion vector is replaced with the “vector v̂^i(x, y, t) that combines the reduced motion vector and the non-rigid motion vector component”.
  • the second parameter update unit 166 takes the unnaturalness estimate d_Min^i(t) obtained with the previous second parameter as an input and obtains and outputs a second parameter α_2^i(t) for the next cycle. However, in the first cycle, the second parameter update unit 166 performs only the output because there is no input.
  • the second parameter update unit 166 controls the process of searching for the second parameter. For example, the second parameter update unit 166 searches for a second parameter at which the unnaturalness estimate d_Min^i(t) obtained by the unnaturalness estimation unit 165 is closest to a threshold. The value of the threshold is the same as that used in the first parameter update unit 135. A binary search method is used for the search, similar to the first parameter update unit 135.
  • the second parameter update unit 166 performs the same processing S 166 and S 166 A as the first parameter update unit 135 , except that the first parameter is replaced with the second parameter.
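  • the following is a minimal sketch of this kind of threshold-targeted binary search; the function estimate_unnaturalness is a hypothetical stand-in for the unnaturalness estimation unit, and the [0, 1] search interval and cycle count are assumptions for illustration, not values taken from the patent:

```python
def search_scale_by_bisection(estimate_unnaturalness, threshold, n_cycles=10):
    """Binary-search a scaling parameter in [0, 1] so that the unnaturalness
    estimate returned by estimate_unnaturalness(scale) is close to threshold.

    estimate_unnaturalness is assumed to be monotonically non-decreasing in
    the scale (larger motion leads to larger unnaturalness).
    """
    lo, hi = 0.0, 1.0
    scale = 1.0
    # If even the full-scale motion stays below the threshold, keep it as is.
    if estimate_unnaturalness(scale) <= threshold:
        return scale
    for _ in range(n_cycles):
        scale = 0.5 * (lo + hi)
        if estimate_unnaturalness(scale) > threshold:
            hi = scale      # too unnatural: shrink the motion further
        else:
            lo = scale      # acceptable: try allowing more motion
    return lo
```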
  • the second parameter smoothing unit 167 performs the same processing S 167 as the first parameter smoothing unit 136 .
  • the second parameter smoothing unit 167 takes the second parameter α_2^i(t) obtained from each region/frame as an input, smooths the input second parameter α_2^i(t) (S167), and outputs the smoothed second parameter α_2(x, y, t) of each pixel.
  • the parameters (s_s, s_t) that determine the permissible levels for the magnitude of the gradient between adjacent regions are set greater than those of the first parameter smoothing unit 136 because non-rigid motion vector components do not significantly change the qualitative impression of motion even if the magnitude of motion changes locally.
  • for example, (s_s, s_t) = (0.3, 0.06).
  • these parameters are not limited to the values defined here and any values may be set as long as the spatial and temporal discontinuities of the magnitude of motion are not a concern.
  • the generated second parameter α_2(x, y, t) is output to the motion vector combining unit 170.
  • the motion vector combining unit 170 takes the second parameter α_2(x, y, t), the non-rigid motion vector component Δv_h(x, y, t), and the reduced motion vector v_s(x, y, t) as inputs and obtains and outputs a combined motion vector v̂(x, y, t).
  • the motion vector combining unit 170 outputs the combined motion vector v̂(x, y, t) to the projection image generation unit 180.
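  • a short sketch of the combination performed here, assuming per-pixel NumPy arrays (illustrative only, not the patent's implementation):

```python
import numpy as np

def combine_motion_vectors(v_s, dv_h, alpha2):
    """v_hat = v_s + alpha2 * dv_h, applied per pixel.

    v_s, dv_h : (H, W, 2) reduced motion vector and non-rigid component.
    alpha2    : (H, W) per-pixel second parameter from the smoothing unit.
    """
    return v_s + alpha2[..., None] * dv_h
```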
  • the projection image generation unit 180 takes the minimum luminance image I_Min(x, y), the maximum luminance image I_Max(x, y), the intermediate luminance image I_0(x, y), the combined motion vector v̂(x, y, t), and the P2C map as inputs and obtains and outputs a projection image I_P(x, y, t).
  • the projection image generation unit 180 distorts the intermediate luminance image I_0(x, y) based on the combined motion vector v̂(x, y, t) to obtain a distorted image I_W(x, y, t) (S180).
  • the distortion method is similar to that of the projection result generation unit 132 in the first parameter generation unit 130 .
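  • as an illustration of such a distortion step (not the patent's exact implementation), inverse warping of an image by a per-pixel motion field might look like the following sketch, assuming bilinear sampling and the sign convention noted in the comments:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(image, v):
    """Distort image by the motion vector field v using inverse warping:
    each output pixel looks up a location in the original image.

    image : (H, W) luminance image.
    v     : (H, W, 2) motion vectors, v[..., 0] = horizontal, v[..., 1] = vertical.
            Here v is assumed to point from the original to the moved position,
            so the output at (x, y) samples the original at (x - v_x, y - v_y).
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    sample_y = ys - v[..., 1]
    sample_x = xs - v[..., 0]
    # Bilinear sampling; out-of-range coordinates are clamped to the edge.
    return map_coordinates(image, [sample_y, sample_x], order=1, mode='nearest')
```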
  • the projection image generation unit 180 obtains an ideal projection image I M (x, y, t) for reproducing a distorted image using Equation (2), similar to the projection result generation unit 132 in the first parameter generation unit 130 .
  • the projection image generation unit 180 limits the value of I_M(x, y, t) to the physically projectable range [0, 1] of the projector.
  • the projection image generation unit 180 maps the image thus obtained to the projector coordinate system based on the P2C map, sets the resulting image as I P (x, y, t), and outputs it to the projection unit 190 .
  • the projection unit 190 takes the projection image I P (x, y, t) as an input and projects the input projection image from the projector toward the projection target (S 190 ).
  • the projection image I P (x, y, t) is projected such that edges included in the projection image I P (x, y, t) overlap the contour of the projection target or edges included in the projection target.
  • alignment of the projection image I P (x, y, t) is unnecessary because the projection image I P (x, y, t) is generated based on the P2C map obtained through camera calibration.
  • a commercially available projector may be used, but it is necessary to use a projector with high luminance when used in a bright room.
  • the projection unit 190 projects the projection image I_P(x, y, t) onto the projection target M_static using a known light projection technique (see, for example, Reference 4) to display a moving image M_2.
  • M_2 = M_static ⊙ I_P(x, y, t) [Math. 12]
  • ⊙ represents a state in which the projection image I_P(x, y, t) is added to/multiplied by (applied to) the luminance component of the projection target M_static in a combined manner.
  • ⊙ also represents a state in which an operation including at least one of addition and multiplication is performed on the luminance component of the projection target M_static and the projection image I_P(x, y, t). That is, when light is projected onto printed matter, it is assumed that the reflection pattern differs depending on the characteristics of the paper or ink, and that the luminance changes multiplicatively in some parts while changing additively in other parts.
  • ⊙ indicates a calculation that makes the luminance change in those two ways.
  • motion information to be projected can be automatically adjusted and optimized for each region and each frame according to the projection target and the projection environment. Further, fine adjustments that are difficult to perform manually can be performed in a short time.
  • the projection target photographing unit 110, the camera-projector pixel correspondence acquisition unit 120, and the addition unit 125 may be provided as separate devices, and a projection image generation apparatus including the remaining components may take their output values (I_Max, I_Min, I_0, the ratio, the P2C map, and the C2P map) as inputs.
  • the projection unit 190 may be provided as a separate device and the projection image generation apparatus may be configured to output the projection image I P (x, y, t) to the projection unit 190 which is a separate device.
  • the first parameter generation unit 130, the motion vector reduction unit 140, the non-rigid vector extraction unit 150, the second parameter generation unit 160, and the motion vector combining unit 170 may be extracted from the projection image generation apparatus of the present embodiment and implemented to function as a motion vector generation apparatus.
  • the motion vector generation apparatus takes I_Max, I_Min, I_0, the ratio, the P2C map, the C2P map, and v(x, y, t) as inputs and outputs a combined motion vector v̂(x, y, t).
  • when the magnitude of motion is manually adjusted as in Patent Literature 1, it is not possible to realize an application that interactively gives motions to a target (for example, an application that gives motions based on changes in the facial expression of a person to a photograph or painting through projection mapping while capturing the facial expression of the person in real time with a camera).
  • processing of the first embodiment is performed such that the first parameter generation unit 130 and the second parameter generation unit 160 obtain first parameters α_i(t) (or second parameters α_2^i(t)) of the regions of each frame over all regions of all frames, and then the first parameter smoothing unit 136 (or the second parameter smoothing unit 167) collectively smooths them at once to obtain first parameters α(x, y, t) (or second parameters α_2(x, y, t)).
  • the method of the first embodiment cannot be used in cases where it is required that input motion vectors v(x, y, t) be optimized sequentially (in real time) (for example, in applications that require interactivity).
  • a second embodiment will be described with regard to a method of performing processing for optimizing input motion vectors v(x, y, t) sequentially frame by frame.
  • changes from the first embodiment will be mainly described.
  • the motion vector reduction unit 140, the non-rigid vector extraction unit 150, the motion vector combining unit 170, and the projection image generation unit 180 perform only processing relating to the current frame.
  • the region division unit 131 performs region division of the motion vector v(x, y, t_0) of the current frame in the same manner as in the first embodiment.
  • the processing performed for each region is performed in the same manner as in the first embodiment.
  • the processing of the first parameter smoothing unit 136 is replaced with the following processing.
  • the first parameter smoothing unit 136 takes the first pander ⁇ i (t) obtained from each region/frame as an input and obtains and outputs a smoothed first parameter ⁇ (x, y, t 0 ) of each pixel.
  • the first parameter smoothing unit 136 in the second embodiment separately performs smoothing in the spatial direction and smoothing in the temporal direction.
  • the smoothing in the spatial direction is performed through the same procedure as in the first embodiment as follows.
  • the first parameter of each region will be referred to as α(m, n) for the sake of explanation.
  • m represents the horizontal position of the region and n represents the vertical position of the region.
  • smoothing is performed such that extreme changes in value do not occur between adjacent first parameters α(m, n).
  • smoothing is performed by replacing α(m, n) with α′(m, n) such that the following two constraints are satisfied.
  • Constraint 1: α′(m, n) ≤ α(m, n) must be satisfied for all m and n. This prevents the unnaturalness from exceeding the unnaturalness threshold as a result of the smoothing process.
  • Constraint 2: The following must be satisfied for all m and n.
  • (m′, n′) represents the set of regions around (m, n), where m′ ∈ {m−1, m, m+1} and n′ ∈ {n−1, n, n+1}.
  • s_s is a permissible value for the magnitude of the gradient between adjacent regions.
  • for example, s_s = 0.06.
  • the method described in Reference 3 can be used as an algorithm for updating α(m, n, t), similar to the first embodiment. The specific processing is as illustrated in FIG. 11.
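  • Reference 3's algorithm is not reproduced in this text; the following sketch merely illustrates one simple iterative way to enforce the two constraints above (the smoothed parameter never exceeds the original value, and 8-connected neighboring regions differ by at most s_s), assuming the per-region parameters of one frame are stored in a small 2-D array:

```python
import numpy as np

def smooth_region_params_spatial(alpha, s_s, max_iter=100):
    """Replace alpha(m, n) with the largest alpha'(m, n) such that
    (1) alpha' <= alpha everywhere and
    (2) neighboring values differ by at most s_s.

    alpha : (M, N) array of per-region parameters for one frame.
    s_s   : permissible gradient between 8-connected neighboring regions.
    """
    a = alpha.astype(np.float64).copy()
    for _ in range(max_iter):
        prev = a.copy()
        # Lowest neighboring value (8-neighborhood) for every region.
        padded = np.pad(a, 1, mode='edge')
        neigh_min = np.full_like(a, np.inf)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                shifted = padded[1 + dy:1 + dy + a.shape[0],
                                 1 + dx:1 + dx + a.shape[1]]
                neigh_min = np.minimum(neigh_min, shifted)
        # A region may exceed its lowest neighbor by at most s_s.
        a = np.minimum(a, neigh_min + s_s)
        if np.allclose(a, prev):
            break
    return a
```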
  • Smoothing is performed in the temporal direction after smoothing in the spatial direction.
  • a first parameter α″(m, n, t_0−1) of the immediately previous frame that has been smoothed in the spatial and temporal directions (hereinafter referred to as α″(t_0−1) for the sake of simplicity) is read from the storage unit, and the first parameter α′(m, n, t_0) of the current frame that has been smoothed in the spatial direction (hereinafter referred to as α′(t_0) for the sake of simplicity) is smoothed in the following manner to obtain a first parameter α″(m, n, t_0) that has been smoothed in the temporal direction (hereinafter referred to as α″(t_0) for the sake of simplicity).
  • F represents the overall frame rate of the system and s′_t is a parameter that determines the permissible value (maximum value) of the magnitude of the gradient from the previous frame.
  • the permissible magnitude of the gradient in the temporal direction of the first parameter is 0.033.
  • the permissible magnitude of the gradient does not necessarily have to be this value, but the discontinuity of the magnitude of motion may be noticeable if it is too large, while the number of frames in which the unnaturalness of the projection result becomes greater than the threshold increases if it is too small. In consideration of these factors, the user may be allowed to select an optimum parameter.
  • the obtained first parameter α″(t_0) that has been smoothed is stored in the storage unit and used for the smoothing process of the next frame.
  • the first parameter smoothing unit 136 smooths the first parameter α′(t_0) in the temporal direction using the first parameter α″(t_0−1) and the predetermined value (s′_t/F or −s′_t/F).
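  • a minimal sketch of such a temporal clamp, assuming the per-region parameters of each frame are small arrays (illustrative only; the patent's exact update rule is not reproduced here):

```python
import numpy as np

def smooth_params_temporal(alpha_curr, alpha_prev, s_t_prime, frame_rate):
    """Limit the frame-to-frame change of the (already spatially smoothed)
    parameters to at most s_t_prime / frame_rate per frame.

    alpha_curr : (M, N) parameters of the current frame after spatial smoothing.
    alpha_prev : (M, N) spatio-temporally smoothed parameters of the previous frame.
    """
    max_step = s_t_prime / frame_rate
    return np.clip(alpha_curr, alpha_prev - max_step, alpha_prev + max_step)
```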
  • α″(t_0) is expanded through the bilinear interpolation method or the like, as in the first embodiment, to obtain the first parameter α(x, y, t_0) of each pixel.
  • the second region division unit 161 performs region division of the reduced motion vector v_s(x, y, t_0) of the current frame and the non-rigid motion vector component Δv_h(x, y, t_0) of the current frame in the same manner as in the first embodiment.
  • the processing performed for each region is performed in the same manner as in the first embodiment.
  • the processing of the second parameter smoothing unit 167 is replaced with the same processing as that of the first parameter smoothing unit 136 in the second embodiment.
  • the second parameter smoothing unit 167 first performs smoothing in the spatial direction using the method described in Reference 3 and then performs smoothing in the temporal direction.
  • the second parameter α′_2(t_0) is smoothed in the temporal direction using the second parameter α″_2(t_0−1) and the predetermined value (s′_t/F or −s′_t/F), similar to the first parameter smoothing unit 136 in the second embodiment.
  • the parameter s′_t that determines the permissible level of the magnitude of the gradient is set greater than that of the first parameter smoothing unit.
  • for example, s′_t = 4.
  • the value of s′_t is not limited to the value defined here and any value may be set as long as the temporal discontinuity of the magnitude of motion is not a concern.
  • the motion vector v(x, y, t) can be optimized sequentially (in real time).
  • the present invention can be applied to an application that interactively gives motions to a target.
  • whereas in the first and second embodiments the filtering unit of the non-rigid vector extraction unit 150 extracts a high-frequency component of the motion vector as the non-rigid motion vector component Δv_h(x, y, t), a plurality of bandpass components may instead be extracted using a plurality of bandpass filters.
  • the non-rigid vector extraction unit 150 may be configured to decompose a motion vector into a plurality of (N_P) bandpass components Δv_b_1, Δv_b_2, . . . , Δv_b_N_P (where N_P is an integer of 2 or more) using a Laplacian pyramid or the like and to obtain nth parameters for the different spatial frequency components (n ∈ {2, . . . , N_P+1}).
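  • as an illustration of such a decomposition (not the patent's implementation), bandpass components of a motion field can be obtained by differencing Gaussian-blurred copies; a true Laplacian pyramid with downsampling would work similarly, and the scales used below are arbitrary example values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose_into_bandpass(dv, sigmas=(2.0, 4.0, 8.0, 16.0)):
    """Split a motion-vector field into bandpass components by differencing
    Gaussian-blurred copies (a Laplacian-pyramid-like scheme without
    downsampling, used here only for illustration).

    dv     : (H, W, 2) motion vector field to decompose.
    sigmas : increasing Gaussian scales; len(sigmas) bands are returned plus
             the residual low-pass component as the last element.
    """
    def blur(field, sigma):
        return np.stack([gaussian_filter(field[..., c], sigma) for c in range(2)], axis=-1)

    bands = []
    current = dv
    for sigma in sigmas:
        low = blur(dv, sigma)
        bands.append(current - low)   # band between the previous scale and this one
        current = low
    bands.append(current)             # residual low-pass component
    return bands                      # the components sum back to dv
```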
  • FIG. 12 is a functional block diagram of a projection image generation apparatus according to the third embodiment and FIG. 13 illustrates an example of a processing flow thereof.
  • FIG. 12 omits illustration of a projection target photographing unit 110 , an addition unit 125 , a camera-projector pixel correspondence acquisition unit 120 , and a projection unit 190 .
  • the projection image generation apparatus includes N_P nth parameter generation units 160-n and N_P nth motion vector combining units 170-n (n ∈ {2, . . . , N_P+1}) instead of the second parameter generation unit 160 and the motion vector combining unit 170 of the projection image generation apparatus of the first embodiment or the second embodiment.
  • each nth parameter generation unit 160-n performs the same processing as that of the second parameter generation unit 160 of the first embodiment (or the second embodiment) except for the points described below.
  • the nth parameter α_n(x, y, t) is a parameter for scaling the (n−1)th bandpass component Δv_b_n−1(x, y, t) as in “v_s(x, y, t) + α_2(x, y, t)Δv_b_1(x, y, t) + . . . + α_n(x, y, t)Δv_b_n−1(x, y, t) + . . .
  • the nth parameter generation unit 160-n replaces the non-rigid motion vector component Δv_h(x, y, t) with the (n−1)th bandpass component Δv_b_n−1(x, y, t) of the motion vector.
  • the reduced motion vector v_s(x, y, t) is replaced with the combined motion vector v_n−1(x, y, t) output from the (n−1)th motion vector combining unit 170-(n−1), and the second parameter α_2 is replaced with the nth parameter α_n.
  • the constraints on the magnitude of the gradient, s_s and s_t (s′_t when real-time processing is performed as in the second embodiment), used in the second parameter smoothing unit 167 in the nth parameter generation unit 160-n gradually increase with n (for example, they double each time n increases by 1).
  • the obtained nth parameter α_n(x, y, t) is output to the nth motion vector combining unit 170-n.
  • the nth motion vector combining unit 170-n takes the nth parameter α_n(x, y, t), the (n−1)th bandpass component Δv_b_n−1(x, y, t) of the motion vector, and the combined motion vector v_n−1(x, y, t) output from the (n−1)th motion vector combining unit 170-(n−1) as inputs and obtains and outputs a combined motion vector v_n(x, y, t).
  • the nth motion vector combining unit 170-n adds the (n−1)th bandpass component α_n(x, y, t)Δv_b_n−1(x, y, t) scaled using the nth parameter and the (n−1)th combined motion vector v_n−1(x, y, t) according to the following equation to calculate the combined motion vector v_n(x, y, t) (S170-n).
  • v_n(x, y, t) = v_n−1(x, y, t) + α_n(x, y, t)Δv_b_n−1(x, y, t) [Math. 15]
  • the combined motion vector v_n(x, y, t) is output to the (n+1)th parameter generation unit 160-(n+1) and the (n+1)th motion vector combining unit 170-(n+1).
  • the combined motion vector v_N_P+1(x, y, t) is output to the projection image generation unit 180 as v̂(x, y, t).
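  • a short sketch of this band-by-band combination, assuming the bandpass components and the per-pixel parameters are already available (illustrative only):

```python
def combine_bandpass_motions(v_s, bandpass_components, alphas):
    """Progressively add back bandpass components, each scaled by its own
    per-pixel parameter: v_n = v_{n-1} + alpha_n * dv_{b_{n-1}}.

    v_s                 : (H, W, 2) reduced motion vector (the base motion).
    bandpass_components : list of (H, W, 2) arrays [dv_b1, ..., dv_bNP].
    alphas              : list of (H, W) arrays [alpha_2, ..., alpha_{NP+1}].
    """
    v = v_s
    for alpha_n, dv_b in zip(alphas, bandpass_components):
        v = v + alpha_n[..., None] * dv_b
    return v   # corresponds to the final combined motion vector v_hat
```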
  • the non-rigid vector extraction unit 150 , the second parameter generation unit 160 , and the motion vector combining unit 170 may be omitted and a motion vector obtained by the motion vector reduction unit 140 may be used as a final motion vector in the projection image generation unit 180 .
  • the parameters used in the first parameter smoothing unit (s s and s t in the first embodiment and s s and s′ t in the second embodiment) are replaced with those used in the second parameter smoothing unit 167 .
  • FIG. 14 is a functional block diagram of the projection image generation apparatus according to the fourth embodiment and FIG. 15 illustrates a processing flow thereof.
  • in the unnaturalness estimation unit 134 described in the first embodiment, it is necessary to run a loop to simultaneously obtain the third parameter, which determines a perceptual magnitude of motion with respect to a projection result, and the unnaturalness estimate d_Min^i(t), and thus the processing takes time.
  • the present embodiment describes a method in which the third parameter is first analytically obtained and the unnaturalness estimate d_Min^i(t) is then calculated using the obtained third parameter, thereby allowing d_Min^i(t) to be output without running the loop.
  • only the unnaturalness estimation unit 134 is replaced with an unnaturalness estimation unit 534 of FIG. 16 , while any types can be used for other processes and components.
  • FIG. 16 is a functional block diagram of the unnaturalness estimation unit 534 according to the fifth embodiment and FIG. 17 illustrates an example of a processing flow thereof.
  • the third parameter update unit 134 C is removed, and instead, a third parameter estimation unit 534 C is newly added.
  • the other processing units (a third parameter multiplication unit 134A, a warped image generation unit 134B, a perceptual model application unit 134D, and a perceptual difference calculation unit 134E) perform the same processing as those of the unnaturalness estimation unit 134 of the first embodiment, except for the following two points.
  • the third parameter, which is input to the third parameter multiplication unit 134A, is provided by the third parameter estimation unit 534C.
  • the perceptual difference d_i(t) obtained by the perceptual difference calculation unit 134E is directly output from the unnaturalness estimation unit 534 as the unnaturalness estimate d_Min^i(t).
  • the third parameter estimation unit 534C takes an intermediate luminance image I_0^i(x, y), the motion vector α_i(t)v_i(x, y, t) scaled by the first parameter, and a projection result reproduction image I_P^i(x, y, t) as inputs, obtains the third parameter (S534C), and outputs it.
  • the third parameter estimation unit 534C uniquely obtains the third parameter without repeatedly obtaining the perceptual difference d_i(t).
  • the third parameter determines the perceptual amount of motion (the motion vector α_i(t)v_i(x, y, t) further scaled by the third parameter) perceived when the projection result reproduction image I_P^i(x, y, t) is viewed. The value of the third parameter that minimizes the perceptual difference d_i(t) between the projection result reproduction image I_P^i(x, y, t) and the image I_W^i(x, y, t) generated on the computer by distorting the original intermediate luminance image I_0^i(x, y) by the correspondingly scaled motion vector is taken as the third parameter that determines the perceptual amount of motion.
  • the first embodiment converts the projection result reproduction image I_P^i(x, y, t) and the image I_W^i(x, y, t) into perceptual responses r(x, y, t) and r′(x, y, t), respectively, then explicitly calculates the distance d_i(t) between the perceptual responses as a perceptual difference, and obtains the third parameter that minimizes d_i(t) through a search involving iterative processing.
  • here, a method of directly estimating the third parameter without calculating d_i(t) will be described.
  • in the following, the superscript i, which indicates belonging to the region i, and the time frame t will be omitted to simplify the description (the processing is performed independently for each region i and each frame t).
  • v_x(x, y) and v_y(x, y) represent the x- and y-axis elements of the motion vector, respectively.
  • pixel movement will be described as inverse warping (a mode in which the original image is referred to by the image after movement).
  • the same applies to forward warping (a mode in which the image after movement is referred to by the original image) because it is assumed that the motion vector is spatially smooth.
  • Equation (11) can be expressed as follows by a first-order approximation of Taylor expansion.
  • the first-order approximation of Taylor expansion is performed.
  • as a method of solving Equation (16), replacing I_P, I_W(1), and I_0 with responses of the perceptual model (conversion results obtained through the same processing as that of the perceptual model application unit 134D) can be considered first.
  • alternatively, the conversion may be performed only up to the weighted bandpass images represented by Equation (4), and these may be substituted into Equation (16) to obtain the third parameter.
  • This may be adopted because it is possible to obtain sufficient accuracy to estimate the perceptual amount of motion without reproducing the contrast gain adjustment process represented by Equation (6).
  • the conversion of Equation (6) is very important for the unnaturalness estimation.
  • a specific procedure for obtaining the third parameter ⁇ i is as follows.
  • the third parameter estimation unit 534C distorts the intermediate luminance image I_0^i(x, y) based on the motion vector α_i(t)v_i(x, y, t) scaled by the first parameter α_i(t) to obtain I_W(1)^i(x, y, t).
  • the distortion method is similar to that of the projection result generation unit 132 in the first parameter generation unit 130 .
  • the third parameter estimation unit 534 C converts each of I i W(1) (x, y, t), I i P (x, y, t), and I 0 (x, y) into weighted bandpass images c j (x, y) according to processing 1 to 3 of the perceptual model application unit 134 D.
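  • the patent's Equation (16) is not reproduced in this text; the following sketch shows a closed-form least-squares estimator of the same general type, based on the first-order assumption that the warped image changes linearly with the perceptual gain (the function name and the exact formula are illustrative assumptions, not the patent's equation):

```python
import numpy as np

def estimate_perceptual_gain(c_p, c_w1, c_0, eps=1e-8):
    """Estimate a single gain gamma such that c_0 + gamma * (c_w1 - c_0)
    best matches c_p in the least-squares sense.

    c_p, c_w1, c_0 : arrays of weighted bandpass responses (matching shapes)
                     of the projection result reproduction image, the image
                     warped by the full first-parameter-scaled motion, and
                     the undistorted intermediate luminance image.
    """
    basis = (c_w1 - c_0).ravel()      # predicted change for gamma = 1
    target = (c_p - c_0).ravel()      # observed change in the projection result
    # Closed-form least-squares solution of ||target - gamma * basis||^2.
    gamma = float(basis @ target) / (float(basis @ basis) + eps)
    return gamma
```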
  • the obtained estimate of the third parameter is output to the third parameter multiplication unit 134A.
  • the present embodiment may be combined with the second to fourth embodiments.
  • the first parameter is kept lowered until the number of update cycles reaches N_s, unless the unnaturalness estimate d_Min^i(t) becomes equal to or less than the threshold.
  • the first parameter becomes very small and the magnitude of motion may be reduced more than expected.
  • the first parameter may be constrained such that the first parameter does not fall below a certain lower limit.
  • the third parameter represents how large the perceptual magnitude of motion is compared with the physical magnitude of the vector.
  • the first parameter update unit 135 takes the unnaturalness estimate d_Min^i(t) obtained with the first parameter of the previous cycle and the third parameter (indicated in FIG. 3) as inputs, obtains a first parameter α_i(t) of the next cycle (S135), and outputs the obtained first parameter α_i(t). However, in the first cycle, the first parameter update unit 135 performs only the output because there is no input.
  • the first parameter update unit 135 updates α_i(t) as follows based on a result of comparison between the input unnaturalness estimate d_Min^i(t) and the threshold.
  • the first parameter update unit 135 ends the search and outputs α_i(t) to the first parameter smoothing unit 136.
  • the present embodiment may be combined with the second to fifth embodiments.
  • the projection image generation method may be performed based on another method.
  • for example, the method of JP 2018-50216 A can be used.
  • the projection unit 190 projects uniform light of luminance B_1 and B_2 (B_1 < B_2) onto the projection target and the projection target photographing unit 110 obtains images I_B1 and I_B2 by photographing the projection target under the respective conditions.
  • the projection result generation unit 132 and the projection image generation unit 180 generate I M using the following equation.
  • K is a value that reflects the albedo (reflectance) of each pixel of the projection target and is calculated as follows.
  • K(x, y) = (I_B2(x, y) − I_B1(x, y))/(B_2 − B_1) [Math. 24]
  • the projection result generation unit 132 obtains Î_M(x, y, t) through the same procedure as in the first embodiment and calculates I_P using the following equation.
  • I_P(x, y, t) = (I_B2(x, y) − I_B1(x, y))(Î_M(x, y, t) − B_1)/(B_2 − B_1) + I_B1(x, y)
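  • a small sketch of this interpolation, assuming the two calibration captures and the projector image are aligned floating-point arrays (illustrative only):

```python
import numpy as np

def simulate_projection_result(i_m_hat, i_b1, i_b2, b1, b2):
    """Predict the appearance of the projection target when the projector
    outputs luminance i_m_hat, by linearly interpolating between the captures
    taken under uniform projector luminances b1 and b2.

    i_b1, i_b2 : photographs of the target under uniform luminance b1 and b2.
    i_m_hat    : projector luminance image (same shape as the captures).
    """
    return (i_b2 - i_b1) * (i_m_hat - b1) / (b2 - b1) + i_b1

# The per-pixel albedo-like factor from [Math. 24] would be:
#   K(x, y) = (i_b2(x, y) - i_b1(x, y)) / (b2 - b1)
```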
  • the present embodiment may be combined with the second to sixth embodiments.
  • each device or apparatus
  • the various processing functions of each device (or apparatus) described in the above embodiments and modifications may be realized by a computer.
  • the processing details of the functions that each device may have are described in a program.
  • when the program is executed by a computer, the various processing functions of the device are implemented on the computer.
  • the program in which the processing details are described can be recorded on a computer-readable recording medium.
  • the computer-readable recording medium can be any type of medium such as a magnetic recording device, an optical disc, a magneto-optical recording medium, or a semiconductor memory.
  • the program is distributed, for example, by selling, giving, or lending a portable recording medium such as a DVD or a CD-ROM on which the program is recorded.
  • the program may also be distributed by storing the program in a storage device of a server computer and transmitting the program from the server computer to another computer through a network.
  • a computer configured to execute such a program first stores, in its storage unit, the program recorded on the portable recording medium or the program transmitted from the server computer. Then, the computer reads the program stored in its storage unit and executes processing in accordance with the read program.
  • the computer may read the program directly from the portable recording medium and execute processing in accordance with the read program.
  • the computer may also sequentially execute processing in accordance with the program transmitted from the server computer each time the program is received from the server computer.
  • the processing may be executed through a so-called application service provider (ASP) service in which the functions of the processing are implemented just by issuing an instruction to execute the program and obtaining the results, without transmission of the program from the server computer to the computer.
  • the program includes information that is provided for use in processing by a computer and is equivalent to the program (such as data having properties defining the processing executed by the computer rather than direct commands to the computer).
  • the device is described as being configured by executing the predetermined program on the computer, but at least a part of the processing may be realized by hardware.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Transforming Electric Information Into Light Information (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)
  • Projection Apparatus (AREA)
US17/296,464 2018-11-28 2019-11-14 Motion vector generation apparatus, projection image generation apparatus, motion vector generation method, and program Active 2040-10-03 US11954867B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2018221942A JP7040422B2 (ja) 2018-11-28 2018-11-28 動きベクトル生成装置、投影像生成装置、動きベクトル生成方法、およびプログラム
JP2018-221942 2018-11-28
PCT/JP2019/044619 WO2020110738A1 (ja) 2018-11-28 2019-11-14 動きベクトル生成装置、投影像生成装置、動きベクトル生成方法、およびプログラム

Publications (2)

Publication Number Publication Date
US20210398293A1 US20210398293A1 (en) 2021-12-23
US11954867B2 true US11954867B2 (en) 2024-04-09

Family

ID=70853193

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/296,464 Active 2040-10-03 US11954867B2 (en) 2018-11-28 2019-11-14 Motion vector generation apparatus, projection image generation apparatus, motion vector generation method, and program

Country Status (3)

Country Link
US (1) US11954867B2 (ja)
JP (1) JP7040422B2 (ja)
WO (1) WO2020110738A1 (ja)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023073913A1 (ja) * 2021-10-29 2023-05-04 日本電信電話株式会社 画像補正装置、画像補正方法、およびプログラム

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2557466B2 (ja) * 1988-05-13 1996-11-27 日本電気ホームエレクトロニクス株式会社 Museデコーダの低域置換回路
US20040252230A1 (en) * 2003-06-13 2004-12-16 Microsoft Corporation Increasing motion smoothness using frame interpolation with motion analysis
US20060257048A1 (en) * 2005-05-12 2006-11-16 Xiaofan Lin System and method for producing a page using frames of a video stream
US20140218569A1 (en) * 2013-02-01 2014-08-07 Canon Kabushiki Kaisha Imaging apparatus, control method therefor, and storage medium
US20140292817A1 (en) * 2011-10-20 2014-10-02 Imax Corporation Invisible or Low Perceptibility of Image Alignment in Dual Projection Systems
WO2015163317A1 (ja) 2014-04-22 2015-10-29 日本電信電話株式会社 映像表示装置、映像投影装置、動的錯覚呈示装置、映像生成装置、それらの方法、データ構造、プログラム
US20170006284A1 (en) * 2013-01-30 2017-01-05 Neelesh Gokhale Projected interpolation prediction generation for next generation video coding
US20190124332A1 (en) * 2016-03-28 2019-04-25 Lg Electronics Inc. Inter-prediction mode based image processing method, and apparatus therefor
WO2020077198A1 (en) * 2018-10-12 2020-04-16 Kineticor, Inc. Image-based models for real-time biometrics and marker-less motion tracking in imaging applications

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6430420B2 (ja) * 2016-02-12 2018-11-28 日本電信電話株式会社 情報呈示システム、および情報呈示方法

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2557466B2 (ja) * 1988-05-13 1996-11-27 日本電気ホームエレクトロニクス株式会社 Museデコーダの低域置換回路
US20040252230A1 (en) * 2003-06-13 2004-12-16 Microsoft Corporation Increasing motion smoothness using frame interpolation with motion analysis
US20060257048A1 (en) * 2005-05-12 2006-11-16 Xiaofan Lin System and method for producing a page using frames of a video stream
US20140292817A1 (en) * 2011-10-20 2014-10-02 Imax Corporation Invisible or Low Perceptibility of Image Alignment in Dual Projection Systems
US20170006284A1 (en) * 2013-01-30 2017-01-05 Neelesh Gokhale Projected interpolation prediction generation for next generation video coding
US20140218569A1 (en) * 2013-02-01 2014-08-07 Canon Kabushiki Kaisha Imaging apparatus, control method therefor, and storage medium
WO2015163317A1 (ja) 2014-04-22 2015-10-29 日本電信電話株式会社 映像表示装置、映像投影装置、動的錯覚呈示装置、映像生成装置、それらの方法、データ構造、プログラム
US10571794B2 (en) 2014-04-22 2020-02-25 Nippon Telegraph And Telephone Corporation Video presentation device, dynamic illusion presentation device, video generation device, method thereof, data structure, and program
US20200150521A1 (en) 2014-04-22 2020-05-14 Nippon Telegraph And Telephone Corporation Video presentation device, method thereof, and program recording medium
US20190124332A1 (en) * 2016-03-28 2019-04-25 Lg Electronics Inc. Inter-prediction mode based image processing method, and apparatus therefor
WO2020077198A1 (en) * 2018-10-12 2020-04-16 Kineticor, Inc. Image-based models for real-time biometrics and marker-less motion tracking in imaging applications

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Search machine translation: Low-frequency Replacement Circuit For MUSE Decoder of JP 2557466 B2 to Ryuichi, retrieved May 10, 2023, 7 pages. (Year: 2023). *
Taiki Fukiage et al., "A model of V1 metamer can explain perceived deformation of a static object induced by light projection", Vision Sciences Society, Florida, U. S. A., May 2016.

Also Published As

Publication number Publication date
JP7040422B2 (ja) 2022-03-23
US20210398293A1 (en) 2021-12-23
WO2020110738A1 (ja) 2020-06-04
JP2020087069A (ja) 2020-06-04

Similar Documents

Publication Publication Date Title
CN110650368B (zh) 视频处理方法、装置和电子设备
US8958484B2 (en) Enhanced image and video super-resolution processing
US9485435B2 (en) Device for synthesizing high dynamic range image based on per-pixel exposure mapping and method thereof
US9865032B2 (en) Focal length warping
US20150154783A1 (en) Augmenting physical appearance using illumination
CN113689539B (zh) 基于隐式光流场的动态场景实时三维重建方法
US8903195B2 (en) Specification of an area where a relationship of pixels between images becomes inappropriate
US20230043791A1 (en) Holographic image processing with phase error compensation
CN104954710A (zh) 影像处理装置和使用它的投影仪装置
CN108140359A (zh) 用于检测和/或校正显示器中的像素亮度和/或色度响应变化的系统和方法
CN114937050A (zh) 绿幕抠图方法、装置及电子设备
WO2011018878A1 (ja) 画像処理システム、画像処理方法および画像処理用プログラム
CN114049464A (zh) 一种三维模型的重建方法及设备
CN114616587A (zh) 基于学习的镜头眩光移除
US11954867B2 (en) Motion vector generation apparatus, projection image generation apparatus, motion vector generation method, and program
US20100149319A1 (en) System for projecting three-dimensional images onto a two-dimensional screen and corresponding method
US9135746B2 (en) Image processing apparatus and control method thereof
JP6757004B2 (ja) 画像処理装置及び方法、画像処理プログラム、並びに投影装置
CN112954313A (zh) 一种对全景图像感知质量的计算方法
CN111696034A (zh) 图像处理方法、装置及电子设备
CN116980549A (zh) 视频帧处理方法、装置、计算机设备和存储介质
JP7387029B2 (ja) ソフトレイヤ化および深度認識インペインティングを用いた単画像3d写真技術
CN115049559A (zh) 模型训练、人脸图像处理、人脸模型处理方法及装置、电子设备及可读存储介质
US20230368340A1 (en) Gating of Contextual Attention and Convolutional Features
CN113658068A (zh) 基于深度学习的cmos相机的去噪增强系统及方法

Legal Events

Date Code Title Description
AS Assignment

Owner name: NIPPON TELEGRAPH AND TELEPHONE CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FUKIAGE, TAIKI;NISHIDA, SHINYA;KAWABE, TAKAHIRO;SIGNING DATES FROM 20210119 TO 20210125;REEL/FRAME:056332/0736

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STCF Information on status: patent grant

Free format text: PATENTED CASE