WO2016152634A1 - Information processing apparatus, information processing method, and program - Google Patents
Information processing apparatus, information processing method, and program
- Publication number
- WO2016152634A1 (PCT/JP2016/058066)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- moving image
- image
- subject
- performer
- information processing
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/036—Insert-editing
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/633—Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
- H04N23/635—Region indicators; Field of view indicators
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2628—Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/643—Hue control means, e.g. flesh tone control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- the present disclosure relates to an information processing apparatus, an information processing method, and a program, and in particular to an information processing apparatus, information processing method, and program capable of easily generating a moving image in which performers imaged at different locations are combined in a balanced manner.
- the chroma key composition technique used in movies and TV broadcasts mainly captures performers against a green or blue background. The performers are cut out from the captured moving image, and the cut-out images are corrected and adjusted to an appropriate size and position before being synthesized with a separately prepared background moving image. In addition, when the composite is broadcast in real time, the composition used when capturing the performers must be matched to the composition of the composite-destination moving image.
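- the green-background cut-out described above can be sketched as a simple per-pixel rule. The following is an illustrative sketch only, not the technique claimed in this disclosure; the RGB tuples and the `dominance` threshold are assumptions:

```python
def is_green_background(pixel, dominance=1.5):
    """Treat a pixel as green-screen background when its green channel
    clearly dominates red and blue (a common chroma-key heuristic)."""
    r, g, b = pixel
    return g > dominance * max(r, b, 1)

def chroma_key_composite(foreground, background):
    """Replace green-background pixels of `foreground` with the
    corresponding `background` pixels."""
    return [bg if is_green_background(fg) else fg
            for fg, bg in zip(foreground, background)]
```

For example, `chroma_key_composite([(10, 220, 15), (180, 120, 90)], [(0, 0, 255)] * 2)` replaces the green pixel with the background pixel and keeps the skin-toned one.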
- Patent Document 1 discloses a synthesizing method in which the position and size of a person portion cut out from a captured moving image are adjusted according to the content of the background moving image, based on data designating an appropriate person-composition layout matching that content.
- however, the method of Patent Document 1 requires time and labor to register the data designating the person-composition layout, so generating a moving image in which performers imaged at different locations are synthesized in a balanced manner involved considerable trouble. A simpler means of generating such a moving image has therefore been demanded.
- the present disclosure has been made in view of such a situation, and makes it possible to easily generate a moving image in which performers imaged at different locations are combined in a balanced manner.
- An information processing apparatus according to the present disclosure includes an adjustment unit that adjusts a synthesis condition for synthesizing a first moving image and a second moving image, based on an analysis result obtained by at least one of image analysis of a region of the first moving image in which a first subject appears and image analysis of the second moving image, which is different from the first moving image.
- An information processing method or program according to the present disclosure includes a step of adjusting a synthesis condition for synthesizing a first moving image and a second moving image, based on an analysis result obtained by at least one of image analysis of a region of the first moving image in which a first subject appears and image analysis of the second moving image, which is different from the first moving image.
- in one aspect of the present disclosure, the synthesis condition for synthesizing the first moving image and the second moving image is thus adjusted based on such an analysis result.
- FIG. 18 is a block diagram illustrating a configuration example of an embodiment of a computer to which the present technology is applied.
- FIG. 1 is a block diagram illustrating a configuration example of an embodiment of a distribution system to which the present technology is applied.
- in a distribution system 11, performer-side information processing devices 13-1 and 13-2, a distribution server 14, and N viewer-side information processing devices 15-1 to 15-N are connected via a network 12 such as the Internet.
- in the distribution system 11, moving images captured by the performer-side information processing device 13-1 or 13-2 are sequentially transmitted to the distribution server 14 via the network 12, and are distributed from the distribution server 14 to the viewer-side information processing devices 15-1 to 15-N via the network 12.
- thus, the viewers of the viewer-side information processing devices 15-1 to 15-N can view moving images in which the users of the performer-side information processing devices 13-1 and 13-2 appear as performers.
- the distribution system 11 can distribute a composite moving image in which a performer of the performer-side information processing device 13-1 and a performer of the performer-side information processing device 13-2 are combined.
- the performer-side information processing device 13-2 transmits a moving image of its performer to the performer-side information processing device 13-1 via the distribution server 14.
- the performer-side information processing device 13-1 then generates a synthesized moving image in which its own performer is combined so as to stand next to the performer shown in the moving image transmitted from the performer-side information processing device 13-2, and transmits the synthesized moving image to the distribution server 14.
- in this way, synthesized moving images in which performers imaged at different locations are combined are delivered from the distribution server 14 to the viewer-side information processing devices 15-1 to 15-N.
- FIG. 2 is a block diagram showing a configuration example of the performer side information processing apparatus 13-1.
- the performer side information processing apparatus 13-1 includes an imaging unit 21, a communication unit 22, a display 23, a speaker 24, and an image processing unit 25.
- the imaging unit 21 includes an optical system and an imaging element (not shown), and supplies a moving image obtained by imaging a performer of the performer side information processing apparatus 13-1 to the image processing unit 25.
- the communication unit 22 communicates with the performer-side information processing device 13-2 and the distribution server 14 via the network 12.
- the communication unit 22 receives the moving image transmitted from the performer-side information processing device 13-2 and supplies it to the image processing unit 25, and transmits the synthesized moving image output from the image processing unit 25 to the distribution server 14.
- the display 23 displays a synthesized moving image supplied from the synthesis processing unit 36 of the image processing unit 25 or displays a guidance image supplied from the guide unit 38 of the image processing unit 25.
- the speaker 24 outputs a guidance voice supplied from the guide unit 38 of the image processing unit 25.
- the image processing unit 25 performs image processing that cuts out the performer from the moving image captured by the imaging unit 21 (hereinafter referred to as the captured moving image, as appropriate) and synthesizes the cut-out performer with the moving image supplied from the communication unit 22 (hereinafter referred to as the composite-destination moving image, as appropriate). The image processing unit 25 then supplies the synthesized moving image generated by this image processing to the communication unit 22 and the display 23. As illustrated, the image processing unit 25 includes a cutout unit 31, a composition position adjustment unit 32, a layer setting unit 33, a size adjustment unit 34, an image quality adjustment unit 35, a composition processing unit 36, a parts creation unit 37, and a guide unit 38.
- the cutout unit 31 performs face detection and person detection on the captured moving image supplied from the imaging unit 21, and detects the performer shown in the captured moving image (hereinafter referred to as the imaging performer, as appropriate). The cutout unit 31 then performs image processing that cuts out the region of the captured moving image in which the imaging performer appears, generates a cut-out moving image of the imaging performer (a moving image of the region, formed along the imaging performer's contour, in which the imaging performer appears), and supplies it to the size adjustment unit 34.
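- the cut-out step can be illustrated, under the assumption that person detection yields a boolean mask over the frame, by cropping to the mask's bounding box and blanking pixels outside the contour. This is a sketch only, not the cutout unit 31 itself; the frame and mask representations are assumptions:

```python
def bounding_box(mask):
    """Top, left, bottom, right (inclusive) of the True cells in a 2-D mask."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    cols = [j for j in range(len(mask[0])) if any(row[j] for row in mask)]
    return rows[0], cols[0], rows[-1], cols[-1]

def cut_out(frame, mask):
    """Crop `frame` to the performer's bounding box, blanking pixels
    outside the mask (None stands for a transparent pixel)."""
    top, left, bottom, right = bounding_box(mask)
    return [[frame[i][j] if mask[i][j] else None
             for j in range(left, right + 1)]
            for i in range(top, bottom + 1)]
```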
- the composition position adjustment unit 32 adjusts the composite position (a synthesis condition) used when compositing the cut-out moving image of the imaging performer generated by the cutout unit 31 with the composite-destination moving image so as to obtain an appropriate arrangement, and sets that composite position in the composite-destination moving image.
- for example, the composition position adjustment unit 32 performs image analysis on the composite-destination moving image supplied from the communication unit 22, detects the performer shown in the composite-destination moving image (hereinafter referred to as the composite-destination performer), and acquires the position of the composite-destination performer in the composite-destination moving image as the analysis result.
- based on the analysis result, the composition position adjustment unit 32 then adjusts the composite position, taking into consideration the composition that results when the imaging performer and the composite-destination performer are arranged together, so that the imaging performer is placed in a balanced manner relative to the composite-destination performer.
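- the composite-position adjustment can be sketched as choosing, from the detected horizontal extent of the composite-destination performer, the side with more free space. The function name and the `margin` parameter are illustrative assumptions, not part of this disclosure:

```python
def choose_composite_x(dest_left, dest_right, cut_width, frame_width, margin=10):
    """Pick an x-offset that places the cut-out performer beside the
    destination performer (horizontal extent dest_left..dest_right),
    preferring whichever side has more free space so the two do not overlap."""
    if frame_width - dest_right >= dest_left:      # more room on the right
        return min(dest_right + margin, frame_width - cut_width)
    return max(dest_left - margin - cut_width, 0)  # otherwise place on the left
```

For a 640-pixel-wide frame with the destination performer at x = 100..260, a 120-pixel-wide cut-out is placed at x = 270, just to the performer's right.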
- the layer setting unit 33 performs object detection on the composite-destination moving image supplied from the communication unit 22 and, as described later with reference to FIG. 4, sets in the composite-destination moving image a layer indicating the front-rear positional relationship between each object detected from the composite-destination moving image and the imaging performer.
- the size adjustment unit 34 performs image processing that adjusts the size (a synthesis condition) used when the cut-out moving image of the imaging performer supplied from the cutout unit 31 is composited at the composite position of the composite-destination moving image, and supplies the size-adjusted cut-out moving image of the imaging performer to the image quality adjustment unit 35.
- for example, the size adjustment unit 34 adjusts the size of the cut-out moving image so that the imaging performer and the composite-destination performer are substantially the same size.
- specifically, the size adjustment unit 34 performs image analysis on the cut-out moving image of the imaging performer and on the composite-destination moving image, acquires as analysis results the size of the imaging performer's face in the cut-out moving image and the size of the composite-destination performer's face in the composite-destination moving image, and adjusts the size of the cut-out moving image of the imaging performer based on those results.
- the image quality adjustment unit 35 performs image processing that adjusts the image quality of the cut-out moving image of the imaging performer (synthesis conditions such as color properties, for example brightness, saturation, and hue, as well as resolution) to match the image quality of the composite-destination performer when the cut-out moving image is composited with the composite-destination moving image, and supplies the resulting image to the composition processing unit 36.
- for example, the image quality adjustment unit 35 considers the brightness balance between the imaging performer and the composite-destination performer and performs image processing that increases the brightness of the cut-out moving image of the imaging performer as needed.
- specifically, the image quality adjustment unit 35 performs image analysis on the cut-out moving image of the imaging performer and on the composite-destination moving image, acquires the image quality of each performer as an analysis result, and adjusts the image quality of the cut-out moving image of the imaging performer based on that result.
- the composition processing unit 36 performs composition processing that superimposes the cut-out moving image of the imaging performer on the composite-destination moving image, and generates a synthesized moving image.
- at this time, the composition processing unit 36 composites the cut-out moving image of the imaging performer, with its image quality adjusted by the image quality adjustment unit 35 and its size adjusted by the size adjustment unit 34, at the composite position set by the composition position adjustment unit 32.
- further, when the layer setting unit 33 has set a layer indicating that an object detected from the composite-destination moving image lies in front of the imaging performer, the composition processing unit 36 cuts that object out of the composite-destination moving image, composites the cut-out moving image of the imaging performer, and then superimposes the moving image of the cut-out object on top.
- when image analysis of the synthesized moving image by the composition processing unit 36 finds that part of the body of the imaging performer composited into the synthesized moving image is missing, the parts creation unit 37 generates a part (an interpolated image) for complementing the missing portion and supplies it to the composition processing unit 36. The composition processing unit 36 thereby performs composition processing that composites the part created by the parts creation unit 37 so as to hide the missing portion of the imaging performer.
- the guide unit 38 performs image analysis on the synthesized moving image generated by the composition processing unit 36 and, based on the analysis result, outputs guidance that gives various instructions to the imaging performer (presents instructions) so that the imaging performer and the composite-destination performer do not appear unnatural together.
- for example, the guide unit 38 supplies the display 23 with a guidance image that instructs the imaging performer which way to face, where the imaging unit 21 should be placed, or how bright the surrounding environment in which the imaging performer is captured should be, or supplies a guidance voice to the speaker 24 for output.
- the performer-side information processing device 13-1 is configured as described above, and the image processing unit 25 can generate a synthesized moving image in which the imaging performer and the composite-destination performer are combined in a balanced manner, taking their positions and sizes into consideration. Further, each time the imaging unit 21 supplies a captured moving image of the imaging performer to the image processing unit 25, the image processing unit 25 sequentially performs image processing, so a synthesized moving image can be output in real time (allowing for some lag due to processing). The synthesized moving image output from the image processing unit 25 is transmitted to the distribution server 14 via the communication unit 22 and distributed to the viewer-side information processing devices 15-1 to 15-N via the network 12.
- FIG. 3 shows a captured moving image A1 captured by the imaging unit 21, a composite-destination moving image B1 transmitted from the performer-side information processing device 13-2, and a synthesized moving image C1 produced by the composition processing unit 36.
- as illustrated, the position of the imaging performer D1 in the captured moving image A1 and the position of the composite-destination performer E1 in the composite-destination moving image B1 are in a positional relationship such that they would overlap each other.
- if the imaging performer D1 is cut out from the captured moving image A1 and composited without position adjustment, the imaging performer D1 is composited so as to overlap the composite-destination performer E1, producing a synthesized moving image in which the composite-destination performer E1 is hidden behind the imaging performer D1. Conventionally, therefore, the imaging performer D1 needed to understand the composite-destination moving image B1 well and change standing position at the time of imaging so that the composite-destination performer E1 was not hidden, and appropriate alignment was difficult.
- in contrast, the composition position adjustment unit 32 performs image analysis (for example, person detection, face detection, or composition recognition) on the composite-destination moving image B1, and can thereby identify the position of the composite-destination performer E1 shown in the composite-destination moving image B1.
- the composition position adjustment unit 32 can therefore adjust the composite position so that the imaging performer D1 does not overlap the composite-destination performer E1.
- for example, the composition position adjustment unit 32 sets the initial composite position to an arrangement next to the composite-destination performer E1, as indicated by the broken line in the composite-destination moving image B1.
- in this way, the composite position of the imaging performer D1 is set to an appropriate arrangement automatically by the composition position adjustment unit 32, that is, without the imaging performer D1 adjusting his or her standing position. The image processing unit 25 can thereby easily generate the synthesized moving image C1, arranged so that the composite-destination performer E1 and the imaging performer D1 do not overlap.
- furthermore, when distance information obtained by distance measurement is added to the composite-destination moving image, the composition position adjustment unit 32 can use that distance information to recognize the position of the composite-destination performer E1 accurately and set the composite position more appropriately.
- in addition, the composition position adjustment unit 32 can adjust the composite position to an appropriate arrangement so that the respective performers do not overlap.
- FIG. 4 shows a captured moving image A2 captured by the imaging unit 21, a composite-destination moving image B2 transmitted from the performer-side information processing device 13-2, and a synthesized moving image C2 produced by the composition processing unit 36.
- if the imaging performer D2 is cut out from the captured moving image A2 and composited as-is, the imaging performer D2 is composited so as to overlap the object F and, although not shown, a synthesized moving image in which the front-rear relationship is reversed, with the object F hidden behind the imaging performer D2, is generated. That is, a composition in which the object F is arranged in front of the imaging performer D2 is originally desirable, but it is difficult to generate a synthesized moving image with such a composition.
- in contrast, the layer setting unit 33 performs object detection on the composite-destination moving image B2 and sets, in the composite-destination moving image B2, a layer indicating the front-rear relationship of the object F with respect to the imaging performer D2.
- for example, the layer setting unit 33 displays on the display 23 the region surrounding the object F detected from the composite-destination moving image B2 (the region indicated by the broken line) as a region for which a layer can be set, and displays a graphical user interface for instructing whether that region should be layered in front of or behind the imaging performer D2.
- further, when the distance to the subject imaged in the composite-destination moving image B2 has been obtained by a distance measuring technique and that distance information is added to the composite-destination moving image B2, the layer setting unit 33 can set the layer by referring to the distance information. For example, if the distance to the object F is less than a predetermined value, the layer setting unit 33 sets a layer in front of the imaging performer D2, and if the distance to the object F is equal to or greater than the predetermined value, it sets a layer behind the imaging performer D2.
- in the example of FIG. 4, the layer setting unit 33 sets a layer on the object F so that it is displayed in front of the imaging performer D2.
- the image processing unit 25 can thereby generate a synthesized moving image C2 having the originally desirable composition, in which the object F is arranged in front of the imaging performer D2.
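- the distance-threshold layer rule described with reference to FIG. 4 can be sketched as follows. The names and data layout are illustrative assumptions, not the layer setting unit 33's actual implementation:

```python
FRONT, REAR = "front", "rear"

def assign_layer(object_distance, threshold):
    """Distance rule from the text: an object nearer than the threshold is
    layered in front of the imaging performer, otherwise behind."""
    return FRONT if object_distance < threshold else REAR

def draw_order(performer, objects):
    """Rendering order for the composite: rear objects first, then the
    cut-out performer, then front objects superimposed last."""
    rear = [name for name, layer in objects if layer == REAR]
    front = [name for name, layer in objects if layer == FRONT]
    return rear + [performer] + front
```

An object flagged `front` is cut out of the destination moving image and drawn again after the performer, matching the superimposition order described above.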
- FIG. 5 shows a captured moving image A3 captured by the imaging unit 21 and a composite-destination moving image B3 transmitted from the performer-side information processing device 13-2, together with a synthesized moving image C3 that has been size-adjusted by the size adjustment unit 34 and composited by the composition processing unit 36, and, on the right side of FIG. 5, a synthesized moving image C3' in which the part created by the parts creation unit 37 has been composited with the synthesized moving image C3.
- the imaging performer D3 in the captured moving image A3 is photographed larger than the composite-destination performer E3 in the composite-destination moving image B3.
- if the imaging performer D3 is cut out from the captured moving image A3 and composited as-is without size adjustment, a synthesized moving image (not shown) in which the imaging performer D3 is very large relative to the composite-destination performer E3 is generated. Conventionally, therefore, the imaging performer D3 had to move to a standing position farther from the imaging unit 21 so as to appear the same size as the composite-destination performer E3.
- in contrast, the composition position adjustment unit 32 can recognize the size of the face of the composite-destination performer E3 shown in the composite-destination moving image B3 when setting the composite position, and notify the size adjustment unit 34 of it.
- the size adjustment unit 34 then performs image processing that adjusts the size of the imaging performer D3 (an enlargement or reduction operation) so that the face size of the imaging performer D3 supplied from the cutout unit 31 (dotted circle) is substantially the same as the face size of the composite-destination performer E3 (dashed circle).
- note that the size adjustment unit 34 may compare the size of the face of the imaging performer D3 with the size of the object G and adjust the size of the imaging performer D3 based on the comparison result. The size comparison may also be made against a chair, a table, or the like, instead of the flower shown in the figure.
- then, the imaging performer D3 whose size has been adjusted by the size adjustment unit 34 is composited with the composite-destination moving image B3, so that a synthesized moving image C3 in which the imaging performer D3 and the composite-destination performer E3 are combined in a balanced manner can be generated.
- further, the composition processing unit 36 performs skeleton estimation on the imaging performer D3 composited into the synthesized moving image C3 and, when a missing portion of the imaging performer D3 is detected (the lower body, in the example of FIG. 5), requests the parts creation unit 37 to create a part that hides the missing portion.
- for example, when the composition processing unit 36 recognizes the floor below the imaging performer D3 by performing image recognition on the synthesized moving image C3, it requests the parts creation unit 37 to create a part that is arranged on the floor and hides the missing portion of the imaging performer D3.
- the parts creation unit 37 creates a part H (a speech stand, in the example of FIG. 5) by computer graphics in response to the request from the composition processing unit 36 and supplies it to the composition processing unit 36.
- the composition processing unit 36 then generates a synthesized moving image C3' in which the part H created by the parts creation unit 37 is arranged on the floor of the synthesized moving image C3 so as to compensate for the missing portion of the imaging performer D3.
- in this way, the size adjustment unit 34 adjusts the size of the imaging performer D3, and the part H created by the parts creation unit 37 is composited so as to hide the missing portion of the imaging performer D3, so the image processing unit 25 can generate a synthesized moving image C3' in which the imaging performer D3 is kept from appearing unnatural.
- FIG. 6 shows a display image J displayed on the display 23.
- the display 23 displays a display image J in which the guide image K output from the guide unit 38 is superimposed on the composite moving image C4 output from the composite processing unit 36.
- the guide unit 38 provides guidance that gives various instructions to the imaging performer D4.
- for example, the guide unit 38 analyzes the synthesized moving image C4 supplied from the composition processing unit 36 to detect the faces of the imaging performer D4 and the composite-destination performer E4 and recognize the direction of each face. The guide unit 38 then outputs to the display 23 a guide image K composed of an arrow indicating the direction of the composite-destination performer E4 and the message "Please look this way to speak", so that the composition appears to show a natural conversation. The guide image K is only displayed on the display 23, superimposed on the synthesized moving image C4, and is not transmitted via the communication unit 22.
- in this way, the guide unit 38 can use the guide image K to guide the imaging performer D4 so that the imaging performer D4 and the composite-destination performer E4 appear to be having a conversation.
- as a result, the image processing unit 25 can generate a synthesized moving image C4 with a natural composition in which the imaging performer D4 and the composite-destination performer E4 show no sense of incongruity.
- the guide unit 38 can also output guidance that instructs the imaging performer D4 of the performer-side information processing device 13-1 on the installation of the imaging unit 21.
- for example, the guide unit 38 can display on the display 23, or output from the speaker 24, messages such as "Please raise the camera higher", "Place the camera a little more to the right", or "Please move away from the camera".
- further, when the guide unit 38 analyzes the white balance of the imaging performer D4 and detects that the image was captured in an extremely dark place, it can point out the darkness of the room and prompt for more light, for example by displaying the message "Please brighten the room" on the display 23 or outputting it from the speaker 24.
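- the guidance decisions above can be sketched as simple rules on detected face positions and measured brightness. The threshold and messages here are illustrative assumptions, not values from this disclosure:

```python
def gaze_guidance(performer_face_x, partner_face_x):
    """Suggest which way the imaging performer should look so the two
    performers appear to face each other in the composite."""
    return "look right" if partner_face_x > performer_face_x else "look left"

def brightness_guidance(mean_brightness, low=60):
    """Prompt for more light when the performer's region is very dark
    (0-255 scale; the `low` cutoff is an assumed value)."""
    return "Please brighten the room" if mean_brightness < low else None
```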
- with reference to FIG. 7, a process of compositing the imaging performer D5 cut out from the captured moving image and the composite-destination performer E5 cut out from the composite-destination moving image onto a background still image L will be described. The left side of FIG. 7 shows the imaging performer D5, the composite-destination performer E5, and the background still image L. The right side of FIG. 7 shows a synthesized moving image C5 in which the composition processing unit 36 has composited the imaging performer D5 and the composite-destination performer E5 against the background still image L.
- As illustrated, the brightness of the imaging performer D5 and that of the synthesis destination performer E5 differ: the imaging performer D5 was captured in a backlit environment and therefore appears darker than the synthesis destination performer E5. If the imaging performer D5 and the synthesis destination performer E5 were synthesized as they are, an unnatural synthesized moving image with mismatched brightness would be generated (not shown).
- Therefore, the image quality adjustment unit 35 adjusts the white balance of the imaging performer D5 so that the imaging performer D5 and the synthesis destination performer E5 have the same brightness. Thereby, the image processing unit 25 can generate a synthesized moving image C5 in which the imaging performer D5 and the synthesis destination performer E5 are combined with the background still image L at the same brightness, eliminating the unnaturalness.
- Note that the image quality adjustment unit 35 may perform a relative adjustment so that the imaging performer D5 and the synthesis destination performer E5 have the same brightness. Furthermore, the image quality adjustment unit 35 can adjust the imaging performer D5 and the synthesis destination performer E5 to have the same saturation and hue as well as the same brightness.
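A minimal sketch of such a relative brightness adjustment, using a simple gain on pixel values (an assumption for illustration; the patent does not detail the method used by the image quality adjustment unit 35):

```python
def match_brightness(src_pixels, ref_pixels):
    """Scale src_pixels (0-255 values) so their mean brightness equals
    the mean of ref_pixels, as a stand-in for making the two performers
    equally bright before synthesis."""
    src_mean = sum(src_pixels) / len(src_pixels)
    ref_mean = sum(ref_pixels) / len(ref_pixels)
    gain = ref_mean / src_mean            # > 1 brightens a backlit subject
    return [min(255.0, v * gain) for v in src_pixels]

backlit_performer = [40.0, 50.0, 60.0]         # dark, like D5 in FIG. 7
destination_performer = [140.0, 150.0, 160.0]  # brighter, like E5
adjusted = match_brightness(backlit_performer, destination_performer)
print(sum(adjusted) / len(adjusted))  # -> 150.0, matching E5's mean
```

The same gain-based idea extends to saturation and hue by operating on those channels of an HSV representation instead of raw brightness values.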
- FIG. 8 is a flowchart explaining the image processing performed by the image processing unit 25.
- The processing is started when an operation is performed on an operation unit (not shown) to synthesize the performer of the performer-side information processing device 13-1 into the synthesis destination moving image.
- In step S11, the cutout unit 31 performs image processing that cuts out the area in which the imaging performer appears from the captured moving image supplied from the imaging unit 21, generates a cutout moving image of the imaging performer, and supplies it to the size adjustment unit 34.
- In step S12, the composition position adjustment unit 32 performs image analysis on the synthesis destination moving image supplied from the communication unit 22, adjusts the position so that the composition position at which the cutout moving image of the imaging performer is combined with the synthesis destination moving image becomes appropriate, and sets that composition position in the synthesis destination moving image. The composition position adjustment unit 32 supplies the synthesis destination moving image in which the composition position has been set to the image quality adjustment unit 35. The composition position adjustment unit 32 also recognizes the size of the face of the synthesis destination performer shown in the synthesis destination moving image and notifies the size adjustment unit 34 of it.
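For illustration only, if the image analysis yields the destination performer's horizontal extent, an overlap-avoiding composition position could be chosen as follows (the scanning strategy, names, and pixel values are assumptions, not the patent's specified method):

```python
def choose_composite_x(frame_width, cutout_width, occupied_spans):
    """Return the leftmost x at which a cutout of cutout_width fits in the
    frame without horizontally overlapping any occupied (start, end) span,
    e.g. the span covered by the synthesis destination performer."""
    for x in range(frame_width - cutout_width + 1):
        left, right = x, x + cutout_width
        if all(right <= s or left >= e for s, e in occupied_spans):
            return x
    return None  # no overlap-free position exists

# The destination performer occupies x in [0, 700); a 400-px-wide cutout
# is placed at the first free position to the right of that span.
print(choose_composite_x(1920, 400, [(0, 700)]))  # -> 700
```

A real implementation would work in two dimensions and could weigh additional composition criteria, but the core idea — set the position so the clipped performer avoids the subject already in the frame — is the same.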
- In step S13, the size adjustment unit 34 adjusts the size of the cutout moving image of the imaging performer supplied from the cutout unit 31 in step S11 so that the size of the imaging performer's face becomes approximately the same as the size of the face of the synthesis destination performer, and supplies the result to the image quality adjustment unit 35.
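As a rough sketch of this step, the scale factor follows from the ratio of the two face heights (the function name and the sizes are illustrative assumptions):

```python
def scale_to_match_faces(cutout_size, performer_face_h, destination_face_h):
    """Return the scale factor and resized (width, height) that make the
    imaging performer's face about the same size as the destination
    performer's face."""
    scale = destination_face_h / performer_face_h
    w, h = cutout_size
    return scale, (round(w * scale), round(h * scale))

# A 300x600 cutout whose face is 120 px tall, matched to a 90-px face:
scale, new_size = scale_to_match_faces((300, 600), 120, 90)
print(scale, new_size)  # -> 0.75 (225, 450)
```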
- In step S14, the image quality adjustment unit 35 performs image processing that adjusts the image quality of the cutout moving image of the imaging performer, supplied from the size adjustment unit 34 in step S13, so that it matches the image quality of the synthesis destination performer in the synthesis destination moving image supplied from the composition position adjustment unit 32 in step S12, and supplies the result to the synthesis processing unit 36.
- In step S15, the layer setting unit 33 performs object detection processing on the synthesis destination moving image supplied from the communication unit 22, sets in the synthesis destination moving image a layer indicating the front or rear positional relationship of each detected object with respect to the imaging performer, and supplies the result to the synthesis processing unit 36.
- In step S16, the synthesis processing unit 36 performs synthesis processing that combines the cutout moving image of the imaging performer, at the size adjusted by the size adjustment unit 34 in step S13 and with the image quality adjusted by the image quality adjustment unit 35 in step S14, at the composition position set in the synthesis destination moving image by the composition position adjustment unit 32. At this time, for an object whose layer is set in front of the imaging performer, the synthesis processing unit 36 cuts out the object from the synthesis destination moving image and, after synthesizing the cutout moving image of the imaging performer, superimposes the image obtained by cutting out the object. The synthesis processing unit 36 then performs composition analysis on the synthesized moving image generated by this synthesis processing.
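The ordering described here — background, then the performer cutout, then objects whose layer is in front — can be illustrated with toy two-dimensional frames (a simplified stand-in for the synthesis processing unit 36, not its actual implementation):

```python
def paste(frame, patch, top, left):
    """Copy non-None patch cells onto frame in place (None = transparent)."""
    for r, row in enumerate(patch):
        for c, v in enumerate(row):
            if v is not None:
                frame[top + r][left + c] = v

def layered_composite(dest, performer, performer_pos, front_objects):
    """Synthesize the performer over the destination frame, then re-overlay
    each object whose layer is set in front of the performer (each object
    patch having been cut out of the destination beforehand)."""
    frame = [row[:] for row in dest]
    paste(frame, performer, *performer_pos)
    for patch, (top, left) in front_objects:
        paste(frame, patch, top, left)
    return frame

dest = [list("...."), list("...."), list(".TT.")]  # 'T': table in front
performer = [list("PP"), list("PP")]               # performer cutout
table_patch = [list("TT")]                         # 'T' cut from dest
out = layered_composite(dest, performer, (1, 1), [(table_patch, (2, 1))])
print("".join(out[1]), "".join(out[2]))  # -> .PP. .TT.
```

Without the final re-overlay, the performer would incorrectly cover the table; with it, the performer appears to stand behind the front-layer object.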
- In step S17, based on the result of the composition analysis performed on the synthesized moving image in step S16, the synthesis processing unit 36 determines whether the imaging performer synthesized with the synthesis destination moving image has a missing part, as described with reference to FIG. 5.
- If the synthesis processing unit 36 determines in step S17 that the imaging performer has a missing part, the process proceeds to step S18, and the synthesis processing unit 36 requests the parts creation unit 37 to create a part that compensates for the missing part. The parts creation unit 37 creates a computer-graphics part in response to the request and supplies it to the synthesis processing unit 36. The synthesis processing unit 36 then generates a synthesized moving image in which the part created by the parts creation unit 37 is synthesized with the synthesis destination moving image so as to compensate for the missing part of the imaging performer.
- After step S18, the process proceeds to step S19, and the synthesis processing unit 36 outputs the synthesized moving image to the display 23 for display. The synthesis processing unit 36 also supplies the synthesized moving image to the communication unit 22, which transmits it to the distribution server 14 via the network 12.
- In step S20, the guide unit 38 analyzes the synthesized moving image generated by the synthesis processing unit 36 and determines whether it is necessary to output guidance.
- If the guide unit 38 determines in step S20 that guidance needs to be output, the process proceeds to step S21, and the guide unit 38 displays a guide image on the display 23 or outputs guidance voice from the speaker 24.
- After step S21, or when the guide unit 38 determines in step S20 that guidance does not need to be output, the process returns to step S11, and the same processing is repeated thereafter.
- As described above, by adjusting the conditions under which the cutout moving image of the imaging performer is synthesized with the synthesis destination moving image so that the position, size, image quality, and the like of the imaging performer and the synthesis destination performer become appropriate, the image processing unit 25 can easily generate an image in which performers captured at different locations are synthesized in a well-balanced manner.
- Note that, for example, the performer-side information processing device 13-2, which captures the synthesis destination moving image, may perform the image analysis and transmit a synthesis destination moving image in which the composition position is already set. Similarly, the performer-side information processing device 13-2 may detect objects shown in the synthesis destination moving image using a distance-measuring technique and perform the layer setting.
- Alternatively, the performer-side information processing device 13-1 may transmit the cutout moving image of the imaging performer to the distribution server 14, and the distribution server 14 may perform the synthesis processing with the synthesis destination moving image.
- Furthermore, the distribution server 14 may perform the image processing of the image processing unit 25. That is, the performer-side information processing devices 13-1 and 13-2 each transmit moving images of their performers to the distribution server 14. The distribution server 14 then cuts the performer out of the moving image from the performer-side information processing device 13-1 and performs synthesis processing using the moving image from the performer-side information processing device 13-2 as the synthesis destination moving image. At this time, the distribution server 14 can perform the layer setting processing, size adjustment processing, image quality adjustment processing, and part creation processing described above.
- Alternatively, instead of creating a part that compensates for a missing part of the imaging performer, a part of the synthesized moving image may be trimmed and enlarged so that the region that would become the missing part is excluded when the imaging performer is synthesized.
- Note that the captured moving images of performers captured at different points include not only captured moving images in which the performers are at physically distant locations, but also captured moving images in which performers a short distance apart (for example, in the same room) are each captured by a different imaging device. Captured moving images captured at different imaging times or imaging locations are also included.
- For example, the synthesis destination moving image may be captured and recorded in advance; during real-time distribution, the captured moving image showing the performer can be combined with the reproduced recorded moving image and distributed. In other words, the captured moving images are not limited to those captured at different points.
- Furthermore, in addition to adjusting the size or image quality of the cutout moving image of the imaging performer to match the synthesis destination moving image, a relative adjustment may be performed between the cutout moving image of the imaging performer and the synthesis destination moving image.
- Note that the processes described with reference to the flowcharts above do not necessarily have to be performed in time series in the order described; they may be performed in parallel or individually (for example, as parallel processing or object-based processing).
- The program may be processed by a single CPU, or may be processed in a distributed manner by a plurality of CPUs.
- The above-described series of processing can be executed by hardware or by software.
- When the series of processing is executed by software, the program constituting the software is installed from a program recording medium into a computer incorporated in dedicated hardware, or into, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
- FIG. 9 is a block diagram showing an example of the hardware configuration of a computer that executes the above-described series of processing by a program.
- In the computer, a CPU (Central Processing Unit) 101, a ROM (Read Only Memory) 102, and a RAM (Random Access Memory) 103 are connected to one another via a bus 104.
- An input/output interface 105 is further connected to the bus 104.
- Connected to the input/output interface 105 are an input unit 106 including a keyboard, a mouse, and a microphone; an output unit 107 including a display and a speaker; a storage unit 108 including a hard disk and a nonvolatile memory; a communication unit 109 including a network interface; and a drive 110 that drives a removable medium 111 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
- In the computer configured as described above, the CPU 101 performs the above-described series of processing by, for example, loading a program stored in the storage unit 108 into the RAM 103 via the input/output interface 105 and the bus 104 and executing it.
- The program executed by the computer (CPU 101) is recorded on the removable medium 111, which is a package medium including, for example, a magnetic disk (including a flexible disk), an optical disk (such as a CD-ROM (Compact Disc-Read Only Memory) or a DVD (Digital Versatile Disc)), a magneto-optical disk, or a semiconductor memory, or is provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
- the program can be installed in the storage unit 108 via the input / output interface 105 by attaching the removable medium 111 to the drive 110. Further, the program can be received by the communication unit 109 via a wired or wireless transmission medium and installed in the storage unit 108. In addition, the program can be installed in the ROM 102 or the storage unit 108 in advance.
- Note that the present technology can also be configured as follows.
- (1) An information processing apparatus including: an adjustment unit that adjusts a synthesis condition for synthesizing a first moving image and a second moving image, based on an analysis result obtained by at least one of image analysis of a region of the first moving image in which a first subject is captured, and image analysis of the second moving image, which is different from the first moving image.
- (2) The information processing apparatus according to (1), wherein the image analysis processing unit performs image analysis whose analysis result is the position of a second subject captured in the second moving image, and the adjustment unit adjusts, as the synthesis condition, a synthesis position at which the first moving image is synthesized with the second moving image, based on the position of the second subject in the second moving image, so as to avoid the first subject overlapping the second subject.
- (3) The information processing apparatus according to (1) or (2), wherein the image analysis processing unit performs image analysis whose analysis result is the size of the second subject captured in the second moving image, and the adjustment unit adjusts, as the synthesis condition, the size of the first moving image relative to the second moving image, based on the size of the second subject in the second moving image.
- (4) The information processing apparatus according to (3), wherein the image analysis processing unit performs image analysis whose analysis results are the size of the face of a first person captured in the first moving image as the first subject, and the size of the face of a second person captured in the second moving image as the second subject, and the adjustment unit adjusts the synthesis condition so that the size of the first person's face becomes substantially the same as the size of the second person's face.
- (5) The information processing apparatus according to (3), wherein the image analysis processing unit performs image analysis whose analysis results are the size of the face of a first person captured in the first moving image as the first subject, and the size of a predetermined object captured in the second moving image as the second subject, and the adjustment unit adjusts the synthesis condition based on a result of comparing the size of the first person's face with the size of the object.
- (6) The information processing apparatus according to any one of (1) to (5), wherein the image analysis processing unit performs image analysis whose analysis results are the image quality of the first subject and the image quality of a second subject captured in the second moving image, and the adjustment unit adjusts, as the synthesis condition, the image quality of the first subject and the image quality of the second subject.
- (7) The information processing apparatus according to any one of (1) to (6), wherein the image analysis processing unit performs image analysis whose analysis results are a color property of the first subject and a color property of a second subject captured in the second moving image, and the adjustment unit adjusts, as the synthesis condition, the color property of the first subject and the color property of the second subject.
- (8) The information processing apparatus according to (7), wherein the color property is brightness, and the adjustment unit adjusts, as the synthesis condition, the brightness of the first subject and the brightness of the second subject.
- (9) The information processing apparatus according to (7) or (8), wherein the color property is saturation, and the adjustment unit adjusts, as the synthesis condition, the saturation of the first subject and the saturation of the second subject.
- (10) The information processing apparatus according to any one of (7) to (9), wherein the color property is hue, and the adjustment unit adjusts, as the synthesis condition, the hue of the first subject and the hue of the second subject.
- (11) The information processing apparatus according to any one of (1) to (10), wherein, as a captured moving image in which the first subject is captured is supplied, the image analysis by the image analysis processing unit and the adjustment of the synthesis condition by the adjustment unit are performed sequentially, and a synthesized moving image in which the first moving image and the second moving image are synthesized is generated.
- (12) The information processing apparatus according to any one of (1) to (11), further including a cutout processing unit that performs image processing to cut out, from a captured moving image in which the first subject is captured, the region in which the first subject appears, thereby generating the first moving image.
- (13) The information processing apparatus according to any one of (1) to (12), further including: a complementary-image creation processing unit that, when a part of the body of the first subject is missing after the first moving image is synthesized with the second moving image, performs processing to create a complementary image that compensates for that part; and a synthesis processing unit that performs synthesis processing to combine the complementary image created by the complementary-image creation processing unit so as to hide the missing part of the body of the first subject in the second moving image.
- (14) The information processing apparatus according to any one of (1) to (13), further including: a layer setting processing unit that detects an object captured in the second moving image and performs processing to set a layer indicating the front or rear positional relationship of the detected object with respect to the first subject; and a synthesis processing unit that, when the object whose layer is set in front of the first subject by the layer setting processing unit overlaps the first subject, performs synthesis processing to combine a moving image obtained by cutting out the object from the second moving image, after the first moving image has been synthesized with the second moving image.
- (15) The information processing apparatus according to any one of (1) to (14), further including an instruction processing unit that performs processing to present an instruction to the person captured in the first moving image as the first subject, based on a result of image analysis performed on a synthesized moving image in which the first moving image and the second moving image are synthesized.
- (16) An information processing method including a step of adjusting a synthesis condition for synthesizing a first moving image and a second moving image, based on an analysis result obtained by at least one of image analysis of a region of the first moving image in which a first subject is captured, and image analysis of a second moving image different from the first moving image.
- (17) A program for causing a computer to execute information processing including a step of adjusting a synthesis condition for synthesizing a first moving image and a second moving image, based on an analysis result obtained by at least one of image analysis of a region of the first moving image in which a first subject is captured, and image analysis of a second moving image different from the first moving image.
- 11 distribution system, 12 network, 13-1 and 13-2 performer-side information processing devices, 14 distribution server, 15-1 to 15-N viewer-side information processing devices, 21 imaging unit, 22 communication unit, 23 display, 24 speaker, 25 image processing unit, 31 cutout unit, 32 composition position adjustment unit, 33 layer setting unit, 34 size adjustment unit, 35 image quality adjustment unit, 36 synthesis processing unit, 37 parts creation unit, 38 guide unit
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
- Studio Circuits (AREA)
- Image Processing (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
Description
(1)
An information processing apparatus including:
an adjustment unit that adjusts a synthesis condition for synthesizing a first moving image and a second moving image, based on an analysis result obtained by at least one of image analysis of a region of the first moving image in which a first subject is captured, and image analysis of a second moving image different from the first moving image.
(2)
The information processing apparatus according to (1) above, wherein
the image analysis processing unit performs image analysis whose analysis result is the position of a second subject captured in the second moving image, and
the adjustment unit adjusts, as the synthesis condition, a synthesis position at which the first moving image is synthesized with the second moving image, based on the position of the second subject in the second moving image, so as to avoid the first subject overlapping the second subject.
(3)
The information processing apparatus according to (1) or (2) above, wherein
the image analysis processing unit performs image analysis whose analysis result is the size of the second subject captured in the second moving image, and
the adjustment unit adjusts, as the synthesis condition, the size of the first moving image relative to the second moving image, based on the size of the second subject in the second moving image.
(4)
The information processing apparatus according to (3) above, wherein
the image analysis processing unit performs image analysis whose analysis results are the size of the face of a first person captured in the first moving image as the first subject, and the size of the face of a second person captured in the second moving image as the second subject, and
the adjustment unit adjusts the synthesis condition so that the size of the first person's face becomes substantially the same as the size of the second person's face.
(5)
The information processing apparatus according to (3) above, wherein
the image analysis processing unit performs image analysis whose analysis results are the size of the face of a first person captured in the first moving image as the first subject, and the size of a predetermined object captured in the second moving image as the second subject, and
the adjustment unit adjusts the synthesis condition based on a result of comparing the size of the first person's face with the size of the object.
(6)
The information processing apparatus according to any one of (1) to (5) above, wherein
the image analysis processing unit performs image analysis whose analysis results are the image quality of the first subject and the image quality of a second subject captured in the second moving image, and
the adjustment unit adjusts, as the synthesis condition, the image quality of the first subject and the image quality of the second subject.
(7)
The information processing apparatus according to any one of (1) to (6) above, wherein
the image analysis processing unit performs image analysis whose analysis results are a color property of the first subject and a color property of a second subject captured in the second moving image, and
the adjustment unit adjusts, as the synthesis condition, the color property of the first subject and the color property of the second subject.
(8)
The information processing apparatus according to (7) above, wherein
the color property is brightness, and
the adjustment unit adjusts, as the synthesis condition, the brightness of the first subject and the brightness of the second subject.
(9)
The information processing apparatus according to (7) or (8) above, wherein
the color property is saturation, and
the adjustment unit adjusts, as the synthesis condition, the saturation of the first subject and the saturation of the second subject.
(10)
The information processing apparatus according to any one of (7) to (9) above, wherein
the color property is hue, and
the adjustment unit adjusts, as the synthesis condition, the hue of the first subject and the hue of the second subject.
(11)
The information processing apparatus according to any one of (1) to (10) above, wherein,
as a captured moving image in which the first subject is captured is supplied, the image analysis by the image analysis processing unit and the adjustment of the synthesis condition by the adjustment unit are performed sequentially, and a synthesized moving image in which the first moving image and the second moving image are synthesized is generated.
(12)
The information processing apparatus according to any one of (1) to (11) above, further including
a cutout processing unit that performs image processing to cut out, from a captured moving image in which the first subject is captured, the region in which the first subject appears, thereby generating the first moving image.
(13)
The information processing apparatus according to any one of (1) to (12) above, further including:
a complementary-image creation processing unit that, when a part of the body of the first subject is missing after the first moving image is synthesized with the second moving image, performs processing to create a complementary image that compensates for that part; and
a synthesis processing unit that performs synthesis processing to combine the complementary image created by the complementary-image creation processing unit so as to hide the missing part of the body of the first subject in the second moving image.
(14)
The information processing apparatus according to any one of (1) to (13) above, further including:
a layer setting processing unit that detects an object captured in the second moving image and performs processing to set a layer indicating the front or rear positional relationship of the detected object with respect to the first subject; and
a synthesis processing unit that, when the object whose layer is set in front of the first subject by the layer setting processing unit overlaps the first subject, performs synthesis processing to combine a moving image obtained by cutting out the object from the second moving image, after the first moving image has been synthesized with the second moving image.
(15)
The information processing apparatus according to any one of (1) to (14) above, further including
an instruction processing unit that performs processing to present an instruction to the person captured in the first moving image as the first subject, based on a result of image analysis performed on a synthesized moving image in which the first moving image and the second moving image are synthesized.
(16)
An information processing method including a step of adjusting a synthesis condition for synthesizing a first moving image and a second moving image, based on an analysis result obtained by at least one of image analysis of a region of the first moving image in which a first subject is captured, and image analysis of a second moving image different from the first moving image.
(17)
A program for causing a computer to execute information processing including a step of adjusting a synthesis condition for synthesizing a first moving image and a second moving image, based on an analysis result obtained by at least one of image analysis of a region of the first moving image in which a first subject is captured, and image analysis of a second moving image different from the first moving image.
Claims (17)
- 1. An information processing apparatus including: an adjustment unit that adjusts a synthesis condition for synthesizing a first moving image and a second moving image, based on an analysis result obtained by at least one of image analysis of a region of the first moving image in which a first subject is captured, and image analysis of a second moving image different from the first moving image.
- 2. The information processing apparatus according to claim 1, wherein the image analysis processing unit performs image analysis whose analysis result is the position of a second subject captured in the second moving image, and the adjustment unit adjusts, as the synthesis condition, a synthesis position at which the first moving image is synthesized with the second moving image, based on the position of the second subject in the second moving image, so as to avoid the first subject overlapping the second subject.
- 3. The information processing apparatus according to claim 1, wherein the image analysis processing unit performs image analysis whose analysis result is the size of a second subject captured in the second moving image, and the adjustment unit adjusts, as the synthesis condition, the size of the first moving image relative to the second moving image, based on the size of the second subject in the second moving image.
- 4. The information processing apparatus according to claim 3, wherein the image analysis processing unit performs image analysis whose analysis results are the size of the face of a first person captured in the first moving image as the first subject, and the size of the face of a second person captured in the second moving image as the second subject, and the adjustment unit adjusts the synthesis condition so that the size of the first person's face becomes substantially the same as the size of the second person's face.
- 5. The information processing apparatus according to claim 3, wherein the image analysis processing unit performs image analysis whose analysis results are the size of the face of a first person captured in the first moving image as the first subject, and the size of a predetermined object captured in the second moving image as the second subject, and the adjustment unit adjusts the synthesis condition based on a result of comparing the size of the first person's face with the size of the object.
- 6. The information processing apparatus according to claim 1, wherein the image analysis processing unit performs image analysis whose analysis results are the image quality of the first subject and the image quality of a second subject captured in the second moving image, and the adjustment unit adjusts, as the synthesis condition, the image quality of the first subject and the image quality of the second subject.
- 7. The information processing apparatus according to claim 1, wherein the image analysis processing unit performs image analysis whose analysis results are a color property of the first subject and a color property of a second subject captured in the second moving image, and the adjustment unit adjusts, as the synthesis condition, the color property of the first subject and the color property of the second subject.
- 8. The information processing apparatus according to claim 7, wherein the color property is brightness, and the adjustment unit adjusts, as the synthesis condition, the brightness of the first subject and the brightness of the second subject.
- 9. The information processing apparatus according to claim 7, wherein the color property is saturation, and the adjustment unit adjusts, as the synthesis condition, the saturation of the first subject and the saturation of the second subject.
- 10. The information processing apparatus according to claim 7, wherein the color property is hue, and the adjustment unit adjusts, as the synthesis condition, the hue of the first subject and the hue of the second subject.
- 11. The information processing apparatus according to claim 1, wherein, as a captured moving image in which the first subject is captured is supplied, the image analysis by the image analysis processing unit and the adjustment of the synthesis condition by the adjustment unit are performed sequentially, and a synthesized moving image in which the first moving image and the second moving image are synthesized is generated.
- 12. The information processing apparatus according to claim 1, further including a cutout processing unit that performs image processing to cut out, from a captured moving image in which the first subject is captured, the region in which the first subject appears, thereby generating the first moving image.
- 13. The information processing apparatus according to claim 1, further including: a complementary-image creation processing unit that, when a part of the body of the first subject is missing after the first moving image is synthesized with the second moving image, performs processing to create a complementary image that compensates for that part; and a synthesis processing unit that performs synthesis processing to combine the complementary image created by the complementary-image creation processing unit so as to hide the missing part of the body of the first subject in the second moving image.
- 14. The information processing apparatus according to claim 1, further including: a layer setting processing unit that detects an object captured in the second moving image and performs processing to set a layer indicating the front or rear positional relationship of the detected object with respect to the first subject; and a synthesis processing unit that, when the object whose layer is set in front of the first subject by the layer setting processing unit overlaps the first subject, performs synthesis processing to combine a moving image obtained by cutting out the object from the second moving image, after the first moving image has been synthesized with the second moving image.
- 15. The information processing apparatus according to claim 1, further including an instruction processing unit that performs processing to present an instruction to the person captured in the first moving image as the first subject, based on a result of image analysis performed on a synthesized moving image in which the first moving image and the second moving image are synthesized.
- 16. An information processing method including a step of adjusting a synthesis condition for synthesizing a first moving image and a second moving image, based on an analysis result obtained by at least one of image analysis of a region of the first moving image in which a first subject is captured, and image analysis of a second moving image different from the first moving image.
- 17. A program for causing a computer to execute information processing including a step of adjusting a synthesis condition for synthesizing a first moving image and a second moving image, based on an analysis result obtained by at least one of image analysis of a region of the first moving image in which a first subject is captured, and image analysis of a second moving image different from the first moving image.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017508250A JP6610659B2 (ja) | 2015-03-26 | 2016-03-15 | Information processing apparatus, information processing method, and program |
EP16768537.9A EP3276943A4 (en) | 2015-03-26 | 2016-03-15 | Information processing apparatus, information processing method, and program |
US15/559,162 US10264194B2 (en) | 2015-03-26 | 2016-03-15 | Information processing device, information processing method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015064805 | 2015-03-26 | ||
JP2015-064805 | 2015-03-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016152634A1 true WO2016152634A1 (ja) | 2016-09-29 |
Family
ID=56978263
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/058066 WO2016152634A1 (ja) | Information processing apparatus, information processing method, and program | 2015-03-26 | 2016-03-15 |
Country Status (4)
Country | Link |
---|---|
US (1) | US10264194B2 (ja) |
EP (1) | EP3276943A4 (ja) |
JP (1) | JP6610659B2 (ja) |
WO (1) | WO2016152634A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2021175174A (ja) * | 2020-04-27 | 2021-11-01 | Beijing Xiaomi Pinecone Electronics Co., Ltd. | Video processing method, apparatus, and storage medium |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6385543B1 (ja) * | 2017-09-29 | 2018-09-05 | Dwango Co., Ltd. | Server apparatus, distribution system, distribution method, and program |
US10609332B1 (en) * | 2018-12-21 | 2020-03-31 | Microsoft Technology Licensing, Llc | Video conferencing supporting a composite video stream |
TWI746148B (zh) * | 2020-09-04 | 2021-11-11 | Acer Incorporated | Smart speaker, smart speaker operating system, and method for dynamically adjusting dot-matrix patterns |
CN114205512B (zh) * | 2020-09-17 | 2023-03-31 | Huawei Technologies Co., Ltd. | Photographing method and apparatus |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000172826A * | 1998-12-02 | 2000-06-23 | Minolta Co Ltd | Image synthesizing apparatus |
JP2005094741A * | 2003-08-14 | 2005-04-07 | Fuji Photo Film Co Ltd | Imaging apparatus and image synthesis method |
JP2010130040A * | 2008-11-25 | 2010-06-10 | Seiko Epson Corp | Image processing apparatus, control program for the apparatus, and computer-readable recording medium on which the control program is recorded |
JP2011172103A * | 2010-02-19 | 2011-09-01 | Olympus Imaging Corp | Image generating apparatus |
JP2013197980A * | 2012-03-21 | 2013-09-30 | Casio Comput Co Ltd | Image processing apparatus, image processing method, and program |
WO2014013689A1 * | 2012-07-20 | 2014-01-23 | Panasonic Corp | Apparatus and method for generating a moving image with comments |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004328788A (ja) | 2004-06-21 | 2004-11-18 | Daiichikosho Co Ltd | Method for synthesizing a separately captured person video with a prerecorded background video for display output, and karaoke apparatus adopting the method |
TW201005583A (en) * | 2008-07-01 | 2010-02-01 | Yoostar Entertainment Group Inc | Interactive systems and methods for video compositing |
US20120002061A1 (en) * | 2010-07-01 | 2012-01-05 | Gay Michael F | Systems and methods to overlay remote and local video feeds |
CN102737383B (zh) * | 2011-03-31 | 2014-12-17 | 富士通株式会社 | 视频中的摄像机运动分析方法及装置 |
US8970704B2 (en) * | 2011-06-07 | 2015-03-03 | Verizon Patent And Licensing Inc. | Network synchronized camera settings |
US8866943B2 (en) * | 2012-03-09 | 2014-10-21 | Apple Inc. | Video camera providing a composite video sequence |
-
2016
- 2016-03-15 US US15/559,162 patent/US10264194B2/en active Active
- 2016-03-15 WO PCT/JP2016/058066 patent/WO2016152634A1/ja active Application Filing
- 2016-03-15 JP JP2017508250A patent/JP6610659B2/ja active Active
- 2016-03-15 EP EP16768537.9A patent/EP3276943A4/en not_active Withdrawn
Non-Patent Citations (1)
Title |
---|
See also references of EP3276943A4 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2021175174A (ja) * | 2020-04-27 | 2021-11-01 | Beijing Xiaomi Pinecone Electronics Co., Ltd. | Video processing method, apparatus, and storage medium |
JP6990282B2 (ja) | 2020-04-27 | 2022-01-12 | Beijing Xiaomi Pinecone Electronics Co., Ltd. | Video processing method, apparatus, and storage medium |
US11368632B2 (en) | 2020-04-27 | 2022-06-21 | Beijing Xiaomi Pinecone Electronics Co., Ltd. | Method and apparatus for processing video, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
US20180070025A1 (en) | 2018-03-08 |
JPWO2016152634A1 (ja) | 2018-01-18 |
EP3276943A4 (en) | 2018-11-21 |
JP6610659B2 (ja) | 2019-11-27 |
US10264194B2 (en) | 2019-04-16 |
EP3276943A1 (en) | 2018-01-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6610659B2 (ja) | Information processing apparatus, information processing method, and program | |
US10805575B2 (en) | Controlling focus of audio signals on speaker during videoconference | |
US9679369B2 (en) | Depth key compositing for video and holographic projection | |
US10686985B2 (en) | Moving picture reproducing device, moving picture reproducing method, moving picture reproducing program, moving picture reproducing system, and moving picture transmission device | |
US20220279306A1 (en) | Associated Spatial Audio Playback | |
US20170061686A1 (en) | Stage view presentation method and system | |
EP2352290B1 (en) | Method and apparatus for matching audio and video signals during a videoconference | |
US10998870B2 (en) | Information processing apparatus, information processing method, and program | |
JP2007110582A (ja) | 画像表示装置および方法、並びにプログラム | |
KR101392406B1 (ko) | 영상합성기의 크로마키 피사체영상과 배경영상 합성장치 및 방법 | |
JP7074056B2 (ja) | 画像処理装置、画像処理システム、および画像処理方法、並びにプログラム | |
WO2020144937A1 (ja) | サウンドバー、オーディオ信号処理方法及びプログラム | |
EP3379379A1 (en) | Virtual reality system and method | |
WO2020031742A1 (ja) | 画像処理装置および画像処理方法、並びにプログラム | |
JP4644555B2 (ja) | 映像音声合成装置及び遠隔体験共有型映像視聴システム | |
KR101099369B1 (ko) | 다자간 화상 회의 시스템 및 방법 | |
KR101819984B1 (ko) | 실시간 영상 합성 방법 | |
US20220400244A1 (en) | Multi-camera automatic framing | |
KR101834925B1 (ko) | 객체 위치 변화를 벡터로 변환하여 영상 및 음향 신호를 동기화한 가상스튜디오 방송 편집 및 송출 기기와 이를 이용한 방법 | |
JP2007251355A (ja) | 対話システム用中継装置、対話システム、対話方法 | |
KR20170059310A (ko) | 텔레 프레젠스 영상 송신 장치, 텔레 프레젠스 영상 수신 장치 및 텔레 프레젠스 영상 제공 시스템 | |
WO2017211447A1 (en) | Method for reproducing sound signals at a first location for a first participant within a conference with at least two further participants at at least one further location | |
KR102374665B1 (ko) | 개인방송을 위한 방송영상 생성방법 및 이를 위한 방송영상 생성시스템 | |
JP2019003325A (ja) | 画像処理装置、画像処理方法及びプログラム | |
JP2019012179A (ja) | Hdr画像表示システム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16768537 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2017508250 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15559162 Country of ref document: US |
|
REEP | Request for entry into the european phase |
Ref document number: 2016768537 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |