WO2017184093A1 - Image processing method and system - Google Patents


Info

Publication number
WO2017184093A1
WO2017184093A1 (PCT/TR2016/050331)
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
processor unit
environment
base
Prior art date
Application number
PCT/TR2016/050331
Other languages
English (en)
Inventor
Kuban ALTAN
Original Assignee
Zerodensity Yazilim A.S.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zerodensity Yazilim A.S. filed Critical Zerodensity Yazilim A.S.
Publication of WO2017184093A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/64 Circuits for processing colour signals
    • H04N9/74 Circuits for processing colour signals for obtaining special effects
    • H04N9/75 Chroma key
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/2224 Studio circuitry, studio devices or studio equipment related to virtual studio applications
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H04N5/275 Generation of keying signals

Definitions

  • The present invention relates to a computer-based image processing method and system for combining a first image, comprising an object to be kept in the foreground and the environment in which said object is provided, with a second image, by composing transparency information regarding the object and the environment of the first image.
  • Image combining is a type of image processing in which an illusion is created: two images shot from two sources are combined so that the elements existing in the two images appear together in the combined image.
  • In image combining, one of the most frequently used methods is chroma-keying.
  • In chroma-keying, an image in which the actor appears is keyed with respect to reference colors, and transparency information is created to make the background transparent except for the actor and to bring the actor into prominence.
  • The alpha image obtained from this transparency information, together with the transparent regions of the foreground image, allows the other image in the rear layer to become partially visible, and thereby the two images are combined.
  • The transparency information may comprise transparency values between 0 and 1. In other words, a part of the image may be completely transparent, completely opaque, or have a specific intermediate transparency level.
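The role of intermediate transparency values can be illustrated with a short compositing sketch. This is an illustrative example, not code from the patent; NumPy and floating-point images are assumptions.

```python
import numpy as np

def composite(foreground, background, alpha):
    """Blend two images with per-pixel transparency in [0, 1].

    alpha == 1 keeps the foreground pixel, alpha == 0 shows the
    background through, and intermediate values (e.g. shadows)
    blend the two layers.
    """
    a = alpha[..., np.newaxis]          # broadcast over colour channels
    return a * foreground + (1.0 - a) * background

# 2x2 demo: one opaque, one semi-transparent, two transparent pixels.
fg = np.full((2, 2, 3), 200.0)
bg = np.full((2, 2, 3), 50.0)
alpha = np.array([[1.0, 0.0],
                  [0.5, 0.0]])
out = composite(fg, bg, alpha)
```

The semi-transparent pixel (alpha 0.5) blends the two layers evenly, which is exactly how a keyed shadow lets the rear-layer image remain partially visible.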
  • Chroma-keying can be used in weather forecast presentations, in virtual studios, in feature-length films, and in video games.
  • The cyclorama placed behind the actor may be uniformly lit and may have any single color; however, since the method is generally used with humans, green or blue, the colors furthest from human skin tones, are preferred.
  • One of the most important points to be taken into consideration in chroma-keying is that the object must not include the color used in the backdrop; otherwise, the object itself will be keyed out as well. Even if different methods are used in chroma-keying, the common point of these methods is the selection of the value of the color to be keyed.
  • Image-based keying is essentially realized by shooting the background without the actor under fixed illumination with a fixed camera, afterwards shooting the image with the actor present, and keying by using the background image shot without the actor as reference.
  • However, no successful result is obtained if the camera moves.
  • a video difference key generator has a recorded reference video image.
  • An input video image is compared with the reference video image by an absolute difference circuit which subtracts corresponding pixels of the two video images, the smaller from the larger, to produce a difference video image.
  • the difference video image may be filtered and then is input to a transfer function circuit to produce an output which may be used as a key signal for compositing video images.
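The cited difference keyer can be sketched roughly as follows. The threshold values `low` and `high` of the transfer function are illustrative assumptions, not values taken from the citation.

```python
import numpy as np

def difference_key(input_img, reference_img, low=8.0, high=40.0):
    """Absolute-difference keyer in the spirit of EP0264965A2.

    Corresponding pixels of the two images are subtracted (smaller
    from larger); the per-pixel difference then passes through a
    linear transfer function: 0 (transparent) below `low`, 1 (opaque)
    above `high`, with a soft ramp in between.
    """
    diff = np.abs(np.asarray(input_img, float) - np.asarray(reference_img, float))
    diff = diff.max(axis=-1)            # strongest channel difference per pixel
    return np.clip((diff - low) / (high - low), 0.0, 1.0)

ref = np.full((1, 2, 3), 100.0)
inp = ref.copy()
inp[0, 1] += 100.0                      # second pixel differs strongly
key = difference_key(inp, ref)
```

Pixels identical to the reference key to 0 (background), strongly differing pixels to 1 (foreground), and the ramp yields the intermediate values needed for soft edges and shadows.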
  • The present invention relates to an image processing method and system for eliminating the above-mentioned disadvantages and for bringing new advantages to the related technical field.
  • Another object of the present invention is to provide an image processing system and method in which transparency information is obtained with increased sensitivity and which can key simultaneously with the image shooting.
  • The present invention is a computer-based image processing method for combining a first image, comprising an object to be kept in the foreground and the environment in which said object is provided, with a second image, by composing transparency information regarding the object and the environment of the first image. Accordingly, as an improvement, the present invention is characterized by comprising the steps of:
  • Preferably, said environment is a green- or blue-colored cyclorama existing behind the object.
  • the camera shoots the base image of all visible surfaces of the cyclorama.
  • The processor unit moreover relates, with the base image or the first image, at least one of the following camera parameters: viewpoint angle, frame dimension, pan and tilt condition, lens zoom condition, lens focus condition, and the coordinates inside the environment according to the three-dimensional coordinate system.
  • the processor unit eliminates the lens errors in said base images.
  • The processor unit creates the three-dimensional virtual model of said environment by means of a camera; accordingly, the processor unit accesses said three-dimensional virtual model.
  • the three-dimensional virtual model can be created by means of the cameras, and at the same time, an already created image can be used.
  • the cyclorama is subjected to fixed illumination.
  • the reference image can be obtained in a precise manner.
  • The present invention is moreover a computer-based image processing system for combining a first image, comprising an object to be kept in the foreground and the environment in which said object is provided, with a second image, by composing transparency information regarding the object and the environment of the first image, the system comprising a camera and a processor unit in communication with said camera. Accordingly, as an improvement, the present invention is characterized in that:
  • said processor unit accesses a three-dimensional virtual model of said environment
  • said camera shoots at least one base image of the environment when there is no object therein
  • the processor unit correlates each base image with the position and viewpoint parameter regarding the condition where the camera shoots the related base image
  • the processor unit virtually projects at least one base image together with the related position and viewpoint parameter onto the three-dimensional virtual model
  • the camera shoots a first image of the environment when there is an object therein
  • the processor unit correlates said first image with the position and viewpoint parameter regarding the condition where the camera shoots the first image
  • the processor unit shoots a reference image of the three-dimensional virtual model, whereon base images are projected virtually, in accordance with the position and viewpoint parameter where the first image is shot,
  • the processor unit realizes chroma-keying to the first image by taking reference image as a base and creating the transparency information of the first image.
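As a deliberately simplified stand-in for the reference-image step above, the idea of matching the first image's camera pose against stored base-image poses can be sketched like this. The dictionary layout and the Euclidean pose distance are assumptions for illustration; the patent instead renders the reference from a 3-D virtual model.

```python
import numpy as np

def nearest_reference(base_db, first_pose):
    """Return the stored base image whose camera pose is closest to
    the pose of the first image.

    The patented system projects the base images onto a 3-D virtual
    model and re-shoots it virtually from the first image's pose;
    with a dense enough set of base images, a nearest-pose lookup
    crudely approximates that rendered reference.
    """
    def dist(pose):
        return np.linalg.norm(np.asarray(pose) - np.asarray(first_pose))
    return min(base_db, key=lambda rec: dist(rec["pose"]))

base_db = [
    {"pose": (0.0, 0.0, 0.0), "image": "base_front"},
    {"pose": (2.0, 0.0, 0.0), "image": "base_side"},
]
ref = nearest_reference(base_db, (1.8, 0.1, 0.0))
```

Once a reference matching the current camera pose is available, the keying step reduces to comparing the first image against it pixel by pixel.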
  • a camera tracing device which at least traces the position and viewpoint parameters of the camera.
  • said environment comprises a green or blue colored cyclorama so as to exist behind the object.
  • the camera shoots the base image of all visible surfaces of the cyclorama.
  • The processor unit moreover relates, with the base image or the first image, at least one of the following camera parameters: viewpoint angle, frame dimension, pan and tilt condition, lens zoom condition, lens focus condition, and the coordinates inside the environment according to the three-dimensional coordinate system.
  • The processor unit is configured to eliminate the lens errors in said base images.
  • The processor unit creates the three-dimensional virtual model of said environment by means of the camera, and accordingly, the processor unit accesses said three-dimensional virtual model.
  • The present invention is moreover a memory unit wherein the visual items produced by means of the above-mentioned system are recorded.
  • Figure 1 is a representative view of the image processing system.
  • Figure 2 is a top representative view of the environment.
  • Figure 3 is a lateral representative view of the environment wherein the object is provided.
  • The subject matter invention in general comprises a method and system for composing transparency information of a first image (or a video created by sequencing such images), in which an object (5) is provided inside the environment (4) in front of a cyclorama (41) (green screen), such that the background of the first image is made transparent and the second image (a second video or a computer-generated image, CGI) can be viewed when the two images are combined.
  • The image processing system (1) comprises a processor unit (10) and a memory unit (11) connected to said processor unit (10).
  • The processor unit (10) can be any general-purpose or special-purpose processor, or it may comprise a GPU and a related CPU.
  • The processor unit (10) may denote a plurality of processors.
  • The processor unit (10) may comprise a computer processor and a camera (2) processor, or it may comprise a device processor connected to the camera (2).
  • The memory unit (11) may comprise RAM, ROM, a magnetic or optical fixed disc, or any combination of computer-readable data storage devices.
  • The memory unit (11) may moreover comprise at least one of a first database (111), a second database (112) and a third database (113). Said databases can be recorded in one memory unit (11), or they may be provided in separate memory units (11).
  • In Figure 2, the top view of a camera (2) and an environment (4) in which said camera (2) realizes shooting is given.
  • Said environment (4) comprises a blue or green colored cyclorama (41) in the background thereof.
  • Said environment may be a shooting studio.
  • Said cyclorama (41) may have any color and form which can be used in chroma-keying.
  • Said camera (2) is connected to at least one camera tracing device (3).
  • Said camera tracing device (3) can relate each image, obtained by means of the camera (2), with the position and viewpoint parameters of the camera (2), and it can record said image when required.
  • The processor unit (10) moreover relates, with the images, parameters such as the viewpoint angle, frame, pan and tilt position, lens zoom condition and lens focus condition of the camera (2), and at least one of the coordinate parameters inside the environment (4) according to the three-dimensional coordinate system, in addition to the position and viewpoint parameters of the camera (2), in a non-delimiting manner.
  • the abovementioned parameters are used as camera parameters in the specification.
  • the camera (2) shoots the image of an object (5) in the environment (4).
  • Said object (5) may be an actor, a speaker, a non-living object, etc.
  • A three-dimensional virtual model of the environment (4) where shooting is realized is created, and it is recorded in the first database (111).
  • While the environment (4) is empty (when there is no object (5) therein), a plurality of base images is obtained by shooting from different positions and viewpoints.
  • the base images are shot under fixed light, and when a shooting is realized where the object (5) is provided, the same illumination is used.
  • Since the base image of all surfaces of the cyclorama (41) is required, images can be shot under conditions where different camera parameters are provided.
  • The base images are shot such that the camera (2) faces the cyclorama (41) surface substantially perpendicularly.
  • the base images are shot by the camera (2), and the camera parameters, where the base image is shot, are traced by the camera tracing device (3).
  • Each shot base image is related to the camera parameters with which the camera (2) shot the image, and it is recorded in a second database (112).
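The relation between each base image and the camera parameters under which it was shot can be modelled with a simple record. The field names here are illustrative assumptions, not terms from the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CameraParameters:
    """Pose and lens state traced by the camera tracing device (3)."""
    position: tuple   # (x, y, z) in the studio's 3-D coordinate system
    pan: float
    tilt: float
    zoom: float
    focus: float

@dataclass
class BaseImageRecord:
    """Entry of the second database (112): an image plus its shooting parameters."""
    image: str                     # placeholder for the pixel data
    params: CameraParameters

# Second database: base images of the empty cyclorama.
second_db = [
    BaseImageRecord("base_0", CameraParameters((0.0, 1.6, -3.0), 0.0, 0.0, 1.0, 2.5)),
]
```

Storing the parameters alongside each image is what later allows a base image to be projected onto the virtual model from exactly the viewpoint it was shot from.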
  • the camera tracing device (3) records the position of camera (2) inside the environment (4), the viewpoint thereof, lens adjustments, etc. in every image shot.
  • the object (5) is placed to the environment (4) where shooting is realized.
  • the image of the object (5) inside the environment (4) is shot.
  • The images are shot under the same illumination conditions as the base images.
  • One of the shot images is defined as the first image.
  • The camera tracing device (3) relates the camera parameters, with which each image is shot, to the images, and it records said parameters in a third database (113).
  • In the third database (113) there is at least the first image and the first camera parameters with which said image was shot.
  • Said first image is an image wherein the object (5) is provided and where the background of the object (5) (the cyclorama (41)) is preferably a green screen.
  • The basic object in image combining is, as mentioned before, to eliminate this cyclorama (41) and to add a second image in place of the cyclorama (41) existing in the image.
  • The cameras (2) with which the first image and the base images are shot may be the same camera (2), in order to obtain the camera parameters more precisely, or they may be different cameras (2).
  • At least one of the base images is projected and rendered onto the three-dimensional virtual model of the environment (4) in accordance with the camera parameters with which that base image was shot. In other words, the base images are projected onto the three-dimensional virtual model from the viewpoints from which they were shot.
  • A reference image of the three-dimensional virtual model, onto which the base images have been projected, is then shot virtually with the camera parameters, i.e. from the viewpoint and position, with which the camera (2) shot the first image.
  • In other words, the three-dimensional virtual model is first covered with the base images, and afterwards an image of the model is shot from the position where the camera (2) shoots the object (5).
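Shooting the virtual model from the first image's pose relies on standard camera projection. A deliberately minimal pinhole sketch follows; the absence of rotation and lens distortion, and the focal length expressed in pixels, are simplifying assumptions for illustration.

```python
def project_point(world_point, camera_position, focal_length):
    """Project a 3-D world point into a pinhole camera looking down +Z.

    Subtracting the camera position moves the point into camera
    coordinates; dividing by depth performs the perspective divide.
    """
    x = world_point[0] - camera_position[0]
    y = world_point[1] - camera_position[1]
    z = world_point[2] - camera_position[2]
    if z <= 0:
        raise ValueError("point is behind the camera")
    return focal_length * x / z, focal_length * y / z

# A point 4 units in front of the camera, offset (1, 2) laterally.
u, v = project_point((1.0, 2.0, 4.0), (0.0, 0.0, 0.0), 800.0)
```

A full renderer would apply this projection (plus rotation and lens distortion) to every textured point of the virtual model to synthesize the reference image.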
  • Thus, the image of the empty environment (4) is obtained virtually.
  • the differences between the first image and the reference image are the images created by the object (5) and by the interaction of the object (5) with the environment.
  • the object (5) and the semi-transparent images (shadows, projections, etc.), created as a result of interaction of the object (5) with the environment can be determined in a precise manner.
  • chroma-keying is realized to the first image by taking the reference image as a base.
  • The differences between the reference image and the first image are taken into consideration: the pixels which are the same in the first image and the reference image are defined as transparent, and the differing pixels are defined as opaque.
  • the pixels which are close to the reference image are defined by the intermediate values.
  • the transparency information of the first image is obtained.
  • This transparency information may comprise the information that the background is transparent, and that the object (5) is opaque, and that the shadow created by the object (5) is semi-transparent.
  • When the second image is added as a layer under the first image, the second image shows through the regions which are transparent and semi-transparent. Thus, the object appears to be within the second image.
  • The chroma-keying can be realized by means of various keyers which can realize color-based keying. These keyers can be executed by the processor unit (10).
  • When another image comprising the object (5) is shot by the camera (2) following the first image, the above-mentioned processes are applied by the processor unit (10) to this image as well.
  • Thus, a video can be obtained which is created by sequencing a plurality of images comprising the object (5).
  • the present invention can be applied simultaneously with the time when the images, comprising the object (5), are shot.
  • the image combining process can be realized in a live manner by means of the cameras (2) which are movable inside the environment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Circuits (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a computer-based image processing method to be used in combining a first image, comprising an object (5) to be kept in the foreground and the environment (4) in which said object (5) is provided, with a second image, for composing transparency information relating to the object (5) and the environment (4) of the first image. The method comprises the steps of: a processor unit (10) accessing a three-dimensional virtual model of said environment (4); a camera (2) shooting at least one base image of the environment (4) when there is no object (5) therein; the processor unit (10) correlating each base image with the position and viewpoint parameter under which the camera (2) shot the related base image; the processor unit (10) virtually projecting at least one base image onto the three-dimensional virtual model in accordance with the related position and viewpoint parameter; the camera (2) shooting a first image of the environment (4) when there is an object (5) therein; the processor unit (10) correlating said first image with the position and viewpoint parameter under which the camera (2) shot the first image; the processor unit (10) shooting a reference image of the three-dimensional virtual model, onto which the base images are virtually projected, in accordance with the position and viewpoint from which the first image was shot; and the processor unit (10) chroma-keying the first image by taking the reference image as a base and creating the transparency information of the first image.
PCT/TR2016/050331 2016-04-18 2016-09-05 Image processing method and system WO2017184093A1

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TR2016/04985A TR201604985A2 (tr) 2016-04-18 2016-04-18 Görüntü işleme yöntemi ve sistemi (Image processing method and system)
TR2016/04985 2016-04-18

Publications (1)

Publication Number Publication Date
WO2017184093A1 2017-10-26

Family

ID=57227069

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/TR2016/050331 WO2017184093A1 2016-04-18 2016-09-05 Image processing method and system

Country Status (2)

Country Link
TR (1) TR201604985A2
WO (1) WO2017184093A1

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0264965A2 (fr) 1986-10-24 1988-04-27 The Grass Valley Group, Inc. Générateur de signaux de commutation à partir de la différence de signaux vidéo
WO2007142643A1 (fr) * 2006-06-08 2007-12-13 Thomson Licensing Approche en deux passes pour une reconstruction tridimensionnelle
US20090262217A1 (en) * 2007-10-24 2009-10-22 Newton Eliot Mack Method and apparatus for an automated background lighting compensation system


Also Published As

Publication number Publication date
TR201604985A2 (tr) 2016-10-21

Similar Documents

Publication Publication Date Title
Matsuyama et al. 3D video and its applications
US9041899B2 Digital, virtual director apparatus and method
JP4879326B2 (ja) System and method for compositing three-dimensional images
US11425283B1 Blending real and virtual focus in a virtual display environment
CN108475327A (zh) Three-dimensional capture and rendering
US11176716B2 Multi-source image data synchronization
US11615755B1 Increasing resolution and luminance of a display
CN108600729A (zh) Dynamic 3D model generation device and image generation method
US20160037148A1 3d-mapped video projection based on on-set camera positioning
CN113692734A (zh) System and method for capturing and projecting images, and applications of the system
CN114915699A (zh) Virtual studio simulation method and system based on the UE engine
CN208506731U (zh) Image display system
CN104584075B (zh) Object points for describing an object space and connection method for determining them
WO2017184093A1 Image processing method and system
Bimber et al. Digital illumination for augmented studios
KR102654323B1 (ko) Method, apparatus and system for stereoscopic processing of two-dimensional images in virtual production
US11677928B1 Method for image processing of image data for varying image quality levels on a two-dimensional display wall
US20020030692A1 Method for generating a picture in a virtual studio
Sawicki et al. So, you don't have a million dollars
Sawicki et al. Creating photo-real environments
Taherkhani et al. Designing a high accuracy 3D auto stereoscopic eye tracking display, using a common LCD monitor
CN108616743A (zh) Imaging device and method for generating a 3D model
Mitchell Shooting live action for combination with computer animation
CN108391113A (zh) Data acquisition method based on a 3D model and model generation device
Демиденко Shooting and development of a film

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16790759

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16790759

Country of ref document: EP

Kind code of ref document: A1