CN106254849B - Method and terminal for locally replacing a foreground object - Google Patents
Method and terminal for locally replacing a foreground object
- Publication number
- CN106254849B CN106254849B CN201610643133.XA CN201610643133A CN106254849B CN 106254849 B CN106254849 B CN 106254849B CN 201610643133 A CN201610643133 A CN 201610643133A CN 106254849 B CN106254849 B CN 106254849B
- Authority
- CN
- China
- Prior art keywords
- replaced
- depth
- camera
- real
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
-
- G06T3/04—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/282—Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
Abstract
The present invention provides a method and a terminal for locally replacing a foreground object. The method includes: obtaining a real video image shot by a real camera; keying the real video image to generate a foreground video image and foreground image contour information; obtaining contour information and depth information of an object to be replaced; calculating the distance from the object to be replaced to the real camera according to the depth image, the contour information of the object to be replaced, and the positional relationship between the real camera and the depth camera; calculating the position and scale of the object to be replaced in the real video image according to the distance and the real camera's zoom parameters; searching the foreground image for the region where the marker is located according to the position and scale; obtaining the contour of the region to be replaced according to the edge of the marker, the contour of the object to be replaced, and a preset relationship between the marker and the object to be replaced; and replacing the region to be replaced with a preset virtual image. The method allows a terminal to locally replace a foreground object, enhancing the expressiveness of program production.
Description
Technical field
The present invention relates to virtual studio technology, and in particular to a method and a terminal for locally replacing a foreground object.
Background technology
At present, virtual studio systems based on blue-box (chroma-key) matting are widely used in all kinds of television program production. When a virtual studio system is used to produce a TV program, the foreground object is usually the host or another preset object, while the background consists of the remaining props and scenery. Common practice applies three-dimensional virtual technology to replace background objects with virtual objects, but the foreground object generally receives no three-dimensional virtual processing, which leaves program production lacking in expressiveness.
Invention content
In view of this, it is necessary to provide a method and a terminal that can apply three-dimensional virtual processing to a foreground object in a virtual studio system, so that the foreground object can be locally replaced.
A method for locally replacing a foreground object, the method including:
obtaining a real video image shot by a real camera, the foreground object in the real video image including an object to be replaced that is fitted with a marker;
keying the real video image to generate a foreground video image and corresponding foreground image contour information, the marker forming a transparent region in the foreground image;
obtaining a depth image of the foreground object shot by a depth camera, and obtaining the contour information of the object to be replaced and the depth information of the object to be replaced from the depth image;
calculating the distance from the object to be replaced to the real camera according to the depth image, the contour information of the object to be replaced, and the positional relationship between the real camera and the depth camera;
calculating the position and scale of the object to be replaced in the real video image according to the distance and the real camera's zoom parameters;
searching the foreground image for the region where the marker is located according to the position and scale;
identifying the edge of the marker within the marker region;
obtaining the contour of the region to be replaced according to the edge of the marker, the contour of the object to be replaced, and a preset relationship between the marker and the object to be replaced;
replacing the region to be replaced with a preset virtual image according to the contour of the region to be replaced.
Further, the color of the marker is identical to the background color of the virtual studio.
Further, the object to be replaced is a person, and obtaining the contour information of the object to be replaced and the depth information of the object to be replaced from the depth image includes: setting the contour information of the human body identified by the depth sensor as the contour information of the object to be replaced; and setting the depth information corresponding to that contour as the depth information of the object to be replaced.
Further, calculating the distance from the object to be replaced to the real camera according to the depth image and the contour information of the object to be replaced includes: calculating the distance from the object to be replaced to the depth camera according to the depth information of the object to be replaced; and calculating the distance from the object to be replaced to the real camera according to the positional relationship between the depth camera and the real camera and the distance from the object to be replaced to the depth camera.
Further, the real camera's zoom parameters are obtained by camera tracking.
A terminal, including:
a real image acquiring unit, configured to obtain a real video image shot by a real camera, the foreground object in the real video image including an object to be replaced that is fitted with a marker;
a keying unit, configured to key the real video image to generate a foreground video image and corresponding foreground image contour information, the marker forming a transparent region in the foreground image;
a depth image acquiring unit, configured to obtain a depth image of the foreground object shot by a depth camera and to obtain the contour information of the object to be replaced and the depth information of the object to be replaced from the depth image;
a distance calculating unit, configured to calculate the distance from the object to be replaced to the real camera according to the depth image, the contour information of the object to be replaced, and the positional relationship between the real camera and the depth camera;
a position calculating unit, configured to calculate the position and scale of the object to be replaced in the real video image according to the distance from the object to be replaced to the real camera and the real camera's zoom parameters;
a searching unit, configured to search the foreground image for the region where the marker is located according to the position and scale;
a recognition unit, configured to identify the edge of the marker within the marker region;
a contour acquiring unit, configured to obtain the contour of the region to be replaced according to the edge of the marker, the contour of the object to be replaced, and a preset relationship between the marker and the object to be replaced;
a replacing unit, configured to replace the region to be replaced with a preset virtual image according to the contour of the region to be replaced.
Further, the background color in the real video image is identical to the color of the marker.
Further, the object to be replaced is a person; the depth image acquiring unit is configured to set the contour information of the human body identified by the depth sensor as the contour information of the object to be replaced, and is further configured to set the depth information corresponding to that contour as the depth information of the object to be replaced.
Further, the distance calculating unit is configured to calculate the distance from the object to be replaced to the depth camera according to the depth information of the object to be replaced, and is further configured to calculate the distance from the object to be replaced to the real camera according to the positional relationship between the depth camera and the real camera and the distance from the object to be replaced to the depth camera.
Further, the real camera's zoom parameters are obtained by camera tracking.
By placing a marker on the foreground object to be replaced, the above method and terminal apply three-dimensional virtual technology to replace part of the foreground object, greatly enhancing the expressiveness of program production.
Description of the drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for locally replacing a foreground object in a preferred embodiment of the present invention;
Fig. 2 is a functional block diagram of a terminal in a preferred embodiment of the present invention.
Specific implementation mode
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art without creative effort based on the embodiments of the present invention fall within the protection scope of the present invention.
As shown in Fig. 1, which is a flowchart of a method for locally replacing a foreground object according to a preferred embodiment, the method includes steps S101 to S109.
S101: obtain a real video image shot by a real camera, the foreground object in the real video image including an object to be replaced that is fitted with a marker.
Specifically, the foreground object may be a static object, a dynamic object, or any combination of static and dynamic objects. Static objects include, but are not limited to, a desk or chair placed in the virtual studio. Dynamic objects include, but are not limited to, movable objects in the virtual studio, such as a host delivering commentary or an animal performing on stage. The object to be replaced may be part of the foreground object or the whole of it. In this embodiment the host and a desk are chosen as the foreground object, the host is the preferred object to be replaced, and the host's head region is the region to be replaced.
Specifically, the marker includes, but is not limited to, a necklace, a silk scarf, a piece of cloth, a bracelet, and the like. In one preferred option, the color of the marker is identical to the background color of the virtual studio. In this embodiment a silk scarf of the same color as the virtual studio background is chosen as the marker.
Specifically, placing the marker on the foreground object to be replaced can be understood as wrapping the scarf, whose color matches the background, around the host's neck. The purpose of the scarf is to separate the host's head region from the body region in the subsequent steps.
S102: key the real video image to generate a foreground video image and corresponding foreground image contour information, the marker forming a transparent region in the foreground image.
Specifically, color keying is performed according to the background color of the virtual studio and the color of the marker: keying according to the key signal of the foreground object mask generates the foreground object video image. In one preferred option, the keying is performed by a chroma keyer, which keys according to the key signal of the foreground object mask and generates the foreground object video image.
S103: obtain a depth image of the foreground object shot by a depth camera, and obtain the contour information of the object to be replaced and the depth information of the object to be replaced from the depth image.
Specifically, if the object to be replaced is a person, the contour information of the human body identified by the depth sensor is set as the contour information of the object to be replaced, and the depth information corresponding to that contour is set as the depth information of the object to be replaced. If the object to be replaced is an object other than a person, its contour information can be obtained by image comparison.
S104: calculate the distance from the object to be replaced to the real camera according to the depth image, the contour information of the object to be replaced, and the positional relationship between the real camera and the depth camera.
Specifically, first, the distance from the object to be replaced to the depth camera is calculated according to the depth information of the object to be replaced; then, the distance from the object to be replaced to the real camera is calculated according to the positional relationship between the depth camera and the real camera and the distance from the object to be replaced to the depth camera.
S105: calculate the position and scale of the object to be replaced in the real video image according to the distance and the real camera's zoom parameters.
Specifically, obtaining the host's contour information in every frame of the real image can be accomplished with camera tracking. First, the camera is started. Before the host enters the scene, the whole scene is scanned once, and every frame of foreground video image is recorded together with the corresponding foreground camera pose, which includes the pan angle and tilt angle of the foreground camera. Then, after the host enters the scene, the foreground camera shoots using the pose information recorded earlier. Because the foreground camera shoots with the same pose, data such as the depth of field and angle of the resulting foreground video images are identical; the only difference is the host now present in the scene. Based on this difference, a pixel-by-pixel image comparison yields features such as the position, scale, and contour of the host in the foreground object mask.
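One common way to realise the position-and-scale computation is a pinhole projection in which the zoom parameter plays the role of the focal length. The patent does not spell out the formula, so the sketch below is an assumption; all names and values are illustrative:

```python
def image_position_and_scale(X, Y, Z, f, cx, cy, real_size):
    """Pinhole projection: (X, Y, Z) is the object's position in the
    real camera's frame (Z = distance along the optical axis), f the
    current focal length in pixels (derived from the zoom parameter),
    (cx, cy) the principal point. Returns the image coordinates and
    the on-screen size of an object of physical size `real_size`."""
    u = cx + f * X / Z
    v = cy + f * Y / Z
    scale = f * real_size / Z  # on-screen pixels per real_size metres
    return u, v, scale

# a 0.25 m-wide region 2 m in front of a 1000-px focal length camera
u, v, s = image_position_and_scale(0.0, 0.0, 2.0,
                                   f=1000.0, cx=960.0, cy=540.0,
                                   real_size=0.25)
```

Note how zooming in (larger `f`) and moving closer (smaller `Z`) both enlarge the on-screen scale, which is why the method needs both the distance from S104 and the zoom parameters.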
S106: search the foreground image for the region where the marker is located according to the position and scale.
S107: identify the edge of the marker within the marker region.
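Steps S106 and S107 can be sketched on the keyed alpha matte: within the search window predicted from the position and scale, the marker appears as the transparent hole in an otherwise opaque foreground, and its top edge is the first row containing transparent pixels. The function and window parameters are illustrative assumptions:

```python
import numpy as np

def marker_top_edge(alpha, top, bottom, left, right):
    """Within the predicted search window of the alpha matte, the
    marker shows up as transparent pixels (alpha == 0) inside the
    opaque foreground (alpha == 1); return the row index of its top
    edge, or None if no marker pixels are found."""
    window = alpha[top:bottom, left:right]
    rows = np.where((window == 0).any(axis=1))[0]
    return top + int(rows.min()) if rows.size else None

alpha = np.ones((6, 6), dtype=np.uint8)
alpha[3:5, 2:4] = 0            # scarf punched transparent by keying
edge = marker_top_edge(alpha, 0, 6, 0, 6)
```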
S108: obtain the contour of the region to be replaced according to the edge of the marker, the contour of the object to be replaced, and a preset relationship between the marker and the object to be replaced.
Specifically, because in this embodiment the region to be replaced is predefined as the host's head region, the region to be replaced is known. Moreover, the scarf placed at the host's neck divides the host into two large regions: the body region and the head region. Combining this with the known region to be replaced, under normal conditions the head region lies above the host's body; that is, the head region lies above the scarf.
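Once the scarf's top edge row is known, the head/body split described above reduces to a simple mask operation; this sketch (with illustrative names) keeps only the part of the person mask above that row:

```python
import numpy as np

def head_region(person_mask, scarf_top_row):
    """The scarf divides the person into head (above its top edge)
    and body (below); keep only the rows above the scarf."""
    head = person_mask.copy()
    head[scarf_top_row:, :] = False
    return head

person = np.zeros((6, 4), dtype=bool)
person[1:6, 1:3] = True        # person occupies rows 1-5
head = head_region(person, scarf_top_row=3)
```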
S109: replace the region to be replaced with a preset virtual image according to the contour of the region to be replaced.
Specifically, a rendering engine performs three-dimensional rendering of the virtual object used for the replacement, the rendered result is aligned with the contour edge of the region to be replaced, and the replacement is carried out, completing the virtual replacement of the host's head.
In one preferred option, during the three-dimensional rendering of the virtual object, the depth-of-field data used are identical to the depth-of-field data from the host to the foreground camera.
As shown in Fig. 2, which is a functional block diagram of a terminal according to a preferred embodiment:
The real image acquiring unit 101 is configured to obtain a real video image shot by a real camera. The foreground object in the real video image includes an object to be replaced that is fitted with a marker.
Specifically, the foreground object may be a static object, a dynamic object, or any combination of static and dynamic objects. Static objects include, but are not limited to, a desk or chair placed in the virtual studio. Dynamic objects include, but are not limited to, movable objects in the virtual studio, such as a host delivering commentary or an animal performing on stage. The object to be replaced may be part of the foreground object or the whole of it. In this embodiment the host and a desk are chosen as the foreground object, the host is the preferred object to be replaced, and the host's head region is the region to be replaced.
Specifically, the marker includes, but is not limited to, a necklace, a silk scarf, a piece of cloth, a bracelet, and the like. In one preferred option, the background color in the real video image is identical to the color of the marker. In this embodiment a silk scarf of the same color as the virtual studio background is chosen as the marker.
Specifically, placing the marker on the foreground object to be replaced can be understood as wrapping the scarf, whose color matches the background, around the host's neck. The purpose of the scarf is to separate the host's head region from the body region in the subsequent steps.
The keying unit 102 is configured to key the real video image to generate a foreground video image and corresponding foreground image contour information, the marker forming a transparent region in the foreground image.
Specifically, color keying is performed according to the background color of the virtual studio and the color of the marker: keying according to the key signal of the foreground object mask generates the foreground object video image. In one preferred option, the keying is performed by a chroma keyer, which keys according to the key signal of the foreground object mask and generates the foreground object video image.
The depth image acquiring unit 103 is configured to obtain a depth image of the foreground object shot by a depth camera and to obtain the contour information of the object to be replaced and the depth information of the object to be replaced from the depth image.
Specifically, if the object to be replaced is a person, the contour information of the human body identified by the depth sensor is set as the contour information of the object to be replaced, and the depth information corresponding to that contour is set as the depth information of the object to be replaced. If the object to be replaced is an object other than a person, its contour information can be obtained by image comparison.
The distance calculating unit 104 is configured to calculate the distance from the object to be replaced to the real camera according to the depth image, the contour information of the object to be replaced, and the positional relationship between the real camera and the depth camera.
Specifically, first, the distance from the object to be replaced to the depth camera is calculated according to the depth information of the object to be replaced; then, the distance from the object to be replaced to the real camera is calculated according to the positional relationship between the depth camera and the real camera and the distance from the object to be replaced to the depth camera.
The position calculating unit 105 is configured to calculate the position and scale of the object to be replaced in the real video image according to the distance from the object to be replaced to the real camera and the real camera's zoom parameters.
Specifically, obtaining the host's contour information in every frame of the real image can be accomplished with camera tracking. First, the camera is started. Before the host enters the scene, the whole scene is scanned once, and every frame of foreground video image is recorded together with the corresponding foreground camera pose, which includes the pan angle and tilt angle of the foreground camera. Then, after the host enters the scene, the foreground camera shoots using the pose information recorded earlier. Because the foreground camera shoots with the same pose, data such as the depth of field and angle of the resulting foreground video images are identical; the only difference is the host now present in the scene. Based on this difference, a pixel-by-pixel image comparison yields features such as the position, scale, and contour of the host in the foreground object mask.
The searching unit 106 is configured to search the foreground image for the region where the marker is located according to the position and scale.
The recognition unit 107 is configured to identify the edge of the marker within the marker region.
The contour acquiring unit 108 is configured to obtain the contour of the region to be replaced according to the edge of the marker, the contour of the object to be replaced, and a preset relationship between the marker and the object to be replaced.
Specifically, because in this embodiment the region to be replaced is predefined as the host's head region, the region to be replaced is known. Moreover, the scarf placed at the host's neck divides the host into two large regions: the body region and the head region. Combining this with the known region to be replaced, under normal conditions the head region lies above the host's body; that is, the head region lies above the scarf.
The replacing unit 109 is configured to replace the region to be replaced with a preset virtual image according to the contour of the region to be replaced.
Specifically, a rendering engine performs three-dimensional rendering of the virtual object used for the replacement, the rendered result is aligned with the contour edge of the region to be replaced, and the replacement is carried out, completing the virtual replacement of the host's head.
In one preferred option, during the three-dimensional rendering of the virtual object, the depth-of-field data used are identical to the depth-of-field data from the host to the foreground camera.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the present invention, and such modifications or substitutions shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A method for locally replacing a foreground object, characterized in that the method includes:
obtaining a real video image shot by a real camera, the foreground object in the real video image including an object to be replaced that is fitted with a marker;
keying the real video image to generate a foreground video image and foreground image contour information, the marker forming a transparent region in the foreground image;
obtaining a depth image of the foreground object shot by a depth camera, and obtaining the contour information of the object to be replaced and the depth information of the object to be replaced from the depth image;
calculating the distance from the object to be replaced to the real camera according to the depth image, the contour information of the object to be replaced, and the positional relationship between the real camera and the depth camera;
calculating the position and scale of the object to be replaced in the real video image according to the distance and the real camera's zoom parameters;
searching the foreground image for the region where the marker is located according to the position and scale;
identifying the edge of the marker within the marker region;
obtaining the contour of the region to be replaced according to the edge of the marker, the contour of the object to be replaced, and a preset relationship between the marker and the object to be replaced;
replacing the region to be replaced with a preset virtual image according to the contour of the region to be replaced.
2. The method of claim 1, characterized in that the background color in the real video image is identical to the color of the marker.
3. The method of claim 1, characterized in that the object to be replaced is a human body, and obtaining the contour information of the object to be replaced and the depth information of the object to be replaced from the depth image includes: setting the contour information of the human body identified by the depth sensor as the contour information of the object to be replaced; and setting the depth information corresponding to the contour information of the human body as the depth information of the object to be replaced.
4. The method of claim 1, characterized in that calculating the distance from the object to be replaced to the real camera according to the depth image and the contour information of the object to be replaced includes: calculating the distance from the object to be replaced to the depth camera according to the depth information of the object to be replaced; and calculating the distance from the object to be replaced to the real camera according to the positional relationship between the depth camera and the real camera and the distance from the object to be replaced to the depth camera.
5. The method of claim 1, characterized in that the real camera's zoom parameters are obtained by camera tracking.
6. A terminal, characterized by including:
a real image acquiring unit, configured to obtain a real video image shot by a real camera, the foreground object in the real video image including an object to be replaced that is fitted with a marker;
a keying unit, configured to key the real video image to generate a foreground video image and corresponding foreground image contour information, the marker forming a transparent region in the foreground image;
a depth image acquiring unit, configured to obtain a depth image of the foreground object shot by a depth camera and to obtain the contour information of the object to be replaced and the depth information of the object to be replaced from the depth image;
a distance calculating unit, configured to calculate the distance from the object to be replaced to the real camera according to the depth image, the contour information of the object to be replaced, and the positional relationship between the real camera and the depth camera;
a position calculating unit, configured to calculate the position and scale of the object to be replaced in the real video image according to the distance from the object to be replaced to the real camera and the real camera's zoom parameters;
a searching unit, configured to search the foreground image for the region where the marker is located according to the position and scale;
a recognition unit, configured to identify the edge of the marker within the marker region;
a contour acquiring unit, configured to obtain the contour of the region to be replaced according to the edge of the marker, the contour of the object to be replaced, and a preset relationship between the marker and the object to be replaced;
a replacing unit, configured to replace the region to be replaced with a preset virtual image according to the contour of the region to be replaced.
7. The terminal of claim 6, characterized in that the background color in the real video image is identical to the color of the marker.
8. The terminal of claim 6, characterized in that the object to be replaced is a human body; the depth image acquiring unit is configured to set the contour information of the human body identified by the depth sensor as the contour information of the object to be replaced, and is further configured to set the depth information corresponding to the contour information of the human body as the depth information of the object to be replaced.
9. The terminal of claim 6, characterized in that the distance calculating unit is configured to calculate the distance from the object to be replaced to the depth camera according to the depth information of the object to be replaced, and is further configured to calculate the distance from the object to be replaced to the real camera according to the positional relationship between the depth camera and the real camera and the distance from the object to be replaced to the depth camera.
10. The terminal of claim 6, characterized in that the real camera's zoom parameters are obtained by camera tracking.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610643133.XA CN106254849B (en) | 2016-08-08 | 2016-08-08 | Method and terminal for locally replacing a foreground object
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610643133.XA CN106254849B (en) | 2016-08-08 | 2016-08-08 | Method and terminal for locally replacing a foreground object
Publications (2)
Publication Number | Publication Date |
---|---|
CN106254849A CN106254849A (en) | 2016-12-21 |
CN106254849B true CN106254849B (en) | 2018-11-13 |
Family
ID=58079379
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610643133.XA Active CN106254849B (en) | 2016-08-08 | 2016-08-08 | Method and terminal for locally replacing a foreground object
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106254849B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108694845A (en) * | 2018-06-20 | 2018-10-23 | 信利光电股份有限公司 | Vehicle driving reminding method and device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101110908A (en) * | 2007-07-20 | 2008-01-23 | 西安宏源视讯设备有限责任公司 | Foreground depth of field position identification device and method for virtual studio system |
CN101777180A (en) * | 2009-12-23 | 2010-07-14 | 中国科学院自动化研究所 | Real-time complex-background replacement method based on background modeling and energy minimization |
CN102118576A (en) * | 2009-12-30 | 2011-07-06 | 新奥特(北京)视频技术有限公司 | Method and device for color key synthesis in virtual sports system |
CN104349020A (en) * | 2014-12-02 | 2015-02-11 | 北京中科大洋科技发展股份有限公司 | Virtual camera and real camera switching system and method |
CN105306862A (en) * | 2015-11-17 | 2016-02-03 | 广州市英途信息技术有限公司 | Scenario video recording system and method based on 3D virtual synthesis technology and scenario training learning method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20140106927A (en) * | 2013-02-27 | 2014-09-04 | 한국전자통신연구원 | Apparatus and method for making panorama |
- 2016-08-08: Application CN201610643133.XA filed in China; granted as CN106254849B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN106254849A (en) | 2016-12-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105654471B (en) | Augmented reality AR system and method applied to internet video live streaming | |
CN104615234B (en) | Information processing device and information processing method | |
CN105556508B (en) | Devices, systems, and methods for a virtual mirror | |
US9460340B2 (en) | Self-initiated change of appearance for subjects in video and images | |
CN103597518B (en) | Generating an avatar to reflect a player's appearance | |
CN107358656A (en) | AR processing system for 3D games and processing method thereof | |
CN103597516B (en) | Controlling objects in a virtual environment | |
CN104615233B (en) | Information processing device and information processing method | |
CN105513007A (en) | Photo beautification method and system based on a mobile terminal, and mobile terminal | |
CN107707839A (en) | Image processing method and device | |
WO2010038693A1 (en) | Information processing device, information processing method, program, and information storage medium | |
US20130251267A1 (en) | Image creating device, image creating method and recording medium | |
CN108416832A (en) | Display method and device for media information, and storage medium | |
JP2014157557A (en) | Image generation device, image generation method and program | |
CN107705278A (en) | Method for adding a dynamic effect, and terminal device | |
JP2005222152A (en) | Image correcting device | |
CN109769326A (en) | Follow-spot method, device, and equipment | |
CN106254849B (en) | Method and terminal for locally replacing a foreground object | |
CN107613228A (en) | Method for adding virtual clothing, and terminal device | |
JP2015224909A (en) | Three-dimensional measurement device and three-dimensional measurement method | |
JP6544970B2 (en) | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM | |
CN110266955A (en) | Image processing method, device, electronic equipment and storage medium | |
CN107590795A (en) | Image processing method and device, electronic device, and computer-readable storage medium | |
CN107704808A (en) | Image processing method and device, electronic device, and computer-readable storage medium | |
WO2020110432A1 (en) | Learning device, foreground region deduction device, learning method, foreground region deduction method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||