CN105072314A - Virtual studio implementation method capable of automatically tracking objects - Google Patents

Virtual studio implementation method capable of automatically tracking objects

Info

Publication number
CN105072314A
Authority
CN
China
Prior art keywords
virtual
video camera
scene
image
implementation method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510492552.3A
Other languages
Chinese (zh)
Inventor
黄喜荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201510492552.3A priority Critical patent/CN105072314A/en
Publication of CN105072314A publication Critical patent/CN105072314A/en
Pending legal-status Critical Current


Abstract

The invention discloses a virtual studio implementation method capable of automatically tracking objects. The method comprises the following steps: selecting a high-resolution real camera and setting its camera position so that, without any operation, the real camera covers the whole range over which the objects move; creating a virtual scene and a matting plane; setting the position of a virtual camera; performing chroma-key matting on the picture captured by the real camera and mapping the object images onto the matting plane to form mapped images; calculating, with an image recognition module, the positions of the objects as they move in the real scene; generating a group of attitude data from the position information and controlling the motion of the virtual camera with the attitude data; generating a scene image with an image rendering system from the picture captured by the virtual camera; and compositing the mapped images and the scene image into the finally presented image, from which the virtual studio broadcast video is generated. The method makes installation and debugging of a virtual studio simple and fast, and also greatly reduces cost.

Description

Virtual studio implementation method capable of automatically tracking a target
Technical field
The present invention relates to the field of broadcast television, and more specifically to a virtual studio implementation method capable of automatically tracking a target.
Background art
The application of virtual studios greatly reduces the cost and time of building physical studios and makes scene changes quick and convenient.
A virtual studio works by keying (matting) the picture taken by a physical camera and compositing it with a computer-generated virtual three-dimensional scene. When the physical camera performs push, pull or pan (zoom or pan) shooting, the virtual camera in the virtual scene must make the corresponding motion; otherwise the person being shot appears to float in the scene. To keep the computer-generated virtual three-dimensional scene synchronized with the picture taken by the physical camera during push, pull and pan shooting, and to prevent the people or objects shot by the physical camera from appearing to float in the scene, the virtual camera in the virtual scene must track the push, pull and pan movements of the physical camera in real time. This requires a tracking system that acquires the push, pull and pan movements of the physical camera and transmits them to the virtual three-dimensional scene, where the virtual camera is controlled to make the corresponding movements. The virtual scene generated in real time is then composited with the keyed image of the person shot by the physical camera. In this way the virtual camera tracks the physical camera synchronously, so the real person and the virtual scene change together.
At present there are mainly four ways to implement a camera tracking system: grid tracking, sensor tracking, infrared tracking and ultrasonic tracking. Their common principle is to use graphical or mechanical means to acquire the camera parameters, including the camera's push, pull and pan. However, these camera tracking systems are extremely expensive, their calibration is quite complicated, and they place special requirements on the tripod, the lens and the interface between the tracking system and its matching virtual scene software.
Figure 1 shows a typical scene of the above application. When the person being shot moves from position A to position B, the camera operator follows the person by panning; the system uses one or a combination of the four tracking methods above to acquire the pan angle of the camera, sends this angle information to the virtual scene, and controls the virtual camera to pan by the same angle.
With this approach, automatically tracking a person requires not only an additional image recognition system but also the expensive matching tracking system, as well as a complex camera tripod with a servo pan-tilt head so that it can track and shoot the specified person.
Summary of the invention
In view of this, the main purpose of the present invention is to provide a virtual studio implementation method capable of automatically tracking a target, which solves problems of existing virtual studios such as high construction cost and complicated installation and adjustment.
To achieve the above purpose, the technical solution of the present invention is realized as follows:
A virtual studio implementation method capable of automatically tracking a target comprises:
choosing a high-resolution physical camera and setting its shooting position, wherein the position satisfies the following condition: when shooting from this position, the physical camera can capture the whole range over which the target moves in the real scene without any push, pull, pan or truck operation;
creating a virtual scene, and creating a matting plane in the virtual scene;
setting the position of a virtual camera in the virtual scene, wherein the distance and angle from the virtual camera to the matting plane are respectively equal to the distance and angle from the physical camera to the target;
performing chroma-key matting on the picture taken by the physical camera, and mapping the target image produced by the matting onto the matting plane to form a mapped image;
calculating, with an image recognition module, the position of the target as it moves in the real scene to obtain displacement information of the target;
generating a group of attitude data from the displacement information, and controlling the virtual camera with the attitude data to perform push, pull, pan and truck operations, thereby automatically tracking the target being shot;
generating a scene image with an image rendering system from the picture taken by the virtual camera; and
compositing the mapped image and the scene image into the finally presented image, and generating the virtual studio broadcast video.
Preferably, the aspect ratio of the created matting plane is consistent with the aspect ratio of the maximum shooting range of the physical camera.
Preferably, when the aspect ratio of the created matting plane is inconsistent with the aspect ratio of the maximum shooting range of the physical camera, the aspect ratio and size of the mapped image are adjusted while the target image produced by the chroma-key matting is mapped onto the matting plane.
Preferably, before chroma-key matting is performed on the picture taken by the physical camera, the chroma key is adjusted.
Preferably, the resolution of the physical camera is at least 4 times the scene output resolution.
Preferably, after the position of the virtual camera in the virtual scene is set, the field of view of the virtual camera is set, wherein the field of view is greater than or equal to 1/2 of a first field of view, the first field of view being the field of view at which the virtual camera can capture the whole matting plane.
Preferably, after the position of the virtual camera in the virtual scene is set, the field of view of the virtual camera is set; when the target moves in the depth direction, the distance from the virtual camera to the matting plane changes from S1 to S2, and the field of view (B) is obtained by the following formula:
B = arctan((S1 / S2) × tan(A));
where A represents the field of view of the virtual camera before the target's depth motion, and B represents the field of view of the virtual camera after the target's depth motion.
Preferably, when there are multiple targets, multiple virtual cameras are arranged in the virtual scene to track and shoot the different targets.
Preferably, when a background picture without the target needs to be shot, the scene image is output as the finally presented image.
Preferably, the position of the target in the virtual scene is adjusted by adjusting the position of the matting plane in the virtual scene.
Technical effects of the present invention:
The present invention uses a high-resolution physical camera that can capture the full picture while guaranteeing picture quality, replacing an expensive camera tracking system; with an ordinary tripod it realizes a virtual studio with push, pull and pan capability. Installation and debugging become simple and fast, and cost is greatly reduced.
At the same time, the present invention can simulate the shooting effect of multiple cameras with a single ultra-high-resolution physical camera, and can automatically track the target's range of activity during shooting, realizing unattended automatic tracking shooting.
Brief description of the drawings
The drawings described here are provided for a further understanding of the present invention and form part of this application; the schematic embodiments of the present invention and their description are used to explain the present invention and do not unduly limit it. In the drawings:
Fig. 1 is a schematic diagram of how a virtual studio in the prior art tracks and shoots a target;
Fig. 2 is a flow chart of the virtual studio implementation method capable of automatically tracking a target according to Embodiment 1 of the present invention;
Fig. 3 is a schematic diagram of the camera position setup of the physical camera in the method according to Embodiment 1 of the present invention;
Fig. 4 is a schematic comparison of the scene output resolution and the physical camera resolution in the method according to Embodiment 1 of the present invention;
Fig. 5 is a schematic diagram of the change of the field of view and the matting plane position when the target moves in the depth direction;
Fig. 6 is a schematic diagram of the shooting scene when there are multiple shot targets;
Fig. 7 is a schematic diagram of simulating an infinite blue box scene with the technical solution of the present invention.
Detailed description of the embodiments
The present invention is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 2 is a flow chart of the virtual studio implementation method capable of automatically tracking a target according to Embodiment 1 of the present invention. As shown in Fig. 2, the method comprises the following steps:
Step S202: choose a high-resolution physical camera and set its shooting position, wherein the position satisfies the following condition: when shooting from this position, the physical camera can capture the whole range over which the target moves in the real scene without any push, pull, pan or truck operation.
Fig. 3 is a schematic diagram of the camera position setup of the physical camera in the method according to Embodiment 1. As shown in Fig. 3, this embodiment uses a high-resolution physical camera whose pixel resolution is higher than the final output resolution of the virtual scene, and a suitable camera position is chosen so that the shooting range of the physical camera covers the whole range over which the person moves; here the target is the person.
Regarding the choice of the high-resolution physical camera and what counts as high resolution, the present invention gives a concrete scheme: under normal circumstances, the resolution of the physical camera should be at least 4 times the scene output resolution. The pixel resolutions of common TV programmes are 1920x1080, 1280x720, 720x576, 720x480 and so on, so the commonly used final output pixel resolution of a virtual studio is also one of these. To achieve a good effect, the resolution of the physical camera should be 4 times the final output resolution of the scene or higher. For example, if the final output resolution is 1920x1080, the resolution of the physical camera should reach at least 3840x2160. Fig. 4 is a schematic comparison of the scene output resolution and the physical camera resolution according to Embodiment 1. The larger the gap between the resolution of the physical camera and the scene output resolution, the larger the push, pull and pan range available to the virtual camera in the virtual scene.
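The following minimal sketch (plain Python, not part of the patent; the function name and the idea of expressing the margin in pixels are illustrative assumptions) shows how the resolution gap mentioned above translates into headroom for virtual pan and zoom:

```python
# Illustrative only: headroom left for virtual pan/zoom when the physical
# camera resolution exceeds the final scene output resolution.
def pan_zoom_headroom(cam_w, cam_h, out_w, out_h):
    """Return the horizontal/vertical pixels available for virtual panning
    and the maximum crop-and-scale zoom factor that needs no upscaling."""
    pan_x = cam_w - out_w   # pixels the output window can slide horizontally
    pan_y = cam_h - out_h   # pixels the output window can slide vertically
    max_zoom = min(cam_w / out_w, cam_h / out_h)
    return pan_x, pan_y, max_zoom

# Example from the description: a 3840x2160 camera feeding a 1920x1080 output.
print(pan_zoom_headroom(3840, 2160, 1920, 1080))  # -> (1920, 1080, 2.0)
```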
Step S204: create a virtual scene, and create a matting plane in the virtual scene.
The aspect ratio of the created matting plane should be consistent with the aspect ratio of the maximum shooting range of the physical camera. When the aspect ratio of the matting plane and the aspect ratio of the maximum shooting range of the physical camera are inconsistent, the mapping aspect ratio and size must be adjusted when the picture taken by the physical camera is mapped onto the matting plane for display, so that the mapped image is not deformed; one possible adjustment is sketched below.
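The patent does not prescribe a particular fitting rule; the following sketch (illustrative Python, hypothetical names and units) shows one common way to keep the mapped image undistorted, namely a uniform scale plus centring:

```python
# Illustrative sketch: fit a source picture of (src_w, src_h) into a matting
# plane of (plane_w, plane_h) without distortion, preserving aspect ratio.
def fit_to_plane(src_w, src_h, plane_w, plane_h):
    scale = min(plane_w / src_w, plane_h / src_h)  # uniform scale, no stretching
    new_w, new_h = src_w * scale, src_h * scale
    offset_x = (plane_w - new_w) / 2.0             # centre the mapped image
    offset_y = (plane_h - new_h) / 2.0
    return scale, (offset_x, offset_y)

# e.g. mapping a 16:9 camera picture onto a 4:3 matting plane (plane in metres)
print(fit_to_plane(1920, 1080, 4.0, 3.0))
```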
Step S206: set the position of the virtual camera in the virtual scene, wherein the distance and angle from the virtual camera to the matting plane are respectively equal to the distance and angle from the physical camera to the target.
Alternatively, before shooting, the position and angle of the physical camera can be adjusted so that its distance and angle to the shot target equal the distance and angle from the virtual camera to the matting plane. If, after the distance of the physical camera has been adjusted, the shooting range does not meet the requirement, the required effect can be achieved by adjusting the focal length of the camera lens, i.e. the physical camera can capture the whole range over which the target moves in the real scene without any push, pull, pan or truck operation.
The above process also involves setting the field of view of the virtual camera. After the distance between the virtual camera and the matting plane has been set, the field of view of the virtual camera is set so that the whole matting plane can be captured in the picture; the field of view of the virtual camera at this point is denoted A. When the field of view of the virtual camera is adjusted, care must be taken that it does not become smaller than (1/2)A. If the person being shot does not move in the depth direction during shooting, there is no need to change the field of view during shooting, and the field of view calculated by the above method is used. Because a high-resolution physical camera is used, the field of view of the virtual camera can be enlarged while keeping the picture clear, so that the person's range of activity is included in the shooting range.
In particular, Fig. 5 shows how the field of view and the matting plane position change when the target moves in the depth direction. In actual use the shot target often moves in the depth direction and sometimes moves in front of a virtual object, blocking it. In that case the matting plane must be moved in the depth direction, and after the matting plane has moved, the field of view of the virtual camera must be readjusted. As shown in Fig. 5, when the matting plane moves from position 1 to position 2, the distance between the matting plane and the virtual camera changes from S1 to S2, and the field of view of the virtual camera should change from angle A to angle B. The formula for angle B is:
B = arctan((S1 / S2) × tan(A));
where A represents the field of view of the virtual camera before the target's depth motion, and B represents the field of view of the virtual camera after the target's depth motion.
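As a quick numeric check of this relationship (plain Python; the distances and angle are hypothetical values, and A and B are treated as the angle subtended to the edge of the matting plane):

```python
import math

# Check of B = arctan((S1 / S2) * tan(A)): when the matting plane moves from
# distance S1 to distance S2, the angle that keeps the plane filling the
# frame changes from A to B.
def new_view_angle(angle_a_deg, s1, s2):
    a = math.radians(angle_a_deg)
    return math.degrees(math.atan((s1 / s2) * math.tan(a)))

# Hypothetical numbers: plane first 4 m away with a 30 degree angle,
# then pushed back to 6 m.
print(new_view_angle(30.0, 4.0, 6.0))  # ~21.05 degrees: narrower angle at greater distance
```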
Step S208: perform chroma-key matting on the picture taken by the physical camera, and map the target image produced by the matting onto the matting plane to form the mapped image.
Specifically, the person (target) in the picture taken by the physical camera is, after chroma-key matting, mapped onto the matting plane for display. The extent of the matting plane is the range of activity, within the virtual scene, of the person shot in the real scene. To adjust where the person shot by the physical camera appears in the virtual scene, it is only necessary to adjust the position of the matting plane in the virtual scene.
Before the above step, the chroma key must also be adjusted so that the best matting effect is achieved.
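The patent does not specify a keying algorithm; the sketch below is one simple colour-distance key in NumPy (the key colour, threshold and function name are assumptions, not taken from the patent) showing how the target can be separated from a blue or green backdrop before being mapped onto the matting plane:

```python
import numpy as np

# Illustrative chroma key: pixels close to the backdrop colour become
# transparent, everything else is kept as foreground (the target).
def chroma_key(frame_rgb, key_rgb=(0, 177, 64), threshold=90.0):
    """frame_rgb: HxWx3 uint8 array. Returns an HxWx4 RGBA image whose
    alpha is 0 on the backdrop and 255 on the target."""
    diff = frame_rgb.astype(np.float32) - np.array(key_rgb, dtype=np.float32)
    distance = np.linalg.norm(diff, axis=2)  # colour distance to the key colour
    alpha = np.where(distance > threshold, 255, 0).astype(np.uint8)
    return np.dstack([frame_rgb, alpha])
```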
Step S210: calculate, with the image recognition module, the position of the target as it moves in the real scene to obtain the displacement information of the target. That is, the image recognition system calculates the position of the person (target) in the scene and then calculates the displacement of the person from the change of that position.
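The patent leaves the recognition method open; as one hedged illustration, the position can be taken as the centroid of the keyed foreground pixels and the displacement as the frame-to-frame change of that centroid (names are hypothetical):

```python
import numpy as np

# Illustrative position/displacement estimate based on the keyed foreground mask.
def target_position(alpha_mask):
    ys, xs = np.nonzero(alpha_mask)      # foreground pixel coordinates
    if xs.size == 0:
        return None                      # no target in frame
    return float(xs.mean()), float(ys.mean())

def displacement(prev_pos, curr_pos):
    if prev_pos is None or curr_pos is None:
        return 0.0, 0.0
    return curr_pos[0] - prev_pos[0], curr_pos[1] - prev_pos[1]
```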
Step S212: generate a group of attitude data from the displacement information, and control the virtual camera with the attitude data to perform push, pull, pan and truck operations. That is, from the displacement information of the person in the scene, the attitude at which the virtual camera can shoot the person is calculated; after filtering and smoothing, the attitude data are delivered to the virtual scene and used to control the attitude of the virtual camera so that it matches the calculated attitude.
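The patent only states that the attitude data are filtered and smoothed before being applied; the exponential moving average below is purely an illustrative choice of smoother (class and parameter names are assumptions):

```python
# Illustrative smoothing of the virtual-camera attitude (pan, tilt, zoom).
class AttitudeSmoother:
    def __init__(self, alpha=0.2):
        self.alpha = alpha   # 0 < alpha <= 1: higher means less smoothing
        self.state = None

    def update(self, attitude):
        """attitude: tuple (pan, tilt, zoom) derived from the target displacement."""
        if self.state is None:
            self.state = tuple(attitude)
        else:
            self.state = tuple(self.alpha * n + (1.0 - self.alpha) * s
                               for n, s in zip(attitude, self.state))
        return self.state
```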
Step S214: generate the scene image with the image rendering system from the picture taken by the virtual camera.
Step S216: composite the mapped image and the scene image into the finally presented image, and generate the virtual studio broadcast video.
In this embodiment, a single high-resolution physical camera is used and a suitable shooting position is set for it, so that it can capture the whole range of the person's activity without any push, pull, pan or truck operation. In the virtual scene, a matting plane is created whose size and position match the person's range of activity in the real scene and whose aspect ratio is the same as the pixel aspect ratio of the physical camera. After the blue or green background is removed from the picture taken by the physical camera, the picture is mapped onto this matting plane.
When the person being shot moves from position A to position B, the target remains within the shooting range of the physical camera the whole time and the physical camera needs no operation; only the virtual camera needs to be operated to track the person being shot. By adding an image recognition system on this basis, it is easy to make the virtual camera track the person. Because the physical camera is fixed during shooting, the system can track the person's activity in real time without a servo pan-tilt head. This method makes installation and debugging simple and fast, and at the same time greatly saves the construction cost of a virtual studio.
Fig. 6 shows the shooting scene when there are multiple shot targets. When there are multiple targets, multiple virtual cameras are arranged in the virtual scene to track the different targets.
Specifically, on a studio floor, multiple physical cameras are often used and switched between in order to present more detail to the audience and attract their attention.
Multiple virtual cameras are placed at the same position, each with a different shooting angle and shooting range, simulating the effect of shooting with multiple physical cameras.
As shown in Fig. 6, there are "person A", "person B" and "person C". If a key person is to be highlighted during shooting, multiple virtual cameras can be set so that each "person" has a "dedicated" virtual camera. When a "person" is speaking, or reacts with an expression to someone else's talk, the corresponding virtual camera can be switched to quickly to capture the "reaction shot". In the figure, while "person A" is speaking, "person B" and "person C" are listening; when "person B" agrees with a point made by "person A", "person B" may smile or nod, and the director can then switch promptly to the picture taken by "virtual camera B", and vice versa.
Fig. 6 only shows "virtual camera A" and "virtual camera B" shooting "person A" and "person B" respectively. In actual use, a "virtual camera C" can also be added to shoot "person C"; a "virtual camera D" to shoot "person C" and "person A" at the same time; a "virtual camera E" to shoot "person B" and "person C" at the same time; and a "virtual camera F" to shoot "person A", "person B" and "person C" at the same time. A configuration sketch follows.
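As a hedged illustration of such a setup (all names, angles and fields of view are hypothetical, not from the patent), the virtual cameras can be kept as a set of framing presets over the same matting plane, and the director simply selects which preset the rendering system uses next:

```python
# Illustrative multi-camera presets: all virtual cameras share one position
# but differ in pan angle and field of view, simulating a multi-camera studio.
virtual_cameras = {
    "virtual_camera_A": {"pan_deg": -12.0, "fov_deg": 20.0, "target": "person A"},
    "virtual_camera_B": {"pan_deg":   0.0, "fov_deg": 20.0, "target": "person B"},
    "virtual_camera_F": {"pan_deg":   0.0, "fov_deg": 45.0, "target": "all"},
}

def switch_to(name, cameras=virtual_cameras):
    """Return the framing preset the rendering system should use next."""
    return cameras[name]

print(switch_to("virtual_camera_B"))
```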
In short, embodiments of the invention can set the number and positions of the virtual cameras freely, achieving flexible shooting of the targets. This avoids the drawback of the prior art, in which different targets (people or objects) must be shot by arranging multiple physical cameras.
Fig. 7 is a schematic diagram of simulating an infinite blue box with the technical solution of the present invention.
When a background picture without the target needs to be shot, the scene image is output as the finally presented image.
Specifically, in an embodiment of the present invention, the physical camera is fixed during shooting and only the virtual camera is operated. As shown in Fig. 7, when the virtual camera shoots a picture over the ranges A0 and B1, once the shooting range exceeds the target's (person's) range of activity and the physical camera's shooting range, the picture simply no longer contains the target's (person's) activity. Because the physical camera does not move at all, no mask needs to be set up, and the problem of simulating an infinite blue box is thereby solved well.
In the prior art, by contrast, there is always a blue or green background behind the person shot by the physical camera to enable chroma-key matting. This background is commonly called the "blue box", and the person's range of activity cannot exceed the range of this "blue box". The virtual scene is generally much larger than the person's range of activity, and sometimes, when the shot pans from an "empty shot" (scenery without people) to the person, the shooting range of the virtual camera exceeds the person's range of activity.
In a traditional virtual studio system, because the virtual camera is synchronized with the physical camera, making the shooting range of the virtual camera exceed the person's range of activity means that the physical camera shoots beyond the blue box. Traditional virtual studios therefore need to set up masks to simulate an "infinite blue box": when the shooting range of the physical camera exceeds the blue box, the part outside the blue box is processed with a mask so that it is not displayed.
With the technical solution of the embodiments of the present invention, shooting that simulates an infinite blue box can be achieved easily without any mask processing, which greatly simplifies the shooting workflow and brings great convenience to virtual studio shooting.
Obviously, those skilled in the art should understand that the above steps of the present invention can be implemented with general-purpose computing devices; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices; optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by a computing device, or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. Thus the present invention is not limited to any specific combination of hardware and software.
The above are only preferred embodiments of the present invention and are not intended to limit it. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A virtual studio implementation method capable of automatically tracking a target, characterized by comprising:
choosing a high-resolution physical camera and setting a shooting position for the physical camera, wherein the position satisfies the following condition: when shooting from the position, the physical camera can capture the whole range over which the target moves in the real scene without any push, pull, pan or truck operation;
creating a virtual scene, and creating a matting plane in the virtual scene;
setting the position of a virtual camera in the virtual scene, wherein the distance and angle from the virtual camera to the matting plane are respectively equal to the distance and angle from the physical camera to the target;
performing chroma-key matting on the picture taken by the physical camera, and mapping the target image produced by the matting onto the matting plane to form a mapped image;
calculating, with an image recognition module, the position of the target as it moves in the real scene to obtain displacement information of the target;
generating a group of attitude data from the displacement information, and controlling the virtual camera with the attitude data to perform push, pull, pan and truck operations, thereby automatically tracking the target being shot;
generating a scene image with an image rendering system from the picture taken by the virtual camera; and
compositing the mapped image and the scene image into the finally presented image, and generating the virtual studio broadcast video.
2. The automatically tracking virtual studio implementation method according to claim 1, characterized in that the aspect ratio of the created matting plane is consistent with the aspect ratio of the maximum shooting range of the physical camera.
3. The automatically tracking virtual studio implementation method according to claim 1, characterized in that, when the aspect ratio of the created matting plane is inconsistent with the aspect ratio of the maximum shooting range of the physical camera, the aspect ratio and size of the mapped image are adjusted while the target image produced by the chroma-key matting is mapped onto the matting plane.
4. The automatically tracking virtual studio implementation method according to claim 1, characterized in that, before the chroma-key matting is performed on the picture taken by the physical camera, the chroma key is adjusted.
5. The automatically tracking virtual studio implementation method according to claim 1, characterized in that the resolution of the physical camera is at least 4 times the scene output resolution.
6. The automatically tracking virtual studio implementation method according to claim 1, characterized in that, after the position of the virtual camera in the virtual scene is set, the field of view of the virtual camera is set, wherein the field of view is greater than or equal to 1/2 of a first field of view, the first field of view being the field of view at which the virtual camera can capture the whole matting plane.
7. The automatically tracking virtual studio implementation method according to claim 1, characterized in that, after the position of the virtual camera in the virtual scene is set, the field of view of the virtual camera is set; when the target moves in the depth direction, the distance from the virtual camera to the matting plane changes from S1 to S2, and the field of view is obtained by the following formula:
B = arctan((S1 / S2) × tan(A));
wherein A represents the field of view of the virtual camera before the target's depth motion, and B represents the field of view of the virtual camera after the target's depth motion.
8. The automatically tracking virtual studio implementation method according to claim 1, characterized in that, when there are multiple said targets, multiple virtual cameras are arranged in the virtual scene to track the different targets.
9. The automatically tracking virtual studio implementation method according to claim 1, characterized in that, when a background picture without the target needs to be shot, the scene image is output as the finally presented image.
10. The automatically tracking virtual studio implementation method according to claim 1, characterized in that the position of the target in the virtual scene is adjusted by adjusting the position of the matting plane in the virtual scene.
CN201510492552.3A 2015-08-13 2015-08-13 Virtual studio implementation method capable of automatically tracking objects Pending CN105072314A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510492552.3A CN105072314A (en) 2015-08-13 2015-08-13 Virtual studio implementation method capable of automatically tracking objects

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510492552.3A CN105072314A (en) 2015-08-13 2015-08-13 Virtual studio implementation method capable of automatically tracking objects

Publications (1)

Publication Number Publication Date
CN105072314A true CN105072314A (en) 2015-11-18

Family

ID=54501585

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510492552.3A Pending CN105072314A (en) 2015-08-13 2015-08-13 Virtual studio implementation method capable of automatically tracking objects

Country Status (1)

Country Link
CN (1) CN105072314A (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106027855A (en) * 2016-05-16 2016-10-12 深圳迪乐普数码科技有限公司 Method and terminal for realizing virtual rocker arm
CN106295525A (en) * 2016-07-29 2017-01-04 深圳迪乐普数码科技有限公司 A kind of video image method of replacing and terminal
CN106296573A (en) * 2016-08-01 2017-01-04 深圳迪乐普数码科技有限公司 A kind of method realizing virtual screen curtain wall and terminal
CN106296683A (en) * 2016-08-09 2017-01-04 深圳迪乐普数码科技有限公司 A kind of generation method of virtual screen curtain wall and terminal
CN106485788A (en) * 2016-10-21 2017-03-08 重庆虚拟实境科技有限公司 A kind of game engine mixed reality image pickup method
CN107071365A (en) * 2017-05-08 2017-08-18 北京德火新媒体技术有限公司 A kind of meteorological data Virtual Service system and method
CN107483857A (en) * 2017-08-16 2017-12-15 卓智网络科技有限公司 Micro- class method for recording and device
CN107509068A (en) * 2017-09-13 2017-12-22 北京迪生数字娱乐科技股份有限公司 Virtual photography pre-production method and system
CN107888890A (en) * 2017-12-25 2018-04-06 河南新汉普影视技术有限公司 It is a kind of based on the scene packing device synthesized online and method
CN108259780A (en) * 2018-04-17 2018-07-06 北京艾沃次世代文化传媒有限公司 For the anti-interference special efficacy audio video synchronization display methods of virtual film studio
WO2018161817A1 (en) * 2017-03-08 2018-09-13 捷开通讯(深圳)有限公司 Storage medium, and method and system for simulating photography in virtual reality scene
CN109240499A (en) * 2018-08-31 2019-01-18 云南师范大学 Virtual camera simulates intersection control routine and method, information data processing terminal
CN109769082A (en) * 2018-12-11 2019-05-17 北京美吉克科技发展有限公司 A kind of virtual studio building system and method for recording based on VR tracking
CN110012223A (en) * 2019-03-22 2019-07-12 深圳技威时代科技有限公司 A kind of method and system for realizing cloud platform rotation based on image cropping
CN111243025A (en) * 2020-01-16 2020-06-05 任志忠 Method for positioning target in real-time synthesis of movie and television virtual shooting
CN111698390A (en) * 2020-06-23 2020-09-22 网易(杭州)网络有限公司 Virtual camera control method and device, and virtual studio implementation method and system
CN111885306A (en) * 2020-07-28 2020-11-03 重庆虚拟实境科技有限公司 Target object adjusting method, computer device, and storage medium
CN112153472A (en) * 2020-09-27 2020-12-29 广州博冠信息科技有限公司 Method and device for generating special picture effect, storage medium and electronic equipment
CN112330736A (en) * 2020-11-02 2021-02-05 北京虚拟动点科技有限公司 Scene picture shooting method and device, electronic equipment and storage medium
CN112969034A (en) * 2021-03-01 2021-06-15 华雁智能科技(集团)股份有限公司 Method and device for verifying point distribution scheme of camera device and readable storage medium
CN113160338A (en) * 2021-05-18 2021-07-23 视境技术(深圳)有限公司 AR/VR virtual reality fusion studio character space positioning
CN113923354A (en) * 2021-09-30 2022-01-11 卡莱特云科技股份有限公司 Video processing method and device based on multi-frame image and virtual background shooting system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1241876A2 (en) * 2001-03-13 2002-09-18 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and control program
CN1477856A (en) * 2002-08-21 2004-02-25 北京新奥特集团 True three-dimensional virtual studio system and its implement method
CN1741572A (en) * 2004-08-24 2006-03-01 西安宏源视讯设备有限责任公司 Auxiliary camera light graphic relay tracking method in virtual studio system
CN1741570A (en) * 2004-08-24 2006-03-01 西安宏源视讯设备有限责任公司 Instantaneous initialization positioning method in virtual studio system
US20070248283A1 (en) * 2006-04-21 2007-10-25 Mack Newton E Method and apparatus for a wide area virtual scene preview system
CN101370088A (en) * 2008-10-14 2009-02-18 西安宏源视讯设备有限责任公司 Scene matching apparatus and method for virtual studio
WO2013040516A1 (en) * 2011-09-14 2013-03-21 Motion Analysis Corporation Systems and methods for incorporating two dimensional images captured by a moving studio camera with actively controlled optics into a virtual three dimensional coordinate system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1241876A2 (en) * 2001-03-13 2002-09-18 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and control program
CN1477856A (en) * 2002-08-21 2004-02-25 北京新奥特集团 True three-dimensional virtual studio system and its implement method
CN1741572A (en) * 2004-08-24 2006-03-01 西安宏源视讯设备有限责任公司 Auxiliary camera light graphic relay tracking method in virtual studio system
CN1741570A (en) * 2004-08-24 2006-03-01 西安宏源视讯设备有限责任公司 Instantaneous initialization positioning method in virtual studio system
US20070248283A1 (en) * 2006-04-21 2007-10-25 Mack Newton E Method and apparatus for a wide area virtual scene preview system
CN101370088A (en) * 2008-10-14 2009-02-18 西安宏源视讯设备有限责任公司 Scene matching apparatus and method for virtual studio
WO2013040516A1 (en) * 2011-09-14 2013-03-21 Motion Analysis Corporation Systems and methods for incorporating two dimensional images captured by a moving studio camera with actively controlled optics into a virtual three dimensional coordinate system

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
乔宇 et al.: "Virtual Studio Technology", Journal of Chongqing University (Natural Science Edition) *
唐玉川: "Application of Virtual Studio Technology in TV Programme Production", Journal of Inner Mongolia Agricultural University (Social Science Edition) *
崔秀清: "Application and Implementation of Infinite Blue Box Technology in a Virtual Studio System", Electronics Production (电子制作) *
董武绍: "Virtual Studio Technology and Creation", 31 October 2014 *
鲁敏 et al.: "Infinite Blue Box Technology in Virtual Studio Systems", Journal of National University of Defense Technology *

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106027855B (en) * 2016-05-16 2019-06-25 深圳迪乐普数码科技有限公司 A kind of implementation method and terminal of virtual rocker arm
CN106027855A (en) * 2016-05-16 2016-10-12 深圳迪乐普数码科技有限公司 Method and terminal for realizing virtual rocker arm
CN106295525B (en) * 2016-07-29 2019-12-03 深圳迪乐普数码科技有限公司 A kind of video image method of replacing and terminal
CN106295525A (en) * 2016-07-29 2017-01-04 深圳迪乐普数码科技有限公司 A kind of video image method of replacing and terminal
CN106296573A (en) * 2016-08-01 2017-01-04 深圳迪乐普数码科技有限公司 A kind of method realizing virtual screen curtain wall and terminal
CN106296573B (en) * 2016-08-01 2019-08-06 深圳迪乐普数码科技有限公司 A kind of method and terminal for realizing virtual screen curtain wall
CN106296683A (en) * 2016-08-09 2017-01-04 深圳迪乐普数码科技有限公司 A kind of generation method of virtual screen curtain wall and terminal
CN106485788A (en) * 2016-10-21 2017-03-08 重庆虚拟实境科技有限公司 A kind of game engine mixed reality image pickup method
US11094125B2 (en) 2017-03-08 2021-08-17 JRD Communication (Shenzhen) Ltd. Storage medium, and method and system for simulating photography in virtual reality scene
WO2018161817A1 (en) * 2017-03-08 2018-09-13 捷开通讯(深圳)有限公司 Storage medium, and method and system for simulating photography in virtual reality scene
CN107071365A (en) * 2017-05-08 2017-08-18 北京德火新媒体技术有限公司 A kind of meteorological data Virtual Service system and method
CN107483857A (en) * 2017-08-16 2017-12-15 卓智网络科技有限公司 Micro- class method for recording and device
CN107509068A (en) * 2017-09-13 2017-12-22 北京迪生数字娱乐科技股份有限公司 Virtual photography pre-production method and system
CN107888890A (en) * 2017-12-25 2018-04-06 河南新汉普影视技术有限公司 It is a kind of based on the scene packing device synthesized online and method
CN108259780A (en) * 2018-04-17 2018-07-06 北京艾沃次世代文化传媒有限公司 For the anti-interference special efficacy audio video synchronization display methods of virtual film studio
CN109240499A (en) * 2018-08-31 2019-01-18 云南师范大学 Virtual camera simulates intersection control routine and method, information data processing terminal
CN109240499B (en) * 2018-08-31 2022-02-08 云南师范大学 Virtual camera simulation interaction control system and method, and information data processing terminal
CN109769082A (en) * 2018-12-11 2019-05-17 北京美吉克科技发展有限公司 A kind of virtual studio building system and method for recording based on VR tracking
CN110012223A (en) * 2019-03-22 2019-07-12 深圳技威时代科技有限公司 A kind of method and system for realizing cloud platform rotation based on image cropping
CN111243025A (en) * 2020-01-16 2020-06-05 任志忠 Method for positioning target in real-time synthesis of movie and television virtual shooting
CN111698390A (en) * 2020-06-23 2020-09-22 网易(杭州)网络有限公司 Virtual camera control method and device, and virtual studio implementation method and system
CN111698390B (en) * 2020-06-23 2023-01-10 网易(杭州)网络有限公司 Virtual camera control method and device, and virtual studio implementation method and system
CN111885306A (en) * 2020-07-28 2020-11-03 重庆虚拟实境科技有限公司 Target object adjusting method, computer device, and storage medium
CN112153472A (en) * 2020-09-27 2020-12-29 广州博冠信息科技有限公司 Method and device for generating special picture effect, storage medium and electronic equipment
CN112330736A (en) * 2020-11-02 2021-02-05 北京虚拟动点科技有限公司 Scene picture shooting method and device, electronic equipment and storage medium
CN112969034A (en) * 2021-03-01 2021-06-15 华雁智能科技(集团)股份有限公司 Method and device for verifying point distribution scheme of camera device and readable storage medium
CN113160338A (en) * 2021-05-18 2021-07-23 视境技术(深圳)有限公司 AR/VR virtual reality fusion studio character space positioning
CN113923354A (en) * 2021-09-30 2022-01-11 卡莱特云科技股份有限公司 Video processing method and device based on multi-frame image and virtual background shooting system
CN113923354B (en) * 2021-09-30 2023-08-01 卡莱特云科技股份有限公司 Video processing method and device based on multi-frame images and virtual background shooting system

Similar Documents

Publication Publication Date Title
CN105072314A (en) Virtual studio implementation method capable of automatically tracking objects
CN111698390B (en) Virtual camera control method and device, and virtual studio implementation method and system
US10425638B2 (en) Equipment and method for promptly performing calibration and verification of intrinsic and extrinsic parameters of a plurality of image capturing elements installed on electronic device
US10277890B2 (en) System and method for capturing and viewing panoramic images having motion parallax depth perception without image stitching
JP4153146B2 (en) Image control method for camera array and camera array
US20060165310A1 (en) Method and apparatus for a virtual scene previewing system
CN103477645B (en) Mobile device stereo camera shooting machine and image pickup method thereof
CN105530431A (en) Reflective panoramic imaging system and method
CN107003600A (en) Including the system for the multiple digital cameras for observing large scene
WO1995007590A1 (en) Time-varying image processor and display device
CN105324993B (en) The method and apparatus and computer readable recording medium storing program for performing that size to more projecting the content in arenas is normalized
CN2667827Y (en) Quasi-panorama surrounded visual reproducing system
CN105324994A (en) Method and system for generating multi-projection images
CN107592488A (en) A kind of video data handling procedure and electronic equipment
CN107172413A (en) Method and system for displaying video of real scene
CN108632538B (en) CG animation and camera array combined bullet time shooting system and method
CN111586304B (en) Panoramic camera system and method based on 5G and VR technology
CN111200686A (en) Photographed image synthesizing method, terminal, and computer-readable storage medium
JP7424076B2 (en) Image processing device, image processing system, imaging device, image processing method and program
CN116320363B (en) Multi-angle virtual reality shooting method and system
Kuchelmeister et al. Affect and place representation in immersive media: The Parragirls Past, Present project
CN213461928U (en) Panoramic camera and electronic device
CN112887653B (en) Information processing method and information processing device
CN112019747B (en) Foreground tracking method based on holder sensor
KR100579135B1 (en) Method for capturing convergent-type multi-view image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20151118)