CN105959595A - Virtuality to reality autonomous response method for virtuality and reality real-time interaction - Google Patents

Virtuality to reality autonomous response method for virtuality and reality real-time interaction

Info

Publication number
CN105959595A
CN105959595A (application CN201610365024.6A)
Authority
CN
China
Prior art keywords
image
virtual
real
camera
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610365024.6A
Other languages
Chinese (zh)
Inventor
黄民主
邵刚
张银峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hongyuan Video Communication Equipment Co Ltd Xian
Original Assignee
Hongyuan Video Communication Equipment Co Ltd Xian
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hongyuan Video Communication Equipment Co Ltd Xian filed Critical Hongyuan Video Communication Equipment Co Ltd Xian
Priority to CN201610365024.6A priority Critical patent/CN105959595A/en
Publication of CN105959595A publication Critical patent/CN105959595A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a virtuality-to-reality autonomous response technique that synthesizes, in real time, the image shot by a camera with three-dimensional computer graphics, forming a new television program production system. The method comprises the following steps: 1) determine the position of the blue-box coordinate center b(0,0): the relative position between the wide shooting angle of the auxiliary camera and the main camera is preset, and two adjacent image frames are input to the virtual-reality real-time interaction software; 2) determine the position of the target object b(x,y): with region A of step 1 as the foreground, the auxiliary camera shoots the image of the target object in real time, the image processing system computes in real time the displacement of the target object relative to the reference coordinate center b(0,0), and the position coordinate b(x,y) of the target object is obtained with one frame as the sampling unit; and 3) the three-dimensional rendering equipment renders and outputs the result.

Description

Virtual-to-real autonomous response method for real-time virtual-reality interaction
Technical field
The present invention relates to the field of virtual studio technology, and in particular to a virtual-to-real autonomous response method for real-time virtual-reality interaction.
Background art
With the development of advanced information technologies such as computer networks and three-dimensional graphics software, the way television programs are produced has changed greatly. Human visual and auditory effects and thinking can now be realized by devices that support autonomous response and interaction between the virtual and the real; such devices distill the logical thinking of human beings. The virtual studio is the concrete embodiment, in television program production, of combining virtual-real autonomous response interaction with human thinking. The main advantage of a virtual studio system is that it expresses news information more effectively and enhances the appeal and interactivity of the information. A traditional studio places many restrictions on program production. In a virtual studio system the set is a three-dimensional design built to scale: when the camera moves, the virtual set changes correspondingly with the foreground picture, which adds to the realism of the program. A virtual set is also cost-effective in many respects: scenes can be changed at any time, expenditure on studio decoration is reduced, and since sets need not be moved and stored, the pressure on staffing is eased. For a series whose background and camera positions are fixed, it can save a substantial amount of money. In addition, a virtual studio offers production advantages. When planning the layout of a program, production staff have wide freedom of choice and are not overly constrained by the set; the same studio can serve different programs, because backgrounds can be stored on disk. It allows creators to give full play to their artistic creativity and imagination, using the many existing three-dimensional animation packages to create high-quality backgrounds.
Summary of the invention
The present invention proposes a virtual-real autonomous response interaction technique that synthesizes the image shot by a camera with computer-generated three-dimensional graphics in real time, forming a new television program production system. The virtual-real autonomous response interaction device makes full use of computer three-dimensional graphics and video compositing: according to the position parameters of the camera, the perspective relationship of the three-dimensional virtual scene is kept consistent with the foreground, creating a lifelike, strongly three-dimensional television studio effect. It saves the expensive design, materials, construction and floor space of traditional television production based on building real sets, overcomes the drawbacks that a traditional set cannot be moved easily, lacks flexibility and prevents the studio floor from being reused, and meets the needs of television program production and broadcasting. Specifically, an auxiliary camera and an image acquisition and processing device are installed at a suitable position of the blue box in the studio. The auxiliary camera outputs an image of the foreground captured inside the blue box; the image acquisition and processing device acquires the image in real time at field frequency, performs image processing, extracts features from the image and converts them by calculation, obtaining the plane position coordinate parameters of the foreground presenter or object inside the blue box. The device feeds the output foreground plane position coordinate parameters to the three-dimensional rendering equipment of the virtual studio, where, through matrix-equation calculation, the positional relationship between the three-dimensional virtual scene and the real foreground is seamlessly matched, so that the real foreground blends naturally and appropriately into the correct spatial position of the three-dimensional virtual scene and the correct virtual three-dimensional occlusion relationship is reflected. The invention has high recognition accuracy, good stability and strong anti-interference capability, and is not affected by ambient light, temperature or sound waves.
To achieve the above object, the technical solution adopted by the present invention is as follows:
A virtual-to-real autonomous response method for real-time virtual-reality interaction comprises the following steps:
Step 1: determine the position of the blue-box coordinate center b(0,0). The relative position between the wide shooting angle of the auxiliary camera and the main camera is preset. From the image shot by the auxiliary camera, the region A occupied by the foreground inside the blue box is determined; the previous frame in which region A was captured serves as the template for the following frame, and these two adjacent frames are input to the virtual-reality real-time interaction software. From the image data obtained by the software, the template matching technique of digital images is used to detect, recognize and track the person or object in the real blue-box scene, achieving segmentation of the person or object in the real scene from the blue-box background, and then obtaining the projection coordinate of the person or object on the floor of the blue box. At the same time, the range of motion of the person or object inside the blue box is collected, and from this range the software computes and calibrates the plane position coordinate of the projection center as b(0,0); the blue-box coordinate center b(0,0) so determined is delivered to the image processing system of the auxiliary camera.
Step 2: determine the position of the target object b(x,y). With region A of step 1 as the foreground, the auxiliary camera shoots the image of the target object in real time, and the image processing system computes in real time the displacement of the target object relative to the reference coordinate center b(0,0), thereby obtaining the position coordinate b(x,y) of the target object with one frame as the sampling unit.
Step 3: the three-dimensional rendering equipment renders and outputs. From the position b(x,y) of the target object obtained in step 2 and the relative position between the wide shooting angle of the auxiliary camera and the main camera set in step 1, the depth-of-field position parameter of the target object is obtained. This position is input to the three-dimensional rendering equipment and the matrix equations are solved, so that the specific positional relationship of the target generated by the main camera in three-dimensional space is seamlessly matched with the concrete three-dimensional position coordinate of the foreground that the auxiliary camera obtains relative to the blue-box coordinate center.
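As an illustration of steps 1 and 2, the following Python sketch uses OpenCV template matching to locate region A from a reference frame in each subsequent frame and report the per-frame displacement b(x,y) relative to the calibrated center b(0,0). It is not part of the patent: the camera index, the rectangle standing in for region A and the function names are assumptions made for illustration only.

```python
import cv2
import numpy as np

def locate_foreground(frame, template):
    """Find the foreground region A in the current frame by template matching
    and return the top-left corner of the best match."""
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return max_loc  # (x, y) of the matched region

def track_displacement(video_source=0):
    cap = cv2.VideoCapture(video_source)     # auxiliary camera (index assumed)
    ok, ref_frame = cap.read()               # reference frame: foreground at b(0,0)
    if not ok:
        raise RuntimeError("auxiliary camera not available")

    # Region A in the reference frame, given here as a fixed rectangle for
    # illustration; in practice it comes from the blue-box segmentation of step 1.
    x0, y0, w, h = 300, 200, 120, 240
    template = ref_frame[y0:y0 + h, x0:x0 + w]
    b0 = np.array([x0, y0])                  # calibrated center b(0, 0)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        top_left = locate_foreground(frame, template)
        # Displacement of the target relative to b(0,0), one sample per frame.
        dx, dy = np.array(top_left) - b0
        print(f"b(x, y) = ({dx}, {dy})")
        # The previous frame's matched region can serve as the template for the next frame.
        template = frame[top_left[1]:top_left[1] + h,
                         top_left[0]:top_left[0] + w]

    cap.release()

if __name__ == "__main__":
    track_displacement()
```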
Further, the auxiliary camera is a color or black-and-white camera, an analog or digital camera, a CCD or CMOS camera or the like; the auxiliary camera is installed near the blue box, and all cameras capture images while aimed at the foreground.
Further, in step 1 the conversion method for the plane position coordinate of the projection center is as follows:
from the auxiliary camera, obtain the projection of the foreground inside the blue box, and obtain the horizontal coordinate points of the two ends of the projection;
from the horizontal coordinate points of the two ends of the projection, take their mean value as the plane position coordinate of the projection center.
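A minimal sketch of this conversion, assuming the foreground has already been segmented into a binary mask (the mask and its layout are assumptions made for illustration, not taken from the patent):

```python
import numpy as np

def projection_center_x(foreground_mask: np.ndarray) -> float:
    """Plane position of the projection center: the mean of the horizontal
    coordinates of the two ends of the foreground's floor projection.
    `foreground_mask` is a binary image (H x W) with non-zero foreground pixels."""
    columns = np.where(foreground_mask.any(axis=0))[0]  # columns containing foreground
    if columns.size == 0:
        raise ValueError("no foreground pixels in mask")
    left, right = columns.min(), columns.max()          # two ends of the projection
    return (left + right) / 2.0                         # mean value = projection center
```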
Further, the real-time interaction software comprises an image input unit, an image processing unit, a depth-of-field position data output unit and a virtual-reality real-time interaction algorithm unit. The image input unit feeds the image signal into the image processing unit. The image processing unit obtains, through image processing algorithms, the plane coordinate position and depth-of-field data of the foreground object inside the blue box; after calculation this is expressed as the plane coordinate position of the foreground object in a specified coordinate system, which the three-dimensional rendering engine of the virtual studio uses in its matching calculation to determine the foreground position for the virtual studio system.
Further, the image input unit may be an image capture card or a self-designed circuit, and is used to input images to the image processing unit in real time; the image processing unit may be a computer or a self-designed circuit with data processing capability, which, supported by the software algorithm, processes the data of the image coordinate points to obtain the position data of the foreground inside the blue box.
Further, the auxiliary camera supplies its input in the form of images; the image input unit continuously acquires, at field frequency, the video image data transmitted by the auxiliary camera and passes it to the image processing unit; the image processing unit, through the depth-of-field position recognition algorithm unit, obtains the coordinate position and depth-of-field data of the foreground object inside the blue box, determines the coordinate position of the foreground object in the specified coordinate system, and passes it through the data output unit to the three-dimensional rendering engine of the virtual studio for the matching calculation.
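The data flow just described could look roughly as follows; this is an assumed sketch only, with the camera index, the blue-color threshold and the rendering-workstation address invented for illustration, and a crude chroma-key centroid standing in for the depth-of-field position recognition unit.

```python
import json
import socket
import cv2
import numpy as np

# Hypothetical address of the virtual-studio 3D rendering workstation.
RENDER_ENGINE_ADDR = ("192.168.1.50", 9000)

def foreground_plane_position(frame):
    """Very simple stand-in for the position recognition unit: mask out
    blue-box pixels and return the centroid of what remains."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    blue = cv2.inRange(hsv, (100, 80, 80), (130, 255, 255))  # blue range is an assumption
    foreground = cv2.bitwise_not(blue)
    ys, xs = np.nonzero(foreground)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def stream_positions(camera_index=0):
    """Acquire frames from the auxiliary camera, compute a foreground position
    per frame, and send it to the rendering engine over UDP."""
    cap = cv2.VideoCapture(camera_index)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        pos = foreground_plane_position(frame)
        if pos is not None:
            packet = json.dumps({"x": pos[0], "y": pos[1]}).encode()
            sock.sendto(packet, RENDER_ENGINE_ADDR)
    cap.release()

if __name__ == "__main__":
    stream_positions()
```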
Further, the auxiliary camera may be installed at any position of the blue box, and captures images aimed at the foreground presenter or object inside the blue box.
Further, the depth-of-field position recognition algorithm unit uses the correlation between adjacent image data frames and the particular properties of the blue box to achieve segmentation of the foreground object from the background object, then obtains the projection center of the foreground on the floor of the blue box and, after conversion, the position coordinate of the projection center.
This technique mainly embodies the following three technical characteristics:
Immersion, also called presence, refers to the degree to which the user feels that he or she really exists in the simulated environment as the protagonist. An ideal simulated environment should make it difficult for the user to tell true from false, so that the user engages wholeheartedly with the three-dimensional virtual environment created by the computer: everything in that environment looks real, sounds real, moves like the real thing, and even smells and tastes real, exactly as in the real world.
Interactivity refers to the degree to which objects in the virtual scene can be manipulated by the foreground and how naturally (including in real time) feedback is obtained from the environment. For example, the user can reach out and grasp a virtual object in the scene directly with the hand; the audience then sees the user apparently holding the object, the user can feel its weight, and the object grasped in the field of view moves along with the hand.
Imagination emphasizes that real-time virtual-reality interaction technology should offer a broad space for imagination, widening the range of human cognition: it can reproduce environments that really exist, and it can also freely conceive environments that do not exist objectively or could never occur.
These three characteristics of the virtual-real autonomous response interaction technique make it possible to achieve production effects that a traditional studio cannot. Television production is no longer limited to what can be built and filmed: whatever can be imagined can be made, and the technique has therefore been welcomed by television production departments. Today, with skilled use of the technology, many broadcasters run several or even dozens of programs on one virtual studio system, which saves floor space and funds, cuts staffing, lowers production cost, and improves production efficiency and quality; its advantages become ever more evident. Its verisimilitude allows television programs to achieve the best visual effect and opens up greater space for program production.
The beneficial effects of the invention are:
1. The system can synthesize the image shot by a camera with computer-generated three-dimensional graphics in real time, forming a new television program production system. The virtual-real autonomous response interaction device makes full use of computer three-dimensional graphics and image compositing: according to the position parameters of the camera, the perspective relationship of the three-dimensional virtual scene is kept consistent with the foreground, creating a lifelike, strongly three-dimensional television studio effect. It saves the expensive design, materials, construction and floor space of traditional production based on building real sets, overcomes the drawbacks that a traditional set cannot be moved easily, lacks flexibility and prevents the studio floor from being reused, and meets the needs of television program production and broadcasting.
2. The result of the localization processing is usually that the foreground presenter can reasonably walk through a complex three-dimensional virtual scene, yielding a composite video picture in which foreground and virtual three-dimensional space have a genuine sense of depth, rather than a simple foreground superimposed on a virtual background, and without having to fake the occlusion of the foreground in the three-dimensional virtual scene with an occlusion key. At the same time, three-dimensional modeling can be used to achieve high-quality virtual clones, turning them into something the user can almost touch and shortening the distance in daily life between important people, places and activities (communities, workplaces, games, teaching and so on). With this technique we can have more surprising experiences in the virtual world, such as meeting and shaking hands in a virtual spatial scene.
Brief description of the drawings
Fig. 1 is a schematic diagram of the structure of the interaction software of the present invention;
Fig. 2 is a schematic diagram of the structure of an embodiment of the present invention;
Fig. 3 is a schematic diagram of virtual-reality real-time interaction device 1 of the present invention.
Detailed description of the invention
In order to make the above objects, features and advantages of the present invention clearer and easier to understand, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, a virtual-to-real autonomous response method for real-time virtual-reality interaction comprises the following steps:
Step 1: determine the position of the blue-box coordinate center b(0,0). The relative position between the wide shooting angle of the auxiliary camera and the main camera is preset. From the image shot by the auxiliary camera, the region A occupied by the foreground inside the blue box is determined; the previous frame in which region A was captured serves as the template for the following frame, and these two adjacent frames are input to the virtual-reality real-time interaction software. From the image data obtained by the software, the template matching technique of digital images is used to detect, recognize and track the person or object in the real blue-box scene, achieving segmentation of the person or object in the real scene from the blue-box background, and then obtaining the projection coordinate of the person or object on the floor of the blue box. At the same time, the range of motion of the person or object inside the blue box is collected, and from this range the software computes and calibrates the plane position coordinate of the projection center as b(0,0); the blue-box coordinate center b(0,0) so determined is delivered to the image processing system of the auxiliary camera.
Step 2: determine the position of the target object b(x,y). With region A of step 1 as the foreground, the auxiliary camera shoots the image of the target object in real time, and the image processing system computes in real time the displacement of the target object relative to the reference coordinate center b(0,0), thereby obtaining the position coordinate b(x,y) of the target object with one frame as the sampling unit.
Step 3: the three-dimensional rendering equipment renders and outputs. From the position b(x,y) of the target object obtained in step 2 and the relative position between the wide shooting angle of the auxiliary camera and the main camera set in step 1, the depth-of-field position parameter of the target object is obtained. This position is input to the three-dimensional rendering equipment and the matrix equations are solved, so that the specific positional relationship of the target generated by the main camera in three-dimensional space is seamlessly matched with the concrete three-dimensional position coordinate of the foreground that the auxiliary camera obtains relative to the blue-box coordinate center.
Further, the auxiliary camera is a color or black-and-white camera, an analog or digital camera, a CCD or CMOS camera or the like; the auxiliary camera is installed near the blue box, and all cameras capture images while aimed at the foreground.
Further, in step 1 the conversion method for the plane position coordinate of the projection center is as follows:
from the auxiliary camera, obtain the projection of the foreground inside the blue box, and obtain the horizontal coordinate points of the two ends of the projection;
from the horizontal coordinate points of the two ends of the projection, take their mean value as the plane position coordinate of the projection center.
Further, the real-time interaction software comprises an image input unit, an image processing unit, a depth-of-field position data output unit and a virtual-reality real-time interaction algorithm unit. The image input unit feeds the image signal into the image processing unit. The image processing unit obtains, through image processing algorithms, the plane coordinate position and depth-of-field data of the foreground object inside the blue box; after calculation this is expressed as the plane coordinate position of the foreground object in a specified coordinate system, which the three-dimensional rendering engine of the virtual studio uses in its matching calculation to determine the foreground position for the virtual studio system.
Further, the image input unit may be an image capture card or a partly self-designed circuit, and is used to input images to the image processing unit in real time; the image processing unit may be a computer or a partly self-designed circuit with data processing capability, which, supported by the software algorithm, processes the data of the image coordinate points to obtain the position data of the foreground inside the blue box.
The system also comprises an auxiliary camera installed at a suitable position near the blue box (or a green or other color box; the same applies below) of the virtual studio: aimed at the foreground (including the presenter or object) inside the blue box, it transmits the foreground image produced in real time to the position recognition device.
Image input unit: it may be an image capture card or a self-designed circuit, and is used to acquire images in real time and input them to the image processing unit.
Image processing unit: it may be a computer or a self-designed circuit with data processing capability; supported by the software algorithm, it processes the data of the image coordinate points and obtains the position data of the foreground inside the blue box.
Virtual-real autonomous response interaction software: it handles the output of the image input unit, the image processing unit and the depth-of-field position data, and transmits it over the network to the three-dimensional graphics rendering workstation of the virtual studio system; it is used to dynamically fuse the foreground into the three-dimensional virtual scene, to strengthen the realism of the foreground's depth-of-field position within the three-dimensional scene of the virtual studio system, and to correctly reflect the positional relationship of the foreground in the three-dimensional virtual scene.
In the virtual-real autonomous response interaction device of the virtual studio system, the auxiliary camera includes a color or black-and-white camera, an analog or digital camera, a CCD or CMOS camera or the like, installed near the blue box, with all cameras capturing images aimed at the foreground (the presenter or object inside the blue box). The image input unit feeds the image signal into the image processing unit. The image processing unit obtains, through image processing algorithms, the plane coordinate position and depth-of-field data of the foreground object inside the blue box; after calculation this is expressed as the plane coordinate position of the foreground object in a specified coordinate system, which the three-dimensional rendering engine of the virtual studio uses in its matching calculation to determine the foreground position for the virtual studio system.
In the virtual-real autonomous response interaction device of the virtual studio system, the virtual-real autonomous response interaction module processes the coordinate data of the auxiliary camera's video image by calculation, obtains and outputs the depth-of-field position data of a foreground (the presenter or object inside the blue box), and uses it for locating and matching the foreground position of the virtual studio system. The result of the localization processing is usually that the foreground presenter can reasonably walk through a complex three-dimensional virtual scene, yielding a composite video picture in which foreground and virtual three-dimensional space have a genuine sense of depth, rather than a simple foreground superimposed on a virtual background, and without faking the occlusion of the foreground in the three-dimensional scene with an occlusion key. The image processing algorithm makes use of factors such as the correlation between adjacent image data frames and the particular properties of the blue box to segment the foreground object from the background object, then obtains the projection center of the foreground on the floor of the blue box and, after conversion, the plane position coordinate of the projection center.
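A possible reading of this algorithm, sketched under stated assumptions (the color thresholds and the way the two cues are combined are illustrative choices, not taken from the patent): the blue-box chroma cue and the adjacent-frame difference cue are merged into one foreground mask, from which the floor projection center is derived as described above.

```python
import cv2
import numpy as np

def segment_foreground(prev_frame, frame,
                       blue_low=(100, 80, 80), blue_high=(130, 255, 255),
                       motion_thresh=15):
    """Combine two cues named in the description: the correlation between
    adjacent frames (frame difference) and the particularity of the blue box
    (chroma mask). Thresholds are illustrative assumptions."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    not_blue = cv2.bitwise_not(cv2.inRange(hsv, blue_low, blue_high))

    gray_prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    moving = cv2.threshold(cv2.absdiff(gray_prev, gray_curr),
                           motion_thresh, 255, cv2.THRESH_BINARY)[1]

    # Union of the chroma cue and the motion cue gives the foreground mask.
    return cv2.bitwise_or(not_blue, moving)

def floor_projection_center(mask):
    """Projection center of the foreground on the blue-box floor: project the
    mask onto the horizontal axis and average its two ends."""
    columns = np.where(mask.any(axis=0))[0]
    if columns.size == 0:
        return None
    return (int(columns.min()) + int(columns.max())) / 2.0
```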
The virtual-real autonomous response interaction device in the virtual studio system takes two forms. In the first form, the image input unit is an ordinary image capture card and the image processing unit is a computer; the virtual-real autonomous response interaction software calculates the depth-of-field position data in real time and sends it over the network to the three-dimensional virtual scene rendering computer in the virtual studio, where the foreground is matched with the three-dimensional virtual scene.
As shown in Fig. 3, in the second form the image input unit is a self-designed video input circuit with a data conversion function, and the image processing unit is a data processing circuit built around a DSP; the image processing software algorithm is stored, as a compiled DSP program file, in the program memory of the DSP circuit, and the depth-of-field position data obtained by real-time processing is passed over the rendering computer's bus to the rendering computer, where the three-dimensional virtual scene is matched with the foreground depth-of-field position data.
Embodiment one:
As shown in Fig. 2, a foreground camera 110 is installed in front of the blue box of the studio, and an auxiliary camera 102 is installed on the studio blue box 101. The auxiliary camera 102 is connected through a video cable 103 to the input of the video image acquisition module 105 in the virtual-reality real-time interaction system 104; the video image acquisition module 105 acquires the video signal of the auxiliary camera 102 in real time. The three-dimensional rendering machine 107 is connected to the virtual-reality real-time interaction system 104 through a network cable 106 and, at the same time, to the control machine 108 through the network cable 106; the foreground camera 110 is connected to the control machine 108 through a video cable 109. In operation, the three-dimensional position coordinate (0,0,0) of a center point is first determined in the studio blue box 101: of two adjacent frames collected from the auxiliary camera 102, the previous frame serves as the template for the following frame, and the two frames are input to the video capture card 105 in the virtual-reality real-time interaction system 104 to obtain the reference point. The image processing algorithm makes use of factors such as the correlation between adjacent image data frames and the particular properties of the blue box to segment the foreground object from the background object, then obtains the projection center of the foreground on the floor of the blue box and, after conversion, the plane position coordinate of the projection center. Using the three-dimensional position coordinate (0,0,0) of the center point, the three-dimensional position data (x, y, 0) of a specific target 111 in the studio blue box 101 relative to the blue-box coordinate center (0,0,0) is then obtained; the z value appears in the result of matching the foreground plane image captured by the main camera with the foreground depth-of-field x-y position parameters. The position of the specific target 111 inside the blue box is thereby obtained. The coordinate data of the specific target 111 is then put through the solution of the matrix equations, achieving, on the control machine 108, the seamless matching of the three-dimensional virtual scene with the depth-of-field position relationship of the real foreground from the foreground camera 110, so that the real foreground blends naturally and appropriately into the appropriate spatial position of the three-dimensional virtual scene and the correct virtual three-dimensional occlusion relationship is reflected.
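The patent does not spell out the matrix equations; as a hedged illustration only, the mapping of the blue-box coordinate (x, y, 0) of target 111 into virtual-scene coordinates can be modeled as one homogeneous transform, with the matrix entries below being placeholders rather than calibration data from the embodiment.

```python
import numpy as np

# Assumed calibration: a 4x4 homogeneous transform from blue-box floor
# coordinates (relative to the center b(0,0,0)) into the virtual scene's
# coordinate system. The numbers are placeholders for illustration.
BOX_TO_SCENE = np.array([
    [0.01, 0.0,  0.0,  1.5],   # scale box units to scene units, offset in x
    [0.0,  0.0,  0.01, 0.0],   # blue-box z (depth) maps to scene y
    [0.0,  0.01, 0.0, -2.0],   # blue-box y maps to scene z, offset
    [0.0,  0.0,  0.0,  1.0],
])

def box_to_scene(x, y, z=0.0):
    """Map a target position (x, y, z), measured relative to the blue-box
    coordinate center, into virtual-scene coordinates."""
    p = BOX_TO_SCENE @ np.array([x, y, z, 1.0])
    return p[:3] / p[3]

# Example: target 111 detected 120 units to the right of and 80 units in
# front of b(0,0,0) on the blue-box floor.
print(box_to_scene(120, 80))
```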
The above are only preferred embodiments of the present invention and do not limit the present invention in any form. Although the present invention has been disclosed above with preferred embodiments, they are not intended to limit it. Any person of ordinary skill in the art may, without departing from the scope of the technical solution of the present invention, use the methods and technical content disclosed above to make many possible changes and modifications to the technical solution of the present invention, or modify it into equivalent embodiments of equivalent change. Therefore, any simple amendment, equivalent change or modification made to the above embodiments according to the technical spirit of the present invention, without departing from the content of the technical solution of the present invention, still falls within the protection scope of the technical solution of the present invention.

Claims (8)

1. A virtual-to-real autonomous response method for real-time virtual-reality interaction, the implementation steps being as follows:
Step 1: determine the position of the blue-box coordinate center b(0,0). The relative position between the wide shooting angle of the auxiliary camera and the main camera is preset. From the image shot by the auxiliary camera, the region A occupied by the foreground inside the blue box is determined; the previous frame in which region A was captured serves as the template for the following frame, and these two adjacent frames are input to the virtual-reality real-time interaction software. From the image data obtained by the software, the template matching technique of digital images is used to detect, recognize and track the person or object in the real blue-box scene, achieving segmentation of the person or object in the real scene from the blue-box background, and then obtaining the projection coordinate of the person or object on the floor of the blue box. At the same time, the range of motion of the person or object inside the blue box is collected, and from this range the software computes and calibrates the plane position coordinate of the projection center as b(0,0); the blue-box coordinate center b(0,0) so determined is delivered to the image processing system of the auxiliary camera.
Step 2: determine the position of the target object b(x,y). With region A of step 1 as the foreground, the auxiliary camera shoots the image of the target object in real time, and the image processing system computes in real time the displacement of the target object relative to the reference coordinate center b(0,0), thereby obtaining the position coordinate b(x,y) of the target object with one frame as the sampling unit.
Step 3: the three-dimensional rendering equipment renders and outputs. From the position b(x,y) of the target object obtained in step 2 and the relative position between the wide shooting angle of the auxiliary camera and the main camera set in step 1, the depth-of-field position parameter of the target object is obtained. This position is input to the three-dimensional rendering equipment and the matrix equations are solved, so that the specific positional relationship of the target generated by the main camera in three-dimensional space is seamlessly matched with the concrete three-dimensional position coordinate of the foreground that the auxiliary camera obtains relative to the blue-box coordinate center.
2. The virtual-to-real autonomous response method for real-time virtual-reality interaction according to claim 1, characterized in that the auxiliary camera is a color or black-and-white camera, an analog or digital camera, a CCD or CMOS camera or the like; the auxiliary camera is installed near the blue box, and all cameras capture images while aimed at the foreground.
3. The virtual-to-real autonomous response method for real-time virtual-reality interaction according to claim 1, characterized in that in step 1 the conversion method for the plane position coordinate of the projection center is:
obtaining, from the auxiliary camera, the projection of the foreground inside the blue box, and obtaining the horizontal coordinate points of the two ends of the projection;
taking the mean value of the horizontal coordinate points of the two ends of the projection as the plane position coordinate of the projection center.
4. The virtual-to-real autonomous response method for real-time virtual-reality interaction according to claim 1, characterized in that the real-time interaction software comprises an image input unit, an image processing unit, a depth-of-field position data output unit and a virtual-reality real-time interaction algorithm unit; the image input unit is used to feed the image signal into the image processing unit; the image processing unit obtains, through image processing algorithms, the plane coordinate position and depth-of-field data of the foreground object inside the blue box, which after calculation is expressed as the plane coordinate position of the foreground object in a specified coordinate system, and the three-dimensional rendering engine of the virtual studio uses it in its matching calculation to determine the foreground position for the virtual studio system.
5. The virtual-to-real autonomous response method for real-time virtual-reality interaction according to claim 1, characterized in that the image input unit may be an image capture card or a self-designed circuit, used for the real-time acquisition of images and their input to the image processing unit; the image processing unit may be a computer or a self-designed circuit with data processing capability, which, supported by the software algorithm, processes the image coordinate data to obtain the position data of the foreground inside the blue box.
6. The virtual-to-real autonomous response method for real-time virtual-reality interaction according to claim 1, 4 or 5, characterized in that the auxiliary camera supplies its input in the form of images; the image input unit continuously acquires, at field frequency, the video image data transmitted by the auxiliary camera and passes it to the image processing unit; the image processing unit, through the depth-of-field position recognition algorithm unit, obtains the coordinate position and depth-of-field data of the foreground object inside the blue box, determines the coordinate position of the foreground object in the specified coordinate system, and passes it through the data output unit to the three-dimensional rendering engine of the virtual studio for the matching calculation.
7. The virtual-to-real autonomous response method for real-time virtual-reality interaction according to claim 1, characterized in that the auxiliary camera may be installed at any position of the blue box, and captures images aimed at the foreground presenter or object inside the blue box.
8. The virtual-to-real autonomous response method for real-time virtual-reality interaction according to claim 1, characterized in that the depth-of-field position recognition algorithm unit uses the correlation between adjacent image data frames and the particular properties of the blue box to achieve segmentation of the foreground object from the background object, then obtains the projection center of the foreground on the floor of the blue box and, after conversion, the position coordinate of the projection center.
CN201610365024.6A 2016-05-27 2016-05-27 Virtuality to reality autonomous response method for virtuality and reality real-time interaction Pending CN105959595A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610365024.6A CN105959595A (en) 2016-05-27 2016-05-27 Virtuality to reality autonomous response method for virtuality and reality real-time interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610365024.6A CN105959595A (en) 2016-05-27 2016-05-27 Virtuality to reality autonomous response method for virtuality and reality real-time interaction

Publications (1)

Publication Number Publication Date
CN105959595A true CN105959595A (en) 2016-09-21

Family

ID=56909887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610365024.6A Pending CN105959595A (en) 2016-05-27 2016-05-27 Virtuality to reality autonomous response method for virtuality and reality real-time interaction

Country Status (1)

Country Link
CN (1) CN105959595A (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106910253A (en) * 2017-02-22 2017-06-30 天津大学 Stereo-picture cloning process based on different cameral spacing
CN107231531A (en) * 2017-05-23 2017-10-03 青岛大学 A kind of networks VR technology and real scene shooting combination production of film and TV system
CN107423688A (en) * 2017-06-16 2017-12-01 福建天晴数码有限公司 A kind of method and system of the remote testing distance based on Unity engines
CN107888890A (en) * 2017-12-25 2018-04-06 河南新汉普影视技术有限公司 It is a kind of based on the scene packing device synthesized online and method
CN107919085A (en) * 2017-12-25 2018-04-17 河南新汉普影视技术有限公司 A kind of intelligent virtual conference system
CN107976811A (en) * 2017-12-25 2018-05-01 河南新汉普影视技术有限公司 A kind of simulation laboratory and its emulation mode based on virtual reality mixing
CN108134890A (en) * 2018-02-05 2018-06-08 广东佳码影视传媒有限公司 A kind of video capture device
CN108234902A (en) * 2017-05-08 2018-06-29 浙江广播电视集团 A kind of studio intelligence control system and method perceived based on target location
CN109089017A (en) * 2018-09-05 2018-12-25 宁波梅霖文化科技有限公司 Magic virtual bench
CN109752951A (en) * 2017-11-03 2019-05-14 腾讯科技(深圳)有限公司 Processing method, device, storage medium and the electronic device of control system
CN110036635A (en) * 2016-12-28 2019-07-19 微软技术许可有限责任公司 Alleviate the system, method and computer-readable medium of motion sickness via the display of the enhancing for passenger for using video capture device
CN110060354A (en) * 2019-04-19 2019-07-26 苏州梦想人软件科技有限公司 Positioning and exchange method of the true picture in Virtual Space
CN111178127A (en) * 2019-11-20 2020-05-19 青岛小鸟看看科技有限公司 Method, apparatus, device and storage medium for displaying image of target object
CN112929688A (en) * 2021-02-09 2021-06-08 歌尔科技有限公司 Live video recording method, projector and live video system
CN114845148A (en) * 2022-04-29 2022-08-02 深圳迪乐普数码科技有限公司 Interaction control method and device for host to virtual object in virtual studio


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101110908A (en) * 2007-07-20 2008-01-23 西安宏源视讯设备有限责任公司 Foreground depth of field position identification device and method for virtual studio system
CN102118567A (en) * 2009-12-30 2011-07-06 新奥特(北京)视频技术有限公司 Virtual sports system in split mode
CN203933781U (en) * 2014-06-27 2014-11-05 西安宏源视讯设备有限责任公司 Virtual to the autonomous responding system of reality in a kind of virtual reality real-time, interactive
CN104660872A (en) * 2015-02-14 2015-05-27 赵继业 Virtual scene synthesis system and method
CN204836315U (en) * 2015-04-30 2015-12-02 江苏卡罗卡国际动漫城有限公司 Virtual studio system based on AR technique
CN105306862A (en) * 2015-11-17 2016-02-03 广州市英途信息技术有限公司 Scenario video recording system and method based on 3D virtual synthesis technology and scenario training learning method

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110036635A (en) * 2016-12-28 2019-07-19 微软技术许可有限责任公司 Alleviate the system, method and computer-readable medium of motion sickness via the display of the enhancing for passenger for using video capture device
US11057574B2 (en) 2016-12-28 2021-07-06 Microsoft Technology Licensing, Llc Systems, methods, and computer-readable media for using a video capture device to alleviate motion sickness via an augmented display for a passenger
CN106910253B (en) * 2017-02-22 2020-02-18 天津大学 Stereo image cloning method based on different camera distances
CN106910253A (en) * 2017-02-22 2017-06-30 天津大学 Stereo-picture cloning process based on different cameral spacing
CN108234902A (en) * 2017-05-08 2018-06-29 浙江广播电视集团 A kind of studio intelligence control system and method perceived based on target location
CN107231531A (en) * 2017-05-23 2017-10-03 青岛大学 A kind of networks VR technology and real scene shooting combination production of film and TV system
CN107423688A (en) * 2017-06-16 2017-12-01 福建天晴数码有限公司 A kind of method and system of the remote testing distance based on Unity engines
CN109752951A (en) * 2017-11-03 2019-05-14 腾讯科技(深圳)有限公司 Processing method, device, storage medium and the electronic device of control system
US11275239B2 (en) 2017-11-03 2022-03-15 Tencent Technology (Shenzhen) Company Limited Method and apparatus for operating control system, storage medium, and electronic apparatus
CN109752951B (en) * 2017-11-03 2022-02-08 腾讯科技(深圳)有限公司 Control system processing method and device, storage medium and electronic device
CN107976811B (en) * 2017-12-25 2023-12-29 河南诺控信息技术有限公司 Virtual reality mixing-based method simulation laboratory simulation method of simulation method
CN107976811A (en) * 2017-12-25 2018-05-01 河南新汉普影视技术有限公司 A kind of simulation laboratory and its emulation mode based on virtual reality mixing
CN107919085A (en) * 2017-12-25 2018-04-17 河南新汉普影视技术有限公司 A kind of intelligent virtual conference system
CN107888890A (en) * 2017-12-25 2018-04-06 河南新汉普影视技术有限公司 It is a kind of based on the scene packing device synthesized online and method
CN108134890A (en) * 2018-02-05 2018-06-08 广东佳码影视传媒有限公司 A kind of video capture device
CN108134890B (en) * 2018-02-05 2023-12-19 广东佳码影视传媒有限公司 Virtual playing device
CN109089017A (en) * 2018-09-05 2018-12-25 宁波梅霖文化科技有限公司 Magic virtual bench
CN110060354A (en) * 2019-04-19 2019-07-26 苏州梦想人软件科技有限公司 Positioning and exchange method of the true picture in Virtual Space
CN110060354B (en) * 2019-04-19 2023-08-04 苏州梦想人软件科技有限公司 Positioning and interaction method of real image in virtual space
CN111178127A (en) * 2019-11-20 2020-05-19 青岛小鸟看看科技有限公司 Method, apparatus, device and storage medium for displaying image of target object
CN111178127B (en) * 2019-11-20 2024-02-20 青岛小鸟看看科技有限公司 Method, device, equipment and storage medium for displaying image of target object
CN112929688A (en) * 2021-02-09 2021-06-08 歌尔科技有限公司 Live video recording method, projector and live video system
CN112929688B (en) * 2021-02-09 2023-01-24 歌尔科技有限公司 Live video recording method, projector and live video system
CN114845148A (en) * 2022-04-29 2022-08-02 深圳迪乐普数码科技有限公司 Interaction control method and device for host to virtual object in virtual studio
CN114845148B (en) * 2022-04-29 2024-05-03 深圳迪乐普数码科技有限公司 Interaction control method and device for host in virtual studio to virtual object

Similar Documents

Publication Publication Date Title
CN105959595A (en) Virtuality to reality autonomous response method for virtuality and reality real-time interaction
CN103500465B (en) Ancient cultural relic scene fast rendering method based on augmented reality technology
CN104392045B (en) A kind of real time enhancing virtual reality system and method based on intelligent mobile terminal
CN106157354B (en) A kind of three-dimensional scenic switching method and system
CN105931288A (en) Construction method and system of digital exhibition hall
CN105303600A (en) Method of viewing 3D digital building by using virtual reality goggles
CN105654471A (en) Augmented reality AR system applied to internet video live broadcast and method thereof
CN105488771B (en) Light field image edit methods and device
CN109829976A (en) One kind performing method and its system based on holographic technique in real time
CN102393970A (en) Object three-dimensional modeling and rendering system as well as generation and rendering methods of three-dimensional model
CN107154197A (en) Immersion flight simulator
CN106600688A (en) Virtual reality system based on three-dimensional modeling technology
CN108043027B (en) Storage medium, electronic device, game screen display method and device
CN109345635A (en) Unmarked virtual reality mixes performance system
CN106899782A (en) A kind of method for realizing interactive panoramic video stream map
Nguyen et al. Real-time 3D human capture system for mixed-reality art and entertainment
CN106780759A (en) Method, device and the VR systems of scene stereoscopic full views figure are built based on picture
CN107862718A (en) 4D holographic video method for catching
WO2022055367A1 (en) Method for emulating defocus of sharp rendered images
CN106604087B (en) A kind of rendering implementation method of panorama live streaming
CN102118568B (en) Graphics generation system for sports competitions
CN109389681A (en) A kind of indoor decoration design method and system based on VR
CN102438108B (en) Film processing method
CN102118576B (en) Method and device for color key synthesis in virtual sports system
Wu et al. Digital museum for traditional culture showcase and interactive experience based on virtual reality

Legal Events

Date / Code / Title / Description
C06: Publication
PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20160921)