CN107728782A - Interaction method, interactive system, and server - Google Patents
- Publication number: CN107728782A (application CN201710855846.7A)
- Authority
- CN
- China
- Prior art keywords
- information
- image
- physical object
- physical
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
- H04N21/23412—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
- H04N21/23418—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
Abstract
An interaction method includes: identifying a target object in a scene from an image, wherein the target object includes a physical target object in the scene; identifying state/posture information of the physical target object; identifying position information of the physical target object; obtaining corresponding special-effect content according to the position information and the state/posture information of the physical target object; and outputting video content that includes the special-effect content. The present invention also provides an interactive system and a server. After recognizing the physical target object in the scene, the interaction method, interactive system, and server obtain the corresponding special-effect content according to the object's position information and state/posture information, so that the special-effect content can be superimposed on the physical target object. The user can then watch the corresponding special effect, which helps improve the user's sense of immersion.
Description
Technical field
The present invention relates to data processing technology, and more particularly to an interaction method, an interactive system, and a server based on augmented reality.
Background technology
AR (augmented reality) games are currently based mainly on mobile terminals, whose advantage is that they are light and convenient; the drawback is that they are not immersive enough to draw the user into the virtual world, so the user needs to wear AR glasses. However, the visual range and viewing angle of existing AR glasses are limited, and it is difficult for several people to experience the same AR scene together, which greatly weakens the sense of immersion that virtual reality builds and reduces the quality of the user experience. Moreover, as the number of participants increases, the equipment cost also rises linearly.
Summary of the invention
In view of the foregoing, it is necessary to provide an interaction method, an interactive system, and a server that improve the user experience.
An interaction method includes:
identifying a target object in a scene from an image, wherein the target object includes a physical target object in the scene;
identifying state/posture information of the physical target object;
identifying position information of the physical target object;
obtaining corresponding special-effect content according to the position information and the state/posture information of the physical target object;
outputting video content that includes the special-effect content.
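For illustration only, the claimed steps can be sketched as the following Python fragment. All function names and the dictionary-based "image" are hypothetical stand-ins for the disclosed identification and rendering stages, not part of the patent itself:

```python
# Minimal sketch of the claimed interaction pipeline (all names hypothetical).

def identify_target(image):
    """Identify the physical target object in the scene image (stubbed)."""
    return image.get("object")            # e.g. "toy_gun"

def identify_state(image):
    """Identify the object's state/posture information (stubbed)."""
    return image.get("state")             # e.g. "trigger_pulled"

def identify_position(image):
    """Identify the object's position in the scene (stubbed)."""
    return image.get("position")          # e.g. (x, y, z)

def select_effect(position, state):
    """Map position + state/posture to special-effect content."""
    if state == "trigger_pulled":
        return {"effect": "muzzle_flash", "anchor": position}
    return None

def render_frame(image):
    """Output video content with the special effect overlaid on the object."""
    obj = identify_target(image)
    if obj is None:
        return {"video": image, "effect": None}
    effect = select_effect(identify_position(image), identify_state(image))
    return {"video": image, "effect": effect}

frame = {"object": "toy_gun", "state": "trigger_pulled", "position": (1.0, 0.5, 2.0)}
out = render_frame(frame)
```

In a real system each stub would be backed by the depth-based identification the description details further on; the sketch only shows how the five claimed steps chain together.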
Further, in the interaction method, identifying the target object in the scene includes:
obtaining an image containing depth-feature information of the physical target object;
identifying the corresponding physical target object according to the depth-feature information in the image.
Further, in the interaction method, the physical target object is provided with corresponding color information, and identifying the corresponding physical target object according to the depth-feature information in the image further includes:
identifying a first target object in the scene according to the depth-feature information;
obtaining color information of the first target object;
obtaining a corresponding second target object according to the color information;
judging whether the first target object and the second target object are identical;
when the first target object is identical to the second target object, determining that the first target object is the physical target object.
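The two-stage confirmation above (depth features propose a first target object, the color marker proposes a second, and a match confirms the physical target object) can be sketched as follows. The templates, tolerance, and color table are invented for illustration and are not taken from the patent:

```python
# Hypothetical depth templates: (overall size, max depth variation) per object.
DEPTH_TEMPLATES = {"toy_airplane": (30, 8), "toy_gun": (25, 5)}
# Hypothetical color marker database: color marker -> object name.
COLOR_DATABASE = {"red": "toy_airplane", "blue": "toy_gun"}

def match_depth(size, depth_variation, tolerance=2):
    """First target object: the stored 3-D contour template that fits."""
    for name, (s, d) in DEPTH_TEMPLATES.items():
        if abs(size - s) <= tolerance and abs(depth_variation - d) <= tolerance:
            return name
    return None

def identify(size, depth_variation, color):
    first = match_depth(size, depth_variation)   # from depth features
    second = COLOR_DATABASE.get(color)           # from the color marker
    if first is not None and first == second:
        return first                             # confirmed physical target object
    return None                                  # mismatch: keep scanning

confirmed = identify(29, 7, "red")               # depth and color agree
```

When the two candidates disagree, the sketch returns `None`, mirroring the description's behavior of moving on to other objects rather than accepting an unconfirmed identification.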
Further, identifying the target object in the scene from the image includes:
performing a pre-identification operation on the image according to reference information, to judge whether the image contains a physical target object;
obtaining an image containing depth-feature information of the physical target object;
identifying the corresponding physical target object according to the depth-feature information in the image.
Further, performing the pre-identification operation on the image according to the reference information includes:
judging whether the image contains a corresponding color marker, wherein the physical target object in the scene is provided with a corresponding color marker; or
judging whether the image contains infrared light of a corresponding frequency, wherein the physical target object in the scene is provided with an emitter that flashes infrared light at the corresponding frequency.
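A sketch of this pre-identification filter: frames showing neither a known color marker nor a known infrared flashing frequency are skipped before the more expensive depth identification. The marker and frequency values are invented for illustration:

```python
KNOWN_MARKERS = {"red", "blue"}      # hypothetical color markers on toys
KNOWN_IR_FREQS = {10.0, 12.5}        # Hz; each emitter flashes at its own rate

def pre_identify(frame):
    """Return True if the frame may contain a physical target object."""
    if frame.get("colors", set()) & KNOWN_MARKERS:
        return True                  # color marker present
    if frame.get("ir_freq") in KNOWN_IR_FREQS:
        return True                  # infrared light of a known frequency
    return False                     # very likely no target: skip depth step

frames = [
    {"colors": {"green"}},           # skipped
    {"colors": {"red"}},             # passes on the color marker
    {"ir_freq": 12.5},               # passes on the infrared frequency
]
candidates = [f for f in frames if pre_identify(f)]
```

Only the surviving `candidates` would be handed to the depth-feature identification, which is the time saving the description claims.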
Further, in the interaction method, identifying the state/posture information of the physical target object includes:
obtaining first depth-feature information of the physical target object in a first image frame;
obtaining second depth-feature information of the physical target object in a second image frame;
judging whether the change between the first depth-feature information and the second depth-feature information is within a preset range;
when the change between the first depth-feature information and the second depth-feature information is within the preset range, determining the state/posture information of the physical target object.
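The two-frame check above can be sketched as comparing the change between the first and second depth-feature information against a preset range. The feature vectors and threshold below are illustrative only:

```python
def state_changed(first_depth, second_depth, preset_range=(0.5, 3.0)):
    """Return True when the depth-feature change falls within the preset range."""
    change = sum(abs(a - b) for a, b in zip(first_depth, second_depth))
    low, high = preset_range
    return low <= change <= high

frame1 = [10.0, 4.0, 2.0]   # first depth-feature information (e.g. trigger region)
frame2 = [10.0, 4.0, 3.2]   # second frame: the trigger region has moved
fired = state_changed(frame1, frame2)
```

The lower bound rejects sensor noise (no real movement) and the upper bound rejects implausibly large jumps, which is one plausible reading of "within a preset range".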
Further, in the interaction method, identifying the state/posture information of the physical target object includes:
obtaining action information of a user in the scene;
determining the state/posture information of the physical target object according to the action information of the user.
Further, in the interaction method, obtaining the corresponding special-effect content according to the position information and the state/posture information of the physical target object includes:
calculating a physical effect between the physical target object and a virtual target object according to the position information and the state/posture information of the physical target object;
selecting the corresponding special-effect content according to the physical effect.
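A sketch of mapping a computed physical effect between a physical target object and a virtual target object to special-effect content. The collision rule and the effect table are invented for illustration:

```python
import math

# Hypothetical table: physical effect -> special-effect content.
EFFECTS = {"collision": "spark_burst", "near_miss": "whoosh"}

def physical_effect(obj_pos, obj_state, virtual_pos, radius=1.0):
    """Compute the physical effect from position and state/posture."""
    dist = math.dist(obj_pos, virtual_pos)
    if obj_state == "moving" and dist <= radius:
        return "collision"
    if obj_state == "moving" and dist <= 2 * radius:
        return "near_miss"
    return None

def select_effect(obj_pos, obj_state, virtual_pos):
    """Select the special-effect content corresponding to the physical effect."""
    return EFFECTS.get(physical_effect(obj_pos, obj_state, virtual_pos))

effect = select_effect((0.0, 0.0, 0.0), "moving", (0.5, 0.0, 0.0))
```

The split into `physical_effect` and `select_effect` mirrors the two claimed sub-steps: first a physics calculation, then a lookup of the matching effect content.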
Further, in the interaction method, outputting the video content that includes the special-effect content includes:
obtaining position and angle information of a user in the scene;
outputting the video content corresponding to the user according to the position and angle information.
Further, in the interaction method, outputting the video content that includes the special-effect content includes:
outputting the video content to each head-mounted device by broadcasting, so that each head-mounted device processes the video content according to its own position and angle information to obtain the corresponding video content.
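A sketch of the broadcast variant: the server sends one shared video content to every head-mounted device, and each device derives its own view from its position and angle. The per-device "processing" is a trivial placeholder for a real reprojection:

```python
def process_locally(video_content, pose):
    """Placeholder for reprojecting shared content into this device's view."""
    position, angle = pose
    return {"content": video_content, "position": position, "angle": angle}

def broadcast(video_content, headsets):
    """Send the same content to each headset; each derives its own view."""
    return {hid: process_locally(video_content, pose)
            for hid, pose in headsets.items()}

headsets = {"hmd-1": ((0, 0), 0), "hmd-2": ((3, 1), 90)}
views = broadcast("scene_with_effects", headsets)
```

The design point the claim makes is that the server sends identical content once, and the per-user view adaptation is pushed onto each head-mounted device's own computing unit.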
A server includes a processor and a memory, the memory storing a number of programs executable by the processor to implement the following steps:
identifying a target object in a scene from an image, wherein the target object includes a physical target object in the scene;
identifying state/posture information of the physical target object;
identifying position information of the physical target object;
obtaining corresponding special-effect content according to the position information and the state/posture information of the physical target object;
outputting video content that includes the special-effect content.
An interactive system includes:
a position identification unit configured to identify a target object in a scene from an image, wherein the target object includes a physical target object in the scene;
the position identification unit being further configured to identify the state/posture information of the physical target object;
the position identification unit being further configured to identify the position information of the physical target object;
a control processing unit configured to obtain corresponding special-effect content according to the position information and the state/posture information of the physical target object;
the control processing unit being further configured to output video content that includes the special-effect content.
After recognizing the target object in the scene, the above interaction method, interactive system, and server obtain the corresponding special-effect content according to the position information and state/posture information of the physical target object, so that the special-effect content can be superimposed on the physical target object. The user can then watch the corresponding special effect, which helps improve the user experience.
Brief description of the drawings
Technical scheme in order to illustrate the embodiments of the present invention more clearly, it is required in being described below to embodiment to use
Accompanying drawing is briefly described, it should be apparent that, drawings in the following description are some embodiments of the present invention, general for this area
For logical technical staff, on the premise of not paying creative work, other accompanying drawings can also be obtained according to these accompanying drawings.
Fig. 1 is a block diagram of a preferred embodiment of the interactive system of the present invention.
Fig. 2 is a block diagram of a preferred embodiment of the processing device of Fig. 1.
Fig. 3 is a schematic diagram of an application scenario of the interactive system of the present invention.
Fig. 4 is a flow chart of a preferred embodiment of the interaction method of the present invention.
Description of main element symbols
First display device | 100 |
Capture device | 106 |
Projection device | 108 |
Processing device | 104 |
Head-mounted device | 300 |
Second display device | 302 |
Positioning device | 304 |
Computing device | 306 |
User | 204 |
Physical target object | 202 |
Virtual target object | 208 |
Scene | 200 |
Processor | 150 |
Memory | 152 |
Position identification unit | 140 |
Action recognition unit | 142 |
Control processing unit | 144 |
Image output unit | 146 |
Model database | 158 |
Special-effect database | 156 |
Color marker database | 154 |
Embodiments
To make the above objects, features, and advantages of the present invention easier to understand, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, provided no conflict arises, the embodiments of the present application and the features in the embodiments can be combined with one another.
Many specific details are set forth in the following description to facilitate a thorough understanding of the present invention. The described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present invention. The terms used herein in the description of the present invention are for the purpose of describing specific embodiments only and are not intended to limit the present invention.
Referring to Fig. 1, a preferred embodiment of the interactive system of the present invention may include a first display device 100, a projection device 108, a processing device 104, a capture device 106, and a head-mounted device 300. The head-mounted device 300 may include a second display device 302, a positioning device 304, and a computing device 306. The processing device 104 can output video content corresponding to the scene 200 to one or more of the first display device 100, the second display device 302, and the projection device 108, to achieve an augmented-reality or virtual-reality effect. In this embodiment, the head-mounted device 300 may be a VR (Virtual Reality) helmet or head-mounted display, VR glasses, AR (Augmented Reality) glasses, or 3D stereoscopic glasses.
Referring also to Fig. 2, the capture device 106 is used to shoot the scene 200 and output the corresponding image to the processing device 104. The processing device 104 can generate the corresponding video content according to the image.
It can be understood that the scene 200 may include one or more physical target objects 202. A physical target object may be a physical toy (such as a toy airplane, a toy gun, a toy tank, a remote-control handle, an obstacle, or a ring). Of course, the scene 200 may also include several users 204; a user 204 can interact with the video content through a physical target object 202 or directly, so that several people can experience the video content together. In this embodiment, the video content includes one or more virtual target objects 208. In another embodiment, a physical target object 202 in the scene 200 can also be converted into a virtual target object, and the video content may include the converted virtual target object.
It can be understood that the capture device 106 may include several cameras, which may be arranged at different positions of the scene 200 (for example, along its circumference), so that the capture device 106 can shoot the scene 200 from different angles, thereby obtaining images of each object in the scene 200 as well as images of the physical target objects from different angles.
It can be understood that, in one embodiment, the capture device 106 may be a depth camera, which can obtain depth-feature information on the distance from each point in the scene 200 to the camera.
Referring also to Fig. 3, in this embodiment the processing device 104 may be a server, a data center, or the like, and may include a processor 150 and a memory 152.
The processing device 104 is a device that can automatically perform numerical computation and/or information processing according to preset or stored instructions. Its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA), a digital signal processor (Digital Signal Processor, DSP), an embedded device, and the like.
The network in which the processing device 104 resides includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (Virtual Private Network, VPN), and the like; for example, the processing device 104 can access the Internet, a wide area network, a metropolitan area network, a local area network, or a VPN through a network interface (not shown).
The memory 152 may be any of several types of storage device and is used to store various kinds of data. For example, it may be the internal memory of the processing device 104 or a data center, or a storage card external to the processing device 104, such as a flash memory, an SM card (Smart Media Card), or an SD card (Secure Digital Card). In this embodiment, the data stored in the memory 152 may include a color marker database 154, a special-effect database 156, and a model database 158.
The processor 150 is used to execute the various kinds of software installed in the processing device 104, such as an operating system, messaging software, image-processing software, and motion-recognition software. The processor 150 includes, but is not limited to, a central processing unit (Central Processing Unit, CPU), a micro-controller unit (Micro Controller Unit, MCU), or another device for interpreting computer instructions and processing data in computer software, and may include one or more microprocessors or digital processors.
The processing device 104 may include one or more modules or units, which can be stored in the memory 152 and configured to be executed by one or more processors (one processor 150 in this embodiment) to carry out the present invention. For example, the processing device 104 may include a position identification unit 140, an action recognition unit 142, a control processing unit 144, and an image output unit 146.
The position identification unit 140 is used to identify a number of target objects in the scene 200. In this embodiment, the target objects may include the physical target objects 202 and the users 204 in the scene 200.
It can be understood that the position identification unit 140 can identify each target object in the scene 200 according to the depth-feature information contained in the image captured by the capture device 106. The target objects include, but are not limited to, the users and the one or more physical target objects 202 (such as physical toys) in the scene 200. The depth-feature information includes, but is not limited to, feature information of distinctly specific shapes, such as the inner and outer rings of a donut or a square box. The position identification unit 140 can therefore identify the corresponding physical target object according to the depth-feature information contained in the image.
For example, the position identification unit 140 can recognize a physical target object that has similar depth-feature information in the image, or identify the physical target object contained in the image according to the depth-feature information corresponding to that object; for instance, when the image contains the depth-feature information corresponding to a physical toy, the position identification unit 140 can recognize the part of the image corresponding to that toy. It can be understood that the depth-feature information of an object (its three-dimensional contour feature) can be stored in the system in advance as parameters, where the parameters of the object's three-dimensional feature may include the overall size of the object, the region of maximum depth variation (the region of greatest surface concavity and convexity), and the three-dimensional shape of features with large depth variation. Of course, the position identification unit 140 can also identify the users in the scene 200. It can be understood that the depth-feature information may also include strongly reflective solids that cause overexposure of the depth-capture device, in addition to the three-dimensional contour features (depth-feature information) of objects. The processing device 104 can control the capture device 106 to take an overexposed shot to obtain a preset image of the scene 200, so as to extract the strongly reflective solids from the overexposed image. In this embodiment, the preset image may be an image captured when a large number of interfering objects have been removed from the scene 200, or when only the target objects 202, or a few objects still to be identified or interfering objects, remain in the scene.
In one embodiment, the processing device 104 can first identify a reference position in the preset image. In this embodiment, the reference position in the preset image may be one or more positions of the strongly reflective solids that cause overexposure of the camera, a strongly reflective solid being one that can overexpose the depth map at certain positions in the depth-capture device. Since the images are continuous frames, the processing device 104 can carry out the identification operation, in the images transmitted by the capture device 106, within a range near the reference position of the preset image; for example, it can identify near the reference position of the preset image in the image, such as by the depth-feature information of the above-mentioned objects, thereby achieving fast three-dimensional feature screening and comparison.
Preferably, the action recognition unit 142 also identifies, from the identified physical target object, one or more positions of possibly overexposed strongly reflective solids of that object, and can use the identified positions as reference positions, so as to update the reference positions.
The color marker database 154 stores the visible-light color information corresponding to each physical target object; for example, a toy airplane corresponds to a first color, a toy gun to a second color, a toy tank to a third color, and so on. A physical target object may be provided with a corresponding color marker. When the capture device 106 shoots the scene 200, it can capture the color information of the physical target objects in the scene 200; that is, the image captured by the capture device 106 can also contain the corresponding color information.
In one embodiment, a physical target object 202 in the scene 200 may be provided with an infrared emitter that can emit infrared light with a wavelength of 780 nm to 1064 nm. For example, the wavelength of the infrared light emitted by the infrared emitter includes, but is not limited to, 840 nm, 940 nm, or 970 nm. It can be understood that the infrared emitter can flash at a specific frequency f, and the flashing frequencies of the different infrared emitters can also differ. The processing device 104 can judge whether a physical target object exists in an image transmitted by the capture device 106 by identifying whether the image contains infrared light of a specific frequency.
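One plausible way to recover an emitter's flashing frequency f from a sequence of frames is to count on/off transitions of the infrared spot over a known frame rate. The frame rate and frequency below are illustrative, not taken from the patent:

```python
def flicker_frequency(ir_samples, fps):
    """Estimate the flashing frequency from per-frame on/off samples."""
    transitions = sum(1 for a, b in zip(ir_samples, ir_samples[1:]) if a != b)
    duration = (len(ir_samples) - 1) / fps
    return transitions / (2 * duration)   # two transitions per flash cycle

# 30 fps, emitter flashing at ~5 Hz -> on/off toggles every 3 frames
samples = [1, 1, 1, 0, 0, 0] * 5
freq = flicker_frequency(samples, fps=30)
```

The estimate is approximate at the ends of the sample window; a real system would also need the camera's frame rate to exceed twice the fastest flashing frequency to avoid aliasing.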
In this embodiment, the position identification unit 140 can also perform a pre-identification operation according to reference information (including but not limited to color markers and infrared flashing frequencies), to determine in advance whether a corresponding target object exists in the image. Once a target object exists in the image, the position identification unit 140 then identifies the specific physical target object according to the depth-feature information.
The pre-identification operation includes, but is not limited to, judging whether the image contains a corresponding color marker and/or infrared flashing frequency. It can be understood that when the image contains a corresponding color marker and/or infrared flashing frequency, the image may contain a corresponding physical target object, and the position identification unit 140 can then perform the depth-identification operation on the image; when the image contains neither a corresponding color marker nor an infrared flashing frequency, it is highly likely that the image contains no corresponding physical target object. Through the pre-identification operation, images containing no physical target object can be skipped, which helps reduce the time spent identifying physical target objects.
In one embodiment, after identifying according to the depth-feature information, the position identification unit 140 can further perform a confirmation operation according to the reference information, which helps improve the accuracy of identification. Preferably, the position identification unit 140 can identify the corresponding physical target object (a first target object) in the scene 200 according to the depth-feature information; the position identification unit 140 also obtains the color information at the corresponding position of the physical target object in the image, and obtains the corresponding physical target object (a second target object) from the color marker database 154 according to the obtained color information. The position identification unit 140 also judges whether the first target object is identical to the second target object (for example, judges whether the names or identifiers of the first and second target objects are identical). When the first target object is identical to the second target object, the identification operation on the physical target object is complete, and the position identification unit 140 can obtain the position information of the physical target object in the scene 200; when the first target object differs from the second target object, this indicates that the physical target object identified by the position identification unit 140 is inaccurate or absent from the corresponding database, and the position identification unit 140 can continue with the identification of other physical target objects. Identifying the corresponding physical target object by combining the color marker with the depth-feature information helps improve the accuracy of identification.
The position identification unit 140 can also obtain the position of the corresponding physical target object in the scene 200 from the depth-feature information. Since the image contains the depth-feature information corresponding to the physical object, the position identification unit 140 can also obtain the position information of the physical target object in the scene 200. For example, the position identification unit 140 can obtain a first position of the physical toy in the scene 200 according to the depth-feature information corresponding to the identified physical target object, and can also obtain a second position of the user in the scene 200 according to the depth-feature information corresponding to the identified user.
The action recognition unit 142 can identify the state/posture information of the physical target object according to the image. In this embodiment, the state/posture information of the physical target object includes, but is not limited to, information that the physical target object has changed from a first state/posture to a second state/posture.
Preferably, the action recognition unit 142 can obtain the state/attitude information of the physical object from the change in its depth feature information across two consecutive frames. For example, the physical object may be a toy gun with a bullet-firing function, where the user fires by squeezing the trigger. It can be understood that the action recognition unit 142 can obtain the depth feature information corresponding to the toy gun in two consecutive frames (for example, first depth feature information of the toy gun in the first frame, and second depth feature information of the toy gun in the second frame), and judge from the change in that depth feature information whether the trigger has been pulled and hence whether the bullet-firing function should be executed; that is, whether the trigger of the toy gun has been pulled is judged from the change between the first depth feature information and the second depth feature information. When the change between the first depth feature information and the second depth feature information falls within a preset range, the trigger of the toy gun is deemed to have been pulled. It can be understood that, in other embodiments, the action recognition unit 142 can also obtain the state/attitude information of the physical target object by identifying the action information of the user. The action information of the user includes, but is not limited to, the user's hand gestures, head movements, foot movements, and so on. In this embodiment, the action recognition unit 142 can identify the corresponding action information from the joint position information of the user's body.
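The two-frame trigger check described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the use of a single scalar (for example, the mean depth of the trigger region) as the "depth feature", and the preset-range bounds are all assumptions made for clarity.

```python
def trigger_pulled(depth_prev, depth_curr, lo=2.0, hi=8.0):
    """Return True when the change in the toy gun's depth feature between
    two frames (here a single scalar, e.g. mean trigger-region depth in mm)
    falls within the preset range [lo, hi] that corresponds to a trigger
    pull. Values lo and hi are illustrative, not from the patent."""
    delta = abs(depth_curr - depth_prev)
    return lo <= delta <= hi
```

A change that is too small (sensor noise) or too large (the whole gun moved) falls outside the preset range and is not treated as a trigger pull.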
It can be understood that the video content may include a virtual football, and the action recognition unit 142 can recognise the user's action information: for example, when it detects that the user's hand joints touch the virtual football, the virtual football is zoomed in or out; the action recognition unit 142 can also identify the movement of the user's foot joints, so that after a foot joint strikes the virtual football, the virtual football produces a bounce effect, and so on.
The position recognition unit 140 can also identify the positional relationship between two physical target objects, or between a physical target object and the user.
It can be understood that the positional relationship between two physical target objects includes the angle information and/or the separation distance between them. From the depth feature information contained in the image, the position recognition unit 140 can also identify the angle information between a physical target object and the camera device 106, and/or the distance between two physical target objects. The angle information includes, but is not limited to, the bearing of the physical target object relative to the camera device 106, such as how many degrees towards the upper-left or lower-right corner of the camera device 106 the physical target object lies, or how many degrees towards the upper-left or lower-right corner of one physical target object another physical target object lies. It can be understood that, when the camera device 106 includes multiple cameras, each camera can separately obtain the angle information between the physical target object and that camera, from which the positional relationship and the distance between two physical objects can be obtained.
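Once depth-derived 3D positions are available, the bearing and distance described above reduce to simple trigonometry. The following sketch assumes a camera-centred coordinate convention (x right, y up, z forward) that the patent does not specify:

```python
import math

def bearing_and_distance(obj_xyz, cam_xyz=(0.0, 0.0, 0.0)):
    """Horizontal bearing (degrees; 0 = straight ahead, positive = to the
    right) and Euclidean distance of an object relative to a camera, from
    depth-derived 3D positions. The axis convention is an assumption."""
    dx = obj_xyz[0] - cam_xyz[0]
    dy = obj_xyz[1] - cam_xyz[1]
    dz = obj_xyz[2] - cam_xyz[2]
    bearing = math.degrees(math.atan2(dx, dz))  # angle in the horizontal plane
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    return bearing, dist
```

The same function applied to two object positions (one passed as `cam_xyz`) gives the relative angle and separation between two physical target objects.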
The control processing unit 144 can convert an identified physical target object into a virtual target object, can generate and output special-effect content according to the position and/or state/attitude information of the physical target object, and can also output, through the image output unit 146 and according to the positional relationship between the user and the physical target object, video content containing the converted virtual target object and/or the special-effect content.
It can be understood that, for VR, because the user wears a VR device and therefore cannot see real-world objects, this embodiment can convert each object in the scene 200 into a corresponding virtual target object. Preferably, the model database 158 pre-stores the 3D models of certain physical target objects, such as a first 3D model for a first physical target object, a second 3D model for a second physical target object, and so on.
It can be understood that, when the position recognition unit 140 identifies a physical target object, the control processing unit 144 can select the corresponding 3D model from the model database 158 according to that physical target object; the control processing unit 144 can also replace the physical target object in the image with the 3D model obtained from the model database 158, so that other users see the corresponding 3D model, enhancing the user experience.
It can be understood that, in one embodiment, the control processing unit 144 can also judge whether the model database 158 contains a 3D model corresponding to the physical target object; when the model database 158 does not contain such a 3D model, the control processing unit 144 can convert the physical target object into a corresponding 3D model, and replace the physical target object in the image with the 3D model obtained from that conversion.
For example, the control processing unit 144 can judge whether a toy gun (a physical target object) exists in the model database 158. When the toy gun exists in the model database 158, the control processing unit 144 can obtain the corresponding 3D model of the toy gun from the model database 158; when the toy gun is not in the model database 158, the control processing unit 144 can generate a 3D model of the toy gun from images of it captured by the camera device 106 from different angles. In another embodiment, the control processing unit 144 can also store the generated 3D model of the toy gun in the model database 158 for subsequent use.
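The lookup-then-generate-and-cache flow just described can be sketched as below. This is an assumed structure: the class and method names are invented for illustration, and `build_model_from_views` merely stands in for the multi-view 3D reconstruction the patent leaves unspecified.

```python
class ModelDatabase:
    """Minimal sketch of the model-database lookup with a generate-and-
    cache fallback, assuming names not taken from the patent."""

    def __init__(self):
        self._models = {}

    def get_model(self, object_id, views):
        # Return the stored model when it exists.
        if object_id in self._models:
            return self._models[object_id]
        # Otherwise generate one from multi-angle captures and cache it
        # for subsequent use, as the embodiment describes.
        model = self.build_model_from_views(object_id, views)
        self._models[object_id] = model
        return model

    def build_model_from_views(self, object_id, views):
        # Placeholder: a real system would reconstruct a 3D mesh here.
        return {"id": object_id, "n_views": len(views)}
```

A second request for the same object returns the cached model without re-running reconstruction.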
For AR, for example, the user wears an AR device; because the user can see every object in the scene 200, the control processing unit 144 need not convert the physical target objects into virtual target objects.
The control processing unit 144 can also generate and output corresponding special-effect content according to the state/attitude information and/or position of the physical target object.
In this embodiment, the special-effects database 156 stores several pieces of special-effect content corresponding to each physical target object, and also stores several pieces of special-effect content corresponding to each virtual target object.
It can be understood that the state/attitude information of a physical target object may include its different states/attitudes, such as the state in which the toy gun fires a bullet or the state in which it does not. When the toy gun is in the bullet-firing state, the control processing unit 144 can select a 3D model of the bullet from the model database 158, compute the corresponding virtual target object in real time according to the trajectory and position of the bullet's motion, and select the corresponding special-effect content from the special-effects database 156. The control processing unit 144 loads the selected special-effect content onto the corresponding virtual target object, thereby simulating a target with realistic physical effects. For example, when a virtual target object (a rock) lies on the bullet's trajectory and the bullet meets the virtual target object (the rock), the control processing unit 144 can select rock-explosion special-effect content from the special-effects database 156 to simulate the effect of the bullet striking the rock. In another embodiment, when a real toy aircraft operated by the user hits a meteorite in the virtual scene, the control processing unit 144 may select special-effect content in which the virtual meteorite is knocked away, and may also superimpose explosion special-effect content on the toy aircraft.
The control processing unit 144 is further configured to output video content containing the converted virtual target object and/or the special-effect content to one or more of the first display device 100, the second display device 102 and the projection device 108.
For example, a real drone controlled by the user flies in the scene; the camera device 106 captures the drone's flight in the scene, the position recognition unit 140 obtains the position and attitude information of the drone, and the control processing unit 144 judges whether the position of the drone (the physical target object) overlaps the position of a virtual target object 208, such as a meteorite. If they overlap, a collision is deemed to have occurred; the control processing unit 144 retrieves a collision-damage special effect from the special-effects database 156 and projects it, through the image output unit 146, onto the first display device 100 or the second display device 102. Identifying the target objects in the scene 200 through the camera device 106, and thereby achieving interaction, helps mitigate the degraded user experience caused by the limited field of view and viewing angle of AR glasses.
In this embodiment, the first display device 100 can be a 3D display device, and the projection device 108 can perform 3D projection within the scene 200, so that the user can watch the 3D images through 3D glasses or another device. The 3D glasses can be active 3D shutter glasses, polarised 3D glasses or red-blue 3D glasses; they need only be matched to the corresponding projection equipment or 3D display screen, so that multiple users can see the virtual stereoscopic scene and stereoscopic objects.
In another embodiment, instead of using 3D projection or a 3D large-screen display, ordinary 2D projection or a 2D large-screen display can be used; the user then only needs a 2D display device as hardware. The control processing unit 144 can process the video content to produce a degree of depth perception, so that the user does not perceive the background scene as a flat plane.
For example, for dynamic content shown by the first display device 100, the control processing unit 144 can select motion perpendicular to the display surface, giving the sensation of objects rushing towards the user, and enhance the near-large-far-small display of objects. In another embodiment, the control processing unit 144 can divide the displayed content into a far layer and a near layer: the far-layer image is displayed through a grating-like mask, so that only strip-shaped sub-regions of it remain visible, while the near-layer image is displayed normally, thereby achieving depth perception.
In this embodiment, the interactive system can be applied to a game scene; the scene 200 can accommodate several users 204, each of whom can wear a head-mounted device 300, and the users 204 can take part in the same game. The head-mounted device 300 can be an AR helmet or AR glasses.
The positioning device 304 can be a binocular camera, and the physical target objects 202 in the scene 200 can be distinguished by light spheres of different colours, for example by fitting each physical target object with a light sphere of a corresponding colour. The positioning device 304 tracks and locates the physical target objects in an inside-out manner. Preferably, the positioning device 304 has two colour cameras; when capturing the scene 200, the positioning device 304 can reduce its exposure time so that only the light spheres are visible in the captured image, then compute the spatial position of each light sphere relative to the positioning device by the binocular principle to perform spatial localisation, thereby obtaining the position and angle of the head-mounted device 300. By binding light spheres of different colours to different objects, the objects can be distinguished from one another.
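The binocular principle mentioned above is, at its core, depth from disparity: a light sphere seen at different horizontal pixel positions in the two cameras yields its distance. A minimal sketch, assuming a rectified stereo pair with the illustrative focal length and baseline below (the patent gives no camera parameters):

```python
def triangulate_depth(disparity_px, focal_px, baseline_m):
    """Classic two-camera depth from disparity: Z = f * B / d, where d is
    the horizontal pixel offset of the same light sphere between the left
    and right images of a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("the light sphere must be visible in both cameras")
    return focal_px * baseline_m / disparity_px
```

With a 700-pixel focal length and a 10 cm baseline, a 70-pixel disparity places the sphere 1 m away; lowering the exposure time so that only the bright spheres remain makes this per-colour matching between the two images trivial.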
In this embodiment, the computing device 306 is configured to receive the video content transmitted by the processing unit 104, and to process the video content according to the position of the head-mounted device 300 (its position in the scene 200) and its angle (the user's gaze direction), so that the head-mounted device 300 can display the corresponding video content.
In other embodiments, the processing unit 104 can also obtain the user's position and angle through the camera device 106, process the video content according to that position and angle, and then output the processed video content through the image output unit 146 to the corresponding head-mounted device 300, so that the user can watch the corresponding video content.
Referring to Fig. 4, a preferred embodiment of the interaction method of the present invention comprises the following steps:
Step S400: identify a target object in the scene, the target object including a physical target object in the scene.
In this embodiment, the processing unit can recognise the physical target objects and the user in the scene.
It can be understood that the processing unit can identify one or more physical target objects in the scene according to the depth feature information contained in the image captured by the camera device. The target objects include, but are not limited to, the user and a physical toy located in the scene.
The depth feature information includes, but is not limited to, feature information with a distinct specific shape, such as concentric inner and outer rings or a square frame. The processing unit can therefore identify the corresponding physical target object from the depth feature information contained in the image. For example, the processing unit can recognise the physical target object in the part of the image that carries similar depth feature information, or identify the physical target objects contained in the image according to the depth feature information of each object: when the image contains the depth feature information corresponding to a physical toy, the processing unit can recognise the part of the image corresponding to that physical toy. It can be understood that the depth feature information (three-dimensional contour features) of an object includes parameters that can be stored in the system in advance; these parameters may include the object's overall dimensions, its region of maximum depth variation (the region of greatest surface relief), the 3D shape of its major depth-variation features, and so on. Naturally, the processing unit can also identify the user located in the scene.
It can be understood that the depth feature information may also include strongly reflective solids that cause overexposure in the depth acquisition device, and may also include the three-dimensional contour features (depth feature information) of objects. The processing unit can control the camera device to perform an overexposed capture, obtaining a preset image of the scene, so that the strongly reflective solids can be extracted from the overexposed image. In this embodiment, the preset image can be an image captured after most interfering objects have been removed from the scene, or when only the target objects, or only a small number of objects to be identified or interfering objects, remain in the scene.
In one embodiment, the processing unit can first identify reference positions in the preset image. In this embodiment, the reference positions in the preset image can be the positions of one or more strongly reflective solids that cause overexposure of the camera device, a strongly reflective solid being one that overexposes parts of the depth map produced by the depth acquisition device. Because the image consists of continuous frames, the processing unit can perform the identification operation within a range near the reference positions of the preset image in each image the camera device transmits; for example, identification (such as by the depth feature information of the objects described above) can be carried out near the reference positions of the preset image within the image, thereby achieving fast three-dimensional feature screening and comparison.
Preferably, the processing unit can also determine, from an identified physical target object, the positions of one or more overexposing strongly reflective solids that the physical target object may carry, and use the identified positions as new reference positions, thereby updating the reference positions.
It can be understood that a colour marker library stores the colour information corresponding to each physical target object: for example, a first colour for the toy aircraft, a second colour for the toy gun and a third colour for the toy tank.
Each physical target object may be provided with a corresponding colour marker. When the camera device captures the scene, it can capture the colour information of the physical target objects in the scene; that is, the image obtained by the camera device can also contain the corresponding visible-light colour information.
In one embodiment, the physical target objects in the scene may be fitted with infrared emitters, which can emit infrared light at wavelengths from 780 nm to 1064 nm. For example, the wavelength of the emitted infrared light includes, but is not limited to, 840 nm, 940 nm or 970 nm. It can be understood that each infrared emitter can flash at a specific frequency f, and the flashing frequencies of the emitters can differ from one another. The processing unit can judge whether a physical target object is present in the image transmitted by the camera device 106 by recognising whether the image contains infrared light of a specific frequency.
In this embodiment, the processing unit can also perform a pre-identification operation according to reference information (including but not limited to colour markers and infrared flashing frequencies), to determine in advance whether a corresponding target object is present in the image. Once a target object is present in the image, the processing unit then identifies the specific physical target object according to the depth feature information.
The pre-identification operation includes, but is not limited to, judging whether the image contains a corresponding colour marker and/or infrared flashing frequency. It can be understood that, when the image contains a corresponding colour marker and/or infrared flashing frequency, the image is likely to contain a corresponding physical target object, and the processing unit can then perform the depth-recognition operation on the image; when the image contains no corresponding colour marker or infrared flashing frequency, the image is likely to contain no corresponding physical target object. Through this pre-identification operation, images containing no physical target object can be skipped, which helps reduce the time spent on physical-target identification.
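The pre-identification gate amounts to a cheap membership test that decides whether the expensive depth-recognition pass should run at all. A minimal sketch, with function and argument names invented for illustration:

```python
def should_run_depth_recognition(frame_colors, frame_flicker_freqs,
                                 known_colors, known_freqs):
    """Cheap pre-identification gate: run the (expensive) depth-feature
    recognition only when the frame contains at least one registered
    colour marker or one registered infrared flicker frequency."""
    has_color = any(c in known_colors for c in frame_colors)
    has_freq = any(f in known_freqs for f in frame_flicker_freqs)
    return has_color or has_freq
```

Frames that fail the gate are skipped outright, which is what reduces the overall identification time.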
After identifying according to the depth feature information, the processing unit performs a confirmation operation according to the reference information, which helps improve the accuracy of identification. Preferably, the processing unit can identify the corresponding physical target object in the scene from the depth feature information (a first target object); the processing unit also obtains the colour information at the corresponding position of that physical target object in the image and, from the colour information obtained, retrieves the corresponding physical target object from the colour marker library (a second target object).
The processing unit then judges whether the first target object and the second target object are the same. When they are the same, the processing unit can obtain the position information of the physical target object in the scene; when they differ, the identification of the physical target object is deemed inaccurate, or the object is absent from the corresponding database, and the processing unit can continue identifying other physical target objects. Identifying the corresponding physical target object by combining the colour marker with the depth feature information helps improve the accuracy of identification.
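The confirmation operation is a cross-check between two independent identifications: the id from the depth features and the id that the colour marker library returns for the colour found at the object's image position. A sketch under assumed names (the library is modelled as a plain colour-to-id mapping):

```python
def confirm_identity(depth_id, color_at_position, color_library):
    """Accept the depth-based identification (the first target object)
    only when the colour found at the object's image position maps to the
    same id in the colour marker library (the second target object).
    Returns the confirmed id, or None when the two disagree."""
    color_id = color_library.get(color_at_position)
    return depth_id if color_id == depth_id else None
```

A `None` result corresponds to the case where identification is deemed inaccurate and the unit moves on to other objects.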
Step S402: identify the state/attitude information of the physical target object.
In this embodiment, the state/attitude information of the physical target object includes, but is not limited to, information indicating that the physical target object has changed from a first state/attitude to a second state/attitude.
Preferably, the processing unit can obtain the state/attitude information of the physical object from the change in its depth feature information across two consecutive frames. For example, the physical object can be a toy gun with a bullet-firing function, where the user fires by squeezing the trigger. It can be understood that the processing unit can obtain the depth feature information corresponding to the toy gun in two consecutive frames (for example, first depth feature information of the toy gun in the first frame, and second depth feature information of the toy gun in the second frame), and judge from the change in that depth feature information whether the trigger of the toy gun has been pulled and hence whether the bullet-firing function should be executed; that is, whether the trigger has been pulled is judged from the change between the first depth feature information and the second depth feature information. When the change between the first depth feature information and the second depth feature information falls within a preset range, the trigger of the toy gun can be confirmed as pulled. In other embodiments, the first frame and the second frame need not be consecutive; they may be separated by one frame, two frames or some other number of frames.
It can be understood that the processing unit can also obtain the state/attitude information of the physical target object by identifying the action information of the user. The action information of the user includes, but is not limited to, the user's hand gestures, head movements, foot movements, and so on. In this embodiment, the processing unit can identify the corresponding action information from the joint position information of the user's body.
Preferably, the video content can include a virtual football, and the processing unit can recognise the user's action information: for example, after detecting that the user's hand joints touch the virtual football, the virtual football is zoomed in or out; the processing unit can also identify the movement of the user's foot joints, so that after a foot joint strikes the virtual football, the virtual football produces a bounce effect, and so on.
Step S404: identify the position information of the physical target object.
In this embodiment, the processing unit can derive, from the depth feature information, the position of the corresponding physical target object within the scene. Because the image contains the depth feature information of the corresponding physical object, the position recognition unit can also obtain the position information of the physical target object in the scene. For example, the position recognition unit can obtain a first position of the physical toy in the scene from the depth feature information of the identified physical target object, and can likewise obtain a second position of the user in the scene from the depth feature information of the identified user.
Step S406: obtain the corresponding special-effect content according to the position information and the state/attitude information of the physical target object.
It can be understood that the state/attitude information of a physical target object may include its different states/attitudes, such as the state in which the toy gun fires a bullet or the state in which it does not. When the toy gun is in the bullet-firing state, the processing unit can select a 3D model of the bullet from the model database, compute the corresponding virtual target object in real time according to the trajectory and position of the bullet's motion, and select the corresponding special-effect content from the special-effects database. The processing unit loads the selected special-effect content onto the corresponding virtual target object, thereby simulating a target with realistic physical effects. For example, when a virtual target object (a rock) lies on the bullet's trajectory and the bullet meets the virtual target object (the rock), the processing unit can select rock-explosion special-effect content from the special-effects database to simulate the effect of the bullet striking the rock. In another embodiment, when a real toy aircraft operated by the user hits a meteorite in the virtual scene, the processing unit may select special-effect content in which the virtual meteorite is knocked away, and may also superimpose explosion special-effect content on the toy aircraft.
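The bullet-meets-rock case above reduces to an overlap test between two positions followed by a lookup in the effects table. A minimal sketch; the spherical overlap test, the effect key and the table layout are all assumptions made for illustration:

```python
def pick_effect(bullet_pos, target_pos, radius, effects):
    """Overlap test between the bullet and a virtual target object: when
    the two positions come within `radius` of each other, the matching
    special-effect content (here keyed "rock_explosion") is selected from
    the effects table; otherwise no effect is triggered."""
    dist2 = sum((b - t) ** 2 for b, t in zip(bullet_pos, target_pos))
    return effects["rock_explosion"] if dist2 <= radius ** 2 else None
```

The squared-distance comparison avoids a square root per frame, which matters when the trajectory is evaluated in real time.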
Step S408: output video content containing the special-effect content.
In this embodiment, the processing unit can output the video content containing the special-effect content to one or more of the first display device, the second display device and the projection device.
In this embodiment, the first display device can be a 3D display device, and the projection device can perform 3D projection within the scene, so that the user can watch the 3D images through 3D glasses or another device. The 3D glasses can be active 3D shutter glasses, polarised 3D glasses or red-blue 3D glasses; they need only be matched to the corresponding projection equipment or 3D display screen, so that multiple users can see the virtual stereoscopic scene and stereoscopic objects. There may be more than one projection screen and more than one projection device: multiple projection devices can project 3D images onto screens in different directions or positions, such as in front of, to the left and right of, and behind the user.
In another embodiment, instead of using 3D projection or a 3D large-screen display, ordinary 2D projection or a 2D large-screen display can be used; the user then only needs a 2D display device as hardware. The processing unit can process the video content to produce a degree of depth perception, so that the user does not perceive the background scene as a flat plane.
For example, for dynamic content shown by the first display device, the processing unit can select motion perpendicular to the display surface, giving the sensation of objects rushing towards the user, and enhance the near-large-far-small display of objects. In another embodiment, the processing unit can divide the displayed content into a far layer and a near layer: the far-layer image is displayed through a grating-like mask, so that only strip-shaped sub-regions of it remain visible, while the near-layer image is displayed normally, thereby achieving depth perception.
In this embodiment, the interaction method can be applied to a game scene; the scene can accommodate several users, each of whom can wear a head-mounted device, and the users can take part in the same game. The head-mounted device can be an AR helmet or AR glasses.
The head-mounted device can have a binocular camera, and the physical target objects in the scene can be distinguished by light spheres of different colours, for example by fitting each physical target object with a light sphere of a corresponding colour. The head-mounted device tracks and locates the physical target objects in an inside-out manner. Preferably, the head-mounted device has two colour cameras; when capturing the scene, the head-mounted device can reduce its exposure time so that only the light spheres are visible in the captured image, then compute the spatial position of each light sphere relative to the camera by the binocular principle to perform spatial localisation, thereby obtaining the position and angle of the head-mounted device. By binding light spheres of different colours to different objects, the objects can be distinguished from one another.
In this embodiment, the head-mounted device is further configured to receive the video content transmitted by the processing unit and to process it according to the position of the head-mounted device (its position in the scene) and its angle (the user's gaze direction); in this case, the processing unit can broadcast the video content to every head-mounted device, so that each head-mounted device can display the corresponding video content, enhancing the user experience.
In other embodiments, the processing unit can also obtain the user's position and angle through the camera device, process the video content according to that position and angle, and then output the processed video content to the corresponding head-mounted device, so that the user can watch the corresponding video content.
With the above interaction method, interactive system and server, after the target objects in the scene are identified, the corresponding special-effect content is obtained according to the position information and state/attitude information of the physical target objects, so that the special-effect content can be superimposed on the physical target objects; the user can thus watch the corresponding special effects, which helps improve the user experience.
It should be noted that, in the description of the present invention, the terms "first", "second" and so on are used for descriptive purposes only and are not to be understood as indicating or implying relative importance. Furthermore, in the description of the present invention, unless otherwise indicated, "multiple" means at least two.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing the steps of a specific logical function or process. The scope of the preferred embodiments of the present invention also includes implementations in which functions are performed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functionality involved, as would be understood by those skilled in the art to which the embodiments of the present invention belong.
It should be understood that the various parts of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction-execution system. If implemented in hardware, as in another embodiment, they may be implemented with any one, or a combination, of the following techniques known in the art: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of, or a combination of, the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present invention may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
Although embodiments of the present invention have been shown and described above, it is to be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.
Claims (12)
1. An exchange method, characterised in that the exchange method comprises:
identifying a target object in a scene according to an image, wherein the target object comprises a physical target object in the scene;
identifying state/posture information of the physical target object;
identifying position information of the physical target object;
obtaining corresponding special-effect content according to the position information and the state/posture information of the physical target object;
outputting video content that comprises the special-effect content.
2. The exchange method as claimed in claim 1, characterised in that identifying the target object in the scene comprises:
obtaining an image containing depth feature information corresponding to the physical target object;
identifying the corresponding physical target object according to the depth feature information in the image.
3. The exchange method as claimed in claim 2, wherein the physical target object is provided with corresponding colour information, characterised in that identifying the corresponding physical target object according to the depth feature information in the image further comprises:
identifying a first target object in the scene according to the depth feature information;
obtaining colour information of the first target object;
obtaining a corresponding second target object according to the colour information;
judging whether the first target object and the second target object are identical;
when the first target object is identical to the second target object, determining that the first target object is the physical target object.
4. The exchange method as claimed in claim 1, characterised in that identifying the target object in the scene according to the image comprises:
performing a pre-identification operation on the image according to reference information, to judge whether the image contains a physical target object;
obtaining an image containing depth feature information corresponding to the physical target object;
identifying the corresponding physical target object according to the depth feature information in the image.
5. The exchange method as claimed in claim 4, characterised in that performing the pre-identification operation on the image according to the reference information comprises:
judging whether the image contains a corresponding colour marker, wherein each physical target object in the scene is provided with a corresponding colour marker; or
judging whether the image contains infrared light of a corresponding frequency, wherein each physical target object in the scene is provided with infrared light emitted at a corresponding frequency.
6. The exchange method as claimed in claim 1, characterised in that identifying the state/posture information of the physical target object comprises:
obtaining first depth feature information of the physical target object in a first frame image;
obtaining second depth feature information of the physical target object in a second frame image;
judging whether the change between the first depth feature information and the second depth feature information is within a preset range;
when the change between the first depth feature information and the second depth feature information is within the preset range, determining the state/posture information of the physical target object.
7. The exchange method as claimed in claim 1, characterised in that identifying the state/posture information of the physical target object comprises:
obtaining action information of a user in the scene;
determining the state/posture information of the physical target object according to the action information of the user.
8. The exchange method as claimed in claim 1, characterised in that obtaining the corresponding special-effect content according to the position information and the state/posture information of the physical target object comprises:
calculating a physical effect between the physical target object and a virtual target object according to the position information and the state/posture information of the physical target object;
selecting the corresponding special-effect content according to the physical effect.
9. The exchange method as claimed in claim 1, characterised in that outputting the video content comprising the special-effect content comprises:
obtaining position and angle information of a user in the scene;
outputting the video content corresponding to the user according to the position and angle information.
10. The exchange method as claimed in claim 1, characterised in that outputting the video content comprising the special-effect content comprises:
outputting the video content to each head-mounted device in broadcast mode, so that each head-mounted device processes the video content according to its own position and angle information to obtain the corresponding video content.
11. A server, comprising a processor and a memory, wherein the memory stores programs executable by the processor to implement the exchange method as claimed in any one of claims 1-10.
12. An interactive system, characterised in that the interactive system comprises:
a location identification unit, configured to identify a target object in a scene according to an image, wherein the target object comprises a physical target object in the scene;
the location identification unit being further configured to identify state/posture information of the physical target object;
the location identification unit being further configured to identify position information of the physical target object;
a control processing unit, configured to obtain corresponding special-effect content according to the position information and the state/posture information of the physical target object;
the control processing unit being further configured to output video content comprising the special-effect content.
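As an editorial illustration of the frame-comparison logic in claim 6, the sketch below thresholds the change between two frames' depth features; the feature representation, the distance measure, and the reported state label are all assumptions, since the claim only specifies that a determination is made when the change falls within a preset range.

```python
def state_from_depth_change(feat_a, feat_b, lower, upper):
    """Compare depth feature vectors from two frame images; when the
    total change lies within the preset [lower, upper] range, report a
    (hypothetical) 'moving' state, otherwise make no determination."""
    change = sum(abs(a - b) for a, b in zip(feat_a, feat_b))
    if lower <= change <= upper:
        return "moving"
    return None
```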
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710855846.7A CN107728782A (en) | 2017-09-21 | 2017-09-21 | Exchange method and interactive system, server |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107728782A true CN107728782A (en) | 2018-02-23 |
Family
ID=61206719
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710855846.7A Pending CN107728782A (en) | 2017-09-21 | 2017-09-21 | Exchange method and interactive system, server |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107728782A (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108830782A (en) * | 2018-05-29 | 2018-11-16 | 北京字节跳动网络技术有限公司 | Image processing method, device, computer equipment and storage medium |
CN108924438A (en) * | 2018-06-26 | 2018-11-30 | Oppo广东移动通信有限公司 | Filming control method and Related product |
CN109618183A (en) * | 2018-11-29 | 2019-04-12 | 北京字节跳动网络技术有限公司 | A kind of special video effect adding method, device, terminal device and storage medium |
CN109754462A (en) * | 2019-01-09 | 2019-05-14 | 上海莉莉丝科技股份有限公司 | The method, system, equipment and medium of built-up pattern in virtual scene |
CN109782901A (en) * | 2018-12-06 | 2019-05-21 | 网易(杭州)网络有限公司 | Augmented reality exchange method, device, computer equipment and storage medium |
CN110392251A (en) * | 2018-04-18 | 2019-10-29 | 广景视睿科技(深圳)有限公司 | A kind of dynamic projection method and system based on virtual reality |
CN110716646A (en) * | 2019-10-15 | 2020-01-21 | 北京市商汤科技开发有限公司 | Augmented reality data presentation method, device, equipment and storage medium |
CN111640185A (en) * | 2020-06-05 | 2020-09-08 | 上海商汤智能科技有限公司 | Virtual building display method and device |
CN111640192A (en) * | 2020-06-05 | 2020-09-08 | 上海商汤智能科技有限公司 | Scene image processing method and device, AR device and storage medium |
CN111757175A (en) * | 2020-06-08 | 2020-10-09 | 维沃移动通信有限公司 | Video processing method and device |
CN111914104A (en) * | 2020-08-07 | 2020-11-10 | 杭州栖金科技有限公司 | Video and audio special effect processing method and device and machine-readable storage medium |
CN112367487A (en) * | 2020-10-30 | 2021-02-12 | 维沃移动通信有限公司 | Video recording method and electronic equipment |
WO2021036624A1 (en) * | 2019-08-28 | 2021-03-04 | 北京市商汤科技开发有限公司 | Interaction method, apparatus and device, and storage medium |
CN112702625A (en) * | 2020-12-23 | 2021-04-23 | Oppo广东移动通信有限公司 | Video processing method and device, electronic equipment and storage medium |
CN113111939A (en) * | 2021-04-12 | 2021-07-13 | 中国人民解放军海军航空大学航空作战勤务学院 | Aircraft flight action identification method and device |
CN113359983A (en) * | 2021-06-03 | 2021-09-07 | 北京市商汤科技开发有限公司 | Augmented reality data presentation method and device, electronic equipment and storage medium |
CN113538696A (en) * | 2021-07-20 | 2021-10-22 | 广州博冠信息科技有限公司 | Special effect generation method and device, storage medium and electronic equipment |
WO2022227937A1 (en) * | 2021-04-29 | 2022-11-03 | 北京字跳网络技术有限公司 | Image processing method and apparatus, electronic device, and readable storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130286004A1 (en) * | 2012-04-27 | 2013-10-31 | Daniel J. McCulloch | Displaying a collision between real and virtual objects |
CN105264460A (en) * | 2013-04-12 | 2016-01-20 | 微软技术许可有限责任公司 | Holographic object feedback |
US20170200313A1 (en) * | 2016-01-07 | 2017-07-13 | Electronics And Telecommunications Research Institute | Apparatus and method for providing projection mapping-based augmented reality |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110392251B (en) * | 2018-04-18 | 2021-07-16 | 广景视睿科技(深圳)有限公司 | Dynamic projection method and system based on virtual reality |
CN110392251A (en) * | 2018-04-18 | 2019-10-29 | 广景视睿科技(深圳)有限公司 | A kind of dynamic projection method and system based on virtual reality |
CN108830782B (en) * | 2018-05-29 | 2022-08-05 | 北京字节跳动网络技术有限公司 | Image processing method, image processing device, computer equipment and storage medium |
CN108830782A (en) * | 2018-05-29 | 2018-11-16 | 北京字节跳动网络技术有限公司 | Image processing method, device, computer equipment and storage medium |
CN108924438B (en) * | 2018-06-26 | 2021-03-02 | Oppo广东移动通信有限公司 | Shooting control method and related product |
CN108924438A (en) * | 2018-06-26 | 2018-11-30 | Oppo广东移动通信有限公司 | Filming control method and Related product |
CN109618183A (en) * | 2018-11-29 | 2019-04-12 | 北京字节跳动网络技术有限公司 | A kind of special video effect adding method, device, terminal device and storage medium |
CN109618183B (en) * | 2018-11-29 | 2019-10-25 | 北京字节跳动网络技术有限公司 | A kind of special video effect adding method, device, terminal device and storage medium |
CN109782901A (en) * | 2018-12-06 | 2019-05-21 | 网易(杭州)网络有限公司 | Augmented reality exchange method, device, computer equipment and storage medium |
CN109754462A (en) * | 2019-01-09 | 2019-05-14 | 上海莉莉丝科技股份有限公司 | The method, system, equipment and medium of built-up pattern in virtual scene |
WO2021036624A1 (en) * | 2019-08-28 | 2021-03-04 | 北京市商汤科技开发有限公司 | Interaction method, apparatus and device, and storage medium |
CN110716646A (en) * | 2019-10-15 | 2020-01-21 | 北京市商汤科技开发有限公司 | Augmented reality data presentation method, device, equipment and storage medium |
WO2021073278A1 (en) * | 2019-10-15 | 2021-04-22 | 北京市商汤科技开发有限公司 | Augmented reality data presentation method and apparatus, electronic device, and storage medium |
TWI782332B (en) * | 2019-10-15 | 2022-11-01 | 中國商北京市商湯科技開發有限公司 | An augmented reality data presentation method, device and storage medium |
CN111640192A (en) * | 2020-06-05 | 2020-09-08 | 上海商汤智能科技有限公司 | Scene image processing method and device, AR device and storage medium |
CN111640185A (en) * | 2020-06-05 | 2020-09-08 | 上海商汤智能科技有限公司 | Virtual building display method and device |
CN111757175A (en) * | 2020-06-08 | 2020-10-09 | 维沃移动通信有限公司 | Video processing method and device |
CN111914104A (en) * | 2020-08-07 | 2020-11-10 | 杭州栖金科技有限公司 | Video and audio special effect processing method and device and machine-readable storage medium |
CN112367487A (en) * | 2020-10-30 | 2021-02-12 | 维沃移动通信有限公司 | Video recording method and electronic equipment |
CN112702625A (en) * | 2020-12-23 | 2021-04-23 | Oppo广东移动通信有限公司 | Video processing method and device, electronic equipment and storage medium |
CN112702625B (en) * | 2020-12-23 | 2024-01-02 | Oppo广东移动通信有限公司 | Video processing method, device, electronic equipment and storage medium |
CN113111939A (en) * | 2021-04-12 | 2021-07-13 | 中国人民解放军海军航空大学航空作战勤务学院 | Aircraft flight action identification method and device |
CN113111939B (en) * | 2021-04-12 | 2022-09-02 | 中国人民解放军海军航空大学航空作战勤务学院 | Aircraft flight action identification method and device |
WO2022227937A1 (en) * | 2021-04-29 | 2022-11-03 | 北京字跳网络技术有限公司 | Image processing method and apparatus, electronic device, and readable storage medium |
CN113359983A (en) * | 2021-06-03 | 2021-09-07 | 北京市商汤科技开发有限公司 | Augmented reality data presentation method and device, electronic equipment and storage medium |
CN113538696A (en) * | 2021-07-20 | 2021-10-22 | 广州博冠信息科技有限公司 | Special effect generation method and device, storage medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107728782A (en) | Exchange method and interactive system, server | |
US7427996B2 (en) | Image processing apparatus and image processing method | |
CN107315470B (en) | Graphic processing method, processor and virtual reality system | |
US8040361B2 (en) | Systems and methods for combining virtual and real-time physical environments | |
CN106101689B (en) | The method that using mobile phone monocular cam virtual reality glasses are carried out with augmented reality | |
CN110163943A (en) | The rendering method and device of image, storage medium, electronic device | |
CN107016704A (en) | A kind of virtual reality implementation method based on augmented reality | |
WO2019123729A1 (en) | Image processing device, image processing method, and program | |
US20100182340A1 (en) | Systems and methods for combining virtual and real-time physical environments | |
US20150260474A1 (en) | Augmented Reality Simulator | |
CN107913521B (en) | The display methods and device of virtual environment picture | |
CN108351691A (en) | remote rendering for virtual image | |
CN108629830A (en) | A kind of three-dimensional environment method for information display and equipment | |
CN109598796A (en) | Real scene is subjected to the method and apparatus that 3D merges display with dummy object | |
CN104380347A (en) | Video processing device, video processing method, and video processing system | |
WO2021174389A1 (en) | Video processing method and apparatus | |
CN105915766B (en) | Control method based on virtual reality and device | |
US9955120B2 (en) | Multiuser telepresence interaction | |
CN109640070A (en) | A kind of stereo display method, device, equipment and storage medium | |
CN106527696A (en) | Method for implementing virtual operation and wearable device | |
CN111833458A (en) | Image display method and device, equipment and computer readable storage medium | |
CN113918021A (en) | 3D initiative stereo can interactive immersive virtual reality all-in-one | |
US20180033328A1 (en) | Immersive vehicle simulator apparatus and method | |
US11688150B2 (en) | Color space mapping for intuitive surface normal visualization | |
US10819952B2 (en) | Virtual reality telepresence |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||