CN107481323A - Mixed-reality interaction method and system - Google Patents
Mixed-reality interaction method and system
- Publication number
- CN107481323A CN201610402936.6A CN201610402936A CN 107481323 A
- Authority
- CN
- China
- Prior art keywords
- image
- mixed reality
- dynamic
- display interface
- virtual image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
Abstract
A mixed-reality interaction method is implemented by an interaction system comprising a display interface that presents, from a viewing angle, the objects in a physical space; an image capture module that captures the scene as a dynamic real-scene image; and a processor. According to several predefined primary target objects covering different viewing angles, the processor recognizes the dynamic real-scene image, and when a primary object matching one of the primary target objects appears in the dynamic real-scene image, the processor calls at least one predefined virtual image associated with that primary target object and displays the virtual image on the display interface. Thereby, the present invention can present virtual reality in any physical space without VR glasses and without restrictions on angle or position, and the aforesaid virtual image can also change according to the real-scene image, so that virtual reality can not only be popularized in daily life, but the simulated effect is also more realistic.
Description
Technical field
The present invention relates to mixed reality, and more particularly to a mixed-reality interaction method and system.
Background technology
A look at the latest technology topics shows that virtual reality (Virtual Reality), augmented reality (Augmented Reality), and even mixed reality (Mixed Reality) are dramatically changing the human visual experience. VR mainly uses a computer to create an entirely virtual 3D space and "deceives" the human senses with various techniques so that they produce an illusion; as if personally on the scene, the user can do all manner of things in the virtual world. AR mainly adds virtual objects into real space, while the user essentially remains in the real world. MR combines the virtual scene and reality to a higher degree: in mixed reality, objects in the real world and objects in the virtual world coexist and interact in real time.
Although the concepts of VR, AR, and MR are popular, the following drawbacks remain:
1. VR requires a helmet or glasses to present images. However, the helmet or glasses are often bulky and unattractive, easily cause dizziness, and cannot be worn for long periods or carried around. Their applications are therefore confined to games or movies and cannot be popularized in daily life.
2. Because AR and MR are combined with real space, a device with a display, such as a mobile phone or tablet, can present a composite picture of real space and virtual images. However, current AR and MR technologies, when combined with real space, all use static 2D images as the basis for judging the real space. Taking Taiwan Patent No. I484452 as an example, when capturing a teaching aid, the teaching-aid image can only be captured at a preset angle, such as the front or the side; once the display or the teaching aid moves, recognition and imaging fail.
Summary of the invention
An object of the present invention is to provide a mixed-reality interaction method and system that can popularize virtual reality in daily life, achieve a more realistic simulated effect, and is free of angle and position restrictions.
The mixed-reality interaction method of the present invention is implemented by an interaction system. The interaction system comprises a processor for executing at least one application program, a display interface that presents, from a viewing angle, the objects in a physical space, and an image capture module that captures the aforesaid scene as a dynamic real-scene image. The mixed-reality interaction method is executed by the processor through the aforesaid application program and comprises the following steps:
Step a: receive the dynamic real-scene image.
Step b: according to several predefined primary target objects covering different viewing angles, recognize whether the aforesaid dynamic real-scene image contains a primary object matching one of the primary target objects; if so, proceed to step c; if not, return to step a.
Step c: call at least one predefined virtual image associated with the primary target object.
Step d: display the virtual image on the display interface.
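The four steps above amount to a recognize-then-overlay loop. A minimal sketch in Python follows, with a stubbed-out recognizer standing in for the patent's multi-angle matching; the function and class names here are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class PrimaryTarget:
    """A predefined target with reference views at several angles (step b)."""
    name: str
    views: list            # reference views covering different viewing angles
    virtual_images: list = field(default_factory=list)  # predefined overlays (step c)

def matches(frame, target):
    """Stub recognizer: true if any predefined view of the target appears
    in the frame. A real system would use multi-angle feature matching."""
    return any(view in frame for view in target.views)

def interaction_step(frame, targets):
    """Steps a-d: receive a frame, look for a matching primary object,
    and return the associated virtual images to display (None -> retry)."""
    for target in targets:                 # step b: check each predefined target
        if matches(frame, target):
            return target.virtual_images   # steps c-d: call and display
    return None                            # no match: return to step a

bus = PrimaryTarget("bus", views=["bus_front", "bus_side"],
                    virtual_images=["saucer_attack"])
print(interaction_step(["tree", "bus_side"], [bus]))   # ['saucer_attack']
print(interaction_step(["tree"], [bus]))               # None
```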
A mixed-reality interaction system comprises a mixed-reality device and a processor.
The mixed-reality device includes a display interface and an image capture module that captures the scene as a dynamic real-scene image; the display interface presents, from a viewing angle, the objects in a physical space.
According to several predefined primary target objects covering different viewing angles, the processor recognizes the dynamic real-scene image, and when a primary object matching one of the primary target objects appears in the dynamic real-scene image, the processor calls at least one predefined virtual image associated with that primary target object and displays the virtual image on the display interface.
In the mixed-reality interaction method of the present invention, the display interface is a see-through object, and the objects in the physical space are not displayed on the display interface.
In the mixed-reality interaction method of the present invention, the dynamic real-scene image of step a comes from the scene in front of the display interface.
In the mixed-reality interaction method of the present invention, there are several virtual images in step c, classified as destructive virtual images and non-destructive virtual images, and step d comprises:
Step d1: determine whether the virtual image of step c is a destructive virtual image; if so, proceed to step d3; if not, proceed to step d5;
Step d3: merge the virtual image and the dynamic real-scene image into a real-time dynamic image;
Step d4: display the real-time dynamic image on the display interface; and
Step d5: display the virtual image on the display interface.
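Steps d1–d5 are a simple branch: destructive images are composited with the live frame first, while non-destructive ones are shown as-is. A sketch under the assumption that "merging" can be modeled as pairing the overlay with the frame (the dictionary keys and image names are illustrative):

```python
def display_step(virtual_image, frame):
    """Steps d1-d5: a destructive image is merged with the dynamic
    real-scene image into a real-time dynamic image (d3-d4);
    a non-destructive one goes straight to the display (d5)."""
    if virtual_image.get("destructive"):             # step d1
        merged = {"frame": frame, "overlay": virtual_image["name"]}
        return ("realtime_dynamic_image", merged)    # steps d3-d4
    return ("virtual_image", virtual_image["name"])  # step d5

attack = {"name": "saucer_attack", "destructive": True}
scout  = {"name": "saucer_scout",  "destructive": False}
print(display_step(attack, "street_frame")[0])  # realtime_dynamic_image
print(display_step(scout,  "street_frame")[0])  # virtual_image
```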
In the mixed-reality interaction method of the present invention, the virtual image of step d3 includes a substitute object built according to the appearance of the primary object, and at least one virtual object.
In the mixed-reality interaction method of the present invention, step c comprises:
Step c1: call at least one predefined virtual image group associated with the primary target object, the virtual image group including several virtual images; and
Step c2: according to a control variable, select the virtual image associated with that control variable.
In the mixed-reality interaction method of the present invention, the control variable may be at least one of weather, temperature, humidity, noise level, and air pollution index, and the aforesaid control variable may be obtained by the processor either through the Internet or through at least one sensing module.
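Step c2's selection by control variable can be modeled as a lookup keyed on the sensed condition. A hedged sketch — the weather categories follow the embodiment described later, but the classifier thresholds and image names are assumptions:

```python
def classify_weather(raindrops: int, brightness: float) -> str:
    """Toy classifier in the spirit of the sensing module: raindrops
    imply rain; otherwise brightness separates sunny from cloudy."""
    if raindrops > 0:
        return "rainy"
    return "sunny" if brightness > 0.6 else "cloudy"

# a virtual image group for one primary target, keyed by control variable
bus_group = {
    "sunny":  "saucer_attacks_building",
    "cloudy": "saucer_scouts_between_buildings",
    "rainy":  "saucer_hovers_overhead",
}

def select_virtual_image(group, control_variable):
    """Step c2: pick the virtual image associated with the control variable."""
    return group[control_variable]

print(select_virtual_image(bus_group, classify_weather(0, 0.8)))
# saucer_attacks_building
```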
In the mixed-reality interaction method of the present invention, the display interface is a display screen, and the dynamic real-scene image of step a comes from the scene behind the display interface and is synchronously displayed on the display interface.
The mixed-reality interaction method of the present invention further comprises:
Step e: receive a real-scene image, the real-scene image coming from the scene in front of the display interface;
Step f: according to several predefined secondary target objects covering different viewing angles, recognize whether the real-scene image contains a secondary object matching one of the secondary target objects; if so, proceed to step g; if not, return to step e;
Step g: call at least one predefined virtual image group associated with the secondary target object, the virtual image group including several virtual images;
Step h: according to the motion of the secondary object, select the virtual image associated with the motion of the aforesaid secondary object; and
Step i: display the virtual image on the display interface.
In the mixed-reality interaction method of the present invention, the primary target objects of step b and the secondary target objects of step f may each be one of an image, a dynamic image, and a 3D object. When a primary target object is a 3D object, it can be used to recognize whether the aforesaid dynamic real-scene image contains a primary object matching any angle of that primary target object; when a secondary target object is a 3D object, it can be used to recognize whether the aforesaid real-scene image contains a secondary object matching any angle of that secondary target object.
In the mixed-reality interaction system of the present invention, the display interface is one of a see-through object and a display screen. When the display interface is a see-through object, the objects in the physical space are not displayed on the display interface; when the display interface is a display screen, the dynamic real-scene image is synchronously displayed on the display interface.
The mixed-reality interaction system of the present invention further comprises at least one sensing module for detecting one of weather, temperature, humidity, noise level, and air pollution index, and the processor further selects, according to a control variable, the virtual image associated with that control variable, the control variable being at least one of the aforesaid weather, temperature, humidity, noise level, and air pollution index.
In the mixed-reality interaction system of the present invention, the image capture module includes a rear camera lens that captures the objects behind the display interface as the dynamic real-scene image, and a front camera lens that captures the objects in front of the display interface as a real-scene image. The processor recognizes the real-scene image according to several predefined secondary target objects covering different viewing angles, and when a secondary object matching one of the secondary target objects appears in the real-scene image, the processor selects, according to the motion of the secondary object, the virtual image associated with the motion of the aforesaid secondary object.
In the mixed-reality interaction system of the present invention, the aforesaid virtual images are classified as destructive virtual images and non-destructive virtual images. A destructive virtual image includes at least one substitute object built according to the appearance of the primary object, and at least one virtual object. When the processor determines that the virtual image corresponding to the primary target object is a destructive virtual image, it can further merge the virtual image and the dynamic real-scene image into a real-time dynamic image and display the real-time dynamic image on the display interface; when the processor determines that the virtual image corresponding to the primary target object is a non-destructive virtual image, it displays the virtual image directly on the display interface.
The mixed-reality interaction system of the present invention further comprises a server host located at a remote end. The mixed-reality device further includes a transmission module connected to the Internet, the transmission module being used to output the dynamic real-scene image and the real-scene image, and to receive the real-time dynamic image. The server host includes the processor, a communication module connected to the Internet and communicating with the transmission module of the mixed-reality device, and a storage medium. The communication module is used to receive the dynamic real-scene image and the real-scene image, and to output the real-time dynamic image; the storage medium is used to store the primary target objects, the secondary target objects, and the virtual images.
The beneficial effects of the present invention are: virtual reality can be presented in any physical space without VR glasses and without restrictions on angle or position, and the aforesaid virtual image can also change according to the real-scene image, so that virtual reality can not only be popularized in daily life, but the simulated effect is also more realistic.
Brief description of the drawings
The other features and effects of the present invention will be clearly presented in the embodiment with reference to the drawings, in which:
Fig. 1 is a block diagram illustrating a preferred embodiment of the mixed-reality interaction method and system of the present invention;
Fig. 2 is a side view illustrating the positional relationship between a mixed-reality device and a physical space in the embodiment;
Fig. 3 is a flow chart illustrating how the embodiment displays several virtual images according to a primary target object;
Fig. 4 is a schematic diagram of the embodiment recognizing a primary target object;
Fig. 5 is a schematic diagram illustrating a display interface in the embodiment displaying a dynamic real-scene image and several virtual images;
Fig. 6 is a flow chart illustrating how the embodiment displays an interactive virtual image according to a secondary target object; and
Fig. 7 is a schematic diagram illustrating the interaction between the virtual image and a secondary object in the embodiment.
Embodiment
Referring to Figs. 1 and 2, an embodiment of the mixed-reality interaction system of the present invention comprises a mixed-reality device 1 and a server host 2.
The mixed-reality device 1 includes a display interface 11 arranged in a physical space, an image capture module 12, a transmission module 13, and at least one sensing module 14. In the present embodiment the display interface 11 is a display, which may be a narrow-bezel or bezel-less display. The image capture module 12 includes a rear camera lens 121 that captures the objects behind the display interface 11 as a dynamic real-scene image R1, and a front camera lens 122 that captures the objects in front of the display interface 11 as a real-scene image R2. The transmission module 13 is connected to the Internet and is used to output and receive relevant information. The sensing module 14 is arranged on the display interface 11 to sense at least one of weather, temperature, humidity, noise level, and air pollution index, thereby obtaining several control variables S, and sends the aforesaid control variables S to the server host 2 through the transmission module 13 of the mixed-reality device 1.
The server host 2, located at a remote end in the present embodiment, includes a processor 21, a communication module 22 connected to the Internet and communicating with the transmission module 13 of the mixed-reality device 1, and a storage medium 23. The processor 21 is used to execute at least one application program. The communication module 22 is used to receive the dynamic real-scene image R1 and the real-scene image R2, and to output a real-time dynamic image V. The storage medium 23 is used to store several predefined primary target objects 31 covering different viewing angles, several predefined secondary target objects 32 covering different viewing angles, and several virtual image groups VG (not shown) respectively associated with the respective primary target objects 31 and secondary target objects 32. Each virtual image group VG includes several virtual images V1 respectively associated with the respective control variables S.
In addition, it is worth explaining that the aforesaid primary target objects 31 and secondary target objects 32 may each be one of an image, a dynamic image, and a 3D object. Control variables S such as the aforesaid temperature, humidity, noise level, and air pollution index may also be obtained by the processor 21 through the Internet.
Referring to Fig. 3 together with Figs. 1 and 2, the mixed-reality interaction method of the present invention is executed by the processor 21 of the server host 2 through the aforesaid application program; the steps are described below in conjunction with the embodiment:
Step 40: the application program starts.
Step 41: the server host 2 receives a dynamic real-scene image R1 through the communication module 22. The dynamic real-scene image R1 comes from the objects behind the display interface 11 captured by the rear camera lens 121 (including scenes and objects that are still or in motion, or scenery that is actually still but appears to move because the display interface 11 moves), and is output by the transmission module 13. It is worth explaining that the display interface 11 and the rear camera lens 121 are connected, and the dynamic real-scene image R1 is displayed synchronously.
For example, the mixed-reality device 1 is arranged on a street in a certain city, and the display interface 11 serves as one of the enclosing panels of a bus stop. Thereby, when the display interface 11 synchronously displays the dynamic real-scene image R1, a user standing in front of the display interface 11 has the illusion of seeing through the display interface 11 to the scenery behind it, and thus mistakes the display interface 11 for a pane of transparent glass.
Step 42: a control variable S is received through the communication module 22. The aforesaid control variable S comes from the weather, temperature, humidity, noise level, or air pollution index in the environment sensed by the sensing module 14.
Taking weather as an example of the aforesaid control variable S, the sensing module 14 can sense whether there are raindrops, the size of the raindrops, or the brightness, in order to judge whether the weather is sunny, cloudy, or rainy.
Step 43: according to the predefined primary target objects 31, the processor 21 recognizes whether the aforesaid dynamic real-scene image R1 contains a primary object 81 matching one of the primary target objects 31; if so, it proceeds to step 44; if not, it returns to step 41.
It is worth explaining, referring to Fig. 4, that if the primary target object 31 is a 3D object, then even though the aforesaid dynamic real-scene image R1 presents the scene in 2D, no matter at what viewing angle or position the primary object 81 actually appears, the processor 21 can judge whether the aforesaid primary object 81 matches any angle of the aforesaid primary target object 31. Of course, if the primary target object 31 is an image or a film, primary target objects 31 covering different viewing angles can be predefined, whereby it can likewise be recognized whether a primary object 81 matching an angle of the aforesaid predefined primary target objects 31 appears in the aforesaid dynamic real-scene image R1.
In addition, in the present embodiment, the processor 21 performs edge detection on the dynamic real-scene image R1 to locate the primary object 81.
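The embodiment only states that edge detection is used to locate the primary object, without naming an operator. As an illustration, here is a minimal gradient-based edge map on a toy grayscale grid in pure Python — a crude stand-in, with no claim that the patent uses this exact operator:

```python
def edge_map(img, threshold=1):
    """Mark pixels whose horizontal or vertical intensity jump exceeds
    the threshold -- a rough sketch of edge detection on a 2D grid."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = abs(img[y][min(x + 1, w - 1)] - img[y][x])  # horizontal gradient
            gy = abs(img[min(y + 1, h - 1)][x] - img[y][x])  # vertical gradient
            if max(gx, gy) >= threshold:
                edges[y][x] = 1
    return edges

# a bright 2x2 "object" on a dark background
img = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 9, 0],
    [0, 0, 0, 0],
]
for row in edge_map(img):
    print(row)
```

The marked pixels trace the object's outline, which is the information the processor needs to place the substitute object later.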
Step 44: the processor 21 calls the predefined virtual image group VG associated with the primary target object 31.
Step 45: according to the control variable S, the processor 21 selects from the virtual image group VG the virtual image V1 associated with that control variable S.
For example, the primary target object 31 is predefined as a bus, and the control variable S is the weather. When a bus is recognized in the aforesaid dynamic real-scene image R1, the processor 21 enters the virtual image group VG corresponding to the bus according to the primary target object 31. The virtual image group VG corresponding to the bus includes various virtual images V1 relating to a flying saucer, and the processor 21 can then further select the corresponding virtual image V1 according to the weather. For example, on a sunny day it selects a flying saucer attacking a building (virtual image V1), and on a cloudy day it selects a flying saucer scouting and shuttling between buildings (virtual image V1).
Step 46: the processor 21 determines whether the virtual image V1 of step 45 is a destructive virtual image; if so, it proceeds to step 47; if not, it proceeds to step 49.
Step 47: the processor 21 merges the virtual image V1 and the dynamic real-scene image R1 into a real-time dynamic image V. It is worth explaining that the virtual image V1 includes a substitute object V11 built according to the appearance of the primary object 81, and several virtual objects V12.
For example, referring to Fig. 5, the aforesaid destructive virtual image V1 is predefined as presenting a flying saucer attacking the building corresponding to the primary object 81. Since the picture to be presented is of the building being damaged by the attack, the processor 21 merges the dynamic real-scene image R1 and the virtual image V1 into a brand-new real-time dynamic image V, and the real-time dynamic image V shows the substitute object V11, which is attacked and damaged by the flying saucer and replaces the original building (the primary object 81), together with the virtual object V12 attacking the building (i.e., the flying saucer).
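The merge in step 47 can be pictured as pasting the substitute object V11 (and any virtual objects V12) over the live frame at the primary object's location, leaving the rest of the scene untouched. A toy per-pixel composite; the coordinates, characters, and names are purely illustrative:

```python
def composite(frame, overlay, top, left):
    """Paste an overlay patch (the substitute object / virtual object)
    onto a copy of the frame, skipping transparent pixels -- the spirit
    of merging V1 with R1 into the real-time dynamic image V."""
    out = [row[:] for row in frame]
    for dy, row in enumerate(overlay):
        for dx, px in enumerate(row):
            if px is not None:              # None = transparent pixel
                out[top + dy][left + dx] = px
    return out

frame = [["."] * 5 for _ in range(3)]       # dynamic real-scene image R1
ruin  = [["R", "R"], ["R", None]]           # substitute object V11 (damaged building)
v = composite(frame, ruin, top=1, left=2)   # real-time dynamic image V
for row in v:
    print("".join(row))
```

Only the patch region changes, which is why a viewer still perceives the surrounding street as live reality.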
Step 48: the processor 21 transmits the real-time dynamic image V to the display interface 11 through the communication module 22 and the transmission module 13, so that the real-time dynamic image V is displayed on the display interface 11 for a predetermined playback time.
Thereby, what the display interface 11 now shows is not the dynamic real-scene image R1 captured by the rear camera lens 121, but the real-time dynamic image V merged with the virtual image V1. Therefore, a user standing in front of the display interface 11 can see the building being attacked by the flying saucer and collapsing.
Step 49: the processor 21 transmits the virtual image V1 of step 45 to the display interface 11 through the communication module 22, so that the virtual image V1 is displayed on the display interface 11 for a predetermined playback time.
For example, the aforesaid non-destructive virtual image V1 is predefined as presenting a flying saucer scouting and shuttling between buildings. Since the building does not need to be altered, the processor 21 transmits the virtual image V1 directly to the display interface 11.
Referring to Fig. 6, after the predetermined playback time of the aforesaid virtual image V1 or the aforesaid real-time dynamic image V ends, or during playback, an interactive mode can also be entered, with the following steps:
Step 50: a real-scene image R2 is received through the communication module 22. The real-scene image R2 comes from the scene in front of the display interface 11 captured by the front camera lens 122, and is output by the transmission module 13.
Step 51: according to the predefined secondary target objects 32, the processor 21 recognizes whether the real-scene image R2 contains a secondary object 82 matching one of the secondary target objects 32; if so, it proceeds to step 52; if not, it returns to step 50.
Likewise, if the secondary target object 32 is a 3D object, then even though the aforesaid real-scene image R2 presents the scene in 2D, no matter at what viewing angle or position the secondary object 82 actually appears, the processor 21 can judge whether the secondary object 82 matches any angle of the aforesaid secondary target object 32. Of course, if the secondary target object 32 is an image or a film, secondary target objects 32 covering different viewing angles can be predefined, whereby it can likewise be recognized whether a secondary object 82 matching an angle of the aforesaid predefined secondary target objects 32 appears in the aforesaid real-scene image R2.
In addition, in the present embodiment, the processor 21 performs edge detection on the real-scene image R2 to locate the secondary object 82.
Step 52: the processor 21 calls the predefined virtual image group VG associated with the secondary target object 32.
Step 53: according to the motion of the secondary object 82, the virtual image V1 associated with the motion of the aforesaid secondary object 82 is selected.
Step 54: the processor 21 transmits the virtual image V1 of step 53 to the display interface 11 through the communication module 22, so that the virtual image V1 is displayed on the display interface 11 for a predetermined playback time.
For example, referring to Fig. 7, the secondary target object 32 is predefined as a face. When a face is recognized in the aforesaid real-scene image R2, the processor 21 enters the virtual image group VG corresponding to the face according to the secondary target object 32. The virtual image group VG corresponding to the face includes various virtual images V1 relating to an alien, and the processor 21 can then select the corresponding virtual image V1 according to the motion of the secondary object 82. For example, the alien (i.e., virtual image V1) appears on the display interface 11, plays rock-paper-scissors with a passerby, and reacts differently according to whether it wins or loses, thus providing interactive enjoyment.
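The interactive example boils down to: recognize the passerby's gesture (the motion of secondary object 82), resolve the round, and pick the alien's reaction clip by outcome. A sketch with illustrative clip names — the patent does not specify how outcomes map to reactions:

```python
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def resolve_round(alien_move, passerby_move):
    """Return the alien's outcome for one round of rock-paper-scissors."""
    if alien_move == passerby_move:
        return "draw"
    return "win" if BEATS[alien_move] == passerby_move else "lose"

# virtual images (reaction clips) selected per outcome, as in step 53
REACTIONS = {"win": "alien_celebrates", "lose": "alien_sulks",
             "draw": "alien_shrugs"}

def pick_reaction(alien_move, passerby_move):
    return REACTIONS[resolve_round(alien_move, passerby_move)]

print(pick_reaction("rock", "scissors"))  # alien_celebrates
print(pick_reaction("rock", "paper"))     # alien_sulks
```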
It is worth explaining that the aforesaid display interface 11 can also be a kind of transparent display, i.e., a see-through object. In that case, except for what is stated in steps 47 and 48, the dynamic real-scene image R1 captured by the rear camera lens 121 need only be sent to the server host 2 for determining whether there is a primary object 81 matching the primary target object 31, and need not be delivered to the display interface 11. Since a person with ordinary skill in the art can deduce further details from the above description, they are not elaborated here.
In addition, the present invention can also omit the rear camera lens 121 or the front camera lens 122, and use only the real-scene image R2 or the dynamic real-scene image R1 as the basis of judgment to determine the virtual image V1 to display. Since a person with ordinary skill in the art can deduce further details from the above description, they are not elaborated here.
From the above description, the advantages of the foregoing embodiment can be summarized as follows:
1. The present invention can present virtual reality in any physical space without VR glasses and without restrictions on angle or position; its applications are therefore no longer limited, and it can be popularized in daily life.
2. The present invention can make use of the environment in which the display interface 11 is installed, together with the invention's particular imaging method, the skillful combination of the dynamic real-scene image R1 and the virtual image V1, and the interactive effects, to give users the illusion of seeing through the display interface 11 to the scenery behind it, so that they cannot tell the virtual image V1 from reality, thereby enhancing the simulated effect.
The above are merely embodiments of the present invention and cannot be used to limit the scope of implementation of the present invention; all simple equivalent changes and modifications made according to the claims and the description of the present invention still fall within the scope of the present invention.
Claims (16)
1. A mixed-reality interaction method, implemented by an interaction system, the interaction system comprising a processor for executing at least one application program, a display interface that presents, from a viewing angle, the objects in a physical space, and an image capture module that captures the aforesaid scene as a dynamic real-scene image, characterized in that the mixed-reality interaction method is executed by the processor through the aforesaid application program and comprises the following steps:
Step a: receive the dynamic real-scene image;
Step b: according to several predefined primary target objects covering different viewing angles, recognize whether the aforesaid dynamic real-scene image contains a primary object matching one of the primary target objects; if so, proceed to step c; if not, return to step a;
Step c: call at least one predefined virtual image associated with the primary target object; and
Step d: display the virtual image on the display interface.
2. The mixed-reality interaction method according to claim 1, characterized in that: the display interface is a see-through object, and the objects in the physical space are not displayed on the display interface.
3. The mixed-reality interaction method according to claim 1 or 2, characterized in that: the dynamic real-scene image of step a comes from the scene in front of the display interface.
4. The mixed-reality interaction method according to claim 1, characterized in that: there are several virtual images in step c, classified as destructive virtual images and non-destructive virtual images, and step d comprises:
Step d1: determine whether the virtual image of step c is a destructive virtual image; if so, proceed to step d3; if not, proceed to step d5;
Step d3: merge the virtual image and the dynamic real-scene image into a real-time dynamic image;
Step d4: display the real-time dynamic image on the display interface; and
Step d5: display the virtual image on the display interface.
5. The mixed-reality interaction method according to claim 4, characterized in that: the virtual image of step d3 includes a substitute object built according to the appearance of the primary object, and at least one virtual object.
6. The mixed-reality interaction method according to claim 1, characterized in that step c comprises:
Step c1: call at least one predefined virtual image group associated with the primary target object, the virtual image group including several virtual images; and
Step c2: according to a control variable, select the virtual image associated with that control variable.
7. The interactive method for mixed reality according to claim 6, characterized in that: the control variable is at least one of weather, temperature, humidity, noise level, and air pollution index, and the control variable is obtained by the processor via the Internet or via at least one sensing module.
8. The interactive method for mixed reality according to claim 1, characterized in that: the display interface is a display screen, and the dynamic real-scene image of step a comes from the scene behind the display interface and is synchronously displayed on the display interface.
9. The interactive method for mixed reality according to claim 8, characterized in that it further comprises:
Step e: receive a real-scene image, the real-scene image coming from the scene in front of the display interface;
Step f: according to several predefined secondary target objects that include different view angles, recognize whether a secondary object matching one of the secondary target objects appears in the real-scene image; if so, proceed to step g; if not, return to step e;
Step g: call at least one predefined secondary virtual image group associated with the secondary target object, the virtual image group comprising several virtual images;
Step h: according to the motion of the secondary object, select the virtual image associated with the motion of the aforementioned secondary object; and
Step i: display the virtual image on the display interface.
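Steps e–i can be sketched as follows. The two gestures, the toy motion classifier, and the image names are all hypothetical; the claim only requires that the displayed virtual image be selected according to the secondary object's motion:

```python
# Hypothetical sketch of claim 9, steps e-i: track a secondary object in the
# front-camera real-scene image and pick the virtual image keyed to its motion.
MOTION_GROUP = {"wave": "greeting_effect", "point": "highlight_effect"}

def classify_motion(positions):
    """Toy classifier: a large horizontal range that returns near its start
    counts as a wave; otherwise treat the gesture as pointing."""
    xs = [p[0] for p in positions]
    return "wave" if max(xs) - min(xs) > 2 * abs(xs[-1] - xs[0]) else "point"

def select_for_motion(positions, group=MOTION_GROUP):
    return group.get(classify_motion(positions))    # steps g-h
```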
10. The interactive method for mixed reality according to claim 9, characterized in that: the main target objects of step b and the secondary target objects of step f are each one of an image, a dynamic image, and a 3D object; when a main target object is a 3D object, it is used to recognize whether a main object matching the main target object at any angle appears in the aforementioned dynamic real-scene image; and when a secondary target object is a 3D object, it is used to recognize whether a secondary object matching the secondary target object at any angle appears in the aforementioned real-scene image.
11. An interaction system for mixed reality, characterized in that it comprises:
a mixed reality device, including a display interface and an image acquisition module that captures a scene as a dynamic real-scene image, the display interface showing objects in a physical space from a view angle; and
a processor that, according to several predefined main target objects that include different view angles, recognizes the dynamic real-scene image, and, when a main object matching one of the main target objects appears in the dynamic real-scene image, calls at least one predefined virtual image associated with the main target object and displays the virtual image on the display interface.
12. The interaction system for mixed reality according to claim 11, characterized in that: the display interface is one of a see-through object and a display screen; when the display interface is a see-through object, objects in the physical space are not imaged on the display interface; and when the display interface is a display screen, the dynamic real-scene image is synchronously displayed on the display interface.
13. The interaction system for mixed reality according to claim 11, characterized in that it further comprises at least one sensing module, the sensing module being used to detect one of weather, temperature, humidity, noise level, and air pollution index; the processor further selects, according to a control variable, the virtual image associated with the control variable, the control variable being at least one of the aforementioned weather, temperature, humidity, noise level, and air pollution index.
14. The interaction system for mixed reality according to claim 11, characterized in that: the image acquisition module includes a rear camera lens that captures objects behind the display interface as the dynamic real-scene image, and a front camera lens that captures objects in front of the display interface as a real-scene image; the processor, according to several predefined secondary target objects that include different view angles, recognizes the real-scene image, and, when a secondary object matching one of the secondary target objects appears in the real-scene image, selects, according to the motion of the secondary object, the virtual image associated with the motion of the aforementioned secondary object.
15. The interaction system for mixed reality according to claim 14, characterized in that: the aforementioned virtual images are respectively annotated as damage-type virtual images and non-damage-type virtual images, a damage-type virtual image comprising at least one replacement object built according to the appearance of the main object, and at least one virtual article; when the processor determines that the virtual image corresponding to the main target object is a damage-type virtual image, it further merges the virtual image and the dynamic real-scene image into an instant dynamic image and displays the instant dynamic image on the display interface; and when the processor determines that the virtual image corresponding to the main target object is a non-damage-type virtual image, it directly displays the virtual image on the display interface.
16. The interaction system for mixed reality according to claim 15, characterized in that it further comprises a server host arranged at a remote end; the mixed reality device further includes a transport module connected to the Internet, the transport module being used to output the dynamic real-scene image and the real-scene image and to receive the instant dynamic image; the server host includes the processor, a communication module connected to the Internet and communicating with the transport module of the mixed reality device, and a storage medium; the communication module is used to receive the dynamic real-scene image and the real-scene image and to export the instant dynamic image, and the storage medium is used to store the main target objects, the secondary target objects, and the virtual images.
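The device/server round trip of claim 16 can be sketched as message passing, with in-process queues standing in for the Internet link (a hypothetical model; the claim does not specify a transport protocol):

```python
# Hypothetical sketch of the claim 16 round trip: the mixed reality device
# uploads its camera images, the remote server host composes the instant
# dynamic image, and the device receives the result for display.
import queue

uplink, downlink = queue.Queue(), queue.Queue()

def device_send(dynamic_image, real_image):
    """Transport module: output both camera images to the server."""
    uplink.put({"dynamic": dynamic_image, "real": real_image})

def server_step(compose):
    """Communication module receives; processor composes; result is exported."""
    msg = uplink.get()
    downlink.put(compose(msg["dynamic"]))

def device_receive():
    """Transport module: receive the instant dynamic image."""
    return downlink.get()
```

Offloading the compositing to a remote server keeps the device itself thin, at the cost of a network round trip per frame.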
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610402936.6A CN107481323A (en) | 2016-06-08 | 2016-06-08 | Interactive method for mixed reality and system thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107481323A true CN107481323A (en) | 2017-12-15 |
Family
ID=60594610
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610402936.6A Pending CN107481323A (en) | Interactive method for mixed reality and system thereof | 2016-06-08 | 2016-06-08 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107481323A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110517580A (en) * | 2019-07-26 | 2019-11-29 | 北京林业大学 | Reproduction-type interactive landscape system for the reconstruction of post-industrial resource landscapes |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1728780A (en) * | 2004-07-29 | 2006-02-01 | 台湾梦工场科技股份有限公司 | Method and system of instant interactive images and sounds |
CN102859991A (en) * | 2010-04-06 | 2013-01-02 | 阿尔卡特朗讯 | A Method Of Real-time Cropping Of A Real Entity Recorded In A Video Sequence |
CN103809744A (en) * | 2012-11-06 | 2014-05-21 | 索尼公司 | Image display device, image display method, and computer program |
CN104346612A (en) * | 2013-07-24 | 2015-02-11 | 富士通株式会社 | Information processing apparatus, and displaying method |
CN104346081A (en) * | 2013-07-25 | 2015-02-11 | 邱美虹 | Augmented reality learning system and method thereof |
CN104487915A (en) * | 2012-07-26 | 2015-04-01 | 高通股份有限公司 | Maintaining continuity of augmentations |
CN105095850A (en) * | 2014-05-13 | 2015-11-25 | 林纪忠 | Intelligent community augmented reality system |
CN205158392U (en) * | 2015-11-23 | 2016-04-13 | 创意点子数位股份有限公司 | Object identification system for dynamic images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20171215 |