CN106896923A - Virtual interacting method and device - Google Patents
Virtual interacting method and device
- Publication number
- CN106896923A CN106896923A CN201710201755.1A CN201710201755A CN106896923A CN 106896923 A CN106896923 A CN 106896923A CN 201710201755 A CN201710201755 A CN 201710201755A CN 106896923 A CN106896923 A CN 106896923A
- Authority
- CN
- China
- Prior art keywords
- virtual object
- environment object
- interaction
- dynamic picture
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present invention relates to a virtual interaction method and device. The method includes: generating a virtual object; obtaining the real picture from a camera; recognizing, in the real picture, environment objects that can interact with the virtual object; generating a dynamic picture in which the virtual object interacts with the environment objects; and causing the virtual object to interact with the environment objects according to the dynamic picture. After the virtual object is generated, the invention further identifies, from the real picture in the camera, environment objects that can interact with it, and generates a dynamic picture of the two interacting, so that the virtual object interacts with the environment objects according to the dynamic picture. The virtual object's presentation thus becomes more dynamic, improving the sense of interaction.
Description
Technical field
The present invention relates to the field of AR technology, and in particular to a virtual interaction method and device.
Background technology
With the development of artificial intelligence, the emergence of AR (Augmented Reality) technology has changed people's two-dimensional perception of objects: through AR, electronic equipment can present objects directly in stereoscopic form, giving people a deeper visual and tactile experience.
Usually, after an electronic device such as a mobile device opens its camera and starts AR, only the real picture from the camera can be watched. At present, a virtual object generated after starting AR cannot interact with the real picture in the camera, so the virtual object looks stiff in the image and its interactivity is low.
Summary of the invention
In view of this, a virtual interaction method and device are provided to address the problem that a generated virtual object cannot interact with the real picture in the camera.
A virtual interaction method, the method comprising:
generating a virtual object;
obtaining the real picture from a camera;
recognizing, in the real picture, environment objects that can interact with the virtual object;
generating a dynamic picture in which the virtual object interacts with the environment objects;
causing the virtual object to interact with the environment objects according to the dynamic picture.
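The five steps above can be sketched as a minimal pipeline. All function names below are illustrative assumptions; the patent describes the steps abstractly and prescribes no concrete API.

```python
# Minimal sketch of the claimed five-step pipeline; every name is invented.
def virtual_interaction(generate, capture, recognize, pick_animation, play):
    """Run the five claimed steps in order and return the played result."""
    virtual_object = generate()                               # step 1: generate virtual object
    real_frame = capture()                                    # step 2: real picture from camera
    env_objects = recognize(real_frame, virtual_object)       # step 3: interactable environment objects
    animation = pick_animation(virtual_object, env_objects)   # step 4: dynamic picture
    return play(virtual_object, env_objects, animation)       # step 5: interact per the dynamic picture

# Toy stand-ins, just to show the control flow:
frames = virtual_interaction(
    generate=lambda: "fish",
    capture=lambda: ["water", "cup"],
    recognize=lambda frame, vo: [o for o in frame if o == "water"],
    pick_animation=lambda vo, objs: f"{vo}_swims" if objs else None,
    play=lambda vo, objs, anim: [anim] * 3,
)
```

With these stand-ins, `frames` holds three repetitions of the "fish_swims" dynamic picture.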
In one embodiment, recognizing environment objects in the real picture that can interact with the virtual object comprises:
detecting whether the name of an environment object in the real picture is a preset object name;
if so, judging that the environment object can interact with the virtual object.
In one embodiment, generating the dynamic picture in which the virtual object interacts with the environment objects comprises:
detecting whether a dynamic picture of the virtual object interacting with the environment objects is stored;
if so, extracting the dynamic picture.
In one embodiment, the dynamic picture is a chain of interaction instructions, and causing the virtual object to interact with the environment objects according to the dynamic picture specifically comprises:
controlling the virtual object to interact with the environment objects according to the interaction instructions.
In one embodiment, causing the virtual object to interact with the environment objects according to the dynamic picture specifically comprises:
causing the virtual object to interact with the environment objects according to the dynamic picture within the real picture in the camera.
A virtual interaction device, the device comprising:
a virtual module for generating a virtual object;
an acquisition module for obtaining the real picture from a camera;
an identification module for recognizing environment objects in the real picture that can interact with the virtual object;
a picture generation module for generating the dynamic picture in which the virtual object interacts with the environment objects;
an interaction module for causing the virtual object to interact with the environment objects according to the dynamic picture.
In one embodiment, the identification module comprises:
a name detection module for detecting whether the name of an environment object in the real picture is a preset object name;
a judgment module for judging, when the name detection module detects that the name of an environment object in the real picture is a preset object name, that the environment object can interact with the virtual object.
In one embodiment, the picture generation module comprises:
a picture detection module for detecting whether a dynamic picture of the virtual object interacting with the environment objects is stored;
an extraction module for extracting the dynamic picture when the picture detection module detects that such a dynamic picture is stored.
In one embodiment, the dynamic picture is a chain of interaction instructions, and the interaction module is further configured to control the virtual object to interact with the environment objects according to the interaction instructions.
In one embodiment, the interaction module is further configured to cause the virtual object to interact with the environment objects according to the dynamic picture within the real picture in the camera.
With the above virtual interaction method and device, after the virtual object is generated, environment objects that can interact with it are further identified from the real picture in the camera, and a dynamic picture of the two interacting is generated, so that the virtual object interacts with the environment objects according to the dynamic picture. The virtual object's presentation thus becomes more dynamic, improving the sense of interaction.
Brief description of the drawings
Fig. 1 is a flow chart of the virtual interaction method of an embodiment;
Fig. 2 is a structural diagram of the virtual interaction device of an embodiment.
Specific embodiment
To make the purpose, technical scheme, and advantages of the present invention clearer, the present invention is further described below in conjunction with the drawings and embodiments. It should be appreciated that the specific embodiments described here are merely illustrative of the present invention and are not intended to limit it.
Various electronic devices such as mobile terminals and tablet computers now generally support AR technology: after an AR program is started, a specific picture can be turned into a stereoscopic image. For example, the front of an AR card generally carries a character image, and the back an FM code image; after the AR scanning software on the electronic device is started, scanning the FM code can bring up the 3D stereoscopic image of the character on the front. This is one application of this embodiment. It should be pointed out that this embodiment also covers virtual objects generated by the various AR programs installed on the electronic device; obviously, the virtual object includes the 3D stereoscopic image of the character above as well as various other generated virtual objects.
As shown in Fig. 1, the virtual interaction method of this embodiment includes steps S120 to S180.
Step S120: obtain the real picture from the camera.
When the camera of the electronic device is opened, the real picture appears in the camera. Neither the angle nor the position of the camera is limited; the real picture only needs to allow the objects in it to be recognized.
Step S140: recognize environment objects in the real picture that can interact with the virtual object.
Step S140 includes steps S141 and S142.
Step S141: detect whether the name of an environment object in the real picture is a preset object name.
A generated virtual object can only interact with corresponding environment objects. For example, if the virtual object is a person, the person has various actions such as sitting and lying, and can interact with environment objects such as the ground, a wall, or a desk; the person can also pick up a cup to drink or pick up a book to read, all of which satisfy the interaction condition. But if the virtual object is a fish, it can interact with water and appear to swim, yet it cannot interact with a cup or a book. In this embodiment, the names of the environment objects that can interact with each virtual object are pre-stored in the electronic device, so detecting whether the name of an environment object in the real picture is a preset object name determines whether that environment object can interact with the virtual object.
Step S142: if so, judge that the environment object can interact with the virtual object.
If the name of an environment object in the real picture is a preset object name, the environment object can interact with the virtual object.
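Steps S141 and S142 amount to a lookup against pre-stored names. The sketch below is a hypothetical illustration: the table contents merely mirror the person/fish examples above, and none of the names come from the patent itself.

```python
# Hypothetical sketch of steps S141/S142: the electronic device pre-stores,
# per virtual object, the names of environment objects it may interact with.
PRESET_INTERACTABLE = {
    "person": {"ground", "wall", "desk", "cup", "book"},
    "fish": {"water"},
}

def can_interact(virtual_object, env_object_name):
    """S141/S142: True iff env_object_name is a preset object name for virtual_object."""
    return env_object_name in PRESET_INTERACTABLE.get(virtual_object, set())
```

Under this table, a fish interacts with water but not with a cup, matching the example in the text.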
Step S160: generate the dynamic picture in which the virtual object interacts with the environment object.
Step S160 includes steps S161 and S162.
Step S161: detect whether a dynamic picture of the virtual object interacting with the environment object is stored.
In this embodiment, the electronic device pre-stores dynamic pictures of various virtual objects interacting with environment objects. For example, for the virtual object fish and the environment object water, the fish may swim in the water or leap out of it. Obviously, a virtual object may have multiple dynamic pictures with an environment object, and the multiple interaction pictures can be named after their interaction content. Using the correspondence between interaction names, virtual objects, and environment objects, the electronic device can search by interaction name for stored interaction pictures; if any exist, their interaction names are shown on the display screen of the electronic device, and the user can select the corresponding dynamic picture by its interaction name.
Step S162: if so, extract the dynamic picture.
If a corresponding dynamic picture is detected, it is simply extracted and read into the memory of the electronic device to await execution.
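Steps S161 and S162 can be sketched as a keyed lookup over a store of named animations. The store contents and function names below are invented for illustration; the patent only requires that such a correspondence exists.

```python
# Hypothetical sketch of steps S161/S162: stored dynamic pictures are keyed by
# (virtual object, environment object) and named by their interaction content.
ANIMATION_STORE = {
    ("fish", "water"): {"swim": "fish_swim.anim", "leap": "fish_leap.anim"},
}

def find_animations(virtual_object, env_object):
    """S161: return the stored interaction-name -> animation mapping, empty if none."""
    return ANIMATION_STORE.get((virtual_object, env_object), {})

def extract_animation(virtual_object, env_object, interaction):
    """S162: load the dynamic picture selected by its interaction name (here, return its id)."""
    return find_animations(virtual_object, env_object).get(interaction)
```

The interaction names returned by `find_animations` are what the text describes being shown on the display screen for the user to choose from.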
Step S180: the virtual object interacts with the environment object according to the dynamic picture.
The dynamic picture can be a chain of interaction instructions, and the virtual object can be controlled to interact with the environment object according to these instructions. For example, for the virtual object fish and the environment object water, if the dynamic picture shows the fish swimming, the interaction instructions may be control instructions that swing the fish's tail fin left and right or up and down, and may also include control instructions that ripple the water surface. The interaction instructions may execute in sequence or in parallel. Electronic devices generally support parallel threads, so they typically support executing the interaction instructions either simultaneously or one by one. As described above, executing the interaction instructions can make the fish move about freely in the water.
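The sequential-versus-parallel execution of the instruction chain can be sketched with threads. This is a hypothetical illustration: the instruction names ("tail_left_right", "water_ripple") are invented, and real instruction execution would drive rendering rather than return strings.

```python
# Hypothetical sketch of step S180: a dynamic picture carries a chain of
# interaction instructions that can run one by one or in parallel threads.
import threading

def run_instructions(instructions, parallel=False):
    """Execute a list of instruction callables sequentially or concurrently."""
    results = []
    if parallel:
        lock = threading.Lock()

        def worker(fn):
            out = fn()
            with lock:  # guard the shared result list across threads
                results.append(out)

        threads = [threading.Thread(target=worker, args=(fn,)) for fn in instructions]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
    else:
        results = [fn() for fn in instructions]
    return results

# Two instructions that together animate "fish swims in water":
swim_instructions = [lambda: "tail_left_right", lambda: "water_ripple"]
```

Sequential execution preserves instruction order; parallel execution completes all instructions but may interleave them, which is why the text notes both modes are supported.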
In this embodiment, the virtual object interacts with the environment object according to the dynamic picture directly within the real picture in the camera. Generally, the dynamic picture of the virtual object interacting with the environment object could also be displayed directly on the display screen of the electronic device, but rendering the dynamic picture within the camera's real picture gives a stronger, more intuitive stereoscopic effect.
After the virtual object is generated, the above virtual interaction method further recognizes, from the real picture in the camera, environment objects that can interact with the virtual object, and generates a dynamic picture of the two interacting, so that the virtual object interacts with the environment objects according to the dynamic picture. The virtual object's presentation thus becomes more dynamic, improving the sense of interaction.
As shown in Fig. 2, this embodiment also provides a virtual interaction device, which includes a virtual module 110, an acquisition module 120, an identification module 130, a picture generation module 140, and an interaction module 150.
Virtual module 110 is used to generate the virtual object.
Various electronic devices such as mobile terminals and tablet computers now generally support AR technology: after an AR program is started, a specific picture can be turned into a stereoscopic image. For example, the front of an AR card generally carries a character image, and the back an FM code image; after the AR scanning software on the electronic device is started, scanning the FM code can bring up the 3D stereoscopic image of the character on the front. This is one application of this embodiment. It should be pointed out that this embodiment also covers virtual objects generated by the various AR programs installed on the electronic device; obviously, the virtual object includes the 3D stereoscopic image of the character above as well as various other generated virtual objects. The virtual module 110 can be an AR software program or the like.
Acquisition module 120 is used to obtain the real picture from the camera.
When the camera of the electronic device is opened, the real picture appears in the camera. Neither the angle nor the position of the camera is limited; the real picture only needs to allow the objects in it to be recognized.
Identification module 130 is used to recognize environment objects in the real picture that can interact with the virtual object.
Specifically, the identification module 130 includes a name detection module 131 and a judgment module 132.
Name detection module 131 is used to detect whether the name of an environment object in the real picture is a preset object name.
A generated virtual object can only interact with corresponding environment objects. For example, if the virtual object is a person, the person has various actions such as sitting and lying, and can interact with environment objects such as the ground, a wall, or a desk; the person can also pick up a cup to drink or pick up a book to read, all of which satisfy the interaction condition. But if the virtual object is a fish, it can interact with water and appear to swim, yet it cannot interact with a cup or a book. In this embodiment, the names of the environment objects that can interact with each virtual object are pre-stored in the electronic device, so detecting whether the name of an environment object in the real picture is a preset object name determines whether that environment object can interact with the virtual object.
Judgment module 132 is used to judge, when the name detection module detects that the name of an environment object in the real picture is a preset object name, that the environment object can interact with the virtual object.
If the name of an environment object in the real picture is a preset object name, the environment object can interact with the virtual object.
Picture generation module 140 is used to generate the dynamic picture in which the virtual object interacts with the environment object.
The picture generation module 140 includes a picture detection module 141 and an extraction module 142.
Picture detection module 141 is used to detect whether a dynamic picture of the virtual object interacting with the environment object is stored.
In this embodiment, the electronic device pre-stores dynamic pictures of various virtual objects interacting with environment objects. For example, for the virtual object fish and the environment object water, the fish may swim in the water or leap out of it. Obviously, a virtual object may have multiple dynamic pictures with an environment object, and the multiple interaction pictures can be named after their interaction content. Using the correspondence between interaction names, virtual objects, and environment objects, the electronic device can search by interaction name for stored interaction pictures; if any exist, their interaction names are shown on the display screen of the electronic device, and the user can select the corresponding dynamic picture by its interaction name.
Extraction module 142 is used to extract the dynamic picture when the picture detection module detects that a dynamic picture of the virtual object interacting with the environment object is stored.
If a corresponding dynamic picture is detected, it is simply extracted and read into the memory of the electronic device to await execution.
Interaction module 150 is used to cause the virtual object to interact with the environment object according to the dynamic picture.
The dynamic picture can be a chain of interaction instructions, and the interaction module 150 can control the virtual object to interact with the environment object according to these instructions. For example, for the virtual object fish and the environment object water, if the dynamic picture shows the fish swimming in the water, the interaction instructions may be control instructions that swing the fish's tail fin left and right or up and down, and may also include control instructions that ripple the water surface. The interaction instructions may execute in sequence or in parallel. Electronic devices generally support parallel threads, so they typically support executing the interaction instructions either simultaneously or one by one. As described above, executing the interaction instructions can make the fish move about freely in the water.
In this embodiment, the interaction module 150 causes the virtual object to interact with the environment object according to the dynamic picture directly within the real picture in the camera. Generally, the dynamic picture of the virtual object interacting with the environment object could also be displayed directly on the display screen of the electronic device, but rendering the dynamic picture within the camera's real picture gives a stronger, more intuitive stereoscopic effect.
After the virtual object is generated, the above virtual interaction device further recognizes, from the real picture in the camera, environment objects that can interact with the virtual object, and generates a dynamic picture of the two interacting, so that the virtual object interacts with the environment objects according to the dynamic picture. The virtual object's presentation thus becomes more dynamic, improving the sense of interaction.
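The device's module structure can be sketched as a simple composition of the five modules. Class, method, and parameter names below are illustrative assumptions, not from the patent.

```python
# Hypothetical sketch of the claimed device: five modules composed into one object.
class VirtualInteractionDevice:
    def __init__(self, virtual, acquire, identify, pictures, interact):
        self.virtual = virtual    # virtual module 110: generates the virtual object
        self.acquire = acquire    # acquisition module 120: real picture from the camera
        self.identify = identify  # identification module 130: interactable environment objects
        self.pictures = pictures  # picture generation module 140: dynamic picture
        self.interact = interact  # interaction module 150: plays the interaction

    def run(self):
        vo = self.virtual()
        frame = self.acquire()
        objs = self.identify(frame, vo)
        anim = self.pictures(vo, objs)
        return self.interact(vo, objs, anim)

# Toy stand-ins for each module, to show how the composition wires together:
device = VirtualInteractionDevice(
    virtual=lambda: "fish",
    acquire=lambda: ["water"],
    identify=lambda frame, vo: frame,
    pictures=lambda vo, objs: "swim",
    interact=lambda vo, objs, anim: (vo, objs, anim),
)
```

Each module maps one-to-one onto a step of the method, which is why the device description mirrors the method description above.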
The technical features of the embodiments above can be combined arbitrarily. For brevity, not all possible combinations of these technical features have been described; however, as long as a combination contains no contradiction, it should be considered within the scope of this specification.
The embodiments above express only several implementations of the invention, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the invention, and all of these fall within the protection scope of the invention. Therefore, the protection scope of this patent shall be determined by the appended claims.
Claims (10)
1. A virtual interaction method, characterized in that the method comprises:
generating a virtual object;
obtaining the real picture from a camera;
recognizing, in the real picture, environment objects that can interact with the virtual object;
generating a dynamic picture in which the virtual object interacts with the environment objects;
causing the virtual object to interact with the environment objects according to the dynamic picture.
2. The method according to claim 1, characterized in that recognizing environment objects in the real picture that can interact with the virtual object comprises:
detecting whether the name of an environment object in the real picture is a preset object name;
if so, judging that the environment object can interact with the virtual object.
3. The method according to claim 1, characterized in that generating the dynamic picture in which the virtual object interacts with the environment objects comprises:
detecting whether a dynamic picture of the virtual object interacting with the environment objects is stored;
if so, extracting the dynamic picture.
4. The method according to claim 1, characterized in that the dynamic picture is a chain of interaction instructions, and causing the virtual object to interact with the environment objects according to the dynamic picture specifically comprises:
controlling the virtual object to interact with the environment objects according to the interaction instructions.
5. The method according to claim 1, characterized in that causing the virtual object to interact with the environment objects according to the dynamic picture specifically comprises:
causing the virtual object to interact with the environment objects according to the dynamic picture within the real picture in the camera.
6. A virtual interaction device, characterized in that the device comprises:
a virtual module for generating a virtual object;
an acquisition module for obtaining the real picture from a camera;
an identification module for recognizing environment objects in the real picture that can interact with the virtual object;
a picture generation module for generating the dynamic picture in which the virtual object interacts with the environment objects;
an interaction module for causing the virtual object to interact with the environment objects according to the dynamic picture.
7. The device according to claim 6, characterized in that the identification module comprises:
a name detection module for detecting whether the name of an environment object in the real picture is a preset object name;
a judgment module for judging, when the name detection module detects that the name of an environment object in the real picture is a preset object name, that the environment object can interact with the virtual object.
8. The device according to claim 6, characterized in that the picture generation module comprises:
a picture detection module for detecting whether a dynamic picture of the virtual object interacting with the environment objects is stored;
an extraction module for extracting the dynamic picture when the picture detection module detects that such a dynamic picture is stored.
9. The device according to claim 6, characterized in that the dynamic picture is a chain of interaction instructions, and the interaction module is further configured to control the virtual object to interact with the environment objects according to the interaction instructions.
10. The device according to claim 6, characterized in that the interaction module is further configured to cause the virtual object to interact with the environment objects according to the dynamic picture within the real picture in the camera.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710201755.1A CN106896923A (en) | 2017-03-30 | 2017-03-30 | Virtual interacting method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106896923A true CN106896923A (en) | 2017-06-27 |
Family
ID=59193032
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710201755.1A Pending CN106896923A (en) | 2017-03-30 | 2017-03-30 | Virtual interacting method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106896923A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108958480A (en) * | 2018-06-19 | 2018-12-07 | 北京科技大学 | A kind of method and system controlling virtual sea biology |
CN109917989A (en) * | 2019-03-06 | 2019-06-21 | 珠海金山网络游戏科技有限公司 | A kind of method and device that realizing virtual page turning calculates equipment and storage medium |
CN110443873A (en) * | 2019-08-12 | 2019-11-12 | 苏州悠优互娱文化传媒有限公司 | A kind of children's book equipped AR scene shows method, apparatus, storage medium |
CN110458928A (en) * | 2019-08-12 | 2019-11-15 | 苏州悠优互娱文化传媒有限公司 | AR animation producing method, device, medium based on unity3d |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103995584A (en) * | 2014-04-29 | 2014-08-20 | 深圳超多维光电子有限公司 | Three-dimensional interactive method, display device, handling rod and system |
CN105005145A (en) * | 2015-08-03 | 2015-10-28 | 众景视界(北京)科技有限公司 | Intelligent glasses and external equipment of intelligent glasses |
US20170053449A1 (en) * | 2015-08-19 | 2017-02-23 | Electronics And Telecommunications Research Institute | Apparatus for providing virtual contents to augment usability of real object and method using the same |
CN106485782A (en) * | 2016-09-30 | 2017-03-08 | 珠海市魅族科技有限公司 | Method and device that a kind of reality scene is shown in virtual scene |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20170627 |