CN106157262A - Augmented reality processing method, device and mobile terminal - Google Patents
Augmented reality processing method, device and mobile terminal
- Publication number
- CN106157262A CN106157262A CN201610507252.2A CN201610507252A CN106157262A CN 106157262 A CN106157262 A CN 106157262A CN 201610507252 A CN201610507252 A CN 201610507252A CN 106157262 A CN106157262 A CN 106157262A
- Authority
- CN
- China
- Prior art keywords
- user
- augmented reality
- feature
- real world
- characteristic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
An embodiment of the invention discloses an augmented reality processing method, device, and mobile terminal. The method includes: identifying a preset user in a real-world image captured by a camera; obtaining augmented reality content associated with the user according to the user's current performance features; and compositing the augmented reality content with the real-world image. The disclosed method, device, and mobile terminal can eliminate flaws in a captured image, allowing the user to quickly obtain a satisfactory photograph.
Description
Technical field
The embodiments of the present invention relate to the field of augmented reality, and in particular to an augmented reality processing method, device, and mobile terminal.
Background technology
Augmented reality (AR) superimposes information that is difficult to experience within a certain time and spatial range of the real world (such as visual information, sound, taste, and touch) onto the real environment, where it can be perceived by the human senses, thereby enhancing the user's perception of the real environment and deepening the sense of immersion.
The camera functions of mobile phones and other intelligent terminals have become increasingly mature, and users photograph with their phones ever more frequently; augmented reality functions are also gradually being integrated into such terminals.
During conventional photographing, the captured subject often shows a stiff expression or awkward movement, and several shots are needed before a satisfactory image is obtained.
Summary of the invention
The embodiments of the present invention propose an augmented reality processing method, device, and mobile terminal, in order to eliminate flaws in captured images and allow users to quickly obtain satisfactory photographs.
In a first aspect, an embodiment of the present invention provides an augmented reality processing method, including:
identifying a preset user in a real-world image captured by a camera;
obtaining augmented reality content associated with the user according to the user's current performance features; and
compositing the augmented reality content with the real-world image.
In a second aspect, an embodiment of the present invention further provides an augmented reality processing device, including:
a user identification unit, configured to identify a preset user in a real-world image captured by a camera;
an augmented reality content acquisition unit, configured to obtain augmented reality content associated with the user according to the user's current performance features; and
an enhanced image generation unit, configured to composite the augmented reality content with the real-world image.
In a third aspect, an embodiment of the present invention further provides a mobile terminal including the augmented reality processing device of the second aspect.
With the augmented reality processing method, device, and mobile terminal provided by the embodiments of the present invention, a preset user is identified in the real-world image captured by the camera, augmented reality content associated with the user is obtained according to the user's current performance features, and the augmented reality content is composited with the real-world image. Because the augmented reality content is obtained from the user's current performance features, it can weaken or eliminate flaws in those features; compositing this content with the real-world image makes the captured image closer to perfect and improves the user experience.
Brief description of the drawings
Fig. 1 is a schematic flowchart of an augmented reality processing method provided by Embodiment one of the present invention;
Fig. 2a is a schematic flowchart of an augmented reality processing method provided by Embodiment two of the present invention;
Fig. 2b-Fig. 2c are schematic diagrams of applying the augmented reality processing method during photographing, as provided by Embodiment two;
Fig. 3a is a schematic flowchart of an augmented reality processing method provided by Embodiment three of the present invention;
Fig. 3b-Fig. 3c are schematic diagrams of applying the augmented reality processing method during photographing, as provided by Embodiment three;
Fig. 4 is a schematic flowchart of an augmented reality processing method provided by Embodiment four of the present invention;
Fig. 5 is a schematic structural diagram of an augmented reality processing device provided by Embodiment five of the present invention.
Detailed description of the invention
The technical solutions of the present invention are further explained below through specific embodiments in conjunction with the accompanying drawings. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it. It should further be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
It should be mentioned that, before the exemplary embodiments are discussed in detail, some of them are described as processes depicted as flowcharts or methods. Although a flowchart describes the steps as a sequential process, many of the steps may be performed in parallel, concurrently, or simultaneously, and the order of the steps may be rearranged. A process may be terminated when its steps are completed, and may also include additional steps not shown in the drawings. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
Embodiment one
Fig. 1 is a schematic flowchart of an augmented reality processing method provided by Embodiment one of the present invention. The method may be performed by a device with an augmented reality function; the device may be implemented in software and/or hardware and is typically integrated in a mobile terminal. Referring to Fig. 1, the method includes:
S110, identifying a preset user in a real-world image captured by a camera.
The real-world image is the real-world scene captured by the camera. Detection and recognition processing is performed on the captured image to identify the preset user, who may be the user whose augmented reality content needs to be obtained.
S120, obtaining augmented reality content associated with the user according to the user's current performance features.
Here, the user's performance features include the user's motion features and/or facial expression features.
The augmented reality content obtained according to the user's current performance features is usually a normal expression or action; after it is composited with the real-world image, flaws in the user's current action in the image can be corrected, such as an uncoordinated movement, a lack of expression, or closed eyes.
It should be noted that when the preset user is the photographed subject in the real-world image — usually the person who needs to be photographed — the augmented reality content obtained may be content that weakens or eliminates the flaws in that subject's current performance features. When the preset user is not the photographed subject, for example a passerby, or an unrelated person appearing in a landscape shot, the obtained augmented reality content may be scene content that occludes that user.
S130, compositing the augmented reality content with the real-world image.
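As a rough, non-authoritative sketch of what the compositing in step S130 could look like: the patent does not specify an algorithm, so the per-pixel alpha blend below, the function names, and the pixel representation are all assumptions for illustration.

```python
def blend_pixel(real, ar, alpha):
    """Blend one RGB pixel of AR content over the real image.

    alpha is the AR content's opacity at this pixel: 1.0 fully replaces
    the real pixel, 0.0 leaves it untouched.
    """
    return tuple(round(alpha * a + (1.0 - alpha) * r) for a, r in zip(ar, real))


def composite(real_image, ar_content, alpha_mask):
    """Composite AR content over a real image.

    real_image and ar_content are 2-D grids (lists of rows) of RGB tuples;
    alpha_mask is a matching grid of floats in [0, 1].
    """
    return [
        [blend_pixel(r_px, a_px, a) for r_px, a_px, a in zip(r_row, a_row, m_row)]
        for r_row, a_row, m_row in zip(real_image, ar_content, alpha_mask)
    ]
```

In practice, the alpha mask would mark the region where the AR content (for example, an open-eyes patch over the face) replaces the original pixels, leaving the rest of the frame untouched.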
With the augmented reality processing method provided by this embodiment of the present invention, a preset user is identified in the real-world image captured by the camera, augmented reality content associated with the user is obtained according to the user's current performance features, and the augmented reality content is composited with the real-world image. Because the augmented reality content is obtained from the user's current performance features, it can weaken or eliminate flaws in those features; compositing it with the real-world image makes the captured image closer to perfect, improves the user experience, and raises the success rate of capturing a satisfactory image.
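Steps S110-S130 amount to a simple pipeline. The sketch below fixes only the control flow; the actual recognition, content lookup, and compositing are injected as callables, since the patent leaves their implementations open, and the fallback behavior (returning the frame unchanged) is an assumption.

```python
def process_frame(real_image, identify_user, get_ar_content, composite):
    """Three-step AR pipeline of Embodiment one.

    S110: identify the preset user in the captured frame;
    S120: obtain AR content for the user's current performance features;
    S130: composite the content with the real-world image.
    Returns the frame unchanged when no preset user is found or no
    content applies.
    """
    user = identify_user(real_image)        # S110
    if user is None:
        return real_image
    content = get_ar_content(user)          # S120
    if content is None:
        return real_image
    return composite(real_image, content)   # S130
```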
Embodiment two
Fig. 2 a is the schematic flow sheet of the processing method of a kind of augmented reality that the embodiment of the present invention two provides, the present embodiment
Being optimized based on above-described embodiment one, seeing Fig. 2 a, the method includes:
S210, identifying a preset user in a real-world image captured by a camera.
S220, matching the user's current performance features against preset features.
The matching may be performed on the mobile terminal, or the parameters characterizing the user's current performance features may be sent to a server and the matching performed there.
S230, obtaining augmented reality content associated with the user when the user's current performance features do not match the preset features.
When the user's current performance features do not match the preset features, the user's action and/or expression can be taken to exhibit a flaw. Specifically, the user's body feature parameters may be obtained and compared with a preset body feature parameter range to judge whether they fall outside it; when they do, it can be determined that the user's current performance features do not match the preset features, indicating a flaw in the user's action. For example, when the user is being photographed, the user is identified in the image captured by the camera and the user's figure features are obtained; from these it can be determined whether the user is failing to stand up straight.
The user's facial expression features can be represented by facial expression feature parameters. The human faces in the real-world image are recognized to identify the preset user, the user's facial expression feature parameters are obtained, and these are matched against preset facial expression feature parameters to determine whether the user's expression exhibits a flaw — for example, whether the user's eyes are closed or the expression is overly tense. When the facial expression feature parameters do not match the preset ones, it indicates that the user's eyes are closed or the expression is tense.
The flaw may also be determined jointly from the user's current action and expression, specifically by checking whether the action and the expression match each other. For example, when the user's current body features show a "scissors hand" (V-sign) pose while the user's expression is rather serious, it can be determined that the action and expression do not match and that the expression exhibits a flaw.
When the user's current performance features do not match the preset features — indicating that the user's action and/or expression exhibits a flaw — augmented reality content associated with the user can be obtained, specifically content that weakens or eliminates the flaw appearing in the user's action and/or expression.
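The matching just described — comparing the user's current feature parameters against preset ranges and fetching AR content only on a mismatch — can be sketched as follows. All parameter names and threshold values here are invented for illustration; the patent does not enumerate them.

```python
# Hypothetical preset ranges; real values would be calibrated per device/user.
PRESET_RANGES = {
    "eye_openness": (0.3, 1.0),     # below 0.3 -> eyes likely closed
    "mouth_curve": (-0.2, 1.0),     # strongly negative -> tense expression
    "posture_tilt": (-10.0, 10.0),  # degrees off vertical -> not standing straight
}


def find_flaws(current_features):
    """Return the names of features whose value falls outside its preset range."""
    flaws = []
    for name, value in current_features.items():
        low, high = PRESET_RANGES[name]
        if not (low <= value <= high):
            flaws.append(name)
    return flaws


def needs_ar_content(current_features):
    """AR content is fetched only when the current features fail to match the presets."""
    return bool(find_flaws(current_features))
```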
S240, compositing the augmented reality content with the real-world image.
In another implementation of this embodiment, the flaw in the captured image may come not from the action or expression of the user who owns the terminal, but from the appearance of someone else that spoils the shot. People other than the terminal's own user may be set as preset users; when such a preset user is identified, a flaw is considered to have occurred, and augmented reality content associated with that preset user is obtained — for example, a mosaic effect used to occlude that user.
Exemplary, seeing Fig. 2 b, user is when taking pictures, and camera captures a character image, and this character image is
For real world images, from the real world images of camera capture, identify pre-set user, by the current performance feature of user with preset
Characteristic matching, determines that in real world images, user is closed-eye state, i.e. the expression of user occurs in that flaw.Now, acquisition and user
The augmented reality content of association, the augmented reality content of acquisition is the related content that user opens eyes, in the augmented reality that will obtain
Hold after synthesizing with real world images, obtain effect image as shown in Figure 2 c.It can be seen that enhanced image disappears from Fig. 2 c
Except the flaw in original image.
In the technical solution provided by this embodiment, after the preset user is identified, the user's current performance features are matched against preset features, and when they do not match, augmented reality content associated with the user is obtained. The flaw exhibited by the user can thus be determined accurately, and when a flaw occurs in the user's action and/or expression, the associated augmented reality content is obtained to eliminate it, letting the user capture a satisfactory image and raising the success rate of taking a satisfactory photograph.
Embodiment three
Fig. 3 a is the schematic flow sheet of the processing method of a kind of augmented reality that the embodiment of the present invention three provides, the present embodiment
Being optimized based on above-described embodiment one, seeing Fig. 3 a, the method includes:
S310, identifying a preset user in a real-world image captured by a camera.
S320, determining the user's next-step performance features according to the user's current performance features.
Here, the user's current performance features may be the user's current motion features and/or facial expression features.
The user's next-step performance features are features associated with the current ones: for example, when the user's current performance feature is holding a key, the next-step feature may be opening a door; when it is picking up a cup, the next-step feature may be drinking water. The next-step performance features are stored in a database in correspondence with the current ones; after the preset user is identified and the user's current performance features are determined, the next-step features are determined from this stored correspondence.
S330, obtaining augmented reality content associated with the next-step performance features.
The obtained content may be augmented reality content that forms a coherent continuation of the user's current action, or content that eliminates a flaw in the user's current performance features. For example, when the determined next-step feature belongs to a forbidden class — say, the user's next step is determined to be smoking — augmented reality content can be obtained to warn the user against it.
S340, compositing the augmented reality content with the real-world image.
As an example, referring to Fig. 3b, when the camera captures a real-world image of the user holding a cup, the cup-holding action is identified. The user's next-step action is determined as a feature associated with that action, such as drinking water or pouring water into the cup, and augmented reality content associated with the next step is obtained — for example, a beverage. After the augmented reality content is composited with the real-world image, the image shown in Fig. 3c is formed.
When the camera captures a real-world image of the user drawing out a cigarette, the action is identified and the user's next-step action is determined to be smoking. Augmented reality content associated with smoking — for example, a "Smoking is harmful to your health" notice — is obtained and composited with the real-world image to warn the user to quit smoking.
In the technical solution provided by this embodiment, after the user is identified, the user's next-step performance features are determined from the current ones, augmented reality content associated with those next-step features is obtained, and the content is composited with the real-world image. The user's next-step performance features can thus be used to enhance the real-world image, making it richer, or to eliminate flaws in the current performance features, improving the user's photographing experience.
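The next-step association in this embodiment is essentially a lookup from the current action to a predicted next action and its associated AR content, with a forbidden class (such as smoking) triggering a warning instead. A minimal sketch follows, with a hypothetical association table standing in for the database the patent mentions; the action names and content keys are invented.

```python
# Hypothetical association table: current action -> (predicted next action, AR content key).
NEXT_ACTION_TABLE = {
    "holding_key": ("opening_door", None),
    "picking_up_cup": ("drinking_water", "beverage_overlay"),
    "drawing_cigarette": ("smoking", "health_warning_banner"),
}

# Next-step actions in the forbidden class, for which the AR content warns the user.
FORBIDDEN_ACTIONS = {"smoking"}


def ar_content_for(current_action):
    """Look up the predicted next action and its associated AR content.

    Returns (next_action, content_key, is_warning); is_warning is True for
    forbidden next-step actions, where the content serves as a warning.
    """
    next_action, content_key = NEXT_ACTION_TABLE.get(current_action, (None, None))
    return next_action, content_key, next_action in FORBIDDEN_ACTIONS
```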
Embodiment four
Fig. 4 is a schematic flowchart of an augmented reality processing method provided by Embodiment four of the present invention. This embodiment is an optimization of Embodiment one above. Referring to Fig. 4, the method includes:
S410, obtaining feature information of multiple users in the real-world image.
Multiple users may be present in the real-world image, and their feature information can be obtained. A user's feature information may be facial feature information, on which face recognition can be performed to identify the user.
S420, matching the feature information of the multiple users against the feature information of the preset user.
S430, determining the preset user according to the matching result.
The obtained feature information of the multiple users is matched against that of the preset user, and the preset user is identified when the match succeeds.
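Matching multiple users' feature information against the preset user's (S420-S430) can be sketched as a nearest-neighbor search over facial feature vectors. The Euclidean metric and the threshold value below are assumptions; the patent does not specify how the match is computed.

```python
import math


def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def identify_preset_user(face_features, preset_feature, threshold=0.6):
    """Match each detected user's facial feature vector against the preset user's.

    face_features maps a detected user id to its feature vector; the closest
    vector within `threshold` is taken as the preset user, else None.
    """
    best_id, best_dist = None, threshold
    for user_id, vec in face_features.items():
        d = euclidean(vec, preset_feature)
        if d < best_dist:
            best_id, best_dist = user_id, d
    return best_id
```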
S440, obtaining augmented reality content associated with the user according to the user's current performance features.
S450, compositing the augmented reality content with the real-world image.
S460, upon detecting that a photographing event is triggered, saving and/or displaying the composited image.
In this embodiment, the preset user is preferably the user who owns the terminal; that user can be identified in the real-world image by means of an avatar image stored locally on the mobile terminal.
In the technical solution provided by this embodiment, the composited image is saved and/or displayed when a photographing event is detected. Because the composited image has been enhanced with augmented reality and the flaws in the real-world image eliminated, the user's success rate in capturing photographs can be improved.
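The save-on-trigger behavior of S460 can be sketched as follows: the composited preview frame is kept up to date continuously, and persisted only when the photographing event fires. The class and method names are illustrative only, not part of the patent disclosure.

```python
class PhotoSession:
    """Minimal sketch: hold the latest composited frame and persist it only
    when the photographing (shutter) event is triggered."""

    def __init__(self):
        self.latest_composite = None
        self.saved = []

    def on_new_composite(self, image):
        # Updated continuously while the camera preview runs.
        self.latest_composite = image

    def on_photo_event(self):
        # Triggered by the shutter: save (and/or display) the enhanced frame.
        if self.latest_composite is not None:
            self.saved.append(self.latest_composite)
            return self.latest_composite
        return None
```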
Embodiment five
Fig. 5 is a schematic structural diagram of an augmented reality processing device provided by Embodiment five of the present invention. The device may be implemented in software and/or hardware, is typically integrated in a mobile terminal, and can operate by performing the augmented reality processing method. Referring to Fig. 5, the device includes:
a user identification unit 510, configured to identify a preset user in a real-world image captured by a camera;
an augmented reality content acquisition unit 520, configured to obtain augmented reality content associated with the user according to the user's current performance features; and
an enhanced image generation unit 530, configured to composite the augmented reality content with the real-world image.
Here, the user's performance features include the user's motion features and/or facial expression features.
Further, the augmented reality content acquisition unit 520 includes:
a feature matching subunit 521, configured to match the user's current performance features against preset features; and
a first augmented reality content obtaining subunit 522, configured to obtain augmented reality content associated with the user when the user's current performance features do not match the preset features.
Further, the augmented reality content acquisition unit 520 includes:
a performance feature determining subunit 523, configured to determine the user's next-step performance features according to the user's current performance features; and
a second augmented reality content obtaining subunit 524, configured to obtain augmented reality content associated with the next-step performance features.
Further, the preset user is the user who owns the terminal.
Further, the user identification unit 510 includes:
a feature information obtaining subunit 511, configured to obtain feature information of multiple users in the real-world image;
a feature information matching subunit 512, configured to match the feature information of the multiple users against the feature information of the preset user; and
a user determining subunit 513, configured to determine the preset user according to the matching result.
Further, the device also includes an image saving and display unit 540, configured to save and/or display the composited image upon detecting that a photographing event is triggered, after the augmented reality content has been composited with the real-world image.
The above device can perform the augmented reality processing method provided by any embodiment of the present invention and possesses the corresponding functional modules and beneficial effects. For technical details not described in full here, refer to the method provided by the embodiments of the present invention.
In addition, an embodiment of the present invention further provides a mobile terminal including the device provided by Embodiment five, which is able to perform the augmented reality processing method provided by any embodiment of the present invention.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the invention is not limited to the specific embodiments described here; various obvious changes, readjustments, and substitutions can be made without departing from the scope of protection of the invention. Therefore, although the invention has been described in some detail through the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from the inventive concept; its scope is determined by the scope of the appended claims.
Claims (15)
1. An augmented reality processing method, characterized by including:
identifying a preset user in a real-world image captured by a camera;
obtaining augmented reality content associated with the user according to the user's current performance features; and
compositing the augmented reality content with the real-world image.
2. The method according to claim 1, characterized in that the user's performance features include:
the user's motion features and/or facial expression features.
3. The method according to claim 1, characterized in that obtaining augmented reality content associated with the user according to the user's current performance features includes:
matching the user's current performance features against preset features; and
obtaining the augmented reality content associated with the user when the user's current performance features do not match the preset features.
4. The method according to claim 1, characterized in that obtaining augmented reality content associated with the user according to the user's current performance features includes:
determining the user's next-step performance features according to the user's current performance features; and
obtaining the augmented reality content associated with the next-step performance features.
5. The method according to claim 1, characterized in that the preset user is the user who owns the terminal.
6. The method according to claim 1, characterized in that identifying a preset user in the real-world image captured by the camera includes:
obtaining feature information of multiple users in the real-world image;
matching the feature information of the multiple users against the feature information of the preset user; and
determining the preset user according to the matching result.
7. The method according to any one of claims 1-6, characterized by further including, after compositing the augmented reality content with the real-world image:
upon detecting that a photographing event is triggered, saving and/or displaying the composited image.
8. An augmented reality processing device, characterized by including:
a user identification unit, configured to identify a preset user in a real-world image captured by a camera;
an augmented reality content acquisition unit, configured to obtain augmented reality content associated with the user according to the user's current performance features; and
an enhanced image generation unit, configured to composite the augmented reality content with the real-world image.
9. The apparatus according to claim 8, wherein the performance characteristic of the user comprises:
a motion characteristic and/or a facial expression characteristic of the user.
10. The apparatus according to claim 8, wherein the augmented reality content obtaining unit comprises:
a characteristic matching subunit, configured to match the current performance characteristic of the user against a preset characteristic; and
a first augmented reality content obtaining subunit, configured to obtain augmented reality content associated with the user when the current performance characteristic of the user does not match the preset characteristic.
11. The apparatus according to claim 8, wherein the augmented reality content obtaining unit comprises:
a performance characteristic determining subunit, configured to determine a next performance characteristic of the user according to the current performance characteristic of the user; and
a second augmented reality content obtaining subunit, configured to obtain augmented reality content associated with the next performance characteristic.
12. The apparatus according to claim 8, wherein the preset user is the user to whom the terminal belongs.
13. The apparatus according to claim 8, wherein the user identification unit comprises:
a characteristic information obtaining subunit, configured to obtain characteristic information of a plurality of users in the real-world image;
a characteristic information matching subunit, configured to match the characteristic information of the plurality of users against characteristic information of the preset user; and
a user determining subunit, configured to determine the preset user according to the matching result.
14. The apparatus according to any one of claims 8-13, further comprising: an image saving and display unit, configured to, after the augmented reality content is synthesized with the real-world image, save and/or display the synthesized image when a photographing event is detected to be triggered.
15. A mobile terminal, comprising the augmented reality processing apparatus according to any one of claims 8-14.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610507252.2A CN106157262B (en) | 2016-06-28 | 2016-06-28 | Augmented reality processing method and device and mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106157262A true CN106157262A (en) | 2016-11-23 |
CN106157262B CN106157262B (en) | 2020-04-17 |
Family
ID=57351100
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610507252.2A Active CN106157262B (en) | 2016-06-28 | 2016-06-28 | Augmented reality processing method and device and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106157262B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1794265A (en) * | 2005-12-31 | 2006-06-28 | 北京中星微电子有限公司 | Method and device for distinguishing face expression based on video frequency |
CN101247482A (en) * | 2007-05-16 | 2008-08-20 | 北京思比科微电子技术有限公司 | Method and device for implementing dynamic image processing |
CN101334845A (en) * | 2007-06-27 | 2008-12-31 | 中国科学院自动化研究所 | Video frequency behaviors recognition method based on track sequence analysis and rule induction |
CN101370195A (en) * | 2007-08-16 | 2009-02-18 | 英华达(上海)电子有限公司 | Method and device for implementing emotion regulation in mobile terminal |
CN101916370A (en) * | 2010-08-31 | 2010-12-15 | 上海交通大学 | Method for processing non-feature regional images in face detection |
WO2013027893A1 (en) * | 2011-08-22 | 2013-02-28 | Kang Jun-Kyu | Apparatus and method for emotional content services on telecommunication devices, apparatus and method for emotion recognition therefor, and apparatus and method for generating and matching the emotional content using same |
CN103297742A (en) * | 2012-02-27 | 2013-09-11 | 联想(北京)有限公司 | Data processing method, microprocessor, communication terminal and server |
CN104780338A (en) * | 2015-04-16 | 2015-07-15 | 美国掌赢信息科技有限公司 | Method and electronic equipment for loading expression effect animation in instant video |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106685941A (en) * | 2016-12-19 | 2017-05-17 | 宇龙计算机通信科技(深圳)有限公司 | Method, apparatus and server for optimizing AR registering |
CN108513060A (en) * | 2017-02-28 | 2018-09-07 | 三星电子株式会社 | Use the electronic equipment of the image pickup method and support this method of external electronic device |
US10917552B2 (en) | 2017-02-28 | 2021-02-09 | Samsung Electronics Co., Ltd. | Photographing method using external electronic device and electronic device supporting the same |
CN107590828A (en) * | 2017-08-09 | 2018-01-16 | 广东欧珀移动通信有限公司 | The virtualization treating method and apparatus of shooting image |
CN107590828B (en) * | 2017-08-09 | 2020-01-10 | Oppo广东移动通信有限公司 | Blurring processing method and device for shot image |
CN110176077A (en) * | 2019-05-23 | 2019-08-27 | 北京悉见科技有限公司 | The method, apparatus and computer storage medium that augmented reality is taken pictures |
CN110176077B (en) * | 2019-05-23 | 2023-05-26 | 北京悉见科技有限公司 | Augmented reality photographing method and device and computer storage medium |
CN112330477A (en) * | 2019-08-01 | 2021-02-05 | 脸谱公司 | Generating customized personalized responses for social media content |
Also Published As
Publication number | Publication date |
---|---|
CN106157262B (en) | 2020-04-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10516916B2 (en) | Method of processing video data, device, computer program product, and data construct | |
CN106157262A (en) | The processing method of a kind of augmented reality, device and mobile terminal | |
CN106170083B (en) | Image processing for head mounted display device | |
CN105072327B (en) | A kind of method and apparatus of the portrait processing of anti-eye closing | |
CN106200918B (en) | A kind of information display method based on AR, device and mobile terminal | |
US20210279971A1 (en) | Method, storage medium and apparatus for converting 2d picture set to 3d model | |
JP4449723B2 (en) | Image processing apparatus, image processing method, and program | |
CN104584531B (en) | Image processing apparatus and image display device | |
KR102106135B1 (en) | Apparatus and method for providing application service by using action recognition | |
CN110418095B (en) | Virtual scene processing method and device, electronic equipment and storage medium | |
CN107766785B (en) | Face recognition method | |
KR20190032502A (en) | A technique for controlling a virtual image generation system using emotion states of a user | |
CN111580652B (en) | Video playing control method and device, augmented reality equipment and storage medium | |
JP5492077B2 (en) | Method and system for improving the appearance of a person on an RTP stream coming from a media terminal | |
CN110837750B (en) | Face quality evaluation method and device | |
US20090251484A1 (en) | Avatar for a portable device | |
WO2017211139A1 (en) | Method and apparatus for implementing video communication | |
CN106127828A (en) | The processing method of a kind of augmented reality, device and mobile terminal | |
JP2020039029A (en) | Video distribution system, video distribution method, and video distribution program | |
CN110868554B (en) | Method, device and equipment for changing faces in real time in live broadcast and storage medium | |
CN110210449B (en) | Face recognition system and method for making friends in virtual reality | |
CN108540863B (en) | Bullet screen setting method, storage medium, equipment and system based on facial expressions | |
EP2798853A1 (en) | Interactive media systems | |
CN110413108A (en) | Processing method, device, system, electronic equipment and the storage medium of virtual screen | |
CN113840158B (en) | Virtual image generation method, device, server and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860 Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd. Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860 Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd. |
|
GR01 | Patent grant | ||