CN107067474A - Augmented reality processing method and device - Google Patents

Augmented reality processing method and device

Info

Publication number
CN107067474A
CN107067474A (application CN201710131677.2A)
Authority
CN
China
Prior art keywords
image
acquisition
virtual
reality imagery
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710131677.2A
Other languages
Chinese (zh)
Inventor
Wu Xiaoyong (武晓勇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jimei Culture Technology Co Ltd
Original Assignee
Shenzhen Jimei Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jimei Culture Technology Co Ltd filed Critical Shenzhen Jimei Culture Technology Co Ltd
Priority to CN201710131677.2A priority Critical patent/CN107067474A/en
Publication of CN107067474A publication Critical patent/CN107067474A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention belongs to the field of image processing and provides an augmented reality processing method and device. The method captures real-world imagery in real time through a camera; performs image recognition on the captured imagery using a preset recognition model to identify preset target objects; if two or more target objects are identified, obtains a first virtual animation corresponding to the set of identified target objects; adds the obtained first virtual animation to the captured imagery; and displays the augmented imagery in real time on a display device. With embodiments of the invention, when multiple preset target objects are identified in the captured real-world imagery, the corresponding virtual animation is obtained, added to the imagery, and the augmented result is displayed in real time. That is, different combinations of target objects cause different virtual images to be added to the real-world imagery, giving varied presentation and high flexibility.

Description

Augmented reality processing method and device
Technical field
The invention belongs to the field of image processing, and more particularly to an augmented reality processing method and device.
Background technology
With the rapid development of preschool education, augmented reality is gradually being applied to various preschool education systems and devices. Augmented reality (AR) is a technology that "seamlessly" integrates real-world information with virtual-world information. Entity information that is difficult to experience within a certain time and space of the real world (visual information, sound, taste, touch, and so on) is simulated by computers and related technology and then superimposed, so that virtual information is applied to the real world and perceived by human senses, achieving a sensory experience beyond reality. The real environment and virtual objects are superimposed onto the same picture or space in real time, recombining virtual information with the real world into an environment with lifelike visual, auditory, and tactile qualities and enabling natural interaction between the user and the environment.
Existing augmented reality processing in children's education devices typically scans a single picture carrying a specific built-in invisible two-dimensional code (the picture is usually carried on a book, a card, or the like) and displays a single pre-stored augmented reality image on a terminal such as a mobile phone or tablet computer. When multiple pictures are scanned, still only one augmented reality image can be displayed; the presentation is monotonous and the flexibility is poor.
Summary of the invention
In view of this, embodiments of the invention provide an augmented reality processing method and device to solve the problems of monotonous presentation and poor flexibility in existing augmented reality processing of children's education devices.
An augmented reality processing method provided in an embodiment of the invention may include:
capturing real-world imagery in real time through a camera;
performing image recognition on the captured real-world imagery using a preset recognition model to identify preset target objects;
if two or more target objects are identified, obtaining a preset first virtual animation corresponding to the set of identified target objects, the obtained first virtual animation containing a preset virtual image corresponding to each identified target object;
adding the obtained first virtual animation to the captured real-world imagery, and displaying the augmented real-world imagery in real time on a display device.
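For orientation only, the following minimal Python sketch restates these four steps in executable form; `camera`, `recognizer`, `animation_library`, and `display` are hypothetical collaborators standing in for the camera driver, the preset recognition model, the preset animation library, and the terminal screen, and none of these names come from the disclosure.

```python
# Hypothetical restatement of the claimed method; all names are placeholders.
def augmented_reality_loop(camera, recognizer, animation_library, display):
    while True:
        frame = camera.capture()                 # capture real-world imagery in real time
        targets = recognizer.identify(frame)     # identify preset target objects
        if len(targets) >= 2:
            # look up the preset first virtual animation for this *set* of targets
            animation = animation_library.for_set(frozenset(targets))
            if animation is not None:
                frame = animation.add_to(frame)  # add the virtual animation to the imagery
        display.show(frame)                      # display the augmented imagery in real time
```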
Further, before the image recognition is performed on the captured real-world imagery using the preset recognition model, the method may also include:
obtaining training samples of the target objects;
using the obtained training samples as the input of an artificial neural network algorithm to correct the recognition model, obtaining a corrected recognition model.
Further, before the augmented real-world imagery is displayed in real time on the display device, the method may also include:
obtaining current real-world information, the real-world information including weather information and/or time information and/or geographic location information;
adding a second virtual animation to the captured real-world imagery, the second virtual animation being a virtual animation corresponding to the obtained real-world information.
Further, before the augmented real-world imagery is displayed in real time on the display device, the method may also include:
obtaining facial features of the current user;
replacing a designated region of the virtual image of a designated target object in the obtained first virtual animation with the obtained facial features of the current user.
Further, before the augmented real-world imagery is displayed in real time on the display device, the method also includes:
obtaining a body image of the current user;
adding the obtained body image of the current user to the captured real-world imagery.
An augmented reality processing device provided in an embodiment of the invention may include:
a capture module, configured to capture real-world imagery in real time through a camera;
a recognition module, configured to perform image recognition on the captured real-world imagery using a preset recognition model to identify preset target objects;
an image obtaining module, configured to, if two or more target objects are identified, obtain a preset first virtual animation corresponding to the set of identified target objects, the obtained first virtual animation containing a preset virtual image corresponding to each identified target object;
a first adding module, configured to add the obtained first virtual animation to the captured real-world imagery;
a display module, configured to display the augmented real-world imagery in real time on a display device.
Further, the augmented reality processing device may also include:
a sample obtaining module, configured to obtain training samples of the target objects;
a correction module, configured to use the obtained training samples as the input of an artificial neural network algorithm to correct the recognition model, obtaining a corrected recognition model.
Further, the augmented reality processing device may also include:
a real-world information obtaining module, configured to obtain current real-world information, the real-world information including weather information and/or time information and/or geographic location information;
a second adding module, configured to add a second virtual animation to the captured real-world imagery, the second virtual animation being a virtual animation corresponding to the obtained real-world information.
Further, the augmented reality processing device may also include:
a facial feature obtaining module, configured to obtain facial features of the current user;
a replacement module, configured to replace a designated region of the virtual image of a designated target object in the obtained first virtual animation with the obtained facial features of the current user.
Further, the augmented reality processing device may also include:
a body image obtaining module, configured to obtain a body image of the current user;
a third adding module, configured to add the obtained body image of the current user to the captured real-world imagery.
A terminal provided in an embodiment of the invention may include any of the augmented reality processing devices described above.
Compared with the prior art, embodiments of the invention have the following beneficial effect. Real-world imagery is captured in real time through a camera; image recognition is performed on the captured imagery using a preset recognition model to identify preset target objects; if two or more target objects are identified, a first virtual animation corresponding to the set of identified target objects is obtained, the obtained first virtual animation containing a preset virtual image corresponding to each identified target object; the obtained first virtual animation is added to the captured real-world imagery, and the augmented imagery is displayed in real time on a display device. With embodiments of the invention, when multiple preset target objects are identified in the captured real-world imagery, the corresponding virtual animation is obtained, added to the imagery, and displayed in real time on the display device. That is, different combinations of target objects cause different virtual images to be added to the real-world imagery, giving varied presentation and high flexibility.
Brief description of the drawings
To explain the technical solutions in the embodiments of the invention more clearly, the accompanying drawings used in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow diagram of an augmented reality processing method provided by Embodiment one of the invention;
Fig. 2 is a schematic flow diagram of preferred steps provided by Embodiment one of the invention;
Fig. 3 is a schematic diagram of an illustrative example of step S103 provided by Embodiment one of the invention;
Fig. 4 is a schematic block diagram of an augmented reality processing device provided by Embodiment two of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without creative effort fall within the scope of protection of the invention.
It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or sets thereof.
It should also be understood that the terminology used in the description of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the description of the invention and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in the description of the invention and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when" or "once" or "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined" or "in response to determining" or "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
To illustrate the technical solutions of the invention, specific embodiments are described below.
Embodiment one:
Referring to Fig. 1, which is a schematic flow diagram of an augmented reality processing method provided by Embodiment one of the invention, the method is described in detail as follows.
Step S101: capture real-world imagery in real time through a camera.
Most products currently on the market perform AR display by scanning an invisible two-dimensional code, which necessarily depends on a specific carrier (such as a card, poster, or book) and gives a poor user experience. In this embodiment, the camera of a terminal such as a mobile phone or tablet computer directly captures real-world imagery in real time for AR display, which is more convenient to use and gives a more realistic experience.
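A minimal capture-and-preview loop, assuming the terminal's camera is exposed as device 0 and using OpenCV; this sketch covers only step S101, before any augmentation is applied.

```python
# Minimal real-time capture loop for step S101 (illustrative only).
import cv2

cap = cv2.VideoCapture(0)          # open the terminal's camera
try:
    while True:
        ok, frame = cap.read()     # grab one real-world frame
        if not ok:
            break
        cv2.imshow("AR preview", frame)          # real-time display before augmentation
        if cv2.waitKey(1) & 0xFF == ord("q"):    # press 'q' to stop the preview
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```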
Step S102: perform image recognition on the captured real-world imagery using a preset recognition model to identify preset target objects.
The target objects may be characters such as a princess, a prince, or a witch; animals such as a cat, dog, wolf, or rabbit; plants such as flowers, grass, or trees; or objects such as tables, chairs, and stools. That is, any real or imaginary object can serve as a target object. Which objects are chosen as target objects can be decided according to actual conditions, and this embodiment does not specifically limit this.
If two or more target objects are identified, steps S103 and S104 are performed; if only one target object is identified, steps S105 and S106 are performed; if no target object is identified, the real-world imagery capture of step S101 may simply continue, or a prompt indicating that no target object was identified may first be shown on the display device before continuing with the capture of step S101.
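The branching on the number of identified objects can be sketched as follows; `detect_targets` is a stand-in for the preset recognition model, and the two dictionaries stand in for the preset animation libraries used in steps S103/S104 and S105/S106, none of which are named in the disclosure.

```python
# Hypothetical sketch of the step S102 branching logic (illustrative only).
def choose_animation(frame, detect_targets, set_animations, single_animations):
    labels = set(detect_targets(frame))                   # e.g. {"princess", "prince"}
    if len(labels) >= 2:
        return set_animations.get(frozenset(labels))      # steps S103 and S104
    if len(labels) == 1:
        return single_animations.get(next(iter(labels)))  # steps S105 and S106
    return None  # nothing identified: keep capturing (optionally prompt the user)
```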
Preferably, before step S102, a target object library may be established, and target objects may be added to the library according to actual conditions.
Preferably, the preferred steps shown in Fig. 2 may also be performed before step S102:
Step S201: obtain training samples of the target objects;
Step S202: use the obtained training samples as the input of an artificial neural network algorithm to correct the recognition model, obtaining a corrected recognition model.
A typical neural network image recognition system consists of preprocessing, feature extraction, and a neural network classifier. Preprocessing removes useless information from the raw data and performs smoothing, binarization, amplitude normalization, and so on. The feature extraction part is not necessarily present, which divides such systems into two broad classes. The first class has a feature extraction part: such a system is in fact a combination of conventional methods and neural network techniques; it can make full use of human experience to obtain pattern features and of the neural network's classification ability to recognize the target image. The extracted features must reflect the characteristics of the whole image, but the anti-interference ability of this class is inferior to that of the second. The second class has no feature extraction part: feature extraction is omitted and the whole image is fed directly into the neural network. This considerably increases the complexity of the network structure, and the increased dimensionality of the input pattern leads to a very large network; in addition, the network must eliminate the influence of pattern deformation entirely by itself. However, this class has good anti-interference performance and a high recognition rate.
First, samples of all classes must be selected for training, with approximately equal numbers of samples per class. This prevents the trained network from being overly sensitive to classes with many samples and insensitive to classes with few samples.
To make the network invariant to translation, rotation, and scaling of the patterns, samples covering as many situations as possible should be selected, such as representative samples with different postures, orientations, angles, and backgrounds; this ensures that the network achieves a higher recognition rate.
Taking a cat as an example of a target object, as many pictures of cats as possible must first be obtained as training samples. These pictures should include various breeds, such as Siamese cats, Persian cats, and Egyptian Maus; various postures, such as standing, lying prone, and sleeping; various shooting angles; and cats in various environments. As long as there are enough training samples, the accuracy of the resulting recognition model will be higher.
In the learning stage, the network should be trained with a large number of samples. Through extensive sample learning, the connection weights of each layer of the neural network are adjusted so that the network gives correct recognition results for the samples. This is analogous to a person learning to recognize digits: the neurons in the network are like brain cells, the weight changes are like changes in the interactions between brain cells, the weight adjustment during sample learning corresponds to a person memorizing the shape of each digit, the network weights are the content the network remembers, and the learning stage is like the repeated process by which a person goes from not recognizing digits to recognizing them. The neural network memorizes patterns through the whole feature vector; as long as most features match a learned sample, it is recognized as the same class, so the neural network classifier can still recognize correctly when the samples contain considerable noise.
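As an illustration only, the following sketch trains a small convolutional network of the "no separate feature extraction" kind described above, feeding whole sample images to the network. The directory layout (`samples/<target-object>/*.jpg`), the image size, and the architecture are assumptions, not part of the disclosure, which only requires that training samples be used as the input of an artificial neural network algorithm.

```python
# Hypothetical training sketch for steps S201/S202 using Keras (illustrative only).
import tensorflow as tf

IMG_SIZE = (96, 96)

# One sub-directory per target-object class (cat, princess, prince, ...),
# each holding as many varied sample pictures as possible.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "samples", image_size=IMG_SIZE, batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(len(train_ds.class_names), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)          # weight adjustment: the network "remembers" the samples
model.save("recognition_model.keras")   # corrected recognition model used in step S102
```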
Step S103: obtain a preset first virtual animation corresponding to the set of identified target objects, the obtained first virtual animation containing a preset virtual image corresponding to each identified target object.
The first virtual animation changes with the set of target objects, as shown in Fig. 3. If the real-world imagery captured in real time in step S101 is a wall on which a princess and a prince are painted, and step S102 identifies the two target objects princess and prince, then the set of identified target objects is the set consisting of the two elements princess and prince, the first virtual animation contains the virtual image of the princess and the virtual image of the prince, and it may show a scene of the princess and the prince living happily. If the captured imagery is a wall painted with a princess and a witch, and the two target objects princess and witch are identified, then the set of identified target objects consists of the elements princess and witch, the first virtual animation contains the virtual images of the princess and the witch, and it may show a scene of the witch poisoning the princess. If the captured imagery is a wall painted with a prince and a witch, and the two target objects prince and witch are identified, then the set of identified target objects consists of the elements prince and witch, the first virtual animation contains the virtual images of the prince and the witch, and it may show a scene of the prince fighting the witch.
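A minimal way to realize the set-to-animation lookup of step S103 is a table keyed by the combination of identified target objects, as in the hypothetical sketch below; the file names mirror the Fig. 3 example and are placeholders only.

```python
# Hypothetical mapping from a *set* of identified target objects to the preset
# first virtual animation (illustrative only).
FIRST_ANIMATIONS = {
    frozenset({"princess", "prince"}): "princess_and_prince_happy.mp4",
    frozenset({"princess", "witch"}):  "witch_poisons_princess.mp4",
    frozenset({"prince", "witch"}):    "prince_fights_witch.mp4",
}

def first_animation_for(identified_targets):
    """Return the preset animation for this combination, or None if none is preset."""
    return FIRST_ANIMATIONS.get(frozenset(identified_targets))

# e.g. first_animation_for({"princess", "witch"}) -> "witch_poisons_princess.mp4"
```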
Step S104: add the obtained first virtual animation to the captured real-world imagery, and display the augmented real-world imagery in real time on a display device.
That is, the virtual animation is added to the imagery captured in real time, organically combining the virtual and the real, and the result is presented to the user through the display device.
Typically, the display device may be a mobile phone or tablet computer, which presents the AR imagery superimposing the virtual and the real to the user through its screen.
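Step S104 amounts to compositing each animation frame over the captured frame before display. The sketch below shows one common way to do this with per-pixel alpha blending in NumPy/OpenCV, assuming the animation frame carries an alpha channel and has already been sized to match the captured frame; the disclosure does not prescribe a particular compositing technique.

```python
# A minimal compositing sketch for step S104 (illustrative only).
import numpy as np
import cv2

def overlay_animation(real_frame_bgr, anim_frame_bgra):
    """Alpha-blend a BGRA animation frame onto a BGR camera frame of the same size."""
    alpha = anim_frame_bgra[:, :, 3:4].astype(np.float32) / 255.0   # per-pixel opacity
    anim_bgr = anim_frame_bgra[:, :, :3].astype(np.float32)
    blended = anim_bgr * alpha + real_frame_bgr.astype(np.float32) * (1.0 - alpha)
    return blended.astype(np.uint8)

# Inside the capture loop of step S101:
#   augmented = overlay_animation(frame, current_animation_frame)
#   cv2.imshow("AR", augmented)
```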
Step S105: obtain a preset third virtual animation corresponding to the identified target object, the obtained third virtual animation containing a preset virtual image corresponding to the identified target object;
Step S106: add the obtained third virtual animation to the captured real-world imagery, and display the augmented real-world imagery in real time on the display device.
It is easy to understand that steps S105 and S106 cover the case in which only one target object is identified; in this case no combination with other target objects is involved, and they can be regarded as a special case of steps S103 and S104.
Preferably, before step S104, the method may also include:
obtaining current real-world information, the real-world information including weather information and/or time information and/or geographic location information;
adding a second virtual animation to the captured real-world imagery, the second virtual animation being a virtual animation corresponding to the obtained real-world information.
For example, current weather information is obtained automatically over the network. If it is currently rainy, the second virtual animation is one of continuous rain; if it is currently sunny, the second virtual animation is one of bright sunshine.
As another example, current time information is obtained automatically over the network. If the current time is 7 a.m., the second virtual animation shows the sun rising in the east; if the current time is 9 p.m., the second virtual animation shows a starry sky. Preferably, whether the current date is a specific holiday may also be determined from the time information, and holiday-related elements are then added to the second virtual animation: if it is Christmas, elements such as a Christmas tree and gift boxes are added; if it is Halloween, elements such as jack-o'-lanterns and funny-face masks are added; if it is the Spring Festival, elements such as lanterns, Spring Festival couplets, and firecrackers are added.
As yet another example, current geographic location information is obtained automatically over the network. If the current location is Beijing, elements such as the Forbidden City and the Bird's Nest are added to the second virtual animation; if the current location is Chongqing, elements such as hot pot are added; if the current location is Harbin, elements such as ice sculptures are added.
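The selection of the second virtual animation from real-world information can be sketched as a simple rule table. In the hypothetical example below the weather string, the asset names, and the city-to-landmark mapping are illustrative, and holidays tied to the lunar calendar (such as the Spring Festival) are omitted for brevity.

```python
# Hypothetical rule-based selection of second-animation elements (illustrative only).
import datetime

def second_animation_elements(weather, now, city):
    elements = []
    elements.append("rain_overlay" if weather == "rainy" else "sunshine_overlay")
    elements.append("sunrise" if 5 <= now.hour < 10 else
                    "starry_sky" if now.hour >= 21 or now.hour < 5 else
                    "daylight")
    if (now.month, now.day) == (12, 25):                   # Christmas
        elements += ["christmas_tree", "gift_boxes"]
    elif (now.month, now.day) == (10, 31):                 # Halloween
        elements += ["jack_o_lantern", "funny_face_mask"]
    landmarks = {"Beijing": "forbidden_city", "Chongqing": "hot_pot", "Harbin": "ice_sculpture"}
    if city in landmarks:
        elements.append(landmarks[city])
    return elements

# e.g. second_animation_elements("sunny", datetime.datetime(2017, 12, 25, 7, 0), "Beijing")
#      -> ["sunshine_overlay", "sunrise", "christmas_tree", "gift_boxes", "forbidden_city"]
```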
Preferably, before step S104, the method may also include:
obtaining facial features of the current user. The facial features of the current user can be obtained in real time through the front camera of a terminal such as a mobile phone or tablet computer, or obtained in advance from a previously taken photograph;
replacing a designated region of the virtual image of a designated target object in the obtained first virtual animation with the obtained facial features of the current user.
Preferably, the replacement is performed in real time, that is, the designated region of the virtual image always shows the facial features of the current user and changes as the current user's facial features change.
For example, if the first virtual animation contains the virtual image of a princess, the facial features of the princess's virtual image are replaced with the obtained facial features of the current user. If the current user makes a laughing expression, the princess's virtual image also makes a laughing expression; if the current user makes an angry expression, the princess's virtual image also makes an angry expression.
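One possible realization of this face-replacement preference is sketched below with OpenCV's bundled Haar-cascade face detector; the designated region, the naive rectangular paste, and the choice of detector are assumptions, and a production system would track the face per frame and blend the edges.

```python
# Hypothetical face-replacement sketch (illustrative only).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def replace_character_face(character_img, user_frame, region):
    x, y, w, h = region                          # designated region in the virtual image
    gray = cv2.cvtColor(user_frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return character_img                     # no face found: keep the preset face
    fx, fy, fw, fh = faces[0]
    user_face = cv2.resize(user_frame[fy:fy + fh, fx:fx + fw], (w, h))
    out = character_img.copy()
    out[y:y + h, x:x + w] = user_face            # naive paste; real blending would be softer
    return out
```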
Preferably, before step S104, the method may also include:
obtaining a body image of the current user;
adding the obtained body image of the current user to the captured real-world imagery.
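Adding the user's body image can likewise be sketched as mask-based compositing. In the hypothetical example below, the person mask is assumed to come from a separate segmentation step (background subtraction or a segmentation network), which the disclosure does not specify.

```python
# Hypothetical body-image compositing sketch (illustrative only).
import numpy as np

def add_user_body(real_frame, user_frame, person_mask, top_left=(0, 0)):
    """person_mask: uint8 array (h, w), 255 where the user's body is, 0 elsewhere."""
    out = real_frame.copy()
    h, w = person_mask.shape
    y, x = top_left
    mask3 = (person_mask > 0)[:, :, None]        # broadcast the mask over the colour channels
    out[y:y + h, x:x + w] = np.where(mask3, user_frame, out[y:y + h, x:x + w])
    return out
```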
It is easy to understand that, through the above preferred steps, the facial features or body image of the current user can be integrated into the final AR presentation, giving the user a stronger sense of participation and improving the user experience.
In summary, in the embodiment of the invention, real-world imagery is captured in real time through a camera; image recognition is performed on the captured imagery using a preset recognition model to identify preset target objects; if two or more target objects are identified, a first virtual animation corresponding to the set of identified target objects is obtained, the obtained first virtual animation containing a preset virtual image corresponding to each identified target object; the obtained first virtual animation is added to the captured real-world imagery, and the augmented imagery is displayed in real time on a display device. With the embodiment of the invention, when multiple preset target objects are identified in the captured real-world imagery, the corresponding virtual animation is obtained, added to the imagery, and displayed in real time on the display device. That is, different combinations of target objects cause different virtual images to be added to the real-world imagery, giving varied presentation and high flexibility.
Embodiment two:
Referring to Fig. 4, which is a schematic block diagram of an augmented reality processing device provided by Embodiment two of the invention; for ease of description, only the parts related to the embodiment of the invention are shown.
The augmented reality processing device may be a software unit, a hardware unit, or a unit combining software and hardware built into a terminal (such as a mobile phone or tablet computer), or it may be integrated into the terminal as an independent component.
The augmented reality processing device may include:
a capture module 401, configured to capture real-world imagery in real time through a camera;
a recognition module 402, configured to perform image recognition on the captured real-world imagery using a preset recognition model to identify preset target objects;
an image obtaining module 403, configured to, if two or more target objects are identified, obtain a preset first virtual animation corresponding to the set of identified target objects, the obtained first virtual animation containing a preset virtual image corresponding to each identified target object;
a first adding module 404, configured to add the obtained first virtual animation to the captured real-world imagery;
a display module 405, configured to display the augmented real-world imagery in real time on a display device.
Further, the augmented reality processing device may also include:
a sample obtaining module 406, configured to obtain training samples of the target objects;
a correction module 407, configured to use the obtained training samples as the input of an artificial neural network algorithm to correct the recognition model, obtaining a corrected recognition model.
Further, the augmented reality processing device may also include:
a real-world information obtaining module 408, configured to obtain current real-world information, the real-world information including weather information and/or time information and/or geographic location information;
a second adding module 409, configured to add a second virtual animation to the captured real-world imagery, the second virtual animation being a virtual animation corresponding to the obtained real-world information.
Further, the augmented reality processing device may also include:
a facial feature obtaining module 410, configured to obtain facial features of the current user;
a replacement module 411, configured to replace a designated region of the virtual image of a designated target object in the obtained first virtual animation with the obtained facial features of the current user.
Further, the augmented reality processing device may also include:
a body image obtaining module 412, configured to obtain a body image of the current user;
a third adding module 413, configured to add the obtained body image of the current user to the captured real-world imagery.
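The module structure of Fig. 4 can be rendered as a set of small classes, one per module, as in the hypothetical sketch below; the class and method names are illustrative rather than taken from the disclosure, and the wiring between them follows the method flow of Embodiment one. The optional modules 406 to 413 would be added in the same style and invoked between recognition and display.

```python
# Hypothetical one-class-per-module rendering of the Fig. 4 device (illustrative only).
class CaptureModule:                     # module 401
    def capture(self): ...               # return one real-world frame from the camera

class RecognitionModule:                 # module 402
    def identify(self, frame): ...       # return the set of preset target objects found

class ImageObtainingModule:              # module 403
    def for_set(self, targets): ...      # preset first virtual animation for this combination

class FirstAddingModule:                 # module 404
    def add(self, frame, animation): ... # composite the animation onto the frame

class DisplayModule:                     # module 405
    def show(self, frame): ...           # present the augmented frame on the terminal screen
```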
Those of ordinary skill in the art will appreciate that the modules and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered to go beyond the scope of the invention.
In the embodiments provided herein, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiment described above is merely illustrative; the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation. For instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in each embodiment of the invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Embodiment three:
An embodiment of the invention provides a terminal, which may include any of the augmented reality processing devices described in the embodiment corresponding to Fig. 4.
The embodiments described above are only intended to illustrate the technical solutions of the invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent substitutions for some of the technical features; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the invention.

Claims (10)

1. An augmented reality processing method, characterized by comprising:
capturing real-world imagery in real time through a camera;
performing image recognition on the captured real-world imagery using a preset recognition model to identify preset target objects;
if two or more target objects are identified, obtaining a preset first virtual animation corresponding to the set of identified target objects, the obtained first virtual animation containing a preset virtual image corresponding to each identified target object;
adding the obtained first virtual animation to the captured real-world imagery, and displaying the augmented real-world imagery in real time on a display device.
2. The augmented reality processing method according to claim 1, characterized in that, before the image recognition is performed on the captured real-world imagery using the preset recognition model, the method further comprises:
obtaining training samples of the target objects;
using the obtained training samples as the input of an artificial neural network algorithm to correct the recognition model, obtaining a corrected recognition model.
3. The augmented reality processing method according to claim 1, characterized in that, before the augmented real-world imagery is displayed in real time on the display device, the method further comprises:
obtaining current real-world information, the real-world information including weather information and/or time information and/or geographic location information;
adding a second virtual animation to the captured real-world imagery, the second virtual animation being a virtual animation corresponding to the obtained real-world information.
4. The augmented reality processing method according to any one of claims 1 to 3, characterized in that, before the augmented real-world imagery is displayed in real time on the display device, the method further comprises:
obtaining facial features of the current user;
replacing a designated region of the virtual image of a designated target object in the obtained first virtual animation with the obtained facial features of the current user.
5. The augmented reality processing method according to any one of claims 1 to 3, characterized in that, before the augmented real-world imagery is displayed in real time on the display device, the method further comprises:
obtaining a body image of the current user;
adding the obtained body image of the current user to the captured real-world imagery.
6. An augmented reality processing device, characterized by comprising:
a capture module, configured to capture real-world imagery in real time through a camera;
a recognition module, configured to perform image recognition on the captured real-world imagery using a preset recognition model to identify preset target objects;
an image obtaining module, configured to, if two or more target objects are identified, obtain a preset first virtual animation corresponding to the set of identified target objects, the obtained first virtual animation containing a preset virtual image corresponding to each identified target object;
a first adding module, configured to add the obtained first virtual animation to the captured real-world imagery;
a display module, configured to display the augmented real-world imagery in real time on a display device.
7. The augmented reality processing device according to claim 6, characterized by further comprising:
a sample obtaining module, configured to obtain training samples of the target objects;
a correction module, configured to use the obtained training samples as the input of an artificial neural network algorithm to correct the recognition model, obtaining a corrected recognition model.
8. The augmented reality processing device according to claim 6, characterized by further comprising:
a real-world information obtaining module, configured to obtain current real-world information, the real-world information including weather information and/or time information and/or geographic location information;
a second adding module, configured to add a second virtual animation to the captured real-world imagery, the second virtual animation being a virtual animation corresponding to the obtained real-world information.
9. The augmented reality processing device according to any one of claims 6 to 8, characterized by further comprising:
a facial feature obtaining module, configured to obtain facial features of the current user;
a replacement module, configured to replace a designated region of the virtual image of a designated target object in the obtained first virtual animation with the obtained facial features of the current user.
10. The augmented reality processing device according to any one of claims 6 to 8, characterized by further comprising:
a body image obtaining module, configured to obtain a body image of the current user;
a third adding module, configured to add the obtained body image of the current user to the captured real-world imagery.
CN201710131677.2A 2017-03-07 2017-03-07 Augmented reality processing method and device Pending CN107067474A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710131677.2A CN107067474A (en) 2017-03-07 2017-03-07 A kind of augmented reality processing method and processing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710131677.2A CN107067474A (en) 2017-03-07 2017-03-07 A kind of augmented reality processing method and processing device

Publications (1)

Publication Number Publication Date
CN107067474A true CN107067474A (en) 2017-08-18

Family

ID=59622473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710131677.2A Pending CN107067474A (en) 2017-03-07 2017-03-07 A kind of augmented reality processing method and processing device

Country Status (1)

Country Link
CN (1) CN107067474A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108492121A (en) * 2018-04-18 2018-09-04 景德镇止语堂陶瓷有限公司 A kind of system and method based on the VR technical identification Freehandhand-drawing tea set true and falses
CN108648139A (en) * 2018-04-10 2018-10-12 光锐恒宇(北京)科技有限公司 A kind of image processing method and device
CN108648284A (en) * 2018-04-10 2018-10-12 光锐恒宇(北京)科技有限公司 A kind of method and apparatus of video processing
CN108696699A (en) * 2018-04-10 2018-10-23 光锐恒宇(北京)科技有限公司 A kind of method and apparatus of video processing
CN108711192A (en) * 2018-04-10 2018-10-26 光锐恒宇(北京)科技有限公司 A kind of method for processing video frequency and device
CN109035420A (en) * 2018-08-21 2018-12-18 维沃移动通信有限公司 A kind of processing method and mobile terminal of augmented reality AR image
CN109255297A (en) * 2018-08-06 2019-01-22 百度在线网络技术(北京)有限公司 animal state monitoring method, terminal device, storage medium and electronic equipment
CN109658523A (en) * 2018-12-10 2019-04-19 西安小明出行新能源科技有限公司 The method for realizing each function operation instruction of vehicle using the application of AR augmented reality
CN111209809A (en) * 2019-12-24 2020-05-29 广东省智能制造研究所 Siamese network-based multi-input cross-view-angle gait recognition method and device
CN111652985A (en) * 2020-06-10 2020-09-11 上海商汤智能科技有限公司 Virtual object control method and device, electronic equipment and storage medium
CN111667588A (en) * 2020-06-12 2020-09-15 上海商汤智能科技有限公司 Person image processing method, person image processing device, AR device and storage medium
CN111723806A (en) * 2019-03-19 2020-09-29 北京京东尚科信息技术有限公司 Augmented reality method and apparatus
CN112053449A (en) * 2020-09-09 2020-12-08 脸萌有限公司 Augmented reality-based display method, device and storage medium
CN112053370A (en) * 2020-09-09 2020-12-08 脸萌有限公司 Augmented reality-based display method, device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020095276A1 (en) * 1999-11-30 2002-07-18 Li Rong Intelligent modeling, transformation and manipulation system
CN1588992A (en) * 2004-10-21 2005-03-02 上海交通大学 Entertainment system for video frequency real time synthesizing and recording
CN103116451A (en) * 2013-01-25 2013-05-22 腾讯科技(深圳)有限公司 Virtual character interactive method, device and system of intelligent terminal
CN103337079A (en) * 2013-07-09 2013-10-02 广州新节奏智能科技有限公司 Virtual augmented reality teaching method and device
CN103561065A (en) * 2013-10-22 2014-02-05 深圳市优逸电子科技有限公司 System and method for achieving 3D virtual advertisement with mobile terminal
KR101697041B1 (en) * 2016-01-12 2017-01-16 오철환 Method for data processing for responsive augmented reality card game by collision detection for virtual objects and device for playing responsive augmented reality card game

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020095276A1 (en) * 1999-11-30 2002-07-18 Li Rong Intelligent modeling, transformation and manipulation system
CN1588992A (en) * 2004-10-21 2005-03-02 上海交通大学 Entertainment system for video frequency real time synthesizing and recording
CN103116451A (en) * 2013-01-25 2013-05-22 腾讯科技(深圳)有限公司 Virtual character interactive method, device and system of intelligent terminal
CN103337079A (en) * 2013-07-09 2013-10-02 广州新节奏智能科技有限公司 Virtual augmented reality teaching method and device
CN103561065A (en) * 2013-10-22 2014-02-05 深圳市优逸电子科技有限公司 System and method for achieving 3D virtual advertisement with mobile terminal
KR101697041B1 (en) * 2016-01-12 2017-01-16 오철환 Method for data processing for responsive augmented reality card game by collision detection for virtual objects and device for playing responsive augmented reality card game

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
樊重俊 (Fan Chongjun) et al.: "《大数据分析与应用》" (Big Data Analysis and Application), 31 January 2016 *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108648139A (en) * 2018-04-10 2018-10-12 光锐恒宇(北京)科技有限公司 A kind of image processing method and device
CN108648284A (en) * 2018-04-10 2018-10-12 光锐恒宇(北京)科技有限公司 A kind of method and apparatus of video processing
CN108696699A (en) * 2018-04-10 2018-10-23 光锐恒宇(北京)科技有限公司 A kind of method and apparatus of video processing
CN108711192A (en) * 2018-04-10 2018-10-26 光锐恒宇(北京)科技有限公司 A kind of method for processing video frequency and device
CN108492121B (en) * 2018-04-18 2021-09-07 景德镇止语堂陶瓷有限公司 System and method for verifying authenticity of hand-drawn tea set based on VR technology
CN108492121A (en) * 2018-04-18 2018-09-04 景德镇止语堂陶瓷有限公司 A kind of system and method based on the VR technical identification Freehandhand-drawing tea set true and falses
CN109255297A (en) * 2018-08-06 2019-01-22 百度在线网络技术(北京)有限公司 animal state monitoring method, terminal device, storage medium and electronic equipment
CN109255297B (en) * 2018-08-06 2022-12-13 百度在线网络技术(北京)有限公司 Animal state monitoring method, terminal device, storage medium and electronic device
CN109035420A (en) * 2018-08-21 2018-12-18 维沃移动通信有限公司 A kind of processing method and mobile terminal of augmented reality AR image
CN109658523A (en) * 2018-12-10 2019-04-19 西安小明出行新能源科技有限公司 The method for realizing each function operation instruction of vehicle using the application of AR augmented reality
CN109658523B (en) * 2018-12-10 2023-05-09 田海玉 Method for realizing use description of various functions of vehicle by AR augmented reality application
CN111723806A (en) * 2019-03-19 2020-09-29 北京京东尚科信息技术有限公司 Augmented reality method and apparatus
CN111209809B (en) * 2019-12-24 2023-03-28 广东省智能制造研究所 Siamese network-based multi-input cross-view-angle gait recognition method and device
CN111209809A (en) * 2019-12-24 2020-05-29 广东省智能制造研究所 Siamese network-based multi-input cross-view-angle gait recognition method and device
CN111652985A (en) * 2020-06-10 2020-09-11 上海商汤智能科技有限公司 Virtual object control method and device, electronic equipment and storage medium
CN111652985B (en) * 2020-06-10 2024-04-16 上海商汤智能科技有限公司 Virtual object control method and device, electronic equipment and storage medium
CN111667588A (en) * 2020-06-12 2020-09-15 上海商汤智能科技有限公司 Person image processing method, person image processing device, AR device and storage medium
CN112053370A (en) * 2020-09-09 2020-12-08 脸萌有限公司 Augmented reality-based display method, device and storage medium
CN112053449A (en) * 2020-09-09 2020-12-08 脸萌有限公司 Augmented reality-based display method, device and storage medium
US11587280B2 (en) 2020-09-09 2023-02-21 Beijing Zitiao Network Technology Co., Ltd. Augmented reality-based display method and device, and storage medium
US11594000B2 (en) 2020-09-09 2023-02-28 Beijing Zitiao Network Technology Co., Ltd. Augmented reality-based display method and device, and storage medium
US11989845B2 (en) 2020-09-09 2024-05-21 Beijing Zitiao Network Technology Co., Ltd. Implementation and display of augmented reality

Similar Documents

Publication Publication Date Title
CN107067474A (en) Augmented reality processing method and device
US11790589B1 (en) System and method for creating avatars or animated sequences using human body features extracted from a still image
CN110245638A (en) Video generation method and device
CN106575450B (en) It is rendered by the augmented reality content of albedo model, system and method
CN110021061A (en) Collocation model building method, dress ornament recommended method, device, medium and terminal
CN103810504B (en) Image processing method and device
CN103916621A (en) Method and device for video communication
CN103218844A (en) Collocation method, implementation method, client side, server and system of virtual image
CN110298283A (en) Matching process, device, equipment and the storage medium of picture material
CN111627117A (en) Method and device for adjusting special effect of portrait display, electronic equipment and storage medium
CN108460398A (en) Image processing method, device, cloud processing equipment and computer program product
WO2018119593A1 (en) Statement recommendation method and device
CN114332374A (en) Virtual display method, equipment and storage medium
CN105847583A (en) Method and apparatus for image processing on mobile terminal
Anjani et al. Implementation of deep learning using convolutional neural network algorithm for classification rose flower
CN106909438A (en) Virtual data construction method and system based on True Data
CN107883520A (en) Based reminding method and device based on air-conditioning equipment, terminal
CN106683553B (en) Simulation corn and realization method for interactively experiencing and harvesting corn
CN108986191A (en) Generation method, device and the terminal device of figure action
Hutchison Recoding consumer culture: Ester Hernández, Helena María Viramontes, and the farmworker cause.
CN107135356A (en) Captions clap generation method and device, image processing method and device
CN111640199A (en) AR special effect data generation method and device
CN111061902A (en) Drawing method and device based on text semantic analysis and terminal equipment
Fried Dressing up, dressing down: Ethnic identity among the Tongren Tu of northwest China
Ayari Humor in Contemporary Native American Art

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170818

RJ01 Rejection of invention patent application after publication