CN106251405A - Augmented reality method and terminal - Google Patents

Augmented reality method and terminal

Info

Publication number
CN106251405A
CN106251405A
Authority
CN
China
Prior art keywords
model
image
user
database
captured image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610597140.0A
Other languages
Chinese (zh)
Inventor
孙璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Qizhi Software Beijing Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Qizhi Software Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd, Qizhi Software Beijing Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN201610597140.0A priority Critical patent/CN106251405A/en
Publication of CN106251405A publication Critical patent/CN106251405A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/04 - Indexing scheme for image data processing or generation, in general involving 3D image data

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention provides an augmented reality method and terminal. A terminal camera is started to obtain a captured image; one or more model databases are provided, and a target database is designated from the one or more model databases according to a user designation instruction; the captured image is displayed in real time, a target object matching a model in the target database is recognized from the captured image, and an AR image associated with that model is overlaid on the captured image. When there are many models, they can be divided into one or more groups (for example, grouped by a preset classification), each group corresponding to one model database, and these model databases are offered to the user for selection. The user knows which model database corresponds to the type of marker being shot, and the terminal matches the target object only against the models in the user-selected target database. This reduces the number of models to match, cuts the amount of data to process, saves processing time and improves recognition efficiency.

Description

Augmented reality method and terminal
Technical field
The present invention relates to the field of augmented reality, and in particular to an augmented reality method and an augmented reality terminal.
Background
Augmented reality (AR) is a technology that computes the position and orientation of the camera image in real time and superimposes corresponding images, video or 3D models on it; its goal is to wrap the virtual world around the real world on the screen and let the two interact.

Conventional AR technology generally requires recognizing a target object, for example a marker bearing a specific pattern, and outputs the corresponding AR image once the marker has been recognized. The specific pattern of the marker is unique, i.e. only that unique pattern can be recognized, and the AR image associated with it is then output. During recognition, the terminal device matches against the models in a database.

However, some occasions require recognizing many different target objects. In museums, amusement parks, theme parks, escape rooms and the like, the user relies on the terminal device to recognize different markers (three-dimensional objects such as sculptures, and planar objects such as paintings) in order to obtain the corresponding prompts and introductions. For the terminal device this means storing a database containing a large number of models and matching the target object in the captured image against those models one by one; a large amount of data has to be processed during recognition, so recognition efficiency is low.

Moreover, conventional AR technology is not designed around the user: interactivity is insufficient and the user experience needs to be improved.
Summary of the invention
The purpose of the present invention is to solve at least one of the above technical defects, in particular the defect of low recognition efficiency.
The present invention provides an augmented reality method, comprising the following steps:

starting a terminal camera and obtaining a captured image;

providing one or more model databases, and designating a target database from the one or more model databases according to a user designation instruction;

displaying the captured image in real time, recognizing from the captured image a target object that matches a model in the target database, and overlaying an AR image associated with the model on the captured image.
In one embodiment, the models within each model database share the same or similar attributes.

In one embodiment, the one or more model databases include at least a planar model database and a three-dimensional model database.

In one embodiment, the models within each model database share the same or similar appearance attributes.

In one embodiment, the appearance attributes include at least one of color scheme, shape and pattern.

In one embodiment, each model database corresponds to its own recognition algorithm; when the target object matching a model in the target database is recognized from the captured image, recognition is performed with the target recognition algorithm corresponding to the target database.

In one embodiment, the recognition algorithm includes at least one of the SIFT algorithm, the Harris algorithm, the SURF algorithm and the FAST algorithm.

In one embodiment, overlaying the AR image associated with the model on the captured image comprises:

overlaying the AR image associated with the model on the target object in the captured image, the AR image moving so as to track the movement of the target object.

In one embodiment, the target object is tracked by the LK optical flow algorithm, the efficient second-order minimization (ESM) algorithm or the ESM_blur algorithm.

In one embodiment, the AR image includes at least one of a picture, an animation, a video and a 3D model.

In one embodiment, the AR image is used to introduce the target object to the user or to prompt the user.

In one embodiment, more than one AR image is associated with the model, and the AR image to be overlaid on the captured image is determined according to user profile data; the user profile data includes at least one of user identity, user age, user location and user interest.

In one embodiment, the user profile data is stored in advance or is retrieved from another social application.

In one embodiment, after the AR image associated with the model is overlaid on the captured image, the method further comprises:

receiving a user transformation instruction;

transforming or replacing the AR image accordingly according to the user transformation instruction.

In one embodiment, after the AR image associated with the model is overlaid on the captured image, the method further comprises:

receiving a sensor value from a terminal sensor;

transforming or replacing the AR image accordingly according to the sensor value.
The present invention also provides an augmented reality terminal, comprising:

a photographing module, configured to start a terminal camera and obtain a captured image;

a designating module, configured to provide one or more model databases and designate a target database from the one or more model databases according to a user designation instruction;

a display module, configured to display the captured image in real time, recognize from the captured image a target object that matches a model in the target database, and overlay an AR image associated with the model on the captured image.
In one embodiment, the models within each model database share the same or similar attributes.

In one embodiment, the one or more model databases include at least a planar model database and a three-dimensional model database.

In one embodiment, the models within each model database share the same or similar appearance attributes.

In one embodiment, the appearance attributes include at least one of color scheme, shape and pattern.

In one embodiment, each model database corresponds to its own recognition algorithm; when the target object matching a model in the target database is recognized from the captured image, recognition is performed with the target recognition algorithm corresponding to the target database.

In one embodiment, the recognition algorithm includes at least one of the SIFT algorithm, the Harris algorithm, the SURF algorithm and the FAST algorithm.

In one embodiment, the display module is configured to overlay the AR image associated with the model on the target object in the captured image, the AR image moving so as to track the movement of the target object.

In one embodiment, the target object is tracked by the LK optical flow algorithm, the efficient second-order minimization (ESM) algorithm or the ESM_blur algorithm.

In one embodiment, the AR image includes at least one of a picture, an animation, a video and a 3D model.

In one embodiment, the AR image is used to introduce the target object to the user or to prompt the user.

In one embodiment, more than one AR image is associated with the model, and the AR image to be overlaid on the captured image is determined according to user profile data; the user profile data includes at least one of user identity, user age, user location and user interest.

In one embodiment, the user profile data is stored in advance or is retrieved from another social application.

In one embodiment, the terminal further includes a transformation module configured to: after the display module overlays the AR image associated with the model on the captured image, receive a user transformation instruction and transform or replace the AR image accordingly according to the user transformation instruction.

In one embodiment, the terminal further includes a transformation module configured to: after the display module overlays the AR image associated with the model on the captured image, receive a sensor value from a terminal sensor and transform or replace the AR image accordingly according to the sensor value.
With the above augmented reality method and terminal, a terminal camera is started to obtain a captured image; one or more model databases are provided, and a target database is designated from the one or more model databases according to a user designation instruction; the captured image is displayed in real time, a target object matching a model in the target database is recognized from the captured image, and the AR image associated with that model is overlaid on the captured image. When there are many models, they can be divided into one or more groups (for example, grouped by a preset classification), each group corresponding to one model database, and these model databases are offered to the user for selection. The user knows which model database corresponds to the type of marker being shot, and the terminal matches the target object only against the models in the user-selected target database. This reduces the number of models to match, cuts the amount of data to process, saves processing time and improves recognition efficiency.

Additional aspects and advantages of the present invention will be set forth in part in the following description; they will become apparent from that description or be learned through practice of the present invention.
Brief description of the drawings

The above and/or additional aspects and advantages of the present invention will become apparent and easy to understand from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:

Fig. 1 is a flowchart of the augmented reality method of an embodiment;

Fig. 2 is a block diagram of the augmented reality terminal of an embodiment.
Detailed description of the invention
Embodiments of the invention are described in detail below, and examples of the embodiments are shown in the drawings, in which the same or similar reference numerals denote the same or similar elements, or elements having the same or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary, serve only to explain the present invention, and are not to be construed as limiting the claims.

Those skilled in the art will appreciate that, unless expressly stated otherwise, the singular forms "a", "an", "the" and "said" used herein may also include the plural. It should be further understood that the word "comprising" used in this description refers to the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present; in addition, "connected" or "coupled" as used herein may include a wireless connection or wireless coupling. The term "and/or" as used herein includes all of, or any unit of, one or more of the associated listed items, and all combinations thereof.

Those skilled in the art will appreciate that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as is commonly understood by one of ordinary skill in the art to which this invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have meanings consistent with their meanings in the context of the prior art and, unless specifically defined as they are here, will not be interpreted in an idealized or overly formal sense.

Those skilled in the art will appreciate that the terms "terminal" and "terminal device" used herein cover both devices that have only a wireless signal receiver and no transmitting capability and devices with receiving and transmitting hardware that can perform two-way communication over a bidirectional communication link. Such devices may include: cellular or other communication devices, with or without a single-line or multi-line display; PCS (Personal Communications Service) devices, which may combine voice, data processing, fax and/or data communication capabilities; PDAs (Personal Digital Assistants), which may include a radio frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; and conventional laptop and/or palmtop computers or other devices that have and/or include a radio frequency receiver. The "terminal" or "terminal device" used herein may be portable, transportable, installed in a vehicle (aviation, maritime and/or land), or suitable and/or configured to operate locally and/or to operate in distributed form at any other location on the earth and/or in space. The "terminal" or "terminal device" used herein may also be a communication terminal, an Internet terminal or a music/video playback terminal, for example a PDA, an MID (Mobile Internet Device) and/or a mobile phone with music/video playback functionality, or a device such as a smart TV or a set-top box.

Those skilled in the art will appreciate that the remote network device used herein includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud composed of multiple servers. Here the cloud consists of a large number of computers or network servers based on cloud computing, where cloud computing is a form of distributed computing: a super virtual computer made up of a group of loosely coupled computers. In embodiments of the present invention, communication between the remote network device, the terminal device and the WNS server may be realized by any communication means, including but not limited to mobile communication based on 3GPP, LTE or WIMAX, computer network communication based on TCP/IP or UDP, and short-range wireless transmission based on Bluetooth or infrared transmission standards.
The augmented reality method described below can be implemented in the form of an application program (APP).

Fig. 1 is a flowchart of the augmented reality method of an embodiment.
The present invention provides an augmented reality method comprising the following steps.

Step S100: start the terminal camera and obtain a captured image.

Typically, when a marker appears and the user knows that it is a marker, the user can actively start the terminal camera in the APP to shoot the marker and obtain a real-time captured image. In the captured image, the marker can then be recognized as the target object by the recognition process described below.
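As a concrete illustration of step S100, the following is a minimal sketch of starting a camera and pulling real-time frames with OpenCV in Python; the function name, the camera index and the use of cv2.VideoCapture are illustrative assumptions, not part of the patent.

```python
import cv2

def get_captured_frames(camera_index=0):
    """Step S100 sketch: start the terminal camera and yield real-time captured images."""
    cap = cv2.VideoCapture(camera_index)  # open the (rear) camera
    if not cap.isOpened():
        raise RuntimeError("camera could not be started")
    try:
        while True:
            ok, frame = cap.read()        # one captured image (BGR frame)
            if not ok:
                break
            yield frame
    finally:
        cap.release()
```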
A marker can be a planar object or a three-dimensional object. Markers should carry an explicit prompt in written form, such as "scan to get great AR content", so that the user can easily recognize them as markers.

A planar marker can be a flat medium bearing an image, pattern or color scheme, such as a poster or a book cover, or a piece of planar art such as a painting or a mural on a wall. In short, any flat carrier that presents an image, pattern or color scheme can serve as a marker.

A three-dimensional marker can be a solid object such as a statue, a building, a craftwork or a piece of pottery, for example a three-dimensional relic exhibited in a museum, an animation-character sculpture or craftwork in an amusement park or theme park, or a distinctive sculpture in an escape-room game.

By scanning and shooting these markers, the user obtains services such as introductions and explanations, visitor prompts and audiovisual entertainment from the corresponding output AR image, as the following steps explain in detail.

After the captured image is obtained, step S200 is performed.
Step S200: provide one or more model databases, and designate a target database from the one or more model databases according to a user designation instruction.

Places such as museums, theme parks, amusement parks and escape rooms may contain many markers. Taking a museum as an example, it holds many relics, such as ancient paintings, stone carvings, porcelain, pottery, jade and bronzes; if the important relics are all used as markers (target objects) and corresponding AR images are produced for them, visitors can learn more about each relic through AR technology, which makes the visit more enjoyable. Taking a theme park as another example, statues of animation characters such as Mickey Mouse, Donald Duck and Snow White are dotted around the park; if some of these characters are used as markers (corresponding to target objects in the captured image) and corresponding AR images are produced, visitors can again learn more about each character through AR technology and enjoy the visit more. In an escape room, the hidden clues can likewise be embodied in various markers placed in the room, such as distinctive patterns and statues; players can scan these markers to obtain the corresponding AR images and find clues in them.

For occasions with many markers, each marker corresponds to one model stored in the terminal memory, and recognition requires matching the models in the database one by one, which obviously takes a large amount of processing time and makes recognition inefficient. Therefore, the models can be divided into one or more groups, each group corresponding to one model database, and these model databases are offered to the user for selection.

The models are stored in the terminal memory in advance, and each model is built in advance from its corresponding marker; when the model is built, the key features of the marker are analyzed and learned, so that the shot marker can later be recognized effectively by matching against the model. By analogy with today's popular fingerprint recognition, the marker corresponds to a person's fingerprint and the model corresponds to the fingerprint data stored by the terminal; after the marker is shot, the shot marker (that is, the target object in the captured image) is matched against the models in the model database to confirm and recognize the marker.

The models may be grouped by model type or attribute, or according to the designer's intent or a custom classification rule. Grouping the models by attribute gives one model database per group, and the models in each model database share the same or similar attributes.

For example, the models can be divided into two classes: a group of models with planar attributes, corresponding to a planar model database, and a group of models with three-dimensional attributes, corresponding to a three-dimensional model database. The planar models and the three-dimensional models can also be subdivided further; for example, the models with three-dimensional attributes can be subdivided into models that mainly exhibit edges and corners and models that mainly exhibit curved surfaces.

As another example, the models can be grouped by their appearance attributes, which include at least one of color scheme, shape and pattern; the models in each model database then share the same or similar appearance attributes. For instance, the models can be divided into three groups: models whose appearance mainly exhibits color-scheme features, models whose appearance mainly exhibits shape features, and models whose appearance mainly exhibits pattern features. These three groups can be subdivided further, for example by splitting the pattern-feature group into patterns composed mainly of straight lines and patterns composed mainly of curves.
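By way of illustration only (the patent does not prescribe any data structure), such a grouping could be represented as follows in Python; the model entries and attribute names are hypothetical.

```python
from collections import defaultdict

# Hypothetical pre-built models: each entry describes one marker the terminal can recognize.
MODELS = [
    {"name": "bronze_vessel", "geometry": "three_dimensional", "appearance": "shape"},
    {"name": "mural_dragon",  "geometry": "planar",            "appearance": "pattern"},
    {"name": "movie_poster",  "geometry": "planar",            "appearance": "color_scheme"},
    {"name": "park_statue",   "geometry": "three_dimensional", "appearance": "shape"},
]

def group_into_databases(models, attribute):
    """Group models that share the same attribute value into one model database each."""
    databases = defaultdict(list)
    for model in models:
        databases[model[attribute]].append(model)
    return dict(databases)

# Either a planar database plus a three-dimensional database,
# or databases keyed by appearance attribute (color scheme / shape / pattern).
by_geometry = group_into_databases(MODELS, "geometry")
by_appearance = group_into_databases(MODELS, "appearance")
```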
With the models grouped in this way, only a few, or even a single one, of the recognition algorithms described below is needed to recognize a marker quickly, which reduces design difficulty and improves recognition efficiency. Because the models in each group share the same or similar attributes, the designer may only need one recognition algorithm to recognize a marker rapidly within the corresponding model database.

Therefore, after the captured image is obtained, the multiple model databases are offered to the user, who selects the model database corresponding to the actual marker; this improves recognition efficiency. After the user has chosen a model database, step S300 is performed.

Step S300: display the captured image in real time, recognize from the captured image a target object that matches a model in the target database, and overlay the AR image associated with the model on the captured image.

After the captured image is obtained through the image sensor of the terminal camera, the display module of the terminal displays it in real time. The image sensor can be a CMOS (complementary metal-oxide-semiconductor) image sensor (for example, a CMOS active pixel sensor (APS)) or a CCD (charge-coupled device) sensor. Once the captured image is displayed, the real-world marker that has been shot forms the target object in the captured image; in other words, the real-world marker corresponds exactly to the target object in the captured image.

In this embodiment, each model database corresponds to its own recognition algorithm; when the target object matching a model in the target database is recognized from the captured image, recognition is performed with the target recognition algorithm corresponding to the target database. Suppose there are multiple model databases K1, K2, ..., Kn, corresponding to recognition algorithms f1, f2, ..., fn respectively; if the user selects K2 as the target database, the terminal uses recognition algorithm f2 as the target recognition algorithm.
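A minimal sketch of this database-to-algorithm dispatch is given below, assuming a simple registry; the database names, placeholder recognizer functions and dictionary layout are illustrative, not taken from the patent.

```python
def recognize_with_sift(image, models):
    """Placeholder for a SIFT-based matcher over planar-pattern models (f1)."""
    raise NotImplementedError

def recognize_with_fast(image, models):
    """Placeholder for a FAST/corner-based matcher over three-dimensional models (f2)."""
    raise NotImplementedError

# Registry pairing each model database K1..Kn with its recognition algorithm f1..fn.
MODEL_DATABASES = {
    "K1_planar_patterns": {"models": [], "recognizer": recognize_with_sift},
    "K2_3d_sculptures":   {"models": [], "recognizer": recognize_with_fast},
}

def recognize_target(captured_image, selected_db_name):
    """Run only the recognizer that belongs to the user-designated target database."""
    entry = MODEL_DATABASES[selected_db_name]
    return entry["recognizer"](captured_image, entry["models"])
```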
The recognition algorithm includes at least one of the SIFT algorithm, the Harris algorithm, the SURF algorithm and the FAST algorithm. SIFT is the Scale-Invariant Feature Transform algorithm; SURF is the Speeded-Up Robust Features image matching algorithm; FAST is the Features from Accelerated Segment Test algorithm.

Because the SIFT algorithm adapts well to complex deformation and illumination changes in the image, runs fairly fast and localizes accurately, it is well suited to this embodiment.

In step S300, the captured image is displayed in real time, the target object matching a model in the target database is recognized from the captured image, and the AR image associated with the model is overlaid on the captured image. The AR image can therefore be displayed at a set position on the screen: even if the captured image changes as the camera moves, the AR image stays at that set position on the screen while being displayed over the real-time captured image. From the user's viewing angle, the AR image is always overlaid on the captured image.

However, to give the user a better augmented reality experience, in step S300 of this embodiment the terminal overlays the AR image associated with the model on the target object in the captured image, and the AR image moves so as to track the movement of the target object. In other words, the AR image is displayed on the target object in the captured image; if the captured image changes because the camera moves, so that the position of the target object changes, the AR image tracks the target object and is always displayed on it.

The target object can be tracked by the LK optical flow algorithm (Lucas-Kanade optical flow), the efficient second-order minimization (ESM) algorithm, or the ESM_blur algorithm.

The LK algorithm tracks the target object by computing the sum of squared differences (SSD) of gray levels between images within a certain window; the ESM algorithm uses a second-order Taylor expansion to avoid computing the Hessian matrix, which speeds up the tracking algorithm; and the ESM_blur algorithm is a tracking algorithm that incorporates a motion blur model. The inventors found that the LK optical flow algorithm outperforms the ESM_blur algorithm in frame rate, success rate and back-projection error, so this embodiment uses the LK optical flow algorithm to track the target object.

Therefore, the strategy of this embodiment for recognizing and then tracking the target object is: first recognize the target object with the SIFT algorithm, then track it with the LK optical flow algorithm.
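As a rough sketch of that recognize-then-track strategy, the snippet below pairs OpenCV's SIFT matcher with pyramidal LK optical flow on grayscale frames; the matching threshold and Lowe-style ratio test are common defaults rather than values specified by the patent.

```python
import cv2
import numpy as np

MIN_MATCHES = 20  # illustrative threshold for declaring the target object recognized

def recognize_target_sift(model_image, captured_gray):
    """Recognition step: match a model image from the target database against the captured frame.
    Returns the matched keypoint locations in the captured frame, or None if the model does not fit."""
    sift = cv2.SIFT_create()
    kp_m, des_m = sift.detectAndCompute(model_image, None)
    kp_c, des_c = sift.detectAndCompute(captured_gray, None)
    if des_m is None or des_c is None:
        return None
    pairs = cv2.BFMatcher().knnMatch(des_m, des_c, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < MIN_MATCHES:
        return None
    return np.float32([kp_c[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

def track_target_lk(prev_gray, curr_gray, prev_pts):
    """Tracking step: once recognized, follow the target object frame to frame with LK optical flow,
    so the overlaid AR image can stay anchored on the moving target object."""
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)
    return next_pts[status.flatten() == 1].reshape(-1, 1, 2)
```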
The AR image includes at least one of a picture, an animation, a video and a 3D model.

In concrete applications, the AR image can be used to introduce or explain the target object to the user, to prompt the user, or to provide audiovisual entertainment. For example, in a museum where the marker is a sculpted figure among the relics, when the user scans the sculpture the output AR image can be an animated figure of that person, overlaid on the relic in the captured image to introduce and explain the relic. In a theme park where the marker is a statue of an animation character, when the user scans the statue the output AR image can be that character, which gives directions while standing on the statue in the captured image. In an escape room where the marker is a poster of Conan, when the user scans the poster the output AR image can be the cartoon character Conan, who speaks escape-clue hints over the poster in the captured image.

In this embodiment each model corresponds to exactly one AR image, but in other embodiments a model may correspond to several AR images. After the corresponding model has been determined from the target object, that model may be associated with more than one AR image, while only one AR image needs to be overlaid on the captured image; the AR image to overlay can therefore be determined from user profile data. That is, when more than one AR image is associated with the model, the terminal determines which AR image to overlay on the captured image according to the user profile data, which includes at least one of user identity, user age, user location and user interest.

For example, a model may correspond to three AR images suited to a pupil, an office worker and a business executive respectively, so that the output AR image can be chosen according to user identity. A model may correspond to three AR images suited to teenagers, the middle-aged and the elderly respectively, so that the output AR image can be chosen according to user age. A model may correspond to three AR images spoken in the Northeastern dialect, Cantonese and Southern Min respectively, so that the output AR image can be chosen according to user location. A model may correspond to two AR images, an animated version and a live-action 3D version, so that the output AR image can be chosen according to user interest.
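A possible way to realize this selection is sketched below; the candidate list, tag names and fallback rule are assumptions made for illustration.

```python
# Hypothetical candidates: one recognized model associated with several AR images,
# each tagged with the profile attribute it targets (identity, age, location, interest).
AR_IMAGES_FOR_MODEL = [
    {"file": "relic_intro_teen.mp4",      "age": "teen"},
    {"file": "relic_intro_adult.mp4",     "age": "middle_aged"},
    {"file": "relic_intro_cantonese.mp4", "location": "guangdong"},
]

def choose_ar_image(candidates, user_profile, default_index=0):
    """Pick the single AR image to overlay: the first candidate whose tags all match
    the user profile data; otherwise fall back to a default candidate."""
    for candidate in candidates:
        tags = {k: v for k, v in candidate.items() if k != "file"}
        if tags and all(user_profile.get(k) == v for k, v in tags.items()):
            return candidate["file"]
    return candidates[default_index]["file"]

# Profile stored in advance or retrieved from another social application.
print(choose_ar_image(AR_IMAGES_FOR_MODEL, {"age": "teen", "location": "beijing"}))
```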
The user profile data can be stored in advance by the user, or retrieved by the APP from other social applications at installation time or during use; where the user profile data is obtained from is not limited.

To strengthen interaction with the user, the output AR image can be transformed by listening for user instructions or by monitoring relevant sensors to sense user actions. A terminal with a touch screen is taken as an example below.

In one embodiment, after the AR image associated with the model has been overlaid on the captured image, the method further comprises: receiving a user transformation instruction, and transforming or replacing the AR image accordingly. The user transformation instruction can be generated from a touch event on the terminal. For example, when the user taps or swipes the AR image, the AR image changes accordingly: if the AR image is a cartoon character, tapping it makes the character change its action or play a corresponding voice prompt, and swiping it replaces it with another cartoon character that interacts with the user.

In another embodiment, after the AR image associated with the model has been overlaid on the captured image, the method further comprises: receiving a sensor value from a terminal sensor, and transforming or replacing the AR image accordingly. For example, when the user blows on or shakes the terminal, the terminal's air sensor and vibration sensor generate the corresponding sensor values, and the terminal changes the AR image according to those values: if the AR image is a cartoon character, blowing on the terminal makes the character's skirt or hair flutter, and shaking the terminal makes the character stagger and fall, or replaces the character.
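The two interaction paths (touch instructions and sensor values) could be handled by a single dispatcher such as the toy sketch below; the ARImage class, event names and thresholds are invented for illustration and do not come from the patent.

```python
class ARImage:
    """Toy stand-in for the overlaid AR content; real rendering would live in the AR engine."""
    def __init__(self, name):
        self.name = name
    def play_animation(self, animation):
        print(f"{self.name}: animation '{animation}'")
    def play_voice(self, clip):
        print(f"{self.name}: voice '{clip}'")
    def replace_with(self, other):
        self.name = other

BLOW_THRESHOLD = 0.6   # normalized air-sensor reading (illustrative)
SHAKE_THRESHOLD = 2.5  # acceleration magnitude in g (illustrative)

def handle_interaction(ar_image, event):
    """Transform or replace the AR image according to a touch instruction or a sensor value."""
    if event["type"] == "tap":
        ar_image.play_animation("wave")
        ar_image.play_voice("greeting")
    elif event["type"] == "swipe":
        ar_image.replace_with("alternate_character")
    elif event["type"] == "blow" and event["value"] > BLOW_THRESHOLD:
        ar_image.play_animation("hair_flutter")
    elif event["type"] == "shake" and event["value"] > SHAKE_THRESHOLD:
        ar_image.play_animation("stagger_and_fall")

handle_interaction(ARImage("cartoon_character"), {"type": "shake", "value": 3.0})
```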
Corresponding to the above augmented reality method, an augmented reality terminal is described below. The terminal is a mobile terminal with a rear camera, such as a tablet computer, a smartphone or a handheld game console.

Fig. 2 is a block diagram of the augmented reality terminal of an embodiment.

An augmented reality terminal includes a photographing module 100, a designating module 200 and a display module 300.

The photographing module 100 is configured to start the terminal camera and obtain a captured image; the designating module 200 is configured to provide one or more model databases and designate a target database from the one or more model databases according to a user designation instruction; and the display module 300 displays the captured image in real time, recognizes from the captured image a target object that matches a model in the target database, and overlays the AR image associated with the model on the captured image.
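One conceivable decomposition of these modules is sketched below as plain Python classes; the class and method names are chosen for readability and are not defined by the patent.

```python
class PhotographingModule:
    """Module 100: starts the terminal camera and supplies captured images."""
    def get_captured_image(self):
        raise NotImplementedError  # e.g. read a frame from the camera

class DesignatingModule:
    """Module 200: offers the model databases and records the user's designation."""
    def __init__(self, model_databases):
        self.model_databases = model_databases
    def designate_target(self, user_choice):
        return self.model_databases[user_choice]

class DisplayModule:
    """Module 300: shows the captured image, recognizes the target object against the
    target database and overlays the associated AR image on it."""
    def render(self, captured_image, target_database):
        raise NotImplementedError

class AugmentedRealityTerminal:
    """Wires the three modules together in the order of steps S100, S200 and S300."""
    def __init__(self, photographing, designating, display):
        self.photographing = photographing
        self.designating = designating
        self.display = display
```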
The photographing module 100 is configured to start the terminal camera and obtain a captured image.

Typically, when a marker appears and the user knows that it is a marker, the user can actively start the terminal camera in the APP to shoot the marker and obtain a real-time captured image. In the captured image, the marker can then be recognized as the target object by the recognition process described below.

A marker can be a planar object or a three-dimensional object. Markers should carry an explicit prompt in written form, such as "scan to get great AR content", so that the user can easily recognize them as markers.

A planar marker can be a flat medium bearing an image, pattern or color scheme, such as a poster or a book cover, or a piece of planar art such as a painting or a mural on a wall. In short, any flat carrier that presents an image, pattern or color scheme can serve as a marker.

A three-dimensional marker can be a solid object such as a statue, a building, a craftwork or a piece of pottery, for example a three-dimensional relic exhibited in a museum, an animation-character sculpture or craftwork in an amusement park or theme park, or a distinctive sculpture in an escape-room game.

By scanning and shooting these markers, the user obtains services such as introductions and explanations, visitor prompts and audiovisual entertainment from the corresponding output AR image, as the following steps explain in detail.
After the photographing module 100 obtains the captured image, the designating module 200 provides one or more model databases and designates a target database from the one or more model databases according to a user designation instruction.

Places such as museums, theme parks, amusement parks and escape rooms may contain many markers. Taking a museum as an example, it holds many relics, such as ancient paintings, stone carvings, porcelain, pottery, jade and bronzes; if the important relics are all used as markers (target objects) and corresponding AR images are produced for them, visitors can learn more about each relic through AR technology, which makes the visit more enjoyable. Taking a theme park as another example, statues of animation characters such as Mickey Mouse, Donald Duck and Snow White are dotted around the park; if some of these characters are used as markers (corresponding to target objects in the captured image) and corresponding AR images are produced, visitors can again learn more about each character through AR technology and enjoy the visit more. In an escape room, the hidden clues can likewise be embodied in various markers placed in the room, such as distinctive patterns and statues; players can scan these markers to obtain the corresponding AR images and find clues in them.

For occasions with many markers, each marker corresponds to one model stored in the terminal memory, and recognition requires matching the models in the database one by one, which obviously takes a large amount of processing time and makes recognition inefficient. Therefore, the models can be divided into one or more groups, each group corresponding to one model database, and these model databases are offered to the user for selection.

The models are stored in the terminal memory in advance, and each model is built in advance from its corresponding marker; when the model is built, the key features of the marker are analyzed and learned, so that the shot marker can later be recognized effectively by matching against the model. By analogy with today's popular fingerprint recognition, the marker corresponds to a person's fingerprint and the model corresponds to the fingerprint data stored by the terminal; after the marker is shot, the shot marker (that is, the target object in the captured image) is matched against the models in the model database to confirm and recognize the marker.

The models may be grouped by model type or attribute, or according to the designer's intent or a custom classification rule. Grouping the models by attribute gives one model database per group, and the models in each model database share the same or similar attributes.

For example, the models can be divided into two classes: a group of models with planar attributes, corresponding to a planar model database, and a group of models with three-dimensional attributes, corresponding to a three-dimensional model database. The planar models and the three-dimensional models can also be subdivided further; for example, the models with three-dimensional attributes can be subdivided into models that mainly exhibit edges and corners and models that mainly exhibit curved surfaces.

As another example, the models can be grouped by their appearance attributes, which include at least one of color scheme, shape and pattern; the models in each model database then share the same or similar appearance attributes. For instance, the models can be divided into three groups: models whose appearance mainly exhibits color-scheme features, models whose appearance mainly exhibits shape features, and models whose appearance mainly exhibits pattern features. These three groups can be subdivided further, for example by splitting the pattern-feature group into patterns composed mainly of straight lines and patterns composed mainly of curves.

With the models grouped in this way, only a few, or even a single one, of the recognition algorithms described below is needed to recognize a marker quickly, which reduces design difficulty and improves recognition efficiency. Because the models in each group share the same or similar attributes, the designer may only need one recognition algorithm to recognize a marker rapidly within the corresponding model database.

Therefore, after the captured image is obtained, the multiple model databases are offered to the user, who selects the model database corresponding to the actual marker; this improves recognition efficiency. After the user has chosen a model database, the display module 300 displays the captured image in real time, recognizes from the captured image a target object that matches a model in the target database, and overlays the AR image associated with the model on the captured image.
After the photographing module 100 obtains the captured image through the image sensor of the terminal camera, the display module 300 of the terminal displays it in real time. The image sensor can be a CMOS (complementary metal-oxide-semiconductor) image sensor (for example, a CMOS active pixel sensor (APS)) or a CCD (charge-coupled device) sensor. Once the captured image is displayed, the real-world marker that has been shot forms the target object in the captured image; in other words, the real-world marker corresponds exactly to the target object in the captured image.

In this embodiment, each model database corresponds to its own recognition algorithm; when the target object matching a model in the target database is recognized from the captured image, recognition is performed with the target recognition algorithm corresponding to the target database. Suppose there are multiple model databases K1, K2, ..., Kn, corresponding to recognition algorithms f1, f2, ..., fn respectively; if the user selects K2 as the target database, the designating module 200 of the terminal uses recognition algorithm f2 as the target recognition algorithm.

The recognition algorithm includes at least one of the SIFT algorithm, the Harris algorithm, the SURF algorithm and the FAST algorithm. SIFT is the Scale-Invariant Feature Transform algorithm; SURF is the Speeded-Up Robust Features image matching algorithm; FAST is the Features from Accelerated Segment Test algorithm.

Because the SIFT algorithm adapts well to complex deformation and illumination changes in the image, runs fairly fast and localizes accurately, it is well suited to this embodiment.

The display module 300 displays the captured image in real time, recognizes from the captured image the target object that matches a model in the target database, and overlays the AR image associated with the model on the captured image. The AR image can therefore be displayed at a set position on the screen: even if the captured image changes as the camera moves, the AR image stays at that set position on the screen while being displayed over the real-time captured image. From the user's viewing angle, the AR image is always overlaid on the captured image.

However, to give the user a better augmented reality experience, in this embodiment the display module 300 overlays the AR image associated with the model on the target object in the captured image, and the AR image moves so as to track the movement of the target object. In other words, the AR image is displayed on the target object in the captured image; if the captured image changes because the camera moves, so that the position of the target object changes, the AR image tracks the target object and is always displayed on it.

The target object can be tracked by the LK optical flow algorithm (Lucas-Kanade optical flow), the efficient second-order minimization (ESM) algorithm, or the ESM_blur algorithm.

The LK algorithm tracks the target object by computing the sum of squared differences (SSD) of gray levels between images within a certain window; the ESM algorithm uses a second-order Taylor expansion to avoid computing the Hessian matrix, which speeds up the tracking algorithm; and the ESM_blur algorithm is a tracking algorithm that incorporates a motion blur model. The inventors found that the LK optical flow algorithm outperforms the ESM_blur algorithm in frame rate, success rate and back-projection error, so this embodiment uses the LK optical flow algorithm to track the target object.

Therefore, the strategy of this embodiment for recognizing and then tracking the target object is: first recognize the target object with the SIFT algorithm, then track it with the LK optical flow algorithm.
The AR image includes at least one of a picture, an animation, a video and a 3D model.

In concrete applications, the AR image can be used to introduce or explain the target object to the user, to prompt the user, or to provide audiovisual entertainment. For example, in a museum where the marker is a sculpted figure among the relics, when the user scans the sculpture the output AR image can be an animated figure of that person, overlaid on the relic in the captured image to introduce and explain the relic. In a theme park where the marker is a statue of an animation character, when the user scans the statue the output AR image can be that character, which gives directions while standing on the statue in the captured image. In an escape room where the marker is a poster of Conan, when the user scans the poster the output AR image can be the cartoon character Conan, who speaks escape-clue hints over the poster in the captured image.

In this embodiment each model corresponds to exactly one AR image, but in other embodiments a model may correspond to several AR images. After the corresponding model has been determined from the target object, that model may be associated with more than one AR image, while only one AR image needs to be overlaid on the captured image; the AR image to overlay can therefore be determined from user profile data. That is, when more than one AR image is associated with the model, the display module 300 determines which AR image to overlay on the captured image according to the user profile data, which includes at least one of user identity, user age, user location and user interest.

For example, a model may correspond to three AR images suited to a pupil, an office worker and a business executive respectively, so that the output AR image can be chosen according to user identity. A model may correspond to three AR images suited to teenagers, the middle-aged and the elderly respectively, so that the output AR image can be chosen according to user age. A model may correspond to three AR images spoken in the Northeastern dialect, Cantonese and Southern Min respectively, so that the output AR image can be chosen according to user location. A model may correspond to two AR images, an animated version and a live-action 3D version, so that the output AR image can be chosen according to user interest.

The user profile data can be stored in advance by the user, or retrieved by the APP from other social applications at installation time or during use; where the user profile data is obtained from is not limited.

To strengthen interaction with the user, the output AR image can be transformed by listening for user instructions or by monitoring relevant sensors to sense user actions. A terminal with a touch screen is taken as an example below.

In one embodiment, the terminal further includes a transformation module which, after the display module 300 has overlaid the AR image associated with the model on the captured image, receives a user transformation instruction and transforms or replaces the AR image accordingly. The user transformation instruction can be generated from a touch event on the terminal. For example, when the user taps or swipes the AR image, the AR image changes accordingly: if the AR image is a cartoon character, tapping it makes the character change its action or play a corresponding voice prompt, and swiping it replaces it with another cartoon character that interacts with the user.

In another embodiment, the terminal further includes a transformation module which, after the display module 300 has overlaid the AR image associated with the model on the captured image, receives a sensor value from a terminal sensor and transforms or replaces the AR image accordingly. For example, when the user blows on or shakes the terminal, the terminal's air sensor and vibration sensor generate the corresponding sensor values, and the terminal changes the AR image according to those values: if the AR image is a cartoon character, blowing on the terminal makes the character's skirt or hair flutter, and shaking the terminal makes the character stagger and fall, or replaces the character.
With the above augmented reality method and terminal, a terminal camera is started to obtain a captured image; one or more model databases are provided, and a target database is designated from the one or more model databases according to a user designation instruction; the captured image is displayed in real time, a target object matching a model in the target database is recognized from the captured image, and the AR image associated with that model is overlaid on the captured image. When there are many models, they can be divided into one or more groups (for example, grouped by a preset classification), each group corresponding to one model database, and these model databases are offered to the user for selection. The user knows which model database corresponds to the type of marker being shot, and the terminal matches the target object only against the models in the user-selected target database. This reduces the number of models to match, cuts the amount of data to process, saves processing time and improves recognition efficiency.

It should be understood that although the steps in the flowchart of Fig. 1 are shown in sequence as indicated by the arrows, they are not necessarily performed in that sequence. Unless expressly stated herein, there is no strict order restriction on their execution and they can be performed in other orders. Moreover, at least some of the steps in Fig. 1 may include several sub-steps or stages; these sub-steps or stages are not necessarily executed at the same moment, and their execution order is not necessarily sequential: they can be executed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.

The above are only some embodiments of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. the method for an augmented reality, it is characterised in that comprise the steps:
Start terminal camera and obtain filmed image;
More than one model database is provided, sets the goal from more than one model database middle finger according to user's designated order Data base;
Showing described filmed image in real time, from described filmed image, identification matches with the model in described target database Destination object, is carried in the AR image being associated with described model on described filmed image.
2. The method of augmented reality according to claim 1, characterized in that the models in each model database share the same or similar attributes.
3. The method of augmented reality according to claim 2, characterized in that the more than one model database includes at least a planar-class model database and a three-dimensional-class model database.
4. The method of augmented reality according to claim 2, characterized in that the models in each model database share the same or similar appearance attributes.
5. The method of augmented reality according to claim 4, characterized in that the appearance attributes include at least one of color scheme, shape, and pattern.
6. The method of augmented reality according to claim 1, characterized in that each model database has its own recognition algorithm, and when a target object matching a model in the target database is identified from the captured image, the identification is performed by the target recognition algorithm corresponding to the target database.
7. The method of augmented reality according to claim 6, characterized in that the recognition algorithm includes at least one of the SIFT algorithm, the Harris algorithm, the SURF algorithm, and the FAST algorithm (an illustrative recognition sketch follows the claims).
8. The method of augmented reality according to claim 1, characterized in that overlaying the AR image associated with the model on the captured image comprises:
overlaying the AR image associated with the model on the target object in the captured image, the AR image moving so as to track the movement of the target object (an illustrative overlay sketch follows the claims).
9. The method of augmented reality according to claim 8, characterized in that the target object is tracked by the LK optical-flow algorithm, the efficient second-order minimization (ESM) algorithm, or the ESM_blur algorithm (an illustrative tracking sketch follows the claims).
10. A terminal of augmented reality, characterized in that it comprises:
a photographing module, configured to start a terminal camera and obtain a captured image;
a designation module, configured to provide more than one model database and designate a target database from the more than one model database according to a user's designation instruction;
a display module, configured to display the captured image in real time, identify from the captured image a target object that matches a model in the target database, and overlay an AR image associated with the model on the captured image.
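Claims 6 and 7 name feature-based recognition algorithms such as SIFT. The following is a hedged sketch, in Python with OpenCV (which provides cv2.SIFT_create in version 4.4 and later), of one plausible way to match a captured frame against the models of the designated target database: SIFT keypoints, Lowe's ratio test, and a RANSAC homography. The thresholds (0.75 ratio, 15 good matches, 5.0 px reprojection error) and the model_paths structure are assumptions, not values taken from the patent.

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()          # claim 7 also allows Harris, SURF, or FAST detectors
matcher = cv2.BFMatcher()         # brute-force L2 matcher, suitable for SIFT descriptors

def match_model(frame_gray, model_gray, min_good=15):
    """Return (homography, projected corners) if the model is found in the frame, else None."""
    kp_m, des_m = sift.detectAndCompute(model_gray, None)
    kp_f, des_f = sift.detectAndCompute(frame_gray, None)
    if des_m is None or des_f is None:
        return None
    pairs = matcher.knnMatch(des_m, des_f, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < 0.75 * n.distance]        # Lowe's ratio test
    if len(good) < min_good:
        return None
    src = np.float32([kp_m[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    h, w = model_gray.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    return H, cv2.perspectiveTransform(corners, H)    # target object's outline in the frame

def find_target(frame, model_paths):
    """Try each model of the designated target database and return the first hit."""
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for path in model_paths:
        model_gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if model_gray is None:
            continue
        result = match_model(frame_gray, model_gray)
        if result is not None:
            return path, result[0], result[1]         # matched model, homography, corner quad
    return None
```

Per claim 6, each model database could instead carry its own detector (for example FAST for planar targets); SIFT is shown here only because it is one of the algorithms the claim lists.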
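Claim 8 overlays the AR image on the recognized target object. Assuming the AR image has the same pixel size as the matched model image, the homography returned by the matcher sketched above can be reused to warp the overlay into place; overlay_on_target is an illustrative helper, not a function defined in the patent.

```python
import cv2
import numpy as np

def overlay_on_target(frame, ar_image, H):
    """Warp the AR image onto the target's location in the frame using homography H."""
    h, w = frame.shape[:2]
    warped = cv2.warpPerspective(ar_image, H, (w, h))
    # Build a mask of the warped region so only the target area is replaced.
    mask = cv2.warpPerspective(np.full(ar_image.shape[:2], 255, dtype=np.uint8), H, (w, h))
    mask3 = cv2.merge([mask, mask, mask]) > 0
    return np.where(mask3, warped, frame)
```

If the AR image and the model image differ in size, the homography would first have to be composed with the appropriate scaling.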
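Claim 9 allows the target object to be tracked with the LK optical-flow algorithm; the ESM and ESM_blur variants it also names have no stock OpenCV implementation and are not shown. The sketch below seeds corner points inside the recognized quad and follows them from frame to frame so the AR image can move with the target; the window size, corner count, and four-point survival threshold are assumptions.

```python
import cv2
import numpy as np

LK_PARAMS = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def init_track_points(frame_gray, quad):
    """Pick corner points inside the recognized target region (quad from the matcher)."""
    mask = np.zeros_like(frame_gray)
    cv2.fillConvexPoly(mask, quad.reshape(-1, 2).astype(np.int32), 255)
    return cv2.goodFeaturesToTrack(frame_gray, maxCorners=100, qualityLevel=0.01,
                                   minDistance=7, mask=mask)

def track(prev_gray, next_gray, prev_pts):
    """Follow the target between frames; returns updated points and their mean shift."""
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, prev_pts, None, **LK_PARAMS)
    if next_pts is None:
        return None
    ok = status.flatten() == 1
    good_new = next_pts[ok].reshape(-1, 2)
    good_old = prev_pts[ok].reshape(-1, 2)
    if len(good_new) < 4:
        return None                              # too few survivors: fall back to re-running recognition
    shift = (good_new - good_old).mean(axis=0)   # how far the AR overlay should move this frame
    return good_new.reshape(-1, 1, 2), shift
```

Applying the returned shift (or a homography re-estimated from the tracked points) to the overlay each frame makes the AR image follow the target's movement as claim 8 requires.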
CN201610597140.0A 2016-07-26 2016-07-26 The method of augmented reality and terminal Pending CN106251405A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610597140.0A CN106251405A (en) 2016-07-26 2016-07-26 The method of augmented reality and terminal

Publications (1)

Publication Number Publication Date
CN106251405A true CN106251405A (en) 2016-12-21

Family

ID=57604037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610597140.0A Pending CN106251405A (en) 2016-07-26 2016-07-26 The method of augmented reality and terminal

Country Status (1)

Country Link
CN (1) CN106251405A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104115182A (en) * 2011-12-06 2014-10-22 韦俊诚 Foreign language acquisition and learning service providing method based on context-aware using smart device
CN104050295A (en) * 2014-07-01 2014-09-17 彩带网络科技(北京)有限公司 Interaction method and system
CN104219584A (en) * 2014-09-25 2014-12-17 广州市联文信息科技有限公司 Reality augmenting based panoramic video interaction method and system
CN105184858A (en) * 2015-09-18 2015-12-23 上海历影数字科技有限公司 Method for augmented reality mobile terminal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG WANGPENG et al.: "Design and Implementation of a Model Database Management System for Object-Oriented Knowledge", Image and Graphics Technology Research and Applications *
SHAO WENJIAN: "Research on Object Detection and Tracking Algorithms for Augmented Reality", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106773051B (en) * 2016-12-28 2020-05-15 北京中投视讯文化传媒股份有限公司 Augmented reality device and method for displaying virtual nutrition information of AR marker
CN106773051A (en) * 2016-12-28 2017-05-31 太仓红码软件技术有限公司 Show the augmented reality devices and methods therefor of the virtual nutritional information of AR markers
WO2018152685A1 (en) * 2017-02-22 2018-08-30 Tencent Technology (Shenzhen) Company Limited Image processing in a vr system
US11003707B2 (en) 2017-02-22 2021-05-11 Tencent Technology (Shenzhen) Company Limited Image processing in a virtual reality (VR) system
CN107464290A (en) * 2017-08-07 2017-12-12 上海白泽网络科技有限公司 Three-dimensional information methods of exhibiting, device and mobile terminal
CN109525538B (en) * 2017-09-20 2021-08-24 丰盛数位有限公司 Interaction method and system for augmented reality authentication
CN109525538A (en) * 2017-09-20 2019-03-26 丰盛数位有限公司 Interactive approach and system for augmented reality certification
CN109582687A (en) * 2017-09-29 2019-04-05 白欲立 A kind of data processing method and device based on augmented reality
CN109963163A (en) * 2017-12-26 2019-07-02 阿里巴巴集团控股有限公司 Internet video live broadcasting method, device and electronic equipment
WO2019205411A1 (en) * 2018-04-22 2019-10-31 平安科技(深圳)有限公司 Arkit-based learning method and apparatus, device, and storage medium
CN108876880A (en) * 2018-04-22 2018-11-23 平安科技(深圳)有限公司 Learning method, device, equipment and storage medium based on ARkit
CN110264393A (en) * 2019-05-15 2019-09-20 联想(上海)信息技术有限公司 A kind of information processing method, terminal and storage medium
WO2021088161A1 (en) * 2019-11-08 2021-05-14 福建工程学院 Ar-based description method and description system
WO2021197016A1 (en) * 2020-04-01 2021-10-07 Guangdong Oppo Mobile Telecommunications Corp., Ltd. System and method for enhancing subjects in videos

Similar Documents

Publication Publication Date Title
CN106251405A (en) The method of augmented reality and terminal
WO2019223468A1 (en) Camera orientation tracking method and apparatus, device, and system
CN103973969B (en) Electronic installation and its image system of selection
KR101763132B1 (en) Methods and systems for content processing
KR20200020960A (en) Image processing method and apparatus, and storage medium
CN110288547A (en) Method and apparatus for generating image denoising model
WO2019007258A1 (en) Method, apparatus and device for determining camera posture information, and storage medium
US10262464B2 (en) Dynamic, local augmented reality landmarks
CN111556278A (en) Video processing method, video display device and storage medium
CN109035334A (en) Determination method and apparatus, storage medium and the electronic device of pose
US9135752B2 (en) Image display system
CN114003190B (en) Augmented reality method and device suitable for multiple scenes and multiple devices
EP3671411B1 (en) Location-enabled augmented reality (ar) system and method for interoperability of ar applications
CN112308977B (en) Video processing method, video processing device, and storage medium
Ambeth Kumar et al. IOT-based smart museum using wearable device
CN105117399A (en) Image search method and device
CN110309721A (en) Method for processing video frequency, terminal and storage medium
Dougherty Electronic imaging technology
Xu The research on applying artificial intelligence technology to virtual YouTuber
CN110135329B (en) Method, device, equipment and storage medium for extracting gestures from video
CN110060285A (en) A kind of remote sensing image registration method and system based on SURF algorithm
Hasper et al. Remote execution vs. simplification for mobile real-time computer vision
JP6892557B2 (en) Learning device, image generator, learning method, image generation method and program
Woodward et al. Case Digitalo-A range of virtual and augmented reality solutions in construction application
Cappellini Electronic Imaging & the Visual Arts. EVA 2013 Florence

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20161221