CN109427096A - A kind of automatic guide method and system based on augmented reality - Google Patents
An automatic guide method and system based on augmented reality
- Publication number
- CN109427096A (application number CN201710758651.0A)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
Abstract
The embodiment of the invention discloses an automatic guide method and system based on augmented reality. The method includes: obtaining an image captured by a camera for the current scene; identifying a target object in the image; obtaining, from a database, a virtual identifier associated with the target object; and adding the virtual identifier to the image according to the position of the target object in the image. By adding the virtual identifier associated with the target object to the current image, this scheme achieves virtual-real interaction with, and seamless switching of, three-dimensional virtual models on a mobile terminal; compared with a two-dimensional plane image, it conveys a stronger sense of depth and is easier to understand.
Description
Technical field
The present invention relates to the field of data processing, and in particular to an automatic guide method and system based on augmented reality.
Background technique
Augmented reality (AR) is a technology that has gradually developed from virtual reality in recent years. Its principle is to augment the user's perception of the real world with information supplied by a computer: 3D objects, virtual scenes, or prompt information generated by the computer system are superimposed on the real scene, thereby "enhancing" reality. In 1997, Ronald Azuma of the University of North Carolina proposed a definition of augmented reality: the real objects, virtual objects, and user environment must be seamlessly combined, and real and virtual objects must also be able to interact with each other; only then is true virtual-real fusion achieved. AR technology therefore has the new characteristics of combining the virtual and the real, real-time interaction, and three-dimensional registration, and it expands the perceptual range of human beings: the user sees both the real world and the virtual objects superimposed upon it.
Summary of the invention
The embodiment of the present invention provides an automatic guide method and system based on augmented reality, which can achieve virtual-real interaction with, and seamless switching of, three-dimensional virtual models on a mobile terminal.
In a first aspect, an embodiment of the present invention provides an automatic guide method based on augmented reality, comprising:
obtaining an image captured by a camera for the current scene;
identifying a target object in the image;
obtaining, from a database, a virtual identifier associated with the target object;
adding the virtual identifier to the image according to the position of the target object in the image.
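As an illustration only, the four steps of the first aspect can be sketched as a minimal pipeline; the function names and the dictionary standing in for the database are assumptions, not part of the claims:

```python
# Illustrative sketch of the claimed four-step pipeline. The callables
# detect_target and render_overlay, and the dict used as "database",
# are assumptions for illustration; the patent prescribes no implementation.

def run_guide_pipeline(frame, database, detect_target, render_overlay):
    """frame: image captured by the camera for the current scene."""
    # Step 2: identify the target object and its position in the image
    target_name, position = detect_target(frame)
    if target_name not in database:
        return frame  # no associated virtual identifier: show the image as-is
    # Step 3: look up the virtual identifier associated with the target
    virtual_marker = database[target_name]
    # Step 4: superimpose the virtual identifier at the target's position
    return render_overlay(frame, virtual_marker, position)
```

The pipeline deliberately returns the unmodified frame when the database holds no identifier for the detected target, so the live camera view is never blocked.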
In a second aspect, an embodiment of the present invention further provides an automatic navigation system based on augmented reality, comprising: a first obtaining module, an identification module, a second obtaining module, and an adding module;
the first obtaining module is configured to obtain an image captured by a camera for the current scene;
the identification module is configured to identify a target object in the image;
the second obtaining module is configured to obtain, from a database, a virtual identifier associated with the target object;
the adding module is configured to add the virtual identifier to the image according to the position of the target object in the image.
The automatic guide method based on augmented reality provided by the embodiments of the present invention first obtains an image captured by a camera for the current scene, identifies the target object in the image, obtains from a database a virtual identifier associated with that target object, and adds the virtual identifier to the image according to the position of the target object in the image. By adding the virtual identifier associated with the target object to the current image, this scheme achieves virtual-real interaction with, and seamless switching of, three-dimensional virtual models on a mobile terminal; compared with a two-dimensional plane image, it conveys a stronger sense of depth and is easier to understand.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present invention more clearly, make required in being described below to embodiment
Attached drawing is briefly described, it should be apparent that, drawings in the following description are only some embodiments of the invention, for
For those skilled in the art, without creative efforts, it can also be obtained according to these attached drawings other attached
Figure.
Fig. 1 is a flow diagram of an automatic guide method based on augmented reality provided by an embodiment of the present invention.
Fig. 2 is a flow diagram of another automatic guide method based on augmented reality provided by an embodiment of the present invention.
Fig. 3 is a scene diagram of the automatic guide method based on augmented reality provided by an embodiment of the present invention.
Fig. 4 is a structural diagram of an automatic navigation system based on augmented reality provided by an embodiment of the present invention.
Fig. 5 is a structural diagram of another automatic navigation system based on augmented reality provided by an embodiment of the present invention.
Fig. 6 is a structural diagram of a server provided by an embodiment of the present invention.
Specific embodiment
Please refer to the drawings, in which identical reference symbols represent identical components; the principles of the present invention are illustrated as implemented in a suitable computing environment. The following description is based on the illustrated specific embodiments of the invention and should not be regarded as limiting other specific embodiments not detailed herein.
In the following description, specific embodiments of the invention are described with reference to steps and symbolic representations of operations performed by one or more computers, unless otherwise stated. As such, these steps and operations, which are at times referred to as being computer-executed, include the manipulation by a computer processing unit of electrical signals representing data in a structured form. This manipulation transforms the data, or maintains them at locations in the computer's memory system, in a way that reconfigures or otherwise alters the operation of the computer in a manner well understood by those skilled in the art. The data structures in which the data are maintained are physical locations of the memory that have particular properties defined by the data format. However, while the principle of the invention is described in the foregoing context, this is not meant as a limitation; those skilled in the art will appreciate that various of the steps and operations described below may also be implemented in hardware.
The principles of the present invention operate with many other general-purpose or special-purpose computing or communication environments and configurations. Examples of known computing systems, environments, and configurations suitable for the invention may include (but are not limited to) hand-held telephones, personal computers, servers, multiprocessor systems, microcomputer-based systems, mainframe computers, and distributed computing environments that include any of the above systems or devices.
Each aspect is described in detail below.
The present embodiment is described from the perspective of an automatic navigation system based on augmented reality, which may specifically be integrated in a terminal.
Referring to Fig. 1, Fig. 1 is a flow diagram of an automatic guide method based on augmented reality provided by an embodiment of the present invention. The method of this embodiment includes the following steps.
Step S101: obtain an image captured by a camera for the current scene.
In the embodiment of the present invention, the image may be acquired by the terminal photographing the current scene. Specifically, after the terminal receives a photographing request input by the user, it may start the camera and shoot the current scene through the camera to obtain the image. The mobile terminal may receive the photographing request through a pre-set button on the terminal, through a combination of one or more existing buttons, or through the touch screen (for example, the user taps a "take photo" button on the screen). After receiving the photographing request, the mobile terminal obtains the image through the camera and saves it in preparation for subsequent processing. The camera may be either a front camera or a rear camera.
It should be noted that, in other embodiments, the image may also be an image pre-stored in the terminal, such as an image downloaded from the Internet or an image sent by another terminal device; the present invention places no further limitation on this.
Step S102: identify the target object in the image.
In the embodiment of the present invention, feature information may be extracted from the image. The feature information may include the contours of the objects contained in the image, and the target object in the image is then determined from these contours.
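A minimal sketch of this kind of feature-based localisation, assuming the target is darker than its background and using a bounding box in place of a full contour (the patent prescribes neither choice):

```python
import numpy as np

def find_target_bbox(gray, threshold=128):
    """Locate the target object in a grayscale image. Assumption for
    illustration: the target is darker than the background, and its
    bounding box stands in for the extracted contour."""
    mask = gray < threshold            # foreground (candidate target) pixels
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                    # no target found in the image
    # bounding box (x_min, y_min, x_max, y_max) of the foreground region
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

A production system would instead run a real contour or feature-point detector; the sketch only shows where the position used in step S104 comes from.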
In an embodiment, the target object may also be obtained according to a user instruction. For example, the user instruction may include a pressed region of the image: when the user presses some region of the image with a finger, the terminal can detect the region of the touch screen contacted by the finger, treat it as the target area, and then obtain the target object within that region.
In an embodiment, the user instruction may also be a closed trajectory formed by the sliding track of a touch operation on the image. For example, when the user performs a touch operation on the image on the touch screen, a closed sliding trajectory can be formed; the image region enclosed by the trajectory is taken as the target area, and the target object is chosen within it.
In one embodiment, the target area may also be determined from the touch points pressed on the touch screen, where the touch points correspond to the fingers the terminal detects pressing the screen. For example, if three fingers press the screen, three touch points are detected; if the three touch points are not on the same straight line, they can be connected into a triangle, and the triangle is taken as the target area. Similarly, for N touch points, where N is an integer greater than or equal to 3 and at least three of the N touch points are not on the same straight line, the N touch points can be connected in sequence, the enclosed region obtained after connection is taken as the target area, and the target object is finally chosen within it.
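The N-touch-point construction above can be sketched as follows; the collinearity test and the data layout are illustrative assumptions:

```python
def collinear(points):
    """True if all points lie on one straight line (cross-product test)."""
    (x0, y0), (x1, y1) = points[0], points[1]
    return all((x1 - x0) * (y - y0) == (y1 - y0) * (x - x0)
               for x, y in points[2:])

def touch_points_to_region(points):
    """Connect N >= 3 touch points in sequence into a closed target region,
    as described above. Returns None when no enclosed region exists, i.e.
    fewer than 3 points or all points on one straight line."""
    if len(points) < 3 or collinear(points):
        return None
    return points  # vertices of the enclosed polygon, joined in order
```

With three non-collinear points this yields exactly the triangle case described in the text; with N points it yields the general enclosed polygon.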
Step S103: obtain, from the database, a virtual identifier associated with the target object.
In an embodiment, after the marker associated with the target object is obtained, the marker is modelled to obtain a virtual three-dimensional model. To improve modelling efficiency, a rapid modelling strategy based on model complexity may be used. The basic idea is to analyse, before modelling, the role of the model in the virtual scene, and thereby decide which modelling approach to use. If the model has an irregular shape and complex structure and plays a leading role in the virtual scene (such as practical training equipment or assembly parts), it is modelled with professional three-dimensional software using a general modelling workflow. If the model has a regular shape and simple structure and plays a secondary role in the virtual scene (such as the training-room interior or wall fittings), it is modelled directly with VRML, invoking materials through the material field of the Appearance node or referencing texture maps through the Texture field.
Step S104: add the virtual identifier to the image according to the position of the target object in the image.
In the embodiment of the present invention, after the marker has been modelled to obtain a virtual three-dimensional model, and before the model is added to the image, the virtual three-dimensional model may be rendered so that the final image conforms to the 3D scene. Many software packages can perform this rendering: each CG package carries its own rendering engine, and there are also renderers such as RenderMan. A virtual image of the three-dimensional model is obtained, and this three-dimensional virtual image is then added to the image.
The virtual identifier may be information such as a three-dimensional model, an animation, a video, or text. For example, three virtual buttons may be superimposed on the LOGO nameplate of a laboratory; by clicking different buttons a student can view the laboratory introduction, the student introduction, and the teacher introduction, while "Welcome to the Virtual Reality Laboratory of Qingdao University of Science and Technology" is displayed by default when no button is clicked. As another example, a built-in video may be superimposed on a test piece; the video may be a local resource or a network resource, and as long as the mobile terminal is connected, the pre-set network video can be played. As yet another example, a static 3D vase may be superimposed on the teacher's desk, and the user can adjust the phone camera to observe the vase from different angles.
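The final superimposition step can be sketched as simple alpha blending of the rendered model over the camera image; the blending rule is an assumption for illustration, not the patent's rendering pipeline:

```python
import numpy as np

def overlay_rendered_marker(image, rendered, alpha_mask, top_left):
    """Composite a rendered image of the virtual three-dimensional model onto
    the camera image at the target position by simple alpha blending.
    image, rendered and alpha_mask are float numpy arrays; top_left is the
    (row, col) of the target position in the camera image."""
    h, w = rendered.shape[:2]
    y, x = top_left
    roi = image[y:y + h, x:x + w]          # region covered by the marker
    image[y:y + h, x:x + w] = alpha_mask * rendered + (1.0 - alpha_mask) * roi
    return image
```

A per-pixel alpha mask lets an irregularly shaped model (such as the vase example) occlude only its own silhouette while the camera image shows through elsewhere.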
As can be seen from the above, the automatic guide method based on augmented reality provided by the embodiment of the present invention first obtains the image captured by the camera for the current scene, identifies the target object in the image, obtains from the database the virtual identifier associated with that target object, and adds the virtual identifier to the image according to the position of the target object in the image. By adding the virtual identifier associated with the target object to the current image, this scheme achieves virtual-real interaction with, and seamless switching of, three-dimensional virtual models on a mobile terminal; compared with a two-dimensional plane image, it conveys a stronger sense of depth and is easier to understand.
Following the description of the previous embodiment, the automatic guide method based on augmented reality of the present invention is further illustrated below.
Referring to Fig. 2, Fig. 2 is another flow diagram of the automatic guide method based on augmented reality provided by an embodiment of the present invention, comprising the following steps.
Step S201: obtain an image captured by a camera for the current scene.
In the embodiment of the present invention, the image may be acquired by the terminal photographing the current scene. Specifically, after the terminal receives a photographing request input by the user, it may start the camera and shoot the current scene through the camera to obtain the image.
Step S202: identify the target object in the image.
In the embodiment of the present invention, feature information may be extracted from the image. The feature information may include the contours of the objects contained in the image, and the target object in the image is then determined from these contours.
Step S203: obtain, from the database, a virtual identifier associated with the target object.
Step S204: determine viewing-angle information of the terminal and the target object from the image.
Step S205: establish a three-dimensional model of the virtual identifier according to the viewing-angle information.
In the embodiment of the present invention, Unity3D may be used to model the marker in three dimensions, the Vuforia engine may detect and track the feature points of the marker, and the corresponding three-dimensional model is drawn on the view plane according to the relative position and posture of the different markers. The step of establishing the three-dimensional model of the virtual identifier according to the viewing-angle information may specifically include:
determining the type of the virtual identifier;
determining a target modelling strategy according to the type of the virtual identifier;
modelling according to the viewing-angle information and the target modelling strategy to generate the three-dimensional model of the virtual identifier.
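These sub-steps can be sketched as a small dispatch table, following the earlier complexity-based strategy; the type names and strategy strings are assumptions for illustration:

```python
# Hypothetical type-to-strategy table: complex models that play a leading
# role go to professional 3D software; simple, secondary models are
# modelled directly in VRML. The keys are assumed type labels.
STRATEGIES = {
    "complex": "professional 3D software, general modelling workflow",
    "simple": "direct VRML modelling, materials via the Appearance node",
}

def choose_modeling_strategy(marker_type):
    # Unknown types fall back to the more detailed professional workflow.
    return STRATEGIES.get(marker_type, STRATEGIES["complex"])
```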
Unity3D is a cross-platform comprehensive 3D game engine developed by Unity Technologies; combined with augmented-reality development tools, it can realize virtual-real superposition and human-computer interaction. The software includes a rendering engine (supporting Direct3D, OpenGL, and other proprietary APIs) and real-time post-processing tools such as lightmapping.
Step S206: superimpose the three-dimensional model and the image according to the position of the target object in the image.
Before the three-dimensional model and the image are superimposed, the three-dimensional model may also be rendered. The rendering may specifically include light processing and texture processing: light processing simulates lighting effects on the three-dimensional model, and texture processing simulates texture effects on it. Specifically, an illumination model can be constructed by the light-processing module. In the basic illumination model, the surface colour of an object is the sum of the emissive, ambient, diffuse, and specular lighting contributions. Each contribution depends jointly on the properties of the surface material (such as brightness and material colour) and the properties of the light source (such as the colour and position of the light). The module supports various light-source models, including directional lights, spotlights, and floodlights, and lighting effects can be inspected in real time by adjusting parameters; lighting simulation is then performed on the GPU using shader technology. The texture data of the virtual scene are managed and scheduled by the texture-processing module. The core of this sub-module is a texture manager (TextureManager), which supports common texture formats, including tga, png, jpg, bmp, and dds. Texture data loaded into memory are parsed and their information extracted; the public textures of the rendering engine are managed with a texture-counting technique so that identical textures are not loaded repeatedly, saving memory and video-memory space. The module also supports rendering a variety of GPU texture effects; combined with display-caching techniques such as FBO (Frame Buffer Object) and PBO (Pixel Buffer Object), it performs more lifelike effect simulation on model textures, including Z-correct bump mapping, glow effects, AVI video, and so on.
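The basic illumination model described above, in which the surface colour is the sum of the emissive, ambient, diffuse, and specular contributions, can be sketched as follows, with a Lambertian diffuse term as one concrete example (the channel layout and unit-vector convention are assumptions):

```python
def lambert_diffuse(normal, light_dir, light_color, material_color):
    """Diffuse contribution: material * light * max(N . L, 0).
    normal and light_dir are unit 3-vectors; colours are RGB tuples."""
    n_dot_l = max(sum(n * l for n, l in zip(normal, light_dir)), 0.0)
    return tuple(m * c * n_dot_l for m, c in zip(material_color, light_color))

def surface_color(emissive, ambient, diffuse, specular):
    """Basic illumination model: per-channel sum of the four contributions."""
    return tuple(e + a + d + s
                 for e, a, d, s in zip(emissive, ambient, diffuse, specular))
```

In practice this computation runs per pixel in a GPU shader, as the text notes; the CPU-side sketch only shows how the four contributions combine.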
Step S207: receive a user operation directed at the virtual identifier.
Step S208: process the image of the virtual identifier accordingly, in response to the user operation.
The embodiment of the present invention offers rich interactivity: the user can control the superimposed objects through virtual buttons, joysticks, and the like.
As shown in Fig. 3, the user can control a 3D character through a virtual joystick: the user touches the joystick button in the lower-left corner of the mobile terminal screen with a finger and moves it up, down, left, or right, and the 3D virtual model superimposed on the experiment bench moves forward, backward, left, or right accordingly.
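The joystick control described above can be sketched as a mapping from a two-axis reading to ground-plane movement of the superimposed model; the axis convention and speed are assumptions:

```python
def move_model(position, joystick, speed=0.1):
    """Map a virtual-joystick reading (x, y, each in [-1, 1]) to movement
    of the superimposed 3D model on the ground plane. Assumed convention:
    x moves the model left/right, y forward/backward; height is unchanged."""
    px, py, pz = position
    dx, dy = joystick
    return (px + dx * speed, py, pz + dy * speed)
```

Calling this once per frame with the current joystick reading produces the continuous forward/backward/left/right motion described for the Fig. 3 character.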
This scheme proposes an automatic guide method for mobile augmented reality based on Unity3D: the scene is modelled in three dimensions with Unity3D, secondary development is carried out on the basis of Qualcomm's Vuforia SDK, and virtual-real interaction with, and seamless switching of, 3D models, animations, and videos is realized on the mobile terminal.
As can be seen from the above, the embodiment of the present invention can obtain the image captured by the camera for the current scene, identify the target object in the image, obtain from the database the virtual identifier associated with the target object, determine the viewing-angle information of the terminal and the target object from the image, establish the three-dimensional model of the virtual identifier according to the viewing-angle information, superimpose the three-dimensional model and the image according to the position of the target object in the image, receive the user operation directed at the virtual identifier, and process the image of the virtual identifier accordingly. By adding the virtual identifier associated with the target object to the current image, this scheme achieves virtual-real interaction with, and seamless switching of, three-dimensional virtual models on a mobile terminal; compared with a two-dimensional plane image, it conveys a stronger sense of depth and is easier to understand.
To facilitate better implementation of the automatic guide method based on augmented reality provided by the embodiments of the present invention, an embodiment of the present invention further provides a system based on the above automatic guide method. The terms used below have the same meanings as in the automatic guide method above, and implementation details may refer to the explanations in the method embodiments.
Referring to Fig. 4, Fig. 4 is a structural diagram of an automatic navigation system based on augmented reality provided by an embodiment of the present invention. The automatic navigation system 30 based on augmented reality includes: a first obtaining module 301, an identification module 302, a second obtaining module 303, and an adding module 304;
the first obtaining module 301 is configured to obtain an image captured by a camera for the current scene;
the identification module 302 is configured to identify a target object in the image;
the second obtaining module 303 is configured to obtain, from a database, a virtual identifier associated with the target object;
the adding module 304 is configured to add the virtual identifier to the image according to the position of the target object in the image.
In one embodiment, as shown in Fig. 5, the identification module 302 includes: an extracting sub-module 3021 and a determining sub-module 3022;
the extracting sub-module 3021 is configured to extract feature information from the image, the feature information including the contours of the objects contained in the image;
the determining sub-module 3022 is configured to determine the target object in the image from the contours.
In one embodiment, the system 30 further includes: a viewing-angle determining module 305 and a model building module 306;
the viewing-angle determining module 305 is configured to determine the viewing-angle information of the terminal and the target object from the image, after the second obtaining module 303 obtains the virtual identifier associated with the target object from the database;
the model building module 306 is configured to establish the three-dimensional model of the virtual identifier according to the viewing-angle information;
the adding module 304 is specifically configured to superimpose the three-dimensional model and the image according to the position of the target object in the image.
In one embodiment, the model building module 306 includes: a type determining sub-module 3061, a strategy determining sub-module 3062, and an establishing sub-module 3063;
the type determining sub-module 3061 is configured to determine the type of the virtual identifier;
the strategy determining sub-module 3062 is configured to determine a target modelling strategy according to the type of the virtual identifier;
the establishing sub-module 3063 is configured to model according to the viewing-angle information and the target modelling strategy to generate the three-dimensional model of the virtual identifier.
In one embodiment, the system 30 further includes: a receiving module 307 and a processing module 308;
the receiving module 307 is configured to receive a user operation directed at the virtual identifier, after the adding module adds the virtual identifier to the image according to the position of the target object in the image;
the processing module 308 is configured to process the image of the virtual identifier accordingly, in response to the user operation.
As can be seen from the above, in the automatic navigation system based on augmented reality provided by the embodiment of the present invention, the first obtaining module 301 obtains the image captured by the camera for the current scene, the identification module 302 identifies the target object in the image, the second obtaining module 303 obtains from the database the virtual identifier associated with the target object, and the adding module 304 adds the virtual identifier to the image according to the position of the target object in the image. By adding the virtual identifier associated with the target object to the current image, this scheme achieves virtual-real interaction with, and seamless switching of, three-dimensional virtual models on a mobile terminal; compared with a two-dimensional plane image, it conveys a stronger sense of depth and is easier to understand.
Correspondingly, an embodiment of the present invention also provides a server 500. As shown in Fig. 6, the server 500 includes components such as a radio frequency (RF) circuit 501, a memory 502 including one or more computer-readable storage media, an input unit 503, a power supply 504, a Wireless Fidelity (WiFi) module 505, and a processor 506 including one or more processing cores. Those skilled in the art will understand that the structure shown in Fig. 6 does not constitute a limitation on the server, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
The radio-frequency circuit 501 may be used for receiving and sending signals during messaging or a call; in particular, after receiving downlink information from a base station, it hands the information to the one or more processors 506 for processing, and it also sends uplink data to the base station. In general, the radio-frequency circuit 501 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low-noise amplifier (LNA), a duplexer, and so on. In addition, the radio-frequency circuit 501 can also communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to the Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, the Short Messaging Service (SMS), and so on.
The memory 502 may be configured to store application programs and data. The application programs stored in the memory 502 include executable code and may form various functional modules. The processor 506 runs the application programs stored in the memory 502, thereby executing various functional applications and performing data processing. The memory 502 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created during use of the server (such as audio data or a phone book). In addition, the memory 502 may include a high-speed random access memory and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device. Correspondingly, the memory 502 may further include a memory controller to provide the processor 506 and the input unit 503 with access to the memory 502.
The input unit 503 may be configured to receive information sent by other devices, and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. It passes the input information to the processor 506 and can receive and execute commands sent by the processor 506. The input unit 503 may also include other input devices. Specifically, the other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power switch key), a mouse, and a joystick.
The server further includes the power supply 504 (such as a battery) that supplies power to the components. Preferably, the power supply is logically connected to the processor 506 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system. The power supply 504 may further include one or more direct-current or alternating-current power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.
Wireless Fidelity (WiFi) is a short-range wireless transmission technology. Through the WiFi module 505, the server can help a user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although Fig. 6 shows the WiFi module 505, it can be understood that the module is not an essential component of the server and may be omitted as needed without changing the essence of the invention.
The processor 506 is the control center of the server. It connects all parts of the entire server through various interfaces and lines, and performs the various functions of the server and processes data by running or executing the application programs stored in the memory 502 and invoking the data stored in the memory 502, thereby monitoring the server as a whole. Optionally, the processor 506 may include one or more processing cores. Preferably, the processor 506 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 506.
The processor 506 is configured to implement the following functions: obtaining an image captured by a camera for a current scene; identifying a target object in the image; obtaining a virtual identifier associated with the target object from a database; and adding the virtual identifier to the image according to the position of the target object in the image.
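The four functions listed above form a simple pipeline. The following is a minimal sketch of that flow, assuming hypothetical stand-ins `detect_target`, `identifier_db`, and `add_overlay` for the recognition, database-lookup, and superposition steps; none of these names come from the patent:

```python
def run_auto_guide(frame, detect_target, identifier_db, add_overlay):
    """Sketch of the processor's pipeline. The camera frame is assumed
    to be already captured (step 1); the remaining steps are delegated
    to the hypothetical callables passed in."""
    target_name, position = detect_target(frame)     # step 2: identify target object
    identifier = identifier_db.get(target_name)      # step 3: look up virtual identifier
    if identifier is None:
        return frame                                 # no match: return frame unchanged
    # step 4: add the identifier at the target's position in the image
    return add_overlay(frame, identifier, position)
```

For example, with a dictionary as the database and simple test callables, a recognized exhibit yields an overlaid frame while an unknown object passes the frame through unchanged.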
In specific implementation, each of the above modules may be implemented as an independent entity, or the modules may be combined arbitrarily and implemented as one or several entities. For the specific implementation of the above modules, reference may be made to the foregoing method embodiments; details are not repeated here.
It should be noted that those of ordinary skill in the art can understand that all or some of the steps in the various methods of the above embodiments may be completed by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, for example in a memory of a terminal, and executed by at least one processor in the terminal; during execution, the process may include the flow of the foregoing method embodiments. The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The automatic guide method and system based on augmented reality provided in the embodiments of the present invention have been described in detail above. Each functional module may be integrated in one processing chip, each module may exist alone physically, or two or more modules may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. The principles and implementations of the present invention are described herein through specific examples; the description of the above embodiments is only intended to help understand the method of the present invention and its core ideas. Meanwhile, those skilled in the art may, according to the ideas of the present invention, make changes to the specific implementations and the application scope. In summary, the contents of this specification shall not be construed as limiting the present invention.
Claims (10)
1. An automatic guide method based on augmented reality, characterized by comprising the following steps:
obtaining an image captured by a camera for a current scene;
identifying a target object in the image;
obtaining a virtual identifier associated with the target object from a database;
adding the virtual identifier to the image according to a position of the target object in the image.
2. The automatic guide method based on augmented reality according to claim 1, characterized in that the step of identifying the target object in the image comprises:
extracting feature information from the image, the feature information comprising a contour of an object included in the image;
determining the target object in the image according to the contour.
3. The automatic guide method based on augmented reality according to claim 1, characterized in that after the virtual identifier associated with the target object is obtained from the database, the method further comprises:
determining viewing-angle information of a terminal relative to the target object according to the image;
establishing a three-dimensional model of the virtual identifier according to the viewing-angle information;
and the step of adding the virtual identifier to the image according to the position of the target object in the image comprises:
superimposing the three-dimensional model on the image according to the position of the target object in the image.
4. The automatic guide method based on augmented reality according to claim 3, characterized in that the step of establishing the three-dimensional model of the virtual identifier according to the viewing-angle information comprises:
determining a type of the virtual identifier;
determining a target modeling strategy according to the type of the virtual identifier;
performing modeling according to the viewing-angle information and the target modeling strategy to generate the three-dimensional model of the virtual identifier.
5. The automatic guide method based on augmented reality according to claim 1, characterized in that after the virtual identifier is added to the image according to the position of the target object in the image, the method further comprises:
receiving an operation of a user on the virtual identifier;
performing corresponding processing on the image of the virtual identifier according to the user operation.
6. An automatic guide system based on augmented reality, characterized by comprising: a first obtaining module, an identification module, a second obtaining module, and an adding module;
the first obtaining module is configured to obtain an image captured by a camera for a current scene;
the identification module is configured to identify a target object in the image;
the second obtaining module is configured to obtain a virtual identifier associated with the target object from a database;
the adding module is configured to add the virtual identifier to the image according to a position of the target object in the image.
7. The automatic guide system based on augmented reality according to claim 6, characterized in that the identification module comprises: an extraction submodule and a determination submodule;
the extraction submodule is configured to extract feature information from the image, the feature information comprising a contour of an object included in the image;
the determination submodule is configured to determine the target object in the image according to the contour.
8. The automatic guide system based on augmented reality according to claim 6, characterized in that the system further comprises: a viewing-angle determination module and a model establishment module;
the viewing-angle determination module is configured to determine viewing-angle information of a terminal relative to the target object according to the image after the second obtaining module obtains the virtual identifier associated with the target object from the database;
the model establishment module is configured to establish a three-dimensional model of the virtual identifier according to the viewing-angle information;
the adding module is specifically configured to superimpose the three-dimensional model on the image according to the position of the target object in the image.
9. The automatic guide system based on augmented reality according to claim 8, characterized in that the model establishment module comprises: a type determination submodule, a strategy determination submodule, and an establishment submodule;
the type determination submodule is configured to determine a type of the virtual identifier;
the strategy determination submodule is configured to determine a target modeling strategy according to the type of the virtual identifier;
the establishment submodule is configured to perform modeling according to the viewing-angle information and the target modeling strategy to generate the three-dimensional model of the virtual identifier.
10. The automatic guide system based on augmented reality according to claim 6, characterized in that the system further comprises: a receiving module and a processing module;
the receiving module is configured to receive an operation of a user on the virtual identifier after the adding module adds the virtual identifier to the image according to the position of the target object in the image;
the processing module is configured to perform corresponding processing on the image of the virtual identifier according to the user operation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710758651.0A CN109427096A (en) | 2017-08-29 | 2017-08-29 | A kind of automatic guide method and system based on augmented reality |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109427096A true CN109427096A (en) | 2019-03-05 |
Family
ID=65501882
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710758651.0A Pending CN109427096A (en) | 2017-08-29 | 2017-08-29 | A kind of automatic guide method and system based on augmented reality |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109427096A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105491365A (en) * | 2015-11-25 | 2016-04-13 | 罗军 | Image processing method, device and system based on mobile terminal |
CN106355153A (en) * | 2016-08-31 | 2017-01-25 | 上海新镜科技有限公司 | Virtual object display method, device and system based on augmented reality |
CN106453864A (en) * | 2016-09-26 | 2017-02-22 | 广东欧珀移动通信有限公司 | Image processing method and device and terminal |
EP3166079A1 (en) * | 2014-07-02 | 2017-05-10 | Huizhou TCL Mobile Communication Co., Ltd. | Augmented reality method and system based on wearable device |
- 2017-08-29 CN CN201710758651.0A patent/CN109427096A/en active Pending
Non-Patent Citations (3)
Title |
---|
张燕翔等, 《虚拟/增强现实技术及其应用》, pages 46 * |
罗永东,张淑军: ""一种基于Unity3D的移动增强现实自动导览方法"", 《计算机与数字工程》 * |
罗永东,张淑军: ""一种基于Unity3D的移动增强现实自动导览方法"", 《计算机与数字工程》, 20 November 2015 (2015-11-20), pages 2026 - 2028 * |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110009793A (en) * | 2019-04-03 | 2019-07-12 | 广东工业大学 | A kind of cultural activity intelligence guidance system |
CN111783504A (en) * | 2019-04-30 | 2020-10-16 | 北京京东尚科信息技术有限公司 | Method and apparatus for displaying information |
CN110148222A (en) * | 2019-05-27 | 2019-08-20 | 重庆爱车天下科技有限公司 | It is a kind of that vehicle method and system are seen based on AR technology |
CN110189418A (en) * | 2019-05-27 | 2019-08-30 | 浙江开奇科技有限公司 | Image generating method and terminal device for digital guide to visitors |
CN110232743A (en) * | 2019-06-11 | 2019-09-13 | 珠海格力电器股份有限公司 | A kind of method and apparatus that article is shown by augmented reality |
TWI719561B (en) * | 2019-07-29 | 2021-02-21 | 緯創資通股份有限公司 | Electronic device, interactive information display method and computer readable recording medium |
CN112399125B (en) * | 2019-08-19 | 2022-06-10 | 中国移动通信集团广东有限公司 | Remote assistance method, device and system |
CN112399125A (en) * | 2019-08-19 | 2021-02-23 | 中国移动通信集团广东有限公司 | Remote assistance method, device and system |
CN111127669A (en) * | 2019-12-30 | 2020-05-08 | 北京恒华伟业科技股份有限公司 | Information processing method and device |
CN111352505A (en) * | 2020-01-13 | 2020-06-30 | 维沃移动通信有限公司 | Operation control method, head-mounted device, and medium |
CN111240483B (en) * | 2020-01-13 | 2022-03-29 | 维沃移动通信有限公司 | Operation control method, head-mounted device, and medium |
CN111352505B (en) * | 2020-01-13 | 2023-02-21 | 维沃移动通信有限公司 | Operation control method, head-mounted device, and medium |
CN111240483A (en) * | 2020-01-13 | 2020-06-05 | 维沃移动通信有限公司 | Operation control method, head-mounted device, and medium |
CN111654688A (en) * | 2020-05-29 | 2020-09-11 | 亮风台(上海)信息科技有限公司 | Method and equipment for acquiring target control parameters |
CN111833459A (en) * | 2020-07-10 | 2020-10-27 | 北京字节跳动网络技术有限公司 | Image processing method and device, electronic equipment and storage medium |
CN111833461A (en) * | 2020-07-10 | 2020-10-27 | 北京字节跳动网络技术有限公司 | Method and device for realizing special effect of image, electronic equipment and storage medium |
CN111833461B (en) * | 2020-07-10 | 2022-07-01 | 北京字节跳动网络技术有限公司 | Method and device for realizing special effect of image, electronic equipment and storage medium |
CN112037339A (en) * | 2020-09-01 | 2020-12-04 | 北京字节跳动网络技术有限公司 | Image processing method, apparatus and storage medium |
CN112037339B (en) * | 2020-09-01 | 2024-01-19 | 抖音视界有限公司 | Image processing method, apparatus and storage medium |
WO2022055421A1 (en) * | 2020-09-09 | 2022-03-17 | 脸萌有限公司 | Augmented reality-based display method, device, and storage medium |
US11587280B2 (en) | 2020-09-09 | 2023-02-21 | Beijing Zitiao Network Technology Co., Ltd. | Augmented reality-based display method and device, and storage medium |
WO2022132033A1 (en) * | 2020-12-18 | 2022-06-23 | 脸萌有限公司 | Display method and apparatus based on augmented reality, and device and storage medium |
CN113052982A (en) * | 2021-03-23 | 2021-06-29 | 深圳市瑞立视多媒体科技有限公司 | Method and device for assembling and disassembling accessories in industrial model, computer equipment and storage medium |
CN113052982B (en) * | 2021-03-23 | 2023-11-28 | 深圳市瑞立视多媒体科技有限公司 | Fitting dismounting method and device in industrial model, computer equipment and storage medium |
WO2022205026A1 (en) * | 2021-03-31 | 2022-10-06 | 深圳市大疆创新科技有限公司 | Product display method and apparatus, and electronic device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109427096A (en) | A kind of automatic guide method and system based on augmented reality | |
CN106504311B (en) | A kind of rendering intent and device of dynamic fluid effect | |
CN109598777A (en) | Image rendering method, device, equipment and storage medium | |
CN109213728A (en) | Cultural relic exhibition method and system based on augmented reality | |
CN109427100A (en) | A kind of assembling fittings method and system based on virtual reality | |
CN110232696A (en) | A kind of method of image region segmentation, the method and device of model training | |
CN106127673B (en) | A kind of method for processing video frequency, device and computer equipment | |
CN104134230B (en) | A kind of image processing method, device and computer equipment | |
CN109905754A (en) | Virtual present collection methods, device and storage equipment | |
CN108961890A (en) | The drilling method and system of fire incident | |
CN107038455A (en) | A kind of image processing method and device | |
CN110533755A (en) | A kind of method and relevant apparatus of scene rendering | |
CN113498532B (en) | Display processing method, display processing device, electronic apparatus, and storage medium | |
US11386613B2 (en) | Methods and systems for using dynamic lightmaps to present 3D graphics | |
CN108958459A (en) | Display methods and system based on virtual location | |
CN109213885A (en) | Car show method and system based on augmented reality | |
CN112070906A (en) | Augmented reality system and augmented reality data generation method and device | |
CN105447124A (en) | Virtual article sharing method and device | |
CN109725956A (en) | A kind of method and relevant apparatus of scene rendering | |
CN109214876A (en) | A kind of fitting method and system based on augmented reality | |
CN109426343A (en) | Cooperation training method and system based on virtual reality | |
CN109686161A (en) | Earthquake training method and system based on virtual reality | |
CN109902282A (en) | A kind of character typesetting method, device and storage medium | |
CN109753892A (en) | Generation method, device, computer storage medium and the terminal of face wrinkle | |
CN108665523A (en) | A kind of drilling method and system of traffic accident |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||