CN108446026A - Augmented-reality-based guiding method, guiding device, and medium - Google Patents

Augmented-reality-based guiding method, guiding device, and medium

Info

Publication number
CN108446026A
CN108446026A (application number CN201810256465.1A; granted publication CN108446026B)
Authority
CN
China
Prior art keywords
navigational
information
augmented reality
terminal
control terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810256465.1A
Other languages
Chinese (zh)
Other versions
CN108446026B (en)
Inventor
武乃福
马希通
冯莎
寇立欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN201810256465.1A priority Critical patent/CN108446026B/en
Publication of CN108446026A publication Critical patent/CN108446026A/en
Application granted granted Critical
Publication of CN108446026B publication Critical patent/CN108446026B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • AHUMAN NECESSITIES
    • A01AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01KANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K15/00Devices for taming animals, e.g. nose-rings or hobbles; Devices for overturning animals in general; Training or exercising equipment; Covering boxes
    • A01K15/02Training or exercising equipment, e.g. mazes or labyrinths for animals ; Electric shock devices ; Toys specially adapted for animals
    • A01K15/021Electronic training devices specially adapted for dogs or cats
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Animal Behavior & Ethology (AREA)
  • Animal Husbandry (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Zoology (AREA)
  • Health & Medical Sciences (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an augmented-reality-based guiding method, a guiding device, and a medium. The guiding method includes: a control terminal receives a function instruction and extracts guide-image information according to the function instruction; the control terminal receives a position instruction and determines target-position information according to the position instruction; a terminal receives the guide-image information and the target-position information from the control terminal, and superimposes a guide image onto the real scene according to the target-position information and the guide-image information. When a guiding device applying this method is used for animal training, remotely operating the control terminal enables the animal to observe an augmented-reality scene with a guide image at the target position; under the guidance of the guide image, the animal makes the response corresponding to the training content and completes the training. Around-the-clock training is realized, the time cost of training pets is reduced, and the application scenarios of augmented reality are extended.

Description

Augmented-reality-based guiding method, guiding device, and medium
Technical field
The present invention relates to the field of augmented reality, and in particular to an augmented-reality-based guiding method, a guiding device, and a medium.
Background art
With the continuous progress of human society, people's demand for pets has gradually shifted from functional needs to emotional needs. Take canines as an example: dogs assist humans in many respects, for instance as guard dogs, working dogs, and sheepdogs. Today the roles of canines in human society are even broader, including guide dogs, military dogs, and the pet dogs that satisfy people's daily emotional needs.
However, for canines to meet human expectations, they undoubtedly must undergo long-term training. In the prior art, canines are mainly trained in person by a dedicated trainer. Clearly, this in-person mode not only wastes manpower but also cannot provide around-the-clock training, which increases the time cost of training.
Summary of the invention
The purpose of the embodiments of the present invention is to provide an augmented-reality-based guiding method, a guiding device, and a medium, so as to realize around-the-clock, remote training of animals.
In order to solve the above technical problem, an embodiment of the present invention provides an augmented-reality-based guiding method. The guiding device includes a control terminal and an augmented-reality glasses terminal worn by the user, and the method includes:
the control terminal receives a function instruction and extracts guide-image information according to the function instruction;
the control terminal receives a position instruction and determines target-position information according to the position instruction;
the terminal receives the guide-image information and the target-position information from the control terminal, and superimposes a guide image onto the real scene according to the target-position information and the guide-image information, so that the user observes, through the augmented-reality glasses, an augmented-reality scene with a guide image at the target position.
Optionally, the control terminal receiving a position instruction and determining target-position information according to the position instruction includes:
the control terminal receives environment-image information from the terminal and displays the corresponding environment image;
the control terminal receives a position signal;
the control terminal determines the target-position information according to the environment image and the position signal.
Optionally, the terminal superimposing a guide image onto the real scene according to the target-position information and the guide-image information includes:
the terminal superimposes the guide image at the target position according to the target-position information and the guide-image information.
Optionally, the terminal superimposing a guide image onto the real scene according to the target-position information and the guide-image information includes:
the terminal plans, according to the terminal's position information and the target-position information, a path from the terminal's position to the target position;
the terminal superimposes guide images on the path and at the target position according to the path and the guide-image information.
Optionally, the guiding method further includes: the control terminal receives augmented-reality scene information from the terminal and displays the terminal's augmented-reality image.
Optionally, the guide image is a still image or a dynamic image.
In order to solve the above technical problem, an embodiment of the present invention further provides a guiding device applying the guiding method described above, including a control terminal and an augmented-reality glasses terminal worn by the user.
The control terminal includes an extraction module and a determination module; the augmented-reality glasses include a lens body and an imaging module located on the inner side of the lens body.
The extraction module is configured to receive a function instruction and extract guide-image information according to the function instruction.
The determination module is configured to receive a position instruction and determine target-position information according to the position instruction.
The imaging module receives the guide-image information and the target-position information from the control terminal, and superimposes a guide image onto the real scene according to the target-position information and the guide-image information, so that the user observes, through the augmented-reality glasses, an augmented-reality scene with a guide image at the target position.
Optionally, the determination module includes:
a display unit, configured to receive environment-image information from the terminal and display the corresponding environment image;
a receiving unit, configured to receive a position signal;
a determination unit, configured to determine the target-position information according to the environment image and the position signal.
Optionally, the augmented-reality glasses further include an image-acquisition module located on the inner side of the lens body. The image-acquisition module is configured to acquire the terminal's augmented-reality scene information and is electrically connected to the display unit; the display unit is further configured to receive the terminal's augmented-reality scene information and display the terminal's augmented-reality image.
In order to solve the above technical problem, an embodiment of the present invention further provides a medium storing a computer program executable on a processor; when executed by the processor, the computer program implements the steps of the guiding method described above.
In the augmented-reality-based guiding method proposed by the embodiments of the present invention, a guide image is superimposed onto the real scene according to the target-position information and the guide-image information, so that the user observes, through the augmented-reality glasses, an augmented-reality scene with a guide image at the target position; the guide image in the augmented-reality scene can thus guide the user to make the response corresponding to the function instruction. When the method is applied to animal training, the trainer operates the control terminal so that the animal to be trained, as the user, observes an augmented-reality scene with a guide image at the target position; under the guidance of the guide image, the animal makes the response corresponding to the training content and completes the training. The trainer no longer needs to train the animal in person: remote operation of animal training is realized, around-the-clock training becomes possible, the time cost of training pets is reduced, and the application scenarios and fields of augmented reality are extended.
Other features and advantages of the present invention will be set forth in the following description and will in part become apparent from the description or be understood by implementing the invention. The objectives and other advantages of the invention can be realized and obtained by the structures particularly pointed out in the description, the claims, and the drawings.
Description of the drawings
The drawings are provided for a further understanding of the technical solutions of the present invention and constitute a part of the description; together with the embodiments of the present application, they serve to explain the technical solutions of the present invention and do not limit them.
Fig. 1 is a schematic diagram of the augmented-reality-based guiding method of the first embodiment of the present invention;
Fig. 2 is the environment image displayed by the control terminal in one embodiment;
Fig. 3 is the augmented-reality scene corresponding to Fig. 2;
Fig. 4 is a schematic diagram of superimposing guide images in the guiding method of the second embodiment of the present invention;
Fig. 5 is a structural schematic diagram of the augmented-reality-based guiding device of the third embodiment of the present invention;
Fig. 6 is a structural schematic diagram of the determination module of the guiding device of the third embodiment of the present invention.
Reference signs:
10 - doghouse; 11 - guide image; 21 - determination module;
22 - extraction module; 31 - image-acquisition device; 32 - imaging device;
50 - eye-tracking device; 60 - alarm device; 211 - display unit;
212 - receiving unit; 213 - determination unit.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in detail below with reference to the drawings. It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be arbitrarily combined with one another.
The technical content of the present invention is discussed in detail below through specific embodiments.
First embodiment:
Fig. 1 is a schematic diagram of the augmented-reality-based guiding method of the first embodiment of the present invention. The guiding device includes a control terminal and an augmented-reality glasses terminal worn by the user. As can be seen from Fig. 1, the method includes:
S1: the control terminal receives a function instruction and extracts guide-image information according to the function instruction;
S2: the control terminal receives a position instruction and determines target-position information according to the position instruction;
S3: the terminal receives the guide-image information and the target-position information from the control terminal, and superimposes a guide image onto the real scene according to the target-position information and the guide-image information, so that the user observes, through the augmented-reality glasses, an augmented-reality scene with a guide image at the target position.
In the augmented-reality-based guiding method proposed by the embodiments of the present invention, a guide image is superimposed onto the real scene according to the target-position information and the guide-image information, so that the user observes, through the augmented-reality glasses, an augmented-reality scene with a guide image at the target position; the guide image in the augmented-reality scene can thus guide the user to make the response corresponding to the function instruction. When the method is applied to animal training, the trainer operates the control terminal so that the animal to be trained, as the user, observes an augmented-reality scene with a guide image at the target position; under the guidance of the guide image, the animal makes the response corresponding to the training content and completes the training. The trainer no longer needs to train the animal in person: remote operation of animal training is realized, around-the-clock training becomes possible, the time cost of training pets is reduced, and the application scenarios and fields of augmented reality are extended.
It is easy to understand that in this embodiment the order of the steps of "extracting guide-image information" and "determining target-position information" can be interchanged; that is, the extraction step may be performed first and the determination step second, or the other way around.
It should be noted here that the trainer trains the animal using augmented-reality (AR) equipment, and the animal to be trained, such as a pet dog, wears the AR glasses.
In this embodiment, S1 may include: the control terminal receives a type instruction and displays the corresponding function menu according to the type instruction; the control terminal receives a function instruction and extracts guide-image information according to the function instruction.
The control terminal receiving a type instruction and displaying the corresponding function menu is, specifically: when the trainer needs to train an animal, the trainer selects, on the control terminal, the type instruction matching the type of the animal to be trained. For example, the type menu may include dog, cat, and so on; if the animal to be trained is a dog, the trainer selects "dog" from the type menu. When the control terminal receives the "dog" type instruction, it displays the function menu corresponding to "dog", which may include "play", "feed", "go home", "train", "hide", and the like.
The control terminal receiving a function instruction and extracting guide-image information according to the function instruction is, specifically: the trainer selects a function instruction from the function menu displayed by the control terminal, for example the "feed" function; after receiving the "feed" instruction, the control terminal extracts the corresponding guide-image information. When the type is "dog", the guide-image information corresponding to "feed" may be a bone image; when the type is "cat", it may be a fish image. It is easy to understand that the objects and colors each animal is interested in differ, so guide-image information conducive to training can be set according to the animal type and the function.
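The selection logic of step S1 can be sketched as a lookup keyed by animal type and function. This is a minimal illustration only; all names and the particular menu/image entries are assumptions for the example, not taken from the patent.

```python
# Hypothetical sketch of step S1: mapping a (type, function) selection on the
# control terminal to guide-image information. Entries are illustrative.
GUIDE_IMAGES = {
    ("dog", "feed"): "bone.png",
    ("dog", "go_home"): "kennel.png",
    ("cat", "feed"): "fish.png",
    ("cat", "play"): "yarn_ball.gif",  # a dynamic guide image
}

def function_menu(animal_type: str) -> list[str]:
    """Return the function menu displayed for the selected animal type."""
    return sorted({func for (t, func) in GUIDE_IMAGES if t == animal_type})

def extract_guide_image(animal_type: str, function: str) -> str:
    """Extract guide-image information for the chosen function (step S1)."""
    try:
        return GUIDE_IMAGES[(animal_type, function)]
    except KeyError:
        raise ValueError(f"no guide image for {animal_type}/{function}")
```

In this sketch, selecting the "dog" type yields its function menu, and selecting "feed" then extracts the bone image, matching the flow described above.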
In this embodiment, S2 may include:
S211: the control terminal receives the environment-image information of the terminal and displays the corresponding environment image;
S212: the control terminal receives a position signal;
S213: the control terminal determines the target-position information according to the environment image and the position signal.
S211 may specifically be: the augmented-reality glasses terminal prestores a positioning and navigation system; through this system, the position information of the terminal and the environment-image information of the terminal are determined, and the environment-image information of the terminal is sent to the control terminal, which receives it and displays the corresponding environment image. It is easy to understand that when the animal to be trained is indoors, the positioning and navigation system is an indoor one and the environment image is accordingly an indoor environment image; when the animal to be trained is outdoors, the positioning and navigation system may be an outdoor one such as the Global Positioning System (GPS), and the environment image is accordingly an outdoor one. It is further easy to understand that the environment image can be zoomed according to the operator's needs, so that the operator can select the target position more accurately. For example, Fig. 2 is the environment image displayed by the control terminal in one embodiment: the animal to be trained is indoors, and Fig. 2 shows the indoor environment image, including the doghouse 10.
S212 may specifically be: the control terminal receives a position signal applied by the operator on the environment image. That is, according to the environment image displayed by the control terminal, the operator determines the specific position where the enhanced image should be placed, and then selects, by touch or with an operating device such as a mouse, the target position in the environment image; the selected target position is the position signal. S212 may also be: the control terminal displays an "enter target position" dialog box and receives the entered target position; the entered target position is the position signal. For example, in Fig. 2 the operator selects the target position by touch in the environment image displayed by the control terminal; the selected target position is directly above the doghouse 10, and on receiving the touch the control terminal takes the position of the touch point as the position signal.
S213 may specifically be: after receiving the position signal applied on the environment image, the control terminal determines the target-position information according to the specific position of the position signal in the environment image. Here, the target-position information is the position coordinates of the target position in the environment image relative to the augmented-reality glasses terminal. It is easy to understand that the environment image is based on the positioning and navigation system, so the target-position information determined here is the position coordinates of the target position in the real scene relative to the augmented-reality glasses terminal. Those skilled in the art will understand that determining the relative position coordinates of two positions in a positioning and navigation system belongs to the prior art and is not described in detail here.
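Step S213 can be illustrated under simplifying assumptions: if the environment image is a top-down map with a known origin and scale, a touched pixel can be converted to world coordinates and then expressed relative to the glasses terminal. The map parameters and function names are assumptions for this sketch; a real system would query its positioning and navigation system.

```python
# A minimal sketch of S213 under assumed map conventions: pixel -> world
# coordinates -> coordinates relative to the AR glasses terminal.
def touch_to_target(touch_px, map_origin, metres_per_px, terminal_pos):
    """Convert a touch point (pixels) to target coordinates relative to the terminal."""
    wx = map_origin[0] + touch_px[0] * metres_per_px
    wy = map_origin[1] + touch_px[1] * metres_per_px
    # Target-position information: coordinates relative to the glasses terminal.
    return (wx - terminal_pos[0], wy - terminal_pos[1])
```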
In this embodiment, the terminal in S3 superimposing a guide image onto the real scene according to the target-position information and the guide-image information may include: the terminal superimposes the guide image at the target position according to the target-position information and the guide-image information. Specifically, the terminal acquires the user's pupil information and, according to the pupil information, the target-position information, and the guide-image information, projects onto the AR glasses so as to superimpose the guide image at the target position in the real scene; the user thereby observes, through the augmented-reality glasses, an augmented-reality scene with a guide image at the target position, which guides the user to reach the target position according to the guide image and to make the response corresponding to the function instruction. For example, Fig. 3 is the augmented-reality scene corresponding to Fig. 2. As can be seen from Fig. 3, in the augmented-reality scene a guide image 11, here a "bone", appears directly above the doghouse 10. The user, the animal to be trained, reaches the target position under the guidance of the guide image 11, achieving the purpose of training.
Regarding the terminal acquiring the user's pupil information: an eye-tracking device, such as a depth camera, can be arranged on the inner side of the AR glasses to acquire the user's pupil information.
Regarding the terminal projecting onto the AR glasses according to the user's pupil information, the target-position information, and the guide-image information to superimpose the guide image at the target position in the real scene, specifically: an imaging module can be arranged on the inner side of the AR glasses; after the imaging module receives the target-position information and the guide-image information, it projects onto the AR glasses according to the user's pupil information, and the user sees, through the AR glasses, an augmented-reality scene with a guide image at the target position. In a specific implementation, an imaging device such as a projector may be used to project onto the AR glasses.
In this embodiment, the target-position information is the position coordinates of the target position in the real scene relative to the augmented-reality glasses terminal. It is easy to understand that as the user gradually approaches the target position, the target-position information changes dynamically, so that the user always observes, through the augmented-reality glasses, an augmented-reality scene with a guide image at the target position.
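Why the target-position information is dynamic can be shown with a toy calculation: the information is the target's coordinates relative to the moving glasses terminal, so it must be recomputed as the wearer approaches. The names and the sample path below are assumptions for illustration.

```python
# Toy illustration: the relative target coordinates shrink toward (0, 0)
# as the glasses terminal approaches a fixed world-frame target.
def relative_target(target_world, terminal_world):
    """Target-position information: world-frame target minus terminal position."""
    return tuple(t - p for t, p in zip(target_world, terminal_world))

# Terminal walking toward a target fixed at (4.0, 3.0) in the world frame.
path = [(0.0, 0.0), (2.0, 1.5), (4.0, 3.0)]
relatives = [relative_target((4.0, 3.0), p) for p in path]
```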
The guiding method of this embodiment may further include: while the guide image is being superimposed onto the real scene, the terminal issues function-prompt information. It is easy to understand that in animal training the trainer usually directs the animal to perform a certain exercise with a particular sound; for example, when the trainer calls the animal home to eat, the trainer may make the sound of tapping the food bowl, and on hearing this sound the animal returns home to eat. Therefore, while the guide image is superimposed onto the real scene, function-prompt information, preferably a function-prompt sound, is issued at the user's terminal, so that the user acts under the guidance of the guide image after sensing the prompt. For example, the function-prompt information for the "feed" function is the sound of tapping the food bowl; when the terminal emits this sound, the animal immediately realizes that it is feeding time and, under the guidance of the guide image in the augmented-reality scene, goes to the target position to eat, which ensures the timeliness of the animal's response. The function-prompt information is not limited to sound; it may also be vibrations of different frequencies, as long as the animal can perceive it.
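The pairing of functions with prompt information can be sketched as a simple table. The specific entries (sound files, vibration frequency) are illustrative assumptions, chosen only to mirror the "feed"/tapping example above.

```python
# Hypothetical mapping from the selected function to the function-prompt
# information emitted at the terminal alongside the guide image.
FUNCTION_PROMPTS = {
    "feed": {"sound": "tap_food_bowl.wav"},
    "go_home": {"sound": "whistle.wav"},
    "train": {"vibration_hz": 20},  # prompts need not be sounds
}

def prompt_for(function: str) -> dict:
    """Return the prompt to emit while the guide image is displayed."""
    return FUNCTION_PROMPTS.get(function, {})
```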
The guiding method of this embodiment may further include: monitoring the position information of the terminal in real time according to the terminal's positioning and navigation system, and closing the guide image when the position information of the terminal coincides with the target-position information. Specifically: according to the positioning and navigation system prestored in the AR glasses, the control terminal obtains the terminal's position information in real time and monitors it. When the terminal's position information obtained by the control terminal coincides with the target-position information, the user has reached the target position under the guidance of the guide image, the training is finished, and the guide image is closed. It is easy to understand that when the target-position information is the position coordinates of the target position in the real scene relative to the augmented-reality glasses terminal, the guide image is closed when the target-position information approaches zero.
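The monitoring step can be sketched as a loop over position updates. The closing threshold and the form of the position feed are assumptions; the text only specifies that the guide image is closed once the relative target position approaches zero.

```python
import math

# A hedged sketch of real-time position monitoring: consume terminal
# positions and close the guide image near the target.
def should_close_guide(relative_target, threshold_m=0.3):
    """Close the guide image once within `threshold_m` of the target (assumed threshold)."""
    return math.hypot(*relative_target) < threshold_m

def monitor(position_feed, target_world, threshold_m=0.3):
    """Return the terminal position at which the guide image closes, or None."""
    for pos in position_feed:
        rel = (target_world[0] - pos[0], target_world[1] - pos[1])
        if should_close_guide(rel, threshold_m):
            return pos  # guide image closed here; training finished
    return None  # target not reached in this feed
```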
The guiding method of this embodiment may further include: the control terminal receives the terminal's augmented-reality scene information and displays the terminal's augmented-reality image. Specifically: an image-acquisition device can be arranged on the inner side of the AR glasses to acquire the augmented-reality image from the user's viewpoint in real time, and the control terminal displays this image in real time, so that the operator can see the augmented-reality scene from the user's viewpoint and predict the actual training effect from it. Meanwhile, displaying the augmented-reality image on the control terminal also helps the remote operator inspect the user's surroundings in real time, so as to judge whether the user's environment is safe.
In this embodiment, the guide image that guides the animal to be trained to the target position may be a still image; in other embodiments, it may also be a dynamic image. For example, for the "play" function for cats, the guide image may be a moving ball of yarn: when the cat sees the augmented-reality scene with the moving ball of yarn, it chases the ball, realizing the interactive play function.
Second embodiment:
Fig. 4 is a schematic diagram of superimposing guide images in the guiding method of the second embodiment of the present invention. Differing from the first embodiment, in this embodiment, as shown in Fig. 4, the terminal in S3 superimposing a guide image onto the real scene according to the target-position information and the guide-image information may include:
S31: the terminal plans, according to the terminal's position information and the target-position information, a path from the terminal's position to the target position;
S32: the terminal superimposes guide images on the path and at the target position according to the path and the guide-image information.
S31 is, specifically: after the terminal receives the target-position information, it plans, on its prestored positioning and navigation system, the path from the terminal's position to the target position according to the terminal's position information and the target-position information. S32 is, specifically: according to the path planned in S31 and the guide-image information, the terminal superimposes guide images on the path and at the target position, so that the user observes, through the augmented-reality glasses, an augmented-reality scene with guide images both along the path and at the target position.
The guiding method of this embodiment is typically used outdoors, for example to train a guide dog to lead the way for a blind person. The guide dog wears the AR glasses; using the method of this embodiment, the trainer superimposes, through the control terminal, guide images on the path from the guide dog's position to the target position and at the target position, so that the guide dog observes, through the AR glasses, an augmented-reality scene with guide images along the path and at the target position. Under the guidance of the guide images, the guide dog leads the blind person along the planned path toward the target position until it is reached, thereby training the guide dog.
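Steps S31/S32 can be sketched under strong simplifications: here the planned path is just a straight line sampled at a fixed spacing (a real system would plan on the positioning and navigation system's map), and one guide-image overlay is placed at every waypoint, the last waypoint being the target itself. All names and the spacing are assumptions.

```python
# Illustrative sketch of S31 (path planning) and S32 (overlay placement).
def plan_path(start, target, spacing=1.0):
    """S31 (simplified): straight-line waypoints from start to target, one per `spacing` metres."""
    dx, dy = target[0] - start[0], target[1] - start[1]
    dist = (dx * dx + dy * dy) ** 0.5
    steps = max(1, int(dist // spacing))
    return [(start[0] + dx * i / steps, start[1] + dy * i / steps)
            for i in range(1, steps + 1)]

def place_guide_images(path, guide_image):
    """S32: one guide-image overlay per waypoint; the final waypoint is the target."""
    return [(point, guide_image) for point in path]
```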
3rd embodiment:
Fig. 5 is the structural schematic diagram of guiding equipment of the third embodiment of the invention based on augmented reality.
The guiding equipment includes moveable AR control terminals and AR glasses terminals.AR control terminals can be arranged on mobile phone. AR eyeglasses-wearings are in user's head.Since the head construction of animal is different from the mankind, the AR eyeglasses-wearings in the present embodiment Mode be strap configurations, AR glasses are worn on user's head by strap configurations.The accessory such as battery etc. of equipment is guided, It can be bundled on animal bodies or be integrated in AR glasses.
The AR control terminal includes an extraction module 22 and a determining module 21, and the AR glasses include a glasses body and an imaging module 32 located on the inner side of the glasses body.
The extraction module 22 is configured to receive a function command and extract guidance image information according to the function command;
The determining module 21 is configured to receive a position command and determine target position information according to the position command;
The imaging module 32 receives the guidance image information and the target position information from the control terminal, and superimposes the guidance image into the real scene according to the target position information and the guidance image information, so that the user observes, through the augmented reality glasses, an augmented reality scene with the guidance image at the target position.
The imaging module 32 is electrically connected with the extraction module 22 and the determining module 21, respectively.
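As an illustration of how these three modules might exchange information (the patent specifies only which information flows from the control terminal to the imaging module; every function name, field name, and the guidance-image library below are hypothetical), a minimal sketch:

```python
from dataclasses import dataclass

@dataclass
class GuidancePacket:
    """Payload sent from the control terminal to the glasses terminal:
    guidance image information plus target position information."""
    guidance_image: str
    target_position: tuple

# Hypothetical library mapping function commands to guidance images.
GUIDANCE_IMAGES = {"sit": "sit.png", "come": "arrow.png"}

def extraction_module(function_command):
    """Module 22 sketch: extract guidance image info from a function command."""
    return GUIDANCE_IMAGES[function_command]

def determining_module(position_command):
    """Module 21 sketch: a position command that already carries coordinates;
    a fuller version is shown by the determining-module units below."""
    return tuple(position_command)

def imaging_module(image_info, target_position):
    """Module 32 sketch: combine both pieces of information for rendering
    into the real scene."""
    return GuidancePacket(image_info, target_position)

packet = imaging_module(extraction_module("come"), determining_module([2.0, 5.0]))
```

The electrical connections in Fig. 5 correspond to the two arguments the imaging module consumes here: one path from the extraction module, one from the determining module.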
Further, Fig. 6 is a structural schematic diagram of the determining module of the guiding device according to the third embodiment of the present invention. As can be seen from Fig. 6, the determining module 21 may include:
a display unit 211, configured to receive environment image information of the terminal and display the corresponding environment image;
a receiving unit 212, configured to receive a position signal;
a determination unit 213, configured to determine the target position information according to the environment image and the position signal.
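One way the determination unit 213 might turn a position signal (for example, a touch on the displayed environment image) into target position information is to map the touched pixel into ground coordinates. The linear mapping below is a deliberately simplified assumption; a real system would use the camera pose and scene geometry, which the patent does not specify:

```python
def click_to_target(click_px, image_size, ground_extent):
    """Determination unit sketch (hypothetical mapping): convert a touch at
    pixel `click_px` on the displayed environment image into a position
    inside a flat ground rectangle of `ground_extent` metres."""
    u = click_px[0] / image_size[0]   # normalised horizontal coordinate
    v = click_px[1] / image_size[1]   # normalised vertical coordinate
    return (u * ground_extent[0], v * ground_extent[1])

# A touch at the centre of a 1280x720 environment image, over a 10 m x 6 m
# visible ground area, yields the centre of that area.
target = click_to_target((640, 360), (1280, 720), (10.0, 6.0))
```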
In this embodiment, as shown in Fig. 5, the determining module 21 is electrically connected with the extraction module 22, and the display unit 211 is further configured to display a type menu and to display the corresponding function menu according to a type instruction.
The AR glasses further include a human-eye tracking device 50 arranged on the inner side of the glasses body and electrically connected with the imaging module 32. As shown in Fig. 5, the human-eye tracking device 50 is configured to acquire pupil information of the user. The imaging module is configured to project the target position information and the guidance image information onto the glasses body according to the pupil information of the user, so as to superimpose the guidance image at the target position in the real scene, such that the user observes, through the augmented reality glasses, an augmented reality scene with the guidance image at the target position, guiding the user to reach the target position according to the guidance image and to make a reaction corresponding to the function command.
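A very simple sketch of how pupil information from device 50 could adjust the projection (the correction model and the gain value are assumptions for illustration; the patent does not describe the registration method) is to shift the on-lens drawing point against the measured pupil offset so the guidance image stays registered with the target as the eye moves:

```python
def projection_point(anchor_px, pupil_offset_px, gain=0.5):
    """Pupil-compensated projection sketch: `anchor_px` is where the
    guidance image would be drawn on the lens for a centred pupil;
    `pupil_offset_px` is the measured pupil displacement from centre.
    The image is shifted against the offset, scaled by a hypothetical gain."""
    return (anchor_px[0] - gain * pupil_offset_px[0],
            anchor_px[1] - gain * pupil_offset_px[1])

# Pupil moves 20 px right and 10 px up; the drawing point shifts accordingly.
p = projection_point((400.0, 300.0), (20.0, -10.0))
```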
As can also be seen from Fig. 5, the AR glasses further include an alarm device 60 electrically connected with the imaging module 32. The alarm device 60 is configured to issue function prompt information while the augmented reality scene is being formed.
The AR glasses further include an image acquisition module 31 located on the inner side of the glasses body. The image acquisition module 31 is configured to acquire augmented reality scene information of the terminal. The image acquisition module 31 is electrically connected with the display unit in the determining module 21, and the display unit 211 is further configured to receive the augmented reality scene information of the terminal and display the augmented reality image of the terminal. The image acquisition module can be any device capable of acquiring images, such as a camera.
The guiding device based on augmented reality proposed by the embodiment of the present invention superimposes the guidance image into the real scene according to the target position information and the guidance image information, so that the user observes, through the augmented reality glasses, an augmented reality scene with the guidance image at the target position; the guidance image in the augmented reality scene can thus guide the user to make a reaction corresponding to the function command. When the guiding device is applied to animal training, the trainer can operate the control terminal remotely so that the animal to be trained, as the user, observes an augmented reality scene with the guidance image at the target position; under the guidance of the guidance image, the animal makes a reaction corresponding to the training content, completing the training. The trainer no longer needs to train the animal on the spot, which realizes remote operation of animal training, enables round-the-clock training, reduces the time cost of training pets, and extends the application scenarios and fields of augmented reality.
The guiding device proposed by the embodiment of the present invention can also be used to train a guide dog's ability to avoid dangerous situations. The guidance image in the guiding device can guide the guide dog to walk on the zebra crossing when crossing a street with vehicle traffic, and can guide the guide dog to rest at a suitable position so as not to disturb surrounding pedestrians. The guiding device can also be used to train police dogs; for example, a guidance image identical to a dangerous object may be used to train a police dog to recognize that object.
Fourth embodiment:
This embodiment proposes a medium on which a computer program executable on a processor is stored; when the computer program is executed by the processor, the steps of the guiding method in the first embodiment or the second embodiment are realized.
In the description of the embodiments of the present invention, it is to be understood that orientation or position relationships indicated by terms such as "middle", "upper", "lower", "front", "rear", "vertical", "horizontal", "top", "bottom", "inner", and "outer" are based on the orientation or position relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the indicated device or element must have a specific orientation or be configured and operated in a specific orientation; they are therefore not to be construed as limiting the present invention.
In the description of the embodiments of the present invention, it should be noted that, unless otherwise clearly defined and limited, the terms "installation", "connected", and "connection" shall be understood in a broad sense; for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection, an indirect connection through an intermediary, or an internal connection between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
Although the embodiments disclosed herein are as described above, the content is only an implementation adopted for ease of understanding the present invention and is not intended to limit the present invention. Any person skilled in the field to which the present invention pertains may make modifications and variations in the form and details of the implementation without departing from the spirit and scope disclosed by the present invention; however, the scope of patent protection of the present invention shall still be subject to the scope defined by the appended claims.

Claims (10)

1. A guiding method based on augmented reality, wherein a guiding device comprises a control terminal and an augmented reality glasses terminal worn on a user, the method comprising:
the control terminal receiving a function command, and extracting guidance image information according to the function command;
the control terminal receiving a position command, and determining target position information according to the position command;
the terminal receiving the guidance image information and the target position information from the control terminal, and superimposing a guidance image into a real scene according to the target position information and the guidance image information, so that the user observes, through the augmented reality glasses, an augmented reality scene with the guidance image at the target position.
2. The guiding method according to claim 1, wherein the control terminal receiving a position command and determining target position information according to the position command comprises:
the control terminal receiving environment image information of the terminal and displaying a corresponding environment image;
the control terminal receiving a position signal;
the control terminal determining the target position information according to the environment image and the position signal.
3. The guiding method according to claim 1 or 2, wherein the terminal superimposing the guidance image into the real scene according to the target position information and the guidance image information comprises:
the terminal superimposing the guidance image at the target position according to the target position information and the guidance image information.
4. The guiding method according to claim 1 or 2, wherein the terminal superimposing the guidance image into the real scene according to the target position information and the guidance image information comprises:
the terminal planning a path from the terminal position to the target position according to terminal position information and the target position information;
the terminal superimposing the guidance image on the path and at the target position according to the path and the guidance image information.
5. The guiding method according to claim 1, further comprising: the control terminal receiving augmented reality scene information of the terminal and displaying the augmented reality image of the terminal.
6. The guiding method according to claim 1, wherein the guidance image is a still image or a dynamic image.
7. A guiding device using the guiding method according to any one of claims 1 to 6, comprising a control terminal and an augmented reality glasses terminal worn on a user,
wherein the control terminal comprises an extraction module and a determining module, and the augmented reality glasses comprise a glasses body and an imaging module located on the inner side of the glasses body,
the extraction module is configured to receive a function command and extract guidance image information according to the function command;
the determining module is configured to receive a position command and determine target position information according to the position command;
the imaging module receives the guidance image information and the target position information from the control terminal, and superimposes a guidance image into a real scene according to the target position information and the guidance image information, so that the user observes, through the augmented reality glasses, an augmented reality scene with the guidance image at the target position.
8. The guiding device according to claim 7, wherein the determining module comprises:
a display unit, configured to receive environment image information of the terminal and display a corresponding environment image;
a receiving unit, configured to receive a position signal;
a determination unit, configured to determine the target position information according to the environment image and the position signal.
9. The guiding device according to claim 8, wherein the augmented reality glasses further comprise an image acquisition module located on the inner side of the glasses body, the image acquisition module is configured to acquire augmented reality scene information of the terminal, the image acquisition module is electrically connected with the display unit, and the display unit is further configured to receive the augmented reality scene information of the terminal and display the augmented reality image of the terminal.
10. A medium on which a computer program executable on a processor is stored, wherein when the computer program is executed by the processor, the steps of the guiding method according to any one of claims 1 to 6 are realized.
CN201810256465.1A 2018-03-26 2018-03-26 Guiding method and guiding equipment based on augmented reality and medium Active CN108446026B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810256465.1A CN108446026B (en) 2018-03-26 2018-03-26 Guiding method and guiding equipment based on augmented reality and medium


Publications (2)

Publication Number Publication Date
CN108446026A true CN108446026A (en) 2018-08-24
CN108446026B CN108446026B (en) 2020-12-22

Family

ID=63197378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810256465.1A Active CN108446026B (en) 2018-03-26 2018-03-26 Guiding method and guiding equipment based on augmented reality and medium

Country Status (1)

Country Link
CN (1) CN108446026B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109636922A (en) * 2018-08-28 2019-04-16 亮风台(上海)信息科技有限公司 A kind of method and apparatus of the content of augmented reality for rendering
CN109669541A (en) * 2018-09-04 2019-04-23 亮风台(上海)信息科技有限公司 It is a kind of for configuring the method and apparatus of augmented reality content
CN111027734A (en) * 2018-10-10 2020-04-17 阿里巴巴集团控股有限公司 Information processing method, information display method and device, electronic equipment and server
CN111158475A (en) * 2019-12-20 2020-05-15 华中科技大学鄂州工业技术研究院 Method and device for generating training path in virtual scene
CN112447272A (en) * 2019-08-30 2021-03-05 华为技术有限公司 Prompting method for fitness training and electronic equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102981603A (en) * 2011-06-01 2013-03-20 索尼公司 Image processing apparatus, image processing method, and program
CN103218854A (en) * 2013-04-01 2013-07-24 成都理想境界科技有限公司 Method for realizing component marking during augmented reality process and augmented reality system
CN105739704A (en) * 2016-02-02 2016-07-06 上海尚镜信息科技有限公司 Remote guidance method and system based on augmented reality
CN106383578A (en) * 2016-09-13 2017-02-08 网易(杭州)网络有限公司 Virtual reality system, and virtual reality interaction apparatus and method
US9779633B2 (en) * 2014-08-08 2017-10-03 Greg Van Curen Virtual reality system enabling compatibility of sense of immersion in virtual space and movement in real space, and battle training system using same
CN107402633A (en) * 2017-07-25 2017-11-28 深圳市鹰硕技术有限公司 A kind of safety education system based on image simulation technology
CN107481327A (en) * 2017-09-08 2017-12-15 腾讯科技(深圳)有限公司 On the processing method of augmented reality scene, device, terminal device and system
CN107798932A (en) * 2017-12-08 2018-03-13 快创科技(大连)有限公司 A kind of early education training system based on AR technologies



Also Published As

Publication number Publication date
CN108446026B (en) 2020-12-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant