CN110440789A - Intelligent guiding method and device - Google Patents
- Publication number: CN110440789A
- Application number: CN201810582033.XA
- Authority
- CN
- China
- Prior art keywords
- user
- path
- current
- prelocalization
- unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
Abstract
The invention discloses an intelligent guiding method and device. The method comprises the following steps: performing real-time three-dimensional scanning of the user's current spatial environment information with a mixed reality device; determining the distance between the user's current location and the guiding destination according to the user's current location information and a target place provided by the user; collecting road-condition analysis parameters of the user's current location and the guiding destination according to the distance, to establish a path planning model; scanning the current spatial environment information in real time according to the path planning model, judging whether an obstacle exists on the current path, and, if so, re-planning the path and finally calibrating the walking path; adding a road-condition prompt at each path node according to the determined walking path; and completing, by means of the road-condition prompts, the intelligent guiding of the user to the specified target place. The invention can reduce the cost of use, guide users who need an auxiliary tool to walk, restore to them a certain degree of autonomous mobility, and serve such user groups more accurately.
Description
Technical field
The present invention relates to the technical field of mixed reality, and in particular to an intelligent guiding method and device.
Background technique
When a user is in a dim or dark environment, moving about is inconvenient. When a user needs to walk in such an environment, an auxiliary tool is needed for navigation, so that the user can walk in the dim or dark environment and recover a certain degree of autonomous mobility. In addition, for users with weak or impaired vision, normal movement is inconvenient not only in dim or dark environments but even in daytime and in well-lit places. Therefore, when a user with weak or impaired vision needs to walk, an auxiliary tool is likewise needed for navigation, to facilitate walking and restore a certain degree of autonomous mobility. For such users, the simplest and most common auxiliary tool is the ordinary cane: by tapping the ground with it, a visually impaired user can find obstacles within 0.5 m. Its main defect is that it cannot find obstacles farther away or obstacles suspended in the air. Guide dogs are also used for navigation: although a trained guide dog can handle the basic navigation of its owner's daily activities, the cost of raising and keeping one is too high for widespread public use, so its use is limited. Moreover, some visually impaired users are not suited to keeping a guide dog because of their own physical condition. To better help visually impaired users walk, many countries have studied and produced various electronic guide devices, such as guide robots and other electronic apparatus, but most are costly and hard for an ordinary household to afford.
Mixed reality (MR) encompasses both augmented reality and augmented virtuality, and refers to a new visual environment generated by merging the real and virtual worlds, in which physical and digital objects coexist and interact in real time. It is a further development of virtual reality technology: by introducing real-scene information into the virtual environment, it sets up an interactive feedback loop among the virtual world, the real world, and the user, thereby enhancing the realism of the user experience. At present, mixed reality technology has not yet been applied to the field of assisted navigation; applying mixed reality equipment to this field would bring great convenience.
Summary of the invention
The technical problem to be solved by the present invention, in view of the above defects of the prior art, is to provide an intelligent guiding method and device that can reduce the cost of use, guide users who need an auxiliary tool to walk, restore to them a certain degree of autonomous mobility, and serve such user groups more accurately.
The technical solution adopted by the present invention to solve the technical problem is to construct an intelligent guiding method comprising the following steps:

performing real-time three-dimensional scanning of the user's current spatial environment information by means of a mixed reality device, to obtain the user's current location information;

determining the distance between the user's current location and the guiding destination according to the user's current location information and a target place provided by the user;

collecting road-condition analysis parameters of the user's current location and the guiding destination according to the distance, to establish a path planning model;

scanning the current spatial environment information in real time according to the path planning model, judging whether an obstacle exists on the current path, and, if so, re-planning the path and finally calibrating the walking path;

adding a road-condition prompt at each path node according to the determined walking path;

completing, by means of the road-condition prompts, the intelligent guiding of the user to the specified target place.
In the intelligent guiding method of the present invention, the road-condition prompts include virtual high-intensity light guide beacons and 3D virtual stereo sound prompts.
In the intelligent guiding method of the present invention, collecting the road-condition analysis parameters of the user's current location and the guiding destination according to the distance, to establish the path planning model, comprises:

taking the user's current location as a root node;

adding leaf nodes by random sampling to generate a random expansion tree;

judging whether the random expansion tree contains a terminal leaf node that includes the guiding destination or enters a target area, the target area containing the guiding destination;

if so, finding in the random expansion tree a path from the root node to the terminal leaf node, and taking it as the path planning model.
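The random-sampling tree described in the steps above is essentially a rapidly-exploring random tree (RRT). The following is a minimal 2D sketch, not the patent's implementation: the workspace bounds, step size, goal radius, and all names are illustrative assumptions, and obstacle checking is omitted.

```python
import math
import random

def rrt_plan(root, goal, goal_radius=1.0, step=0.5, max_samples=20000,
             bounds=((0.0, 20.0), (0.0, 20.0)), seed=0):
    """Grow a random expansion tree from `root` until a leaf enters the
    target area (a disc of `goal_radius` around `goal`), then return the
    path root -> terminal leaf by walking parent links back."""
    rng = random.Random(seed)
    nodes = [root]          # tree vertices
    parent = {0: None}      # child index -> parent index
    for _ in range(max_samples):
        # random sample in the workspace
        s = (rng.uniform(*bounds[0]), rng.uniform(*bounds[1]))
        # nearest existing tree node
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], s))
        n = nodes[i]
        d = math.dist(n, s)
        # steer one step from the nearest node toward the sample
        leaf = (n[0] + (s[0] - n[0]) * step / d,
                n[1] + (s[1] - n[1]) * step / d) if d > step else s
        parent[len(nodes)] = i
        nodes.append(leaf)
        # terminal leaf: inside the target area around the destination
        if math.dist(leaf, goal) <= goal_radius:
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]   # root -> terminal leaf
    return None

path = rrt_plan(root=(1.0, 1.0), goal=(18.0, 18.0))
```

In the patent's setting, the sampling would be constrained by the scanned 3D map so that new leaves never fall inside obstacles.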
In the intelligent guiding method of the present invention, after collecting the road-condition analysis parameters of the user's current location and the guiding destination according to the distance to establish the path planning model, the method further comprises:

when the user's current location deviates from the path planning model and the deviation exceeds a set value, re-planning the path and finally calibrating the walking path.
In the intelligent guiding method of the present invention, scanning the current spatial environment information in real time according to the path planning model, judging whether an obstacle exists on the current path, and, if so, re-planning the path and finally calibrating the walking path, comprises:

judging, according to the current spatial environment information, whether an obstacle exists on the current path;

if so, setting a starting point within a set distance of the obstacle as a root node;

adding leaf nodes by random sampling to generate a random expansion tree;

judging whether the random expansion tree contains a terminal leaf node that includes the guiding destination or enters a target area, the target area containing the guiding destination;

if so, finding in the random expansion tree a path from the root node to the terminal leaf node, and taking it as the walking path.
In the intelligent guiding method of the present invention, re-planning the path and finally calibrating the walking path when the user's current location deviates from the path planning model by more than the set value comprises:

when the user's current location moves farther and farther from the position of the next virtual high-intensity light guide beacon, issuing a 3D virtual stereo sound prompt through the mixed reality device;

when the distance between the user's current location and the next virtual high-intensity light guide beacon exceeds the set value, taking the user's current location as a root node;

adding leaf nodes by random sampling to generate a random expansion tree;

judging whether the random expansion tree contains a terminal leaf node that includes the guiding destination or enters a target area, the target area containing the guiding destination;

if so, finding in the random expansion tree a path from the root node to the terminal leaf node, and taking it as the walking path.
In the intelligent guiding method of the present invention, before performing real-time three-dimensional scanning of the user's current spatial environment information by means of the mixed reality device to obtain the user's current location information, the method further comprises:

receiving a three-dimensional building model, wherein the three-dimensional building model is imported by the mixed reality device;

identifying the three-dimensional building model and generating a plan view;

selecting a framed area for each room in the plan view and manually marking each room with a room name, whereupon the mixed reality device generates and stores an indoor 3D map of the current building.
The invention further relates to a device implementing the above intelligent guiding method, comprising:

a scanning unit, for performing real-time three-dimensional scanning of the user's current spatial environment information by means of a mixed reality device, to obtain the user's current location information;

a determination unit, for determining the distance between the user's current location and the guiding destination according to the user's current location information and a target place provided by the user;

an establishing unit, for collecting road-condition analysis parameters of the user's current location and the guiding destination according to the distance, to establish a path planning model;

a first judging unit, for scanning the current spatial environment information in real time according to the path planning model, judging whether an obstacle exists on the current path, and, if so, re-planning the path and finally calibrating the walking path;

a prompt unit, for adding a road-condition prompt at each path node according to the determined walking path;

a guiding unit, for completing, by means of the road-condition prompts, the intelligent guiding of the user to the specified target place.
In the device of the present invention, the road-condition prompts include virtual high-intensity light guide beacons and 3D virtual stereo sound prompts.
In the device of the present invention, the establishing unit comprises:

a first locating subunit, for taking the user's current location as a root node;

a first generating subunit, for adding leaf nodes by random sampling to generate a random expansion tree;

a first judging subunit, for judging whether the random expansion tree contains a terminal leaf node that includes the guiding destination or enters a target area, the target area containing the guiding destination;

a first searching subunit, for finding in the random expansion tree a path from the root node to the terminal leaf node, and taking it as the path planning model.
The device of the present invention further comprises:

a second judging unit, for re-planning the path and finally calibrating the walking path when the user's current location deviates from the path planning model by more than a set value.
In the device of the present invention, the first judging unit comprises:

a second judging subunit, for judging, according to the current spatial environment information, whether an obstacle exists on the current path;

a setting subunit, for setting a starting point within a set distance of the obstacle as a root node;

a second generating subunit, for adding leaf nodes by random sampling to generate a random expansion tree;

a third judging subunit, for judging whether the random expansion tree contains a terminal leaf node that includes the guiding destination or enters a target area, the target area containing the guiding destination;

a second searching subunit, for finding in the random expansion tree a path from the root node to the terminal leaf node, and taking it as the walking path.
In the device of the present invention, the second judging unit comprises:

a prompt subunit, for issuing a 3D virtual stereo sound prompt through the mixed reality device when the user's current location moves farther and farther from the position of the next virtual high-intensity light guide beacon;

a second locating subunit, for taking the user's current location as a root node when the distance between the user's current location and the next virtual high-intensity light guide beacon exceeds the set value;

a third generating subunit, for adding leaf nodes by random sampling to generate a random expansion tree;

a fourth judging subunit, for judging whether the random expansion tree contains a terminal leaf node that includes the guiding destination or enters a target area, the target area containing the guiding destination;

a third searching subunit, for finding in the random expansion tree a path from the root node to the terminal leaf node, and taking it as the walking path.
The device of the present invention further comprises:

an import unit, for receiving a three-dimensional building model, wherein the three-dimensional building model is imported by the mixed reality device;

a recognition unit, for identifying the three-dimensional building model and generating a plan view;

a map generation unit, for selecting a framed area for each room in the plan view and manually marking each room with a room name, whereupon the mixed reality device generates and stores an indoor 3D map of the current building.
Implementing the intelligent guiding method and device of the invention has the following beneficial effects. The user performs real-time three-dimensional scanning of the spatial environment information with a mixed reality device to obtain current location information; the distance between the user's current location and the guiding destination is determined from the current location information and the target place provided by the user; road-condition analysis parameters of the current location and the guiding destination are collected according to the distance to establish a path planning model; and if an obstacle appears on the current path, the path is re-planned and the walking path finally calibrated. The invention uses mixed reality technology to scan the spatial environment in real time in three dimensions, judge obstacles and spatial paths, and add road-condition prompts at each path node, forming a series of prompts along the walking path that help users who need an auxiliary tool to walk. In addition, the cost of a mixed reality device is much lower than that of a guide dog, so the invention can reduce the cost of use, guide users who need an auxiliary tool to walk, restore to them a certain degree of autonomous mobility, and serve such user groups more accurately.
Brief description of the drawings

To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is the flow chart of the method in one embodiment of the intelligent guiding method and device of the present invention;

Fig. 2 shows the actual effect of the virtual high-intensity light guide beacons and the 3D virtual stereo sound prompts set in the embodiment;

Fig. 3 is the flow chart of producing the indoor 3D map of the current building in the embodiment;

Fig. 4 is the specific flow chart of collecting road-condition analysis parameters of the user's current location and the guiding destination according to the distance, to establish the path planning model, in the embodiment;

Fig. 5 is the specific flow chart of scanning the current spatial environment information in real time according to the path planning model, judging whether an obstacle exists on the current path, and, if so, re-planning the path and finally calibrating the walking path, in the embodiment;

Fig. 6 is the specific flow chart of re-planning the path and finally calibrating the walking path when the user's current location deviates from the path planning model by more than the set value, in the embodiment;

Fig. 7 is the structural schematic diagram of the device in the embodiment.
Specific embodiments

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. Based on the embodiments of the invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the invention.
In this embodiment of the intelligent guiding method and device of the invention, the flow chart of the intelligent guiding method is shown in Fig. 1. In Fig. 1, the method includes the following steps.

Step S01: perform real-time three-dimensional scanning of the user's current spatial environment information with a mixed reality device, to obtain the user's current location information.

Specifically, the user performs real-time three-dimensional scanning (i.e. 3D scanning) of the current spatial environment information with a mixed reality device; the 3D scanning is performed wirelessly by the 3D camera in the mixed reality device. For example, in the presence of a WiFi signal, the 3D camera in the mixed reality device can scan the current spatial environment in real time in three dimensions; the current spatial environment information may be the spatial environment inside a building, outside a building, in a courtyard, and so on. The mixed reality device can be regarded as a wearable computer plus a stereo vision sensor. In this embodiment, the mixed reality device may be a HoloLens AR headset, whose 3D camera includes two infrared laser structured-light emitters and one RGB camera; various operating systems can be installed on the HoloLens AR headset. Of course, the three-dimensional scanning of the mixed reality device can also be realized in other ways.
In this embodiment, an indoor 3D map of the current building, an outdoor 3D map of the current building, a 3D map of a courtyard, etc., is stored in advance in the mixed reality device. Taking the indoor 3D map of the current building as an example: the user imports a three-dimensional building model into the mixed reality device, which processes it to generate the indoor 3D map and stores it. For example, after the owner of a coffee shop imports the shop's three-dimensional building model into the mixed reality device, the device processes it and generates the shop's indoor 3D map. How the indoor 3D map of a building is produced will be described in detail later.
Step S02: determine the distance between the user's current location and the guiding destination according to the user's current location information and the target place provided by the user.

Specifically, after the mixed reality device scans the spatial environment information, it can ask the user by voice for the target place, that is, the mixed reality device emits a voice signal asking where the user wants to go. The mixed reality device is equipped with a microphone array and a speech processing part: the microphone array collects voice, and the speech processing part processes the collected voice signal. The microphone array collects the user's voice reply, for example "I want to go to the toilet". After the mixed reality device receives the reply, it determines the distance between the user's current location and the guiding destination according to the current location information and the target place. The current location corresponds to the position where the user presently is, and the guiding destination corresponds to the target place; both can be located directly on the indoor 3D map.
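The distance determination of step S02 can be sketched as a lookup of the spoken target place on the stored 3D map followed by a distance computation. The dictionary map and the room names below are hypothetical illustrations; the patent does not specify the map's data structure.

```python
import math

# hypothetical indoor 3D map: room name -> (x, y, z) coordinates in metres
indoor_map = {
    "entrance": (0.0, 0.0, 0.0),
    "toilet":   (12.0, 5.0, 0.0),
    "counter":  (6.0, 2.0, 0.0),
}

def distance_to_destination(current_pos, target_name, indoor_map):
    """Resolve the spoken target place on the 3D map and return the
    straight-line distance from the user's current location."""
    dest = indoor_map[target_name]
    return math.dist(current_pos, dest)

d = distance_to_destination((0.0, 0.0, 0.0), "toilet", indoor_map)  # 13.0 m
```

A real system would use path distance over the map rather than the straight line, but the straight-line value suffices for choosing planning parameters.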
Step S03: collect road-condition analysis parameters of the user's current location and the guiding destination according to the distance, to establish a path planning model.

Specifically, according to the distance between the user's current location and the guiding destination, road-condition analysis parameters of both are collected, and the path planning model can be established with a multidimensional-space planning algorithm; that is, an initial path is planned on the 3D map. The planned reasonable path is a route, and this reasonable path serves as the path planning model. How the path planning model is specifically established will be described in detail later. After this step, step S04 or step S04' is executed.
Step S04: according to the path planning model, scan the current spatial environment information in real time and judge whether an obstacle exists on the current path; if so, re-plan the path and finally calibrate the walking path.

Specifically, after step S03 this step may be executed. According to the path planning model, the current spatial environment information is scanned in real time to judge whether an obstacle exists on the current path. When there is an obstacle, the path must be re-planned, that is, the current path is adjusted and a new path is planned, and the walking path is finally calibrated. How the re-planning is specifically done will be described later. After this step, step S05 is executed.
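The obstacle test of step S04 can be sketched as checking whether any point of the planned path falls inside a scanned obstacle's footprint, and, if so, taking a point just before the obstacle as the new root node for re-planning. Circular obstacles, a polyline path, and the clearance value are illustrative assumptions, not the patent's representation.

```python
import math

def first_blocked_index(path, obstacles, clearance=0.5):
    """Return the index of the first path point inside any obstacle's
    clearance radius, or None if the current path is free."""
    for i, p in enumerate(path):
        for centre, radius in obstacles:
            if math.dist(p, centre) <= radius + clearance:
                return i
    return None

path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
obstacles = [((2.0, 0.2), 0.3)]           # scanned obstacle: centre, radius
i = first_blocked_index(path, obstacles)  # point at index 2 is blocked
# re-planning would take the point before the obstacle (within the set
# distance of it) as the new root node and grow a fresh random tree
new_root = path[i - 1] if i is not None and i > 0 else path[0]
```

In the full method, `new_root` would seed the random expansion tree described earlier, and the recovered sub-path would be spliced into the walking path.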
Step S04': when the user's current location deviates from the path planning model and the deviation exceeds a set value, re-plan the path and finally calibrate the walking path.

Specifically, whether the user's current location deviates from the path planning model is detected. When it deviates, a voice prompt is given, and it is further judged whether the deviation exceeds the set value; if so, the path is re-planned and the walking path finally calibrated. In this embodiment the set value is 1000 mm, that is, when the deviation exceeds 1000 mm the path is re-planned and the walking path finally calibrated. Of course, the size of the set value can be adjusted according to the actual situation. How the re-planning is specifically done will be described later. After this step, step S05 is executed.
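The deviation test of step S04' can be sketched as the distance from the user's current position to the nearest point of the planned polyline, compared against the 1000 mm set value. The point-to-segment projection is an assumption for illustration; the patent only states the threshold.

```python
import math

SET_VALUE_MM = 1000.0  # embodiment's deviation threshold

def point_segment_dist(p, a, b):
    """Distance from point p to segment ab (2D, millimetres)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg2 = dx * dx + dy * dy
    t = 0.0 if seg2 == 0 else max(0.0, min(1.0,
        ((px - ax) * dx + (py - ay) * dy) / seg2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def deviates(current, planned_path, set_value=SET_VALUE_MM):
    """True when the user is more than `set_value` mm off every segment
    of the planned walking path, triggering re-planning."""
    return min(point_segment_dist(current, a, b)
               for a, b in zip(planned_path, planned_path[1:])) > set_value

path_mm = [(0.0, 0.0), (5000.0, 0.0), (5000.0, 5000.0)]
on_plan  = deviates((2500.0, 800.0), path_mm)   # 800 mm off: False
off_plan = deviates((2500.0, 1500.0), path_mm)  # 1500 mm off: True
```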
Step S05: add a road-condition prompt at each path node according to the determined walking path.

Specifically, road-condition prompts can be added at each path node according to the determined walking path; the prompts include virtual high-intensity light guide beacons and 3D virtual stereo sound prompts. Since the mixed reality device scans in three dimensions in real time, after scanning, the user's current location, the guiding destination, and the positions of the virtual light beacons on the path can all be calculated; the beacon positions can be set according to the user's habits and degree of visual impairment. In this step, virtual light beacons are placed according to the calculated position of each path node on the walking path, and the interval between adjacent virtual light beacons may be between 100 mm and 1000 mm. Of course, in actual use the interval between adjacent beacons can be adjusted as the case may be: in a dark environment, or when the user's vision is very weak, the interval can be set smaller; in a bright place, or when the user's vision is only slightly weak, the interval can be set larger.
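Placing beacons at an interval that depends on ambient light and eyesight can be sketched by resampling the walking path at a chosen spacing. The spacing choices below are illustrative; the patent only gives the 100 mm to 1000 mm range.

```python
import math

def beacon_positions(path_mm, spacing_mm):
    """Walk along the polyline path and drop a virtual light beacon
    every `spacing_mm` millimetres, plus one at the destination."""
    spacing_mm = max(100.0, min(1000.0, spacing_mm))  # clamp to patent range
    beacons = [path_mm[0]]
    carried = 0.0                      # distance already walked past last beacon
    for a, b in zip(path_mm, path_mm[1:]):
        seg = math.dist(a, b)
        d = spacing_mm - carried
        while d <= seg:
            t = d / seg
            beacons.append((a[0] + t * (b[0] - a[0]),
                            a[1] + t * (b[1] - a[1])))
            d += spacing_mm
        carried = (carried + seg) % spacing_mm
    if beacons[-1] != path_mm[-1]:
        beacons.append(path_mm[-1])    # always mark the destination
    return beacons

# dark room / very weak eyesight -> dense beacons; bright room -> sparse
dense  = beacon_positions([(0.0, 0.0), (3000.0, 0.0)], spacing_mm=250.0)
sparse = beacon_positions([(0.0, 0.0), (3000.0, 0.0)], spacing_mm=1000.0)
```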
After the virtual light beacons are established, a 3D virtual stereo sound prompt corresponding to each beacon is established at the same time; that is, each virtual light beacon has a 3D virtual stereo sound prompt indicating the position of its corresponding beacon, and stereo playback can assist the user in locating the current virtual light beacon. Fig. 2 shows the actual effect of the virtual high-intensity light guide beacons and the 3D virtual stereo sound prompts set in this embodiment.
Step S06: complete, by means of the road-condition prompts, the intelligent guiding of the user to the specified target place.

Specifically, each time the user passes a virtual light beacon, the next virtual light beacon is displayed automatically and its stereo sound prompt starts, so that the user moves toward the position of the next beacon; this gives the user better intelligent guiding. The user's real-time position is determined by the three-dimensional scanning of the mixed reality device, which helps users who need an auxiliary tool to walk achieve spatial positioning. So each time the user passes a virtual light beacon, the mixed reality device can determine the user's position, specifically by comparing the current location with the indoor 3D map of the current building, the outdoor 3D map of the current building, or the 3D map of the courtyard. Three-dimensional scanning and mixed reality technology thus guide users who need an auxiliary tool to walk and restore to them a certain degree of autonomous mobility. The method of the invention navigates with virtual reality and, combined with positioning and speech recognition technology, can serve user groups who need an auxiliary tool to walk more accurately. In addition, the cost of a mixed reality device is much lower than that of a guide dog, so the method of the invention can also reduce the cost of use.
Taking the indoor 3D map of a building as an example, the production of the indoor 3D map of the building is completed before step S01. Fig. 3 is a flowchart of the production of the indoor 3D map of the current building in this embodiment. In Fig. 3, the following steps precede step S01:
Step S001: receive a three-dimensional building model, wherein the three-dimensional building model is imported through the mixed reality device.
In this step, a three-dimensional building model is imported into the mixed reality device. Many types of model can be chosen: it may be a three-dimensional building model arranged freely according to the user's preference, or an already existing three-dimensional building model, for example a BIM building model. When a BIM building model is used, the building is imported by means of the BIM model, i.e. the model is imported into the operating system of the mixed reality device.
Step S002: identify the three-dimensional building model and generate a plan view.
In this step, the three-dimensional building model is identified by recognition software in the mixed reality device; once the model is identified successfully, an indoor plan view of the building is generated.
Step S003: frame a region for each room in the plan view, label each room with a room name by hand, and have the mixed reality device generate and store the indoor 3D map of the current building.
In this step, a corresponding region is selected for each room in the plan view, i.e. a frame is drawn around each room, and each room is labelled with a name manually. The indoor 3D map of the current building is then generated and stored in the mixed reality device. Of course, the mixed reality device can store multiple floors of multiple buildings and supports region selection. Because the three-dimensional building model is uploaded by the user, the preferences and habits of different users can be satisfied.
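Steps S001 to S003 can be sketched as a small data structure. The `IndoorMap` class below is a hypothetical illustration, not part of the patent: it records each framed room region and its hand-entered name, roughly as the mixed reality device might store them, and looks up which room contains a given point.

```python
from dataclasses import dataclass, field

@dataclass
class Room:
    name: str       # room name entered by hand in step S003
    region: tuple   # framed rectangle (x_min, y_min, x_max, y_max) on the plan view

@dataclass
class IndoorMap:
    building: str
    floor: int
    rooms: list = field(default_factory=list)

    def frame_room(self, name, region):
        """Step S003: frame a region in the plan view and label it by hand."""
        self.rooms.append(Room(name, region))

    def locate(self, x, y):
        """Return the name of the room containing point (x, y), if any."""
        for r in self.rooms:
            x0, y0, x1, y1 = r.region
            if x0 <= x <= x1 and y0 <= y <= y1:
                return r.name
        return None

# Example: a coffee shop's plan view with two framed rooms.
shop = IndoorMap("coffee shop", floor=1)
shop.frame_room("hall", (0, 0, 8, 6))
shop.frame_room("toilet", (8, 0, 10, 3))
```

With such a structure, the spoken destination ("I want to go to the toilet") can be matched to a framed region on the stored map.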
In this embodiment, step S03 can be further refined; the refined flowchart is shown in Fig. 4. In Fig. 4, step S03 further comprises:
Step S31: take the user's current location as the root node.
Specifically, an initial point must be chosen as the root node; in this step the user's current location is taken as the root node.
Step S32: add leaf nodes by random sampling to generate a random expansion tree.
Specifically, in this step a random expansion tree is generated by adding leaf nodes through random sampling.
Step S33: judge whether the random expansion tree contains a terminal leaf node that includes the guiding destination or enters the target region.
Specifically, in this step it is judged whether the random expansion tree contains such a terminal leaf node, i.e. one that includes the guiding destination or enters the target region, where the target region includes the guiding destination. If the judgment is yes, step S34 is executed; otherwise the process returns to step S32.
Step S34: find a path in the random expansion tree from the root node to the terminal leaf node, and take it as the path planning model.
Specifically, if the judgment of step S33 is yes, i.e. the random expansion tree contains a terminal leaf node that includes the guiding destination or enters the target region, this step is executed: a path from the root node to the terminal leaf node is found in the random expansion tree and taken as the path planning model. By implementing steps S31 to S34, the path planning model is established.
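Steps S31 to S34 (root node, leaf nodes added by random sampling, terminal leaf test, back-tracing the path) essentially describe the well-known rapidly-exploring random tree (RRT) scheme. The sketch below is a minimal, hypothetical 2D version, not part of the patent: the `is_free` and `in_target` callbacks are assumptions standing in for the device's obstacle scan and target-region test.

```python
import math
import random

def rrt_plan(root, is_free, in_target, step=0.5, max_iters=5000,
             bounds=((0.0, 10.0), (0.0, 10.0))):
    """Grow a random expansion tree from `root` (steps S31–S32); when a leaf
    enters the target region (step S33), trace parent links back to extract
    the path from root to terminal leaf (step S34)."""
    nodes = [root]
    parent = {root: None}
    for _ in range(max_iters):
        # Step S32: sample a random point and steer the nearest node toward it.
        sample = (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        leaf = (near[0] + step * (sample[0] - near[0]) / d,
                near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(leaf):
            continue  # rejected: the new leaf collides with an obstacle
        nodes.append(leaf)
        parent[leaf] = near
        if in_target(leaf):
            # Step S34: terminal leaf found — trace back to the root.
            path = [leaf]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path))
    return None  # no path found within the iteration budget
```

The returned node sequence plays the role of the path planning model; the same routine is reused for the replanning in steps S41 to S46 and S41' to S45', only with a different root.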
In this embodiment, step S04 can also be further refined; the refined flowchart is shown in Fig. 5. In Fig. 5, step S04 further comprises the following steps:
Step S41: judge, according to the current spatial environment information, whether there is an obstacle on the current path.
Specifically, while the mixed reality device performs three-dimensional scanning, it can detect whether an obstacle is present. In this step, whether there is an obstacle on the current path is judged according to the current spatial environment information; if the judgment is yes, step S43 is executed, otherwise step S42 is executed.
Step S42: guide according to the current path and continue for a set time.
Specifically, if the judgment of step S41 is no, i.e. there is no obstacle on the current path, this step is executed: guidance continues along the current path for a set time, whose length can be chosen according to actual needs. Buffering for a certain time, i.e. the set time, reduces the mixed reality device's consumption of internal resources. After this step is executed, the process returns to step S41.
Step S43: set a starting point within the set distance range of the obstacle as the root node.
Specifically, if the judgment of step S41 is yes, i.e. there is an obstacle on the current path, this step is executed: a starting point is set within the set distance range of the obstacle, i.e. a new starting point is generated around the obstacle and taken as the root node. After this step, step S44 is executed.
Step S44: add leaf nodes by random sampling to generate a random expansion tree.
Specifically, in this step a random expansion tree is generated by adding leaf nodes through random sampling.
Step S45: judge whether the random expansion tree contains a terminal leaf node that includes the guiding destination or enters the target region.
Specifically, in this step it is judged whether the random expansion tree contains such a terminal leaf node, i.e. one that includes the guiding destination or enters the target region, where the target region includes the guiding destination. If the judgment is yes, step S46 is executed; otherwise the process returns to step S44.
Step S46: find a path in the random expansion tree from the root node to the terminal leaf node, and take it as the walking path.
Specifically, if the judgment of step S45 is yes, i.e. the random expansion tree contains a terminal leaf node that includes the guiding destination or enters the target region, this step is executed: a path from the root node to the terminal leaf node is found in the random expansion tree and taken as the walking path. By implementing steps S41 to S46, when an obstacle appears, a new starting point is generated around the obstacle and the multi-dimensional space planning algorithm is run again to compute a new path, which becomes the walking path.
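The scan-and-replan loop of steps S41 to S46 can be sketched as follows. This is a hypothetical simulation, not part of the patent: `scan_for_obstacle` stands in for the device's 3D scan, `plan` for the multi-dimensional space planning algorithm, `new_root_near` for the choice of a starting point near the obstacle, and the default set time and set distance are illustrative values only.

```python
import time

def guide_with_obstacle_check(path, scan_for_obstacle, plan, new_root_near,
                              set_time=1.0, set_distance=0.5, max_cycles=10):
    """Follow `path`, rescanning each cycle (step S41); when an obstacle is
    found, pick a new root within `set_distance` of it (step S43) and replan
    via the random expansion tree (steps S44–S46)."""
    for _ in range(max_cycles):
        obstacle = scan_for_obstacle(path)
        if obstacle is None:
            # Step S42: no obstacle — keep guiding along the current path,
            # buffering for the set time to limit internal resource use.
            time.sleep(set_time)
            continue
        root = new_root_near(obstacle, set_distance)  # step S43
        path = plan(root)                             # steps S44–S46
    return path
```

The buffer between scans corresponds to the set time of step S42; shortening it makes obstacle detection more responsive at the cost of more frequent scanning.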
In this embodiment, step S04' can also be further refined; the refined flowchart is shown in Fig. 6. In Fig. 6, step S04' further comprises the following steps:
Step S41': when the user's current location moves farther and farther from the position of the next virtual bright-light guidance beacon, the mixed reality device issues a 3D virtual stereo sound prompt.
Specifically, in this step, when the user's current location grows farther and farther from the position of the next virtual bright-light guidance beacon, the mixed reality device issues a 3D virtual stereo sound prompt to tell the user that the planned path planning model has been deviated from, so that the user can adjust the walking direction in time.
Step S42': when the distance between the user's current location and the next virtual bright-light guidance beacon exceeds the set value, take the user's current location as the root node.
Specifically, in this step, when the distance between the user's current location and the next virtual bright-light guidance beacon exceeds the set value, for example 1000 mm, the path must be replanned; the user's current location is then regarded as the starting point and taken as the root node.
Step S43': add leaf nodes by random sampling to generate a random expansion tree.
Specifically, in this step a random expansion tree is generated by adding leaf nodes through random sampling.
Step S44': judge whether the random expansion tree contains a terminal leaf node that includes the guiding destination or enters the target region.
Specifically, in this step it is judged whether the random expansion tree contains such a terminal leaf node, i.e. one that includes the guiding destination or enters the target region, where the target region includes the guiding destination. If the judgment is yes, step S45' is executed; otherwise the process returns to step S43'.
Step S45': find a path in the random expansion tree from the root node to the terminal leaf node, and take it as the walking path.
Specifically, if the judgment of step S44' is yes, i.e. the random expansion tree contains a terminal leaf node that includes the guiding destination or enters the target region, this step is executed: a path from the root node to the terminal leaf node is found in the random expansion tree and taken as the walking path. By implementing steps S41' to S45', a 3D virtual stereo sound prompt is issued when the user moves farther and farther from the next virtual bright-light guidance beacon, and once the deviation exceeds 1000 mm the path is recomputed with the current location as the starting point; the new path is the reasonable path for guiding the user to the nearest virtual bright-light guidance beacon.
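The deviation handling of steps S41' to S45' can be sketched as a small monitor. The function below is a hypothetical illustration, not part of the patent: it signals an audio prompt whenever the distance to the next beacon grows, and requests a replan once the deviation exceeds the 1000 mm set value of this embodiment.

```python
import math

SET_VALUE_MM = 1000.0  # deviation threshold of this embodiment (step S42')

def check_deviation(current_mm, beacon_mm, last_dist_mm):
    """Return (action, new_dist): 'ok', 'audio_prompt' (step S41'), or
    'replan' (steps S42'–S45'), based on distance to the next beacon.
    Positions are (x, y) tuples in millimetres."""
    dist = math.dist(current_mm, beacon_mm)
    if dist > SET_VALUE_MM:
        return "replan", dist        # replan with the current location as root
    if last_dist_mm is not None and dist > last_dist_mm:
        return "audio_prompt", dist  # moving away: issue a 3D stereo sound prompt
    return "ok", dist
```

A caller would run this check on each positioning update, feeding the returned distance back in as `last_dist_mm` so that "farther and farther" is detected as a growing distance between consecutive updates.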
The invention further relates to a device for implementing the above intelligent guiding method; a structural schematic diagram of the device is shown in Fig. 7. In Fig. 7, the device includes a scanning unit 1, a determination unit 2, an establishing unit 3, a first judging unit 4, a second judging unit 4', a prompt unit 5 and a guiding unit 6.
The scanning unit 1 performs real-time three-dimensional scanning of the user's current spatial environment information with the mixed reality device to obtain the user's current location information.
Specifically, the user wears the mixed reality device and uses it to perform real-time three-dimensional (3D) scanning of the current spatial environment information, specifically by wireless three-dimensional scanning with the 3D camera in the mixed reality device. For example, in the presence of a WIFI signal, the 3D camera in the mixed reality device can scan the current spatial environment information in real time; that environment may be inside a building, outside a building, in a courtyard, and so on. The mixed reality device can be regarded as a wearable computer plus a stereo vision sensor. In this embodiment, the mixed reality device can be a HoloLens AR headset, whose 3D camera includes two infrared laser structured-light emitters and one RGB camera; various operating systems can be installed on the HoloLens AR headset. Of course, the three-dimensional scanning of the mixed reality device can also be realized in other ways.
In this embodiment, the indoor 3D map of the current building, the outdoor 3D map of the current building, the 3D map of the courtyard and so on are stored in the mixed reality device in advance. Taking the indoor 3D map of the current building as an example, the map is generated after the user imports a three-dimensional building model into the mixed reality device and processes it; the generated indoor 3D map of the current building can be stored in the mixed reality device. For example, after the owner of a coffee shop imports the shop's three-dimensional building model into the mixed reality device and processes it, the indoor 3D map of the coffee shop can be generated.
The determination unit 2 determines the distance between the user's current location and the guiding destination according to the user's current location information and the destination provided by the user.
Specifically, after the mixed reality device scans the spatial environment information, it can ask the user by voice where the user wants to go, i.e. the mixed reality device emits a voice signal asking for the destination. The mixed reality device is equipped with a microphone array and a speech processing part: the microphone array collects the voice, and the speech processing part processes the collected voice signal. The microphone array of the mixed reality device collects the user's voice reply, for example "I want to go to the toilet". After the mixed reality device receives the user's reply, the distance between the user's current location and the guiding destination is determined from the current location information and the user-provided destination. The current location corresponds to the position where the user presently is, and the guiding destination corresponds to where the user wants to go, so both can be clearly identified on the indoor 3D map.
The establishing unit 3 collects, according to the distance, the road condition analysis parameters of the user's current location and the guiding destination, so as to establish the path planning model.
Specifically, according to the distance between the user's current location and the guiding destination, the road condition analysis parameters of the current location and the guiding destination are collected, and the path planning model can be established by the multi-dimensional space planning algorithm. That is, initial path planning is carried out on the 3D map to plan a reasonable path; the path is a route, and this reasonable path serves as the path planning model.
The first judging unit 4 scans the current spatial environment information in real time according to the path planning model, judges whether there is an obstacle on the current path and, if there is, replans the path and finally demarcates the walking path. Specifically, according to the path planning model, the current spatial environment information is scanned in real time to judge whether there is an obstacle on the current path; when there is, the path must be replanned, i.e. the current path is adjusted, a new path is planned, and the walking path is finally demarcated.
The second judging unit 4' replans the path and finally demarcates the walking path when the user's current location deviates from the path planning model by more than the set value. Specifically, whether the user's current location deviates from the path planning model is detected; a voice prompt is given on deviation, and it is further judged whether the deviation exceeds the set value. If it does, the path is replanned and the walking path finally demarcated. In this embodiment the set value is 1000 mm, i.e. when the deviation exceeds 1000 mm the path is replanned and the walking path finally demarcated. Of course, the set value can be adjusted according to the actual situation.
The prompt unit 5 adds road condition prompts at each path node according to the determined walking path.
Specifically, road condition prompts can be added at each path node of the determined walking path; the road condition prompts include the virtual bright-light guidance beacons and the 3D virtual stereo sound prompts. The mixed reality device performs real-time three-dimensional scanning, after which the user's current location, the guiding destination and the positions of the virtual bright-light guidance beacons on the path can all be calculated; the positions of the beacons can be set according to the user's habits and how weak the user's eyesight is. A virtual bright-light guidance beacon can be placed on the walking path at the calculated position of each path node. The interval between adjacent virtual bright-light guidance beacons can be between 100 mm and 1000 mm; of course, in practice the interval can be adjusted case by case. For example, when the user is in a dark environment or the user's eyesight is very weak, the interval between adjacent beacons can be set smaller; when the user is in a bright place or the user's eyesight is only slightly weak, the interval can be set larger.
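The interval rule above can be sketched as a small helper. The functions below are hypothetical illustrations, not part of the patent: the linear mapping from brightness and eyesight to spacing is an assumption; only the 100–1000 mm clamp and the "darker or weaker eyesight means smaller interval" behaviour come from the embodiment.

```python
import math

def beacon_interval_mm(brightness, eyesight):
    """Map brightness and eyesight (each 0.0 = worst, 1.0 = best) to a beacon
    spacing clamped to the 100–1000 mm range of this embodiment; the worse of
    the two conditions drives the spacing down."""
    factor = min(max(min(brightness, eyesight), 0.0), 1.0)
    return 100.0 + 900.0 * factor

def place_beacons(start_mm, end_mm, interval_mm):
    """Place beacons every `interval_mm` along the straight segment start→end."""
    length = math.dist(start_mm, end_mm)
    n = int(length // interval_mm)
    return [(start_mm[0] + (end_mm[0] - start_mm[0]) * i * interval_mm / length,
             start_mm[1] + (end_mm[1] - start_mm[1]) * i * interval_mm / length)
            for i in range(1, n + 1)]
```

For a real path, `place_beacons` would be applied per segment of the planned walking path, with the interval recomputed whenever lighting conditions change.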
After the virtual bright-light guidance beacons are established, a 3D virtual stereo sound prompt corresponding to each virtual bright-light guidance beacon is established at the same time: each virtual bright-light guidance beacon has a 3D virtual stereo sound prompt that announces the position of its corresponding beacon, and through stereo playback the prompt can help the user locate the position of the current virtual bright-light guidance beacon.
The guiding unit 6 uses the road condition prompts to complete the intelligent guidance that brings the user to the specified destination.
Specifically, each time the user passes a virtual bright-light guidance beacon, the next beacon is displayed automatically and its stereo sound prompt starts playing, so that the user moves toward the position of the next virtual bright-light guidance beacon and receives better intelligent guidance. The real-time position of the user is determined by the three-dimensional scanning of the mixed reality device, which provides spatial positioning for users who need walking assistance; every time the user passes a beacon, the mixed reality device can determine the user's position, specifically by comparing the user's current location with the indoor 3D map of the current building, the outdoor 3D map of the current building, or the 3D map of the courtyard. Three-dimensional scanning and mixed reality technology thus guide users who need walking assistance and restore to them a degree of autonomous mobility. The device of the invention navigates with virtual reality and, combined with positioning and speech recognition technology, can serve users who need walking assistance more accurately. In addition, the cost of a mixed reality device is much lower than that of a guide dog, so the device of the invention also reduces the cost of use.
Taking the indoor 3D map of a building as an example, when the indoor 3D map of the building needs to be produced, the device further includes an import unit 7, a recognition unit 8 and a map generation unit 9.
The import unit 7 receives the three-dimensional building model, which is imported through the mixed reality device. Specifically, the import unit 7 imports the three-dimensional building model into the mixed reality device. Many types of model can be chosen: a three-dimensional building model arranged freely according to the user's preference, or an already existing one, for example a BIM building model. When a BIM building model is used, the building is imported by means of the BIM model, i.e. the model is imported into the Windows system of the mixed reality device.
The recognition unit 8 identifies the three-dimensional building model and generates a plan view. Specifically, the three-dimensional building model is identified by recognition software in the mixed reality device; after the model is identified successfully, an indoor plan view of the building is generated.
The map generation unit 9 frames a region for each room in the plan view and labels each room with a room name by hand, and the mixed reality device generates and stores the indoor 3D map of the current building. For the map generation unit 9, a corresponding region can be selected for each room in the plan view, i.e. a frame is drawn around each room, and each room is labelled with a name manually; the indoor 3D map of the current building is generated and stored in the mixed reality device. Of course, the mixed reality device can store multiple floors of a building and supports region selection. Because the three-dimensional building model is uploaded by the user, the preferences and habits of different users can be satisfied.
In this embodiment, the establishing unit 3 further comprises a first positioning subunit 31, a first generating subunit 32, a first judging subunit 33 and a first searching subunit 34. The first positioning subunit 31 takes the user's current location as the root node; the first generating subunit 32 adds leaf nodes by random sampling to generate a random expansion tree; the first judging subunit 33 judges whether the random expansion tree contains a terminal leaf node that includes the guiding destination or enters the target region, where the target region includes the guiding destination; the first searching subunit 34 finds a path in the random expansion tree from the root node to the terminal leaf node and takes it as the path planning model. By implementing the first positioning subunit 31, the first generating subunit 32, the first judging subunit 33 and the first searching subunit 34, the path planning model is established.
In this embodiment, the first judging unit 4 comprises a second judging subunit 41, a setting subunit 42, a second generating subunit 43, a third judging subunit 44 and a second searching subunit 45. The second judging subunit 41 judges, according to the current spatial environment information, whether there is an obstacle on the current path; the setting subunit 42 sets a starting point within the set distance range of the obstacle as the root node; the second generating subunit 43 adds leaf nodes by random sampling to generate a random expansion tree; the third judging subunit 44 judges whether the random expansion tree contains a terminal leaf node that includes the guiding destination or enters the target region, where the target region includes the guiding destination; the second searching subunit 45 finds a path in the random expansion tree from the root node to the terminal leaf node and takes it as the walking path. By implementing the second judging subunit 41, the setting subunit 42, the second generating subunit 43, the third judging subunit 44 and the second searching subunit 45, when an obstacle appears a new starting point is generated around the obstacle and the multi-dimensional space planning algorithm is run again to compute a new path, which is the walking path.
In this embodiment, the second judging unit 4' comprises a prompting subunit 41', a second positioning subunit 42', a third generating subunit 43', a fourth judging subunit 44' and a third searching subunit 45'. The prompting subunit 41' causes the mixed reality device to issue a 3D virtual stereo sound prompt when the user's current location moves farther and farther from the position of the next virtual bright-light guidance beacon; the second positioning subunit 42' takes the user's current location as the root node when the distance between the user's current location and the next virtual bright-light guidance beacon exceeds the set value; the third generating subunit 43' adds leaf nodes by random sampling to generate a random expansion tree; the fourth judging subunit 44' judges whether the random expansion tree contains a terminal leaf node that includes the guiding destination or enters the target region, where the target region includes the guiding destination; the third searching subunit 45' finds a path in the random expansion tree from the root node to the terminal leaf node and takes it as the walking path. By implementing the prompting subunit 41', the second positioning subunit 42', the third generating subunit 43', the fourth judging subunit 44' and the third searching subunit 45', a 3D virtual stereo sound prompt is issued when the user moves farther and farther from the next virtual bright-light guidance beacon, and once the deviation exceeds 1000 mm the path is recomputed with the current location as the starting point; the new path is the reasonable path for guiding the user to the nearest virtual bright-light guidance beacon.
In short, the invention uses mixed reality technology to realize, for users who need walking assistance, a navigation method based on virtual bright-light guidance beacons and 3D virtual stereo sound prompts. The mixed reality device interacts by voice with the user who needs walking assistance, making it convenient to set the final destination; the virtual bright-light guidance beacons are displayed one by one to exclude interference, and the virtual stereo sound assists the user's positioning; real-time scanning judges obstacles on the initial path; and the walking path is generated from the indoor 3D map of the building together with the position scanning. Of course, in practice, where requirements are low, indicator lights for users who need walking assistance could be added along the actual path, but that approach is difficult to deploy, suffers from bright-light interference and cannot be popularized; alternatively, a guide robot leading the way with a bright-light signal could be used, but it is costly and not portable. By comparison, the advantages of the invention are obvious: it reduces the cost of use, guides users who need walking assistance so that they recover a degree of autonomous mobility, and serves such users more accurately.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the invention; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the invention shall be included in the protection scope of the present invention.
Claims (14)
1. An intelligent guiding method, characterized by comprising the following steps:
performing real-time three-dimensional scanning of the user's current spatial environment information with a mixed reality device to obtain the user's current location information;
determining, according to the user's current location information and a destination provided by the user, the distance between the user's current location and a guiding destination;
collecting, according to the distance, road condition analysis parameters of the user's current location and the guiding destination, so as to establish a path planning model;
scanning the current spatial environment information in real time according to the path planning model, judging whether there is an obstacle on the current path and, if there is the obstacle, replanning the path and finally demarcating a walking path;
adding road condition prompts at each path node according to the determined walking path;
using the road condition prompts, completing the intelligent guidance that brings the user to the specified destination.
2. The intelligent guiding method according to claim 1, characterized in that the road condition prompts include virtual bright-light guidance beacons and 3D virtual stereo sound prompts.
3. The intelligent guiding method according to claim 1, characterized in that collecting, according to the distance, the road condition analysis parameters of the user's current location and the guiding destination so as to establish the path planning model comprises:
taking the user's current location as a root node;
adding leaf nodes by random sampling to generate a random expansion tree;
judging whether the random expansion tree contains a terminal leaf node that includes the guiding destination or enters a target region, the target region including the guiding destination;
if so, finding a path in the random expansion tree from the root node to the terminal leaf node and taking it as the path planning model.
4. The intelligent guiding method according to claim 2, characterized in that after collecting, according to the distance, the road condition analysis parameters of the user's current location and the guiding destination so as to establish the path planning model, the method further comprises:
when the user's current location deviates from the path planning model and the deviation exceeds a set value, replanning the path and finally demarcating the walking path.
5. The intelligent guiding method according to claim 1, characterized in that scanning the current spatial environment information in real time according to the path planning model, judging whether there is an obstacle on the current path and, if there is the obstacle, replanning the path and finally demarcating the walking path comprises:
judging, according to the current spatial environment information, whether there is an obstacle on the current path;
if so, setting a starting point within a set distance range of the obstacle as a root node;
adding leaf nodes by random sampling to generate a random expansion tree;
judging whether the random expansion tree contains a terminal leaf node that includes the guiding destination or enters a target region, the target region including the guiding destination;
if so, finding a path in the random expansion tree from the root node to the terminal leaf node and taking it as the walking path.
6. The intelligent guiding method according to claim 4, wherein re-planning the path and finally calibrating the walking path when the user's current position deviates from the path planning model and the deviation exceeds the set value comprises:
when the user's current position moves farther and farther from the position of the next virtual strong-light guide beacon, the mixed reality device issuing a 3D virtual stereo audio prompt;
when the distance between the user's current position and the next virtual strong-light guide beacon exceeds the set value, taking the user's current position as a root node;
adding leaf nodes by stochastic sampling to generate a random expansion tree;
judging whether the random expansion tree contains a terminal leaf node that includes the guiding destination or enters a target area, the target area including the guiding destination;
if so, finding in the random expansion tree a path from the root node to the terminal leaf node, and taking that path as the walking path.
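Claims 4 and 6 describe a two-stage deviation policy: an audio cue while the distance to the next guide beacon is merely growing, and a full replan from the current position once the deviation exceeds the set value. A hedged sketch of that decision logic; the threshold value and function names are assumptions:

```python
import math

def deviation_action(current_pos, next_beacon, prev_dist, threshold=3.0):
    """Decide what the guide should do for the user's current position fix.

    Returns one of:
      'replan' - deviation exceeds the set value: rebuild the path with
                 the current position as the new root node;
      'prompt' - distance to the next beacon is growing: play a
                 3D virtual stereo audio cue;
      'ok'     - still on the calibrated walking path.
    """
    dist = math.dist(current_pos, next_beacon)
    if dist > threshold:
        return "replan", dist
    if dist > prev_dist:
        return "prompt", dist
    return "ok", dist

# User drifting away from the beacon but still inside the threshold:
action, d = deviation_action((1.0, 1.0), (3.0, 1.0), prev_dist=1.5)
```

The two-stage design gives the user a chance to correct course on their own before the more disruptive replanning step fires.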
7. The intelligent guiding method according to any one of claims 1 to 6, wherein before performing the real-time three-dimensional scan of the user's current spatial environment with the mixed reality device to obtain the user's current position information, the method further comprises:
receiving a three-dimensional building model, wherein the three-dimensional building model is imported through the mixed reality device;
identifying the three-dimensional building model and generating a plan view;
framing an area in the plan view for each room and manually annotating each room with a room name, the mixed reality device generating and storing an indoor 3D map of the current building.
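Claim 7 turns an imported 3-D building model into a plan view, has each room framed and named by hand, and stores the result as the building's indoor map. A minimal data-structure sketch of that annotation step, assuming axis-aligned room frames on the plan view; the class and method names are hypothetical, not the patent's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Room:
    name: str    # room name annotated by hand
    bbox: tuple  # framed area on the plan view: (x0, y0, x1, y1)

@dataclass
class IndoorMap:
    building: str
    rooms: list = field(default_factory=list)

    def annotate(self, name, bbox):
        """Frame an area on the plan view and attach a room name to it."""
        self.rooms.append(Room(name, bbox))

    def find(self, name):
        """Look a guiding destination up by its annotated room name."""
        return next((r for r in self.rooms if r.name == name), None)

indoor = IndoorMap("Building A")
indoor.annotate("Reading Room", (0, 0, 8, 6))
indoor.annotate("Lobby", (8, 0, 14, 6))
```

Storing destinations keyed by the hand-annotated names is what lets later steps resolve a user-supplied target into coordinates for path planning.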
8. A device for implementing the intelligent guiding method according to claim 1, comprising:
a scanning unit, configured to perform a real-time three-dimensional scan of the user's current spatial environment using a mixed reality device to obtain the user's current position information;
a determination unit, configured to determine the distance between the user's current position and a guiding destination according to the user's current position information and a target provided by the user;
an establishing unit, configured to collect, according to the distance, the road condition analysis parameters of the user's current position and the guiding destination to establish a path planning model;
a first judging unit, configured to scan the current spatial environment information in real time according to the path planning model, judge whether there is an obstacle on the current path, and, if there is an obstacle, re-plan the path and finally calibrate a walking path;
a prompt unit, configured to add road condition prompts at each path node according to the determined walking path;
a guiding unit, configured to use the road condition prompts to complete the intelligent guiding of the user to the specified destination.
9. The device according to claim 8, wherein the road condition prompts include virtual strong-light guide beacons and 3D virtual stereo audio prompts.
10. The device according to claim 8, wherein the establishing unit comprises:
a first positioning subunit, configured to take the user's current position as a root node;
a first generating subunit, configured to add leaf nodes by stochastic sampling to generate a random expansion tree;
a first judging subunit, configured to judge whether the random expansion tree contains a terminal leaf node that includes the guiding destination or enters a target area, the target area including the guiding destination;
a first searching subunit, configured to find in the random expansion tree a path from the root node to the terminal leaf node and take that path as the path planning model.
11. The device according to claim 9, further comprising:
a second judging unit, configured to re-plan the path and finally calibrate the walking path when the user's current position deviates from the path planning model and the deviation exceeds a set value.
12. The device according to claim 8, wherein the first judging unit comprises:
a second judging subunit, configured to judge whether there is an obstacle on the current path according to the current spatial environment information;
a setting subunit, configured to set a starting point within a set distance of the obstacle as a root node;
a second generating subunit, configured to add leaf nodes by stochastic sampling to generate a random expansion tree;
a third judging subunit, configured to judge whether the random expansion tree contains a terminal leaf node that includes the guiding destination or enters a target area, the target area including the guiding destination;
a second searching subunit, configured to find in the random expansion tree a path from the root node to the terminal leaf node and take that path as the walking path.
13. The device according to claim 11, wherein the second judging unit comprises:
a prompt subunit, configured to have the mixed reality device issue a 3D virtual stereo audio prompt when the user's current position moves farther and farther from the position of the next virtual strong-light guide beacon;
a second positioning subunit, configured to take the user's current position as a root node when the distance between the user's current position and the next virtual strong-light guide beacon exceeds the set value;
a third generating subunit, configured to add leaf nodes by stochastic sampling to generate a random expansion tree;
a fourth judging subunit, configured to judge whether the random expansion tree contains a terminal leaf node that includes the guiding destination or enters a target area, the target area including the guiding destination;
a third searching subunit, configured to find in the random expansion tree a path from the root node to the terminal leaf node and take that path as the walking path.
14. The device according to any one of claims 8 to 13, further comprising:
an import unit, configured to receive a three-dimensional building model, wherein the three-dimensional building model is imported through the mixed reality device;
a recognition unit, configured to identify the three-dimensional building model and generate a plan view;
a map generation unit, configured to frame an area in the plan view for each room and manually annotate each room with a room name, the mixed reality device generating and storing an indoor 3D map of the current building.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810582033.XA CN110440789A (en) | 2018-06-07 | 2018-06-07 | Intelligent guiding method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110440789A true CN110440789A (en) | 2019-11-12 |
Family
ID=68427977
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810582033.XA Withdrawn CN110440789A (en) | 2018-06-07 | 2018-06-07 | Intelligent guiding method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110440789A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN113434038A (*) | 2021-05-31 | 2021-09-24 | 广东工业大学 | Control method of visual impairment child directional walking training auxiliary system based on augmented reality |
CN113566829A (*) | 2021-07-19 | 2021-10-29 | 上海极赫信息技术有限公司 | High-precision positioning technology-based mixed reality navigation method and system and MR (magnetic resonance) equipment |
CN114201560A (*) | 2021-11-29 | 2022-03-18 | 中国科学院计算机网络信息中心 | Web-based real-time multi-user action path planning method and system in 5G environment |
CN114201560B (*) | 2021-11-29 | 2022-12-16 | 中国科学院计算机网络信息中心 | Web-based real-time multi-user action path planning method and system in 5G environment |
CN114089835A (*) | 2022-01-18 | 2022-02-25 | 湖北工业大学 | Mixed reality interactive guidance and identification system and method based on self-adaptive visual difference |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110440789A (en) | Intelligent guiding method and device | |
CN1945351B (en) | Robot navigation positioning system and navigation positioning method | |
JP7061634B2 (en) | Intelligent disaster prevention system and intelligent disaster prevention method | |
CN204465738U (en) | A kind of disaster relief rescue visible system | |
CN106991681B (en) | Method and system for extracting and visualizing fire boundary vector information in real time | |
CN103186710B (en) | Optimum route search method and system | |
CN102177413A (en) | Data enrichment apparatus and method of determining temporal access information | |
CN107167138A (en) | A kind of intelligent Way guidance system and method in library | |
CN106292657A (en) | Mobile robot and patrol path setting method thereof | |
CN105662796A (en) | Intelligent walking assisting garment for blind person and navigation method of intelligent walking assisting garment | |
KR101109546B1 (en) | Way guidance apparatus for eyesight handicapped person and mobile terminal | |
US11785430B2 (en) | System and method for real-time indoor navigation | |
CN111195191A (en) | Intelligent system and method for guiding travel of blind people | |
JP2015105833A (en) | Route search system | |
CN107588780A (en) | A kind of intelligent blind guiding system | |
CN117579791B (en) | Information display system with image capturing function and information display method | |
CN113316083A (en) | Ultra-wideband-based positioning method and device | |
CN107504978A (en) | A kind of navigation methods and systems | |
CN205814622U (en) | A kind of blindmen intelligent walk help clothes | |
CN107631735A (en) | One kind is based on mobile phone inertial navigation and RFID blind man navigation methods | |
JP4804853B2 (en) | Point search device and in-vehicle navigation device | |
Gintner et al. | Improving reverse geocoding: Localization of blind pedestrians using conversational ui | |
CN107462256B (en) | A kind of navigation methods and systems | |
CN117572343A (en) | Short-distance navigation method and device for visually impaired user, electronic equipment and storage medium | |
Hersh et al. | Mobility: an overview |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WW01 | Invention patent application withdrawn after publication | Application publication date: 20191112 |