CN107084740A - Navigation method and device - Google Patents
- Publication number: CN107084740A
- Application number: CN201710188498.2A
- Authority
- CN
- China
- Prior art keywords
- image
- reality
- map
- scene image
- augmented reality
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3691—Retrieval, searching and output of information related to real-time traffic, weather, or environmental conditions
Abstract
The invention provides a navigation method and device: obtain a scene image of the current location; in an augmented-reality (AR) map, determine the to-be-displayed position that matches the scene image, and take the AR image of a predetermined range around that position in the AR map as the to-be-displayed AR image; display the to-be-displayed AR image. With this implementation, the to-be-displayed AR image is matched from the AR map against the scene image of the current location and displayed. Image matching can greatly improve positioning accuracy, while AR imagery makes the environment far more intuitive, greatly facilitating the user's travel.
Description
Technical field
The present invention relates to the field of terminal technology, and in particular to a navigation method and device.
Background art
When a user is in an unfamiliar environment, the geographic position information of the surroundings is often obtained through a map application on a mobile terminal. However, owing to the limited precision of mainstream positioning systems, the position at which the user is placed in the map application is not very accurate. It is difficult to tell the current position clearly from a plain map, and for users with a weak sense of direction it is even harder to work out information such as a travel route, which causes inconvenience to the user's travel.
Summary of the invention
The invention provides a navigation method and device, intended to solve the prior-art problems that map-application positioning is inaccurate and that the presentation is unintuitive.
In order to solve the above technical problems, the invention provides a navigation method, including:
obtaining a scene image of the current location;
in an augmented-reality (AR) map, determining a to-be-displayed position that matches the scene image, and taking the AR image of a predetermined range around the to-be-displayed position in the AR map as a to-be-displayed AR image;
displaying the to-be-displayed AR image.
Optionally, before taking the AR image of the predetermined range around the to-be-displayed position in the AR map as the to-be-displayed AR image, the method further includes:
determining the user's application scenario according to the scene image;
determining the value of the predetermined range according to the application scenario;
and taking the AR image of the predetermined range around the to-be-displayed position in the AR map as the to-be-displayed AR image includes: drawing a circle on the horizontal plane of the AR map, centered on the to-be-displayed position and with the predetermined range as the radius, and fusing the AR images of the buildings within the circle according to their positional relations in the AR map to form the to-be-displayed AR image.
Optionally, determining the to-be-displayed position that matches the scene image includes:
processing the scene image to generate corresponding three-dimensional model data;
calling the three-dimensional map information of each position in the AR map, superimposing and matching the three-dimensional model data against the three-dimensional map information, and calculating the matching degree between the three-dimensional map information of each position and the three-dimensional model data;
taking a position whose matching degree exceeds a screening threshold as the to-be-displayed position.
Optionally, if there are multiple positions whose matching degree exceeds the threshold, the method further includes:
prompting the user to choose and receiving the user's selection operation;
taking the position selected by the user as the to-be-displayed position.
Optionally, obtaining the scene image of the current location includes:
in a photo or video application, obtaining the scene image through the camera and then triggering the AR map application; or,
in the AR map application, calling a photo or video application and obtaining the scene image through the camera.
The invention also provides a navigation device, including:
an acquisition module, configured to obtain a scene image of the current location;
a determining module, configured to determine, in an AR map, a to-be-displayed position that matches the scene image, and to take the AR image of a predetermined range around the to-be-displayed position in the AR map as a to-be-displayed AR image;
a display module, configured to display the to-be-displayed AR image.
Optionally, the determining module is further configured to:
determine the user's application scenario according to the scene image;
determine the value of the predetermined range according to the application scenario;
draw a circle on the horizontal plane of the AR map, centered on the to-be-displayed position and with the predetermined range as the radius, and fuse the AR images of the buildings within the circle according to their positional relations in the AR map to form the to-be-displayed AR image.
Optionally, the determining module is further configured to:
process the scene image to generate corresponding three-dimensional model data;
call the three-dimensional map information of each position in the AR map, superimpose and match the three-dimensional model data against the three-dimensional map information, and calculate the matching degree between the three-dimensional map information of each position and the three-dimensional model data;
take a position whose matching degree exceeds a screening threshold as the to-be-displayed position.
Optionally, if there are multiple positions whose matching degree exceeds the threshold, the device is further configured to:
prompt the user to choose and receive the user's selection operation;
take the position selected by the user as the to-be-displayed position.
Optionally, the way the acquisition module obtains the scene image of the current location includes:
in a photo or video application, obtaining the scene image through the camera and then triggering the AR map application; or,
in the AR map application, calling a photo or video application and obtaining the scene image through the camera.
The invention provides a navigation method and device: obtain a scene image of the current location; in an AR map, determine the to-be-displayed position that matches the scene image and take the AR image of a predetermined range around that position in the AR map as the to-be-displayed AR image; display the to-be-displayed AR image. With this implementation, the to-be-displayed AR image is matched from the AR map against the scene image of the current location and displayed. Image matching can greatly improve positioning accuracy, and AR imagery makes the environment more intuitive, greatly facilitating the user's travel.
Brief description of the drawings
Fig. 1 is a flowchart of the navigation method provided by Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of the navigation device provided by Embodiment 2 of the present invention;
Fig. 3 is a schematic diagram of the terminal provided by Embodiment 3 of the present invention.
Detailed description of the embodiments
The present invention applies to all mobile terminals with the relevant software and hardware; a mobile terminal may specifically be, for example, a mobile phone, a PC, or a server. The present invention is described in further detail below through embodiments in combination with the accompanying drawings.
Embodiment one:
Fig. 1 is a flowchart of the navigation method provided by Embodiment 1 of the present invention. Referring to Fig. 1, the method includes:
S101: obtain a scene image of the current location;
S102: in an AR map, determine the to-be-displayed position that matches the scene image, and take the AR image of a predetermined range around the to-be-displayed position in the AR map as the to-be-displayed AR image;
S103: display the to-be-displayed AR image.
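The S101-S103 flow can be sketched end to end as follows. This is an illustrative Python sketch only: the class and function names (`ARMap`, `navigate`, and so on) are hypothetical and not part of the disclosure, and the position-matching step is reduced to a lookup stub standing in for the 3D-model matching described later in this embodiment.

```python
# Illustrative sketch of S101-S103. All names are hypothetical.

class ARMap:
    """Stand-in AR map: known positions keyed by a scene fingerprint."""
    def __init__(self, fingerprints):
        self.fingerprints = fingerprints  # {scene fingerprint: position id}

    def match_position(self, scene_image):
        # S102 (first half): determine the to-be-displayed position.
        return self.fingerprints.get(scene_image)

    def ar_image_in_range(self, position, radius_m):
        # S102 (second half): AR image of the predetermined range.
        return f"AR view of {position} within {radius_m} m"

def navigate(ar_map, scene_image, radius_m=500):
    position = ar_map.match_position(scene_image)   # S102
    if position is None:
        return None                                 # no match: nothing to show
    return ar_map.ar_image_in_range(position, radius_m)  # S103 displays this

demo_map = ARMap({"facade-photo": "main-street"})
print(navigate(demo_map, "facade-photo"))  # AR view of main-street within 500 m
```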
Augmented reality (AR) is a technology that "seamlessly" integrates real-world information with virtual-world information. Entity information that is hard to experience within a certain time and space of the real world (visual information, sound, taste, touch, etc.) is simulated by computer and other technologies and then superimposed, so that virtual information is applied to the real world and perceived by the human senses, achieving a sensory experience beyond reality. The real environment and virtual objects are superimposed in the same picture or space in real time and exist simultaneously. In this embodiment, the AR information involved is mainly image information, i.e., AR images. AR images allow two-dimensional map information to be presented in three-dimensional form; when viewing an AR map, the user can obtain far more information from the presented three-dimensional imagery than from a two-dimensional image.
In this embodiment, obtaining the scene image of the current location means obtaining the scene image of the position where the user is; the image may include a still image or a video. Obtaining the scene image of the current location may include: in a photo or video application, obtaining the scene image through the camera and then triggering the AR application; or, in the AR application, calling a photo or video application and obtaining the scene image through the camera. In other words, obtaining the scene image can be performed by the terminal's shooting function, i.e., collecting images and/or video of the surroundings using the terminal's camera. Any application in the terminal that collects images and thus calls the camera may be used in this embodiment to obtain the scene image of the current location. In particular, a shooting function may also be added to the AR map application, so that the scene image can be collected without relying on a third-party application. When collecting the scene image, the user may collect only the scene currently faced, by taking a photograph; panoramic shooting may also be used to collect imagery around the current location over up to 360°; or the scene of the current location may be recorded as a video, which can to some extent eliminate the interference that dynamic objects bring to recognition.
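The remark that video capture can reduce dynamic-object interference can be made concrete with a common technique (an assumption here, since the patent does not specify one): a per-pixel temporal median over the recorded frames, which recovers the static scene that moving objects occlude only briefly.

```python
import numpy as np

def static_background(frames):
    """Per-pixel temporal median over a stack of video frames.

    A moving object covers any given pixel in only a few frames, so the
    median recovers the static scene behind it. This is one plausible
    realization of the patent's remark that video capture suppresses
    dynamic-object interference; the patent does not specify a method.
    """
    stack = np.stack(frames, axis=0)          # shape (T, H, W) or (T, H, W, 3)
    return np.median(stack, axis=0).astype(stack.dtype)

# Tiny demo: a static all-zero 2x2 scene with a transient "object"
# (value 255) occupying a different pixel in each frame.
scene = np.zeros((2, 2), dtype=np.uint8)
frames = []
for t in range(4):
    f = scene.copy()
    f[t // 2, t % 2] = 255                    # transient moving object
    frames.append(f)
print(static_background(frames))              # recovers the all-zero scene
```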
In this embodiment, determining the to-be-displayed position that matches the scene image may include: processing the scene image to generate corresponding three-dimensional model data; calling the three-dimensional map information of each position in the AR map, superimposing and matching the three-dimensional model data against the three-dimensional map information, and calculating the matching degree between each position's three-dimensional map information and the three-dimensional model data; and taking a position whose matching degree exceeds a screening threshold as the to-be-displayed position. Images or videos shot by the terminal are essentially two-dimensional information; shooting, in essence, records a three-dimensional actual scene in a two-dimensional manner. Accordingly, to determine the to-be-displayed position that matches the scene image, the two-dimensional images and video can be processed into corresponding three-dimensional model data. Specifically, processing a two-dimensional image into a corresponding three-dimensional view may include: obtaining a three-dimensional point cloud of the corresponding scene from the captured scene image. The three-dimensional point cloud in this embodiment is an in-computer entity formed on the basis of relative positional relations among the data, and is used to further form the three-dimensional model. In a specific implementation, the orientation of the scene image is determined from key information in the captured scene image, such as the street information, storefront information, and the relations between road surface and buildings in the image and/or video, so that the two-dimensional scene image can be further processed into a three-dimensional model. In addition, the scene image may be collected several times at the same to-be-displayed position, and the results of the multiple collections analyzed and superimposed; this improves the collection precision and makes the processing result more accurate.
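The "collect repeatedly and superimpose" step might, for example, average the point clouds recovered from repeated captures of the same position. The fusion rule is not fixed by the text, so plain averaging is an assumption here, as are all names in the sketch.

```python
import numpy as np

def fuse_point_clouds(clouds):
    """Superimpose repeated captures of the same scene by averaging
    corresponding points, reducing per-capture noise (a sketch of the
    'collect repeatedly and superimpose' step; simple averaging is an
    assumption, not the patent's prescribed rule)."""
    return np.mean(np.stack(clouds, axis=0), axis=0)

# Demo: 20 noisy captures of a tiny two-point "cloud".
rng = np.random.default_rng(0)
true_cloud = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
captures = [true_cloud + rng.normal(0, 0.05, true_cloud.shape)
            for _ in range(20)]
fused = fuse_point_clouds(captures)
# Averaging over 20 captures tightens the estimate toward the true cloud.
print(f"max fusion error: {np.abs(fused - true_cloud).max():.3f}")
```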
The three-dimensional map information of each position in the AR map is called, the three-dimensional model data is superimposed and matched against the three-dimensional map information, and the matching degree is calculated. The matching degree here refers to the degree to which the three-dimensional map information in the AR map is identical to the three-dimensional model data; in this way it can be determined where exactly in the map the collected scene lies. The higher the matching degree, the more likely the three-dimensional model data is consistent with the corresponding place. In this embodiment, a position whose matching degree exceeds the screening threshold is taken as the to-be-displayed position. The screening threshold is a preset matching-degree value, which can be determined according to the terminal's recognition capability and the update frequency of the three-dimensional map information in the AR map application; the higher the screening threshold, the higher the precision of the final recognition. In general, the threshold may be set so that a similarity of 80% or more between the three-dimensional model data and the three-dimensional map information is considered to meet the requirement.
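A minimal sketch of the screening-threshold step, assuming a stand-in similarity metric (the text leaves the metric open and only suggests roughly 80% similarity as a workable threshold; the feature-tag representation below is purely illustrative):

```python
def similarity(a, b):
    # Hypothetical metric: Jaccard overlap of feature tags, standing in
    # for the patent's unspecified 3D-model-vs-map matching degree.
    shared = len(set(a) & set(b))
    return shared / max(len(set(a) | set(b)), 1)

def candidate_positions(model, map_entries, threshold=0.8):
    """Score the scene's 3D model against each map position and keep
    those whose matching degree exceeds the screening threshold."""
    scores = {pos: similarity(model, entry)
              for pos, entry in map_entries.items()}
    return {pos: s for pos, s in scores.items() if s > threshold}

scene_model = {"glass-tower", "bus-stop", "plaza"}
entries = {
    "east-gate": {"glass-tower", "bus-stop", "plaza"},  # full overlap -> 1.0
    "west-gate": {"glass-tower", "parking"},            # weak overlap -> 0.25
}
print(candidate_positions(scene_model, entries))  # {'east-gate': 1.0}
```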
When the three-dimensional model data is superimposed and matched against the three-dimensional map information, if there are multiple positions whose matching degree exceeds the threshold, the method may further include: prompting the user to choose and receiving the user's selection operation; and taking the position selected by the user as the to-be-displayed position. If several positions exceed the threshold, the system has judged that the three-dimensional map contains multiple positions corresponding to the three-dimensional model data, possibly because the three-dimensional model data of those positions is too similar. In this case, these places can be presented to the user, who is reminded to choose and determines the specific place according to the approximate position he or she occupies. Alternatively, the positions whose matching degree exceeds the threshold can be compared laterally and the one with the highest matching degree selected as the to-be-displayed position; or the GPS function of the terminal can assist positioning. Whichever approach is taken, there should finally be only one to-be-displayed position, and this determined to-be-displayed position is exactly the position corresponding to the scene image of the current location.
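The disambiguation strategies named above (explicit user choice, GPS assistance, highest matching degree) can be combined so that exactly one position always results. The precedence order and argument names below are illustrative assumptions, not prescribed by the text:

```python
def resolve_candidates(candidates, user_choice=None, gps_hint=None):
    """Reduce several above-threshold candidates to exactly one
    to-be-displayed position, mirroring the strategies in the description:
    honor an explicit user selection, else a GPS-assisted hint, else take
    the highest matching degree."""
    if not candidates:
        return None
    if user_choice in candidates:          # user picked from the prompt
        return user_choice
    if gps_hint in candidates:             # GPS narrows the ambiguity
        return gps_hint
    return max(candidates, key=candidates.get)  # highest matching degree

scores = {"east-gate": 0.91, "north-gate": 0.88}
print(resolve_candidates(scores))                            # east-gate
print(resolve_candidates(scores, user_choice="north-gate"))  # north-gate
```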
Before taking the AR image of the predetermined range around the to-be-displayed position in the AR map as the to-be-displayed AR image, the method may further include: determining the user's application scenario according to the scene image; and determining the value of the predetermined range according to the application scenario. Taking the AR image of the predetermined range around the to-be-displayed position in the AR map as the to-be-displayed AR image then includes: drawing a circle on the horizontal plane of the AR map, centered on the to-be-displayed position and with the predetermined range as the radius, and fusing the AR images of the buildings within the circle according to their positional relations in the AR map to form the to-be-displayed AR image. Determining the user's application scenario according to the scene image means determining the user's current state, such as walking, driving, or cycling, from the scene image the user shot. Walking is characterized by the information in the scene image showing that the user is close to buildings, and by steady images; driving is characterized by possible reflections in the scene image, by the main subject being the road, and by possible signs of motion in the image. Different application scenarios call for different AR images: when walking, the user may only need imagery within a small range, whereas when driving, because a vehicle moves much faster than a pedestrian, a larger range of AR imagery is needed; too small a range is of little help to the user. For particular scenes, corresponding AR images can additionally be customized; for example, when the user drives onto a viaduct, given the complicated structure and road conditions of a viaduct, an AR image covering the range of the whole viaduct can be presented to the user. Of course, different predetermined ranges can be set for the application scenarios above; for example, the actual extent of the AR image may be set to a 500 m radius when the user is walking, and to a 2 km radius when the user is driving. These specific figures are only reference values and can be determined in practice according to the specific traffic information, the architectural features of the street, the recognizability of the imagery, and so on. Alternatively, a single AR-image radius can be applied to all application scenarios, which is also feasible in this embodiment.
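The scenario-dependent radius and the circular selection on the map's horizontal plane can be sketched as follows. The walking and driving radii follow the figures given in the text; the cycling value is an interpolated assumption, and all names and coordinates are illustrative.

```python
import math

# Radii per application scenario. Walking (500 m) and driving (2 km)
# follow the text's reference values; cycling is an assumed midpoint.
SCENARIO_RADIUS_M = {"walking": 500, "cycling": 1000, "driving": 2000}

def buildings_in_range(center, buildings, scenario):
    """Draw a circle of the scenario's radius around the to-be-displayed
    position on the map's horizontal plane and keep the buildings that
    fall inside it, ready to be fused into the displayed AR view."""
    radius = SCENARIO_RADIUS_M[scenario]
    cx, cy = center
    return [name for name, (x, y) in buildings.items()
            if math.hypot(x - cx, y - cy) <= radius]

# Toy map: building name -> (x, y) offset in metres from the position.
city = {"tower": (100, 200), "mall": (450, -100), "stadium": (1500, 900)}
print(buildings_in_range((0, 0), city, "walking"))  # ['tower', 'mall']
print(buildings_in_range((0, 0), city, "driving"))  # all three buildings
```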
This embodiment provides a navigation method: obtain a scene image of the current location; in the AR map, determine the to-be-displayed position that matches the scene image; take the AR image of the predetermined range around the to-be-displayed position in the AR map as the to-be-displayed AR image; and display the to-be-displayed AR image. With this embodiment, the to-be-displayed AR image is matched from the AR map against the scene image of the current location and displayed. Image matching can greatly improve positioning accuracy, and AR imagery makes the environment more intuitive, greatly facilitating the user's travel.
Embodiment two
Fig. 2 is a schematic diagram of the navigation device provided by Embodiment 2 of the present invention. Referring to Fig. 2, the device includes:
an acquisition module 201, configured to obtain a scene image of the current location;
a determining module 202, configured to determine, in an AR map, a to-be-displayed position that matches the scene image, and to take the AR image of a predetermined range around the to-be-displayed position in the AR map as a to-be-displayed AR image;
a display module 203, configured to display the to-be-displayed AR image.
In this embodiment, obtaining the scene image of the current location means obtaining the scene image of the position where the user is; the image may include a still image or a video. The acquisition module 201 may obtain the scene image of the current location by: in a photo or video application, obtaining the scene image through the camera and then triggering the AR application; or, in the AR application, calling a photo or video application and obtaining the scene image through the camera. In other words, obtaining the scene image can be performed by the terminal's shooting function, i.e., collecting images and/or video of the surroundings using the terminal's camera. Any application in the terminal that collects images and thus calls the camera may be used in this embodiment to obtain the scene image of the current location. In particular, a shooting function may also be added to the AR map application, so that the scene image can be collected without relying on a third-party application. When collecting the scene image, the user may collect only the scene currently faced, by taking a photograph; panoramic shooting may also be used to collect imagery around the current location over up to 360°; or the scene of the current location may be recorded as a video, which can to some extent eliminate the interference that dynamic objects bring to recognition.
In this embodiment, the determining module 202 may also be configured to: process the scene image to generate corresponding three-dimensional model data; call the three-dimensional map information of each position in the AR map, superimpose and match the three-dimensional model data against the three-dimensional map information, and calculate the matching degree between each position's three-dimensional map information and the three-dimensional model data; and take a position whose matching degree exceeds a screening threshold as the to-be-displayed position. Images or videos shot by the terminal are essentially two-dimensional information; shooting, in essence, records a three-dimensional actual scene in a two-dimensional manner. Accordingly, to determine the to-be-displayed position that matches the scene image, the two-dimensional images and video can be processed into corresponding three-dimensional model data. Specifically, processing a two-dimensional image into a corresponding three-dimensional view may include: obtaining a three-dimensional point cloud of the corresponding scene from the captured scene image. The three-dimensional point cloud in this embodiment is an in-computer entity formed on the basis of relative positional relations among the data, and is used to further form the three-dimensional model. In a specific implementation, the orientation of the scene image is determined from key information in the captured scene image, such as the street information, storefront information, and the relations between road surface and buildings in the image and/or video, so that the two-dimensional scene image can be further processed into a three-dimensional model. In addition, the scene image may be collected several times at the same to-be-displayed position, and the results of the multiple collections analyzed and superimposed; this improves the collection precision and makes the processing result more accurate.
The three-dimensional map information of each position in the AR map is called, the three-dimensional model data is superimposed and matched against the three-dimensional map information, and the matching degree is calculated. The matching degree here refers to the degree to which the three-dimensional map information in the AR map is identical to the three-dimensional model data; in this way it can be determined where exactly in the map the collected scene lies. The higher the matching degree, the more likely the three-dimensional model data is consistent with the corresponding place. In this embodiment, a position whose matching degree exceeds the screening threshold is taken as the to-be-displayed position. The screening threshold is a preset matching-degree value, which can be determined according to the terminal's recognition capability and the update frequency of the three-dimensional map information in the AR map application; the higher the screening threshold, the higher the precision of the final recognition. In general, the threshold may be set so that a similarity of 80% or more between the three-dimensional model data and the three-dimensional map information is considered to meet the requirement.
When the three-dimensional model data is superimposed and matched against the three-dimensional map information, if there are multiple positions whose matching degree exceeds the threshold, the device may further: prompt the user to choose and receive the user's selection operation; and take the position selected by the user as the to-be-displayed position. If several positions exceed the threshold, the system has judged that the three-dimensional map contains multiple positions corresponding to the three-dimensional model data, possibly because the three-dimensional model data of those positions is too similar. In this case, these places can be presented to the user, who is reminded to choose and determines the specific place according to the approximate position he or she occupies. Alternatively, the positions whose matching degree exceeds the threshold can be compared laterally and the one with the highest matching degree selected as the to-be-displayed position; or the GPS function of the terminal can assist positioning. Whichever approach is taken, there should finally be only one to-be-displayed position, and this determined to-be-displayed position is exactly the position corresponding to the scene image of the current location.
In this embodiment, the determining module 202 may also be configured to: determine the user's application scenario according to the scene image; and determine the value of the predetermined range according to the application scenario. Taking the AR image of the predetermined range around the to-be-displayed position in the AR map as the to-be-displayed AR image includes: drawing a circle on the horizontal plane of the AR map, centered on the to-be-displayed position and with the predetermined range as the radius, and fusing the AR images of the buildings within the circle according to their positional relations in the AR map to form the to-be-displayed AR image. Determining the user's application scenario according to the scene image means determining the user's current state, such as walking, driving, or cycling, from the scene image the user shot. Walking is characterized by the information in the scene image showing that the user is close to buildings, and by steady images; driving is characterized by possible reflections in the scene image, by the main subject being the road, and by possible signs of motion in the image. Different application scenarios call for different AR images: when walking, the user may only need imagery within a small range, whereas when driving, because a vehicle moves much faster than a pedestrian, a larger range of AR imagery is needed; too small a range is of little help to the user. For particular scenes, corresponding AR images can additionally be customized; for example, when the user drives onto a viaduct, given the complicated structure and road conditions of a viaduct, an AR image covering the range of the whole viaduct can be presented to the user. Of course, different predetermined ranges can be set for the application scenarios above; for example, the actual extent of the AR image may be set to a 500 m radius when the user is walking, and to a 2 km radius when the user is driving. These specific figures are only reference values and can be determined in practice according to the specific traffic information, the architectural features of the street, the recognizability of the imagery, and so on. Alternatively, a single AR-image radius can be applied to all application scenarios, which is also feasible in this embodiment.
This embodiment provides a navigation device including an acquisition module, a determining module, and a display module: obtain a scene image of the current location; in the AR map, determine the to-be-displayed position that matches the scene image; take the AR image of the predetermined range around the to-be-displayed position in the AR map as the to-be-displayed AR image; and display the to-be-displayed AR image. With this embodiment, the to-be-displayed AR image is matched from the AR map against the scene image of the current location and displayed. Image matching can greatly improve positioning accuracy, and AR imagery makes the environment more intuitive, greatly facilitating the user's travel.
Embodiment three
To facilitate better implementation of the navigation method in Embodiment 1, this embodiment provides a terminal for implementing that method. Fig. 3 is a schematic diagram of the terminal provided by this embodiment; referring to Fig. 3, the terminal includes a processor 301, a memory 302, a camera 303, and a display screen 304.
The memory 302 may store software programs for the processing and control operations performed by the processor 301, or may temporarily store data that has been output or is about to be output (for example, a phone book, messages, still images, video, etc.). Moreover, the memory 302 may store data on the vibrations and audio signals of the various patterns output when the display screen 304 is touched.
The memory 302 may include at least one type of storage medium, including flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disc, etc. Moreover, the terminal may cooperate over a network connection with a network storage device that performs the storage function of the memory 302.
The processor 301 generally performs the overall operation of the terminal; for example, the processor 301 performs control and processing related to voice calls, data communication, video calls and the like.
The camera 303 can capture image information of the surroundings, including still images and video, which is then stored in the memory 302; the processor 301 can perform further processing based on that image information.
The display screen 304 presents the content processed by the terminal for the user to view; the user may further apply touch operations to the display screen 304 according to the presented content to execute the related functions of the terminal.
The memory 302 stores a plurality of instructions for realizing the navigation method of Embodiment 1, and the processor 301 executes the plurality of instructions to:

obtain the scene image of the current location;

in the augmented reality map, determine the position to be shown that matches the scene image, and take the augmented reality image of a predetermined range around the position to be shown in the augmented reality map as the augmented reality image to be shown;

display the augmented reality image to be shown.
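As an illustrative sketch only, the three instructions can be lined up as a simple pipeline; every name below (`navigate`, the toy map's `match` and `crop` callables) is hypothetical and stands in for the device's actual matching and cropping logic, which the patent does not specify:

```python
def navigate(scene_image, ar_map, preset_range):
    """Hypothetical pipeline for the three instructions executed by processor 301."""
    # 1. Determine the position to be shown by matching the scene image
    #    against the augmented reality map.
    position = ar_map["match"](scene_image)
    # 2. Take the AR image of the predetermined range around that position.
    ar_image = ar_map["crop"](position, preset_range)
    # 3. Hand the AR image to be shown back for display on the screen.
    return ar_image

# Toy stand-ins so the sketch runs end to end.
toy_map = {
    "match": lambda img: (3, 4),                          # pretend matching yields map position (3, 4)
    "crop": lambda pos, r: {"center": pos, "radius": r},  # pretend cropping returns the circular range
}
print(navigate("scene.jpg", toy_map, 50))
```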
In this embodiment, obtaining the scene image of the current location means obtaining a scene image of the position where the user is located; the image may be a still image or a video. The scene image may be obtained in either of two ways: in a photo or video application, the scene image is captured by the camera and the AR application is then triggered; or, within the AR application, the photo or video application is invoked to capture the scene image through the camera.
In this embodiment, determining the position to be shown that matches the scene image may include: processing the scene image to generate corresponding three-dimensional modeling data; calling the three-dimensional map information of each position in the AR map; superimposing and interactively matching the three-dimensional modeling data with the three-dimensional map information; calculating the matching degree between the three-dimensional map information of each position and the three-dimensional modeling data; and taking the position whose matching degree exceeds a screening threshold as the position to be shown.
In this embodiment, when the three-dimensional modeling data and the three-dimensional map information are superimposed and interactively matched, if several positions have a matching degree above the threshold, the method may further include: prompting the user to select and receiving the user's selection operation; and taking the position selected by the user as the position to be shown.
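The screening-plus-selection logic can be sketched as follows. The cosine-similarity matching degree is an assumption made for illustration (the patent leaves the metric between the three-dimensional modeling data and the three-dimensional map information unspecified), and all names and feature vectors are hypothetical:

```python
import math

def screen_positions(scene_model, map_models, threshold):
    """Keep map positions whose matching degree with the scene's 3-D model
    exceeds the screening threshold. Cosine similarity is an assumed metric."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)
    candidates = [(pos, cosine(scene_model, m)) for pos, m in map_models.items()]
    return [pos for pos, degree in candidates if degree > threshold]

def pick_position(candidates, user_choice=None):
    """If several positions pass the threshold, fall back to the user's choice."""
    if len(candidates) == 1:
        return candidates[0]
    return user_choice  # on the device, this would come from a selection prompt

# Hypothetical 3-D feature vectors for the scene and three map positions.
scene = [1.0, 0.9, 0.1]
maps = {"A": [1.0, 1.0, 0.0], "B": [0.0, 0.1, 1.0], "C": [0.9, 0.8, 0.2]}
hits = screen_positions(scene, maps, threshold=0.95)
```

Here positions A and C both pass the threshold, so the device would prompt the user rather than pick one silently.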
In this embodiment, before taking the AR image of the predetermined range around the position to be shown in the AR map as the AR image to be shown, the method may further include: determining the user's application scenario according to the scene image, and determining the value of the predetermined range according to that application scenario. Taking the AR image of the predetermined range around the position to be shown in the AR map as the AR image to be shown then includes: with the position to be shown as the center and the predetermined range as the radius, drawing a circle on the horizontal plane of the AR map, and fusing the AR images of the buildings inside the circle according to their positional relationship in the AR map to form the AR image to be shown.
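A minimal sketch of the circle test on the horizontal plane, assuming planar map coordinates and hypothetical building data; the actual fusion of AR building images is rendering work the patent does not detail, so ordering the selected buildings by map distance stands in for "fusing according to the AR map position relation":

```python
import math

def buildings_in_circle(center, radius, buildings):
    """Select buildings inside the circle of the preset radius around the
    position to be shown, ordered by distance on the horizontal plane."""
    cx, cy = center
    inside = [
        (name, math.hypot(x - cx, y - cy))
        for name, (x, y) in buildings.items()
        if math.hypot(x - cx, y - cy) <= radius
    ]
    inside.sort(key=lambda item: item[1])  # stand-in for the map-relation fusion order
    return [name for name, _ in inside]

# Hypothetical building coordinates; the radius would come from the
# application scenario detected in the scene image.
buildings = {"mall": (30.0, 40.0), "tower": (120.0, 5.0), "station": (10.0, 10.0)}
print(buildings_in_circle((0.0, 0.0), 100.0, buildings))
```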
The above is a further detailed description of the present invention in combination with specific embodiments, and the specific implementation of the invention shall not be considered limited to these descriptions. Those of ordinary skill in the technical field of the invention may make several simple deductions or substitutions without departing from the concept of the invention, and all of these shall be considered to fall within the protection scope of the invention.
Claims (10)
1. A navigation method, characterized by comprising:
obtaining a scene image of a current location;
in an augmented reality map, determining a position to be shown that matches the scene image, and taking the augmented reality image of a predetermined range around the position to be shown in the augmented reality map as an augmented reality image to be shown;
displaying the augmented reality image to be shown.
2. The navigation method according to claim 1, characterized in that, before taking the augmented reality image of the predetermined range around the position to be shown in the augmented reality map as the augmented reality image to be shown, the method further comprises:
determining an application scenario of the user according to the scene image;
determining a value of the predetermined range according to the application scenario;
and in that taking the augmented reality image of the predetermined range around the position to be shown in the augmented reality map as the augmented reality image to be shown comprises: with the position to be shown as the center and the predetermined range as the radius, drawing a circle on the horizontal plane of the augmented reality map, and fusing the augmented reality images of the buildings inside the circle according to their positional relationship in the augmented reality map to form the augmented reality image to be shown.
3. The navigation method according to claim 1, characterized in that determining the position to be shown that matches the scene image comprises:
processing the scene image to generate corresponding three-dimensional modeling data;
calling the three-dimensional map information of each position in the augmented reality map, superimposing and interactively matching the three-dimensional modeling data with the three-dimensional map information, and calculating the matching degree between the three-dimensional map information of each position and the three-dimensional modeling data;
taking a position whose matching degree exceeds a screening threshold as the position to be shown.
4. The navigation method according to claim 3, characterized in that, if several positions have a matching degree above the threshold, the method further comprises:
prompting the user to select, and receiving the user's selection operation;
taking the position selected by the user as the position to be shown.
5. The navigation method according to any one of claims 1-4, characterized in that the manner of obtaining the scene image of the current location comprises:
in a photo or video application, obtaining the scene image through a camera and triggering an augmented reality map application;
or,
in the augmented reality map application, invoking the photo or video application to obtain the scene image through the camera.
6. A navigation device, characterized by comprising:
an acquisition module, configured to obtain a scene image of a current location;
a determining module, configured to, in an augmented reality map, determine a position to be shown that matches the scene image, and take the augmented reality image of a predetermined range around the position to be shown in the augmented reality map as an augmented reality image to be shown;
a display module, configured to display the augmented reality image to be shown.
7. The navigation device according to claim 6, characterized in that the determining module is further configured to:
determine an application scenario of the user according to the scene image;
determine a value of the predetermined range according to the application scenario;
with the position to be shown as the center and the predetermined range as the radius, draw a circle on the horizontal plane of the augmented reality map, and fuse the augmented reality images of the buildings inside the circle according to their positional relationship in the augmented reality map to form the augmented reality image to be shown.
8. The navigation device according to claim 6, characterized in that the determining module is further configured to:
process the scene image to generate corresponding three-dimensional modeling data;
call the three-dimensional map information of each position in the augmented reality map, superimpose and interactively match the three-dimensional modeling data with the three-dimensional map information, and calculate the matching degree between the three-dimensional map information of each position and the three-dimensional modeling data;
take a position whose matching degree exceeds a screening threshold as the position to be shown.
9. The navigation device according to claim 8, characterized in that, if several positions have a matching degree above the threshold, the determining module is further configured to:
prompt the user to select, and receive the user's selection operation;
take the position selected by the user as the position to be shown.
10. The navigation device according to any one of claims 6-9, characterized in that the manner in which the acquisition module obtains the scene image of the current location comprises:
in a photo or video application, obtaining the scene image through a camera and triggering an augmented reality map application;
or,
in the augmented reality map application, invoking the photo or video application to obtain the scene image through the camera.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710188498.2A CN107084740B (en) | 2017-03-27 | 2017-03-27 | Navigation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107084740A true CN107084740A (en) | 2017-08-22 |
CN107084740B CN107084740B (en) | 2020-07-03 |
Family
ID=59614948
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710188498.2A Active CN107084740B (en) | 2017-03-27 | 2017-03-27 | Navigation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107084740B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102541412A (en) * | 2010-12-27 | 2012-07-04 | 上海博泰悦臻电子设备制造有限公司 | Method and system for automatically zooming in and out display scale of navigation map |
CN102980570A (en) * | 2011-09-06 | 2013-03-20 | 上海博路信息技术有限公司 | Live-scene augmented reality navigation system |
CN103335657A (en) * | 2013-05-30 | 2013-10-02 | 佛山电视台南海分台 | Method and system for strengthening navigation performance based on image capture and recognition technology |
CN103376978A (en) * | 2012-04-12 | 2013-10-30 | 宇龙计算机通信科技(深圳)有限公司 | Terminal and electronic map display scale regulating method |
CN103697882A (en) * | 2013-12-12 | 2014-04-02 | 深圳先进技术研究院 | Geographical three-dimensional space positioning method and geographical three-dimensional space positioning device based on image identification |
CN104457765A (en) * | 2013-09-25 | 2015-03-25 | 联想(北京)有限公司 | Positioning method, electronic equipment and server |
CN105674991A (en) * | 2016-03-29 | 2016-06-15 | 深圳市华讯方舟科技有限公司 | Robot positioning method and device |
US20160350982A1 (en) * | 2013-05-31 | 2016-12-01 | Apple Inc. | Adjusting Heights for Road Path Indicators |
CN106355153A (en) * | 2016-08-31 | 2017-01-25 | 上海新镜科技有限公司 | Virtual object display method, device and system based on augmented reality |
CN106403964A (en) * | 2016-08-30 | 2017-02-15 | 北汽福田汽车股份有限公司 | Positioning navigation system and vehicle |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110285801A (en) * | 2019-06-11 | 2019-09-27 | 唐文 | The localization method and device of intelligent safety helmet |
CN110879979A (en) * | 2019-11-13 | 2020-03-13 | 泉州师范学院 | Augmented reality system based on mobile terminal |
CN110879979B (en) * | 2019-11-13 | 2024-01-02 | 泉州师范学院 | Augmented reality system based on mobile terminal |
CN111238495A (en) * | 2020-01-06 | 2020-06-05 | 维沃移动通信有限公司 | Method for positioning vehicle and terminal equipment |
CN111323028A (en) * | 2020-02-20 | 2020-06-23 | 北京经智纬科技有限公司 | Indoor and outdoor positioning device and positioning method based on image recognition |
CN111323028B (en) * | 2020-02-20 | 2022-08-19 | 川谷汇(北京)数字科技有限公司 | Indoor and outdoor positioning device and positioning method based on image recognition |
CN111665943A (en) * | 2020-06-08 | 2020-09-15 | 浙江商汤科技开发有限公司 | Pose information display method and device |
CN111665943B (en) * | 2020-06-08 | 2023-09-19 | 浙江商汤科技开发有限公司 | Pose information display method and device |
CN112819956A (en) * | 2020-12-30 | 2021-05-18 | 南京科沃斯机器人技术有限公司 | Three-dimensional map construction method, system and server |
CN114661398A (en) * | 2022-03-22 | 2022-06-24 | 上海商汤智能科技有限公司 | Information display method and device, computer equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107084740A (en) | A kind of air navigation aid and device | |
KR102414587B1 (en) | Augmented reality data presentation method, apparatus, device and storage medium | |
CN106296815B (en) | Construction and display method of interactive three-dimensional digital city | |
JP5582548B2 (en) | Display method of virtual information in real environment image | |
CN114730546A (en) | Cross-reality system with location services and location-based shared content | |
CN107622524A (en) | Display methods and display device for mobile terminal | |
CN106355153A (en) | Virtual object display method, device and system based on augmented reality | |
US10733777B2 (en) | Annotation generation for an image network | |
CN105023266A (en) | Method and device for implementing augmented reality (AR) and terminal device | |
US8639023B2 (en) | Method and system for hierarchically matching images of buildings, and computer-readable recording medium | |
US10949069B2 (en) | Shake event detection system | |
CN107656961A (en) | A kind of method for information display and device | |
US8842134B2 (en) | Method, system, and computer-readable recording medium for providing information on an object using viewing frustums | |
CN114202622A (en) | Virtual building generation method, device, equipment and computer readable storage medium | |
CN108388636B (en) | Streetscape method for retrieving image and device based on adaptive segmentation minimum circumscribed rectangle | |
CN116858215B (en) | AR navigation map generation method and device | |
CN113822263A (en) | Image annotation method and device, computer equipment and storage medium | |
CN103632627B (en) | Method for information display, device and mobile navigation electronic equipment | |
CN112788443B (en) | Interaction method and system based on optical communication device | |
CN109816791B (en) | Method and apparatus for generating information | |
CN115731370A (en) | Large-scene element universe space superposition method and device | |
CN114332648B (en) | Position identification method and electronic equipment | |
WO2019127320A1 (en) | Information processing method and apparatus, cloud processing device, and computer program product | |
CN111882675A (en) | Model presentation method and device, electronic equipment and computer storage medium | |
CN112825198A (en) | Mobile label display method and device, terminal equipment and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||