CN109357679A - Indoor positioning method based on salient feature recognition - Google Patents
Indoor positioning method based on salient feature recognition
- Publication number
- CN109357679A CN109357679A CN201811364650.9A CN201811364650A CN109357679A CN 109357679 A CN109357679 A CN 109357679A CN 201811364650 A CN201811364650 A CN 201811364650A CN 109357679 A CN109357679 A CN 109357679A
- Authority
- CN
- China
- Prior art keywords
- image
- positioning
- indoor
- mobile
- panoramic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- Automation & Control Theory (AREA)
- Computer Networks & Wireless Communication (AREA)
- Studio Devices (AREA)
Abstract
The present invention provides an indoor positioning method based on salient feature recognition, relating to the fields of saliency target detection, deep learning, and indoor positioning. The invention completes indoor panoramic map modeling using a high-definition camera in cooperation with a laser radar, extracts image saliency features using computer vision techniques, and forms a model by training on, correcting, and processing a large number of real-scene images; finally, a smart device photographs the live scene, performs image matching, and extracts spatial structure data to achieve precise positioning. The method solves the problem of weak indoor GPS signals, eliminates the influence of dynamic object motion, ensures that real-scene images remain current, improves the accuracy of image recognition, and thereby improves the precision of indoor positioning.
Description
Technical Field
The invention relates to saliency target detection, deep learning, and indoor positioning technologies, and in particular to an indoor positioning method based on salient feature recognition.
Background
With the development of demanding applications such as precision marketing and indoor navigation services, indoor positioning has attracted increasing attention, and there are numerous application cases in indoor scenes such as airports, railway stations, shopping malls, factories, and schools. Indoor positioning refers to determining position within an indoor environment. Outdoors, a smartphone can easily achieve positioning with its GPS sensor, but once indoors, a smart device often cannot receive GPS signals due to environmental constraints, making accurate positioning nearly impossible.
At present, traditional indoor positioning in the industry adopts indoor wireless communication technologies such as WiFi, Bluetooth, and RFID: relative positions are calculated from the signal strengths collected from positioning base stations and multiple wireless transmitters, and the positioning of people or objects in the indoor space is thereby achieved. This traditional approach depends heavily on infrastructure construction and places high demands on the signal-strength stability of the WiFi or Bluetooth transmitters. Especially in areas with dense foot traffic or scenes with many dynamic objects, signal stability cannot be guaranteed, causing large deviations between the estimated indoor position and the actual position and failing to meet practical application requirements.
In recent years, with the development of deep learning and computer vision technology, the accuracy and speed of machine image recognition have greatly improved, making position determination by analyzing live-scene photographs feasible. Under these circumstances, how to effectively use computer vision technology in combination with multiple sensing technologies to achieve accurate indoor positioning has become a problem that urgently needs to be solved.
Disclosure of Invention
In order to solve the above technical problems, the invention provides an indoor positioning method based on salient feature recognition, which solves the problem of weak indoor GPS positioning signals and improves indoor positioning precision.
The technical scheme of the invention is as follows:
an indoor positioning method based on salient feature recognition, characterized in that indoor panoramic map modeling is first completed; image saliency features are then extracted using computer vision technology, and a model is formed by training on, correcting, and processing a large number of live-action images; finally, accurate positioning is achieved by performing image matching and extracting spatial structure data from live scenes photographed by a smart device.
An indoor panoramic image is acquired with a high-definition camera, and ranging is completed with the point cloud data of a laser radar. Salient feature recognition and image segmentation are performed based on the panoramic map; live-action images shot by actual mobile devices in different time periods are corrected and processed to form a panoramic positioning model and rule base; and the model and rule base are compressed and placed into a mobile smart device with a camera. The mobile smart device shoots several live scenes in different directions from a fixed point, performs image matching, and extracts image spatial structure information for cross-comparison positioning. In addition, the mobile smart device can narrow the search range by means of WiFi, Bluetooth, and the like, which is mutually verified against the live-action image positioning. Wherein,
the high-definition camera collects indoor panoramic image data; the laser radar takes pictures in cooperation with the high-definition camera, records point cloud data, and determines the distances between the shooting point and objects in the surrounding environment, thereby positioning the shooting point of the panoramic image; the positioning model and rule base are obtained through learning and analysis with image processing algorithms on computing resources aggregated at a cloud center, forming a mobile positioning App that is placed into the mobile smart device; the mobile smart device has a camera and sufficient computing capability to run the mobile positioning App; and the WiFi and Bluetooth signal generators provide indoor wireless communication over WiFi and Bluetooth.
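By way of illustration, the following is a minimal sketch of the shooting-point ranging described above: given a lidar point cloud expressed in the sensor frame of the panorama shooting point, the distance to each segmented object is the norm of its nearest point. The function name, array layout, and labels are hypothetical; the patent does not prescribe a data format.

```python
import numpy as np

def object_distances(point_cloud: np.ndarray, labels: np.ndarray) -> dict:
    """Estimate shooting-point-to-object distances from a lidar scan.

    point_cloud: (N, 3) array of x, y, z points in the sensor frame,
                 i.e. relative to the panorama shooting point.
    labels:      (N,) integer object id per point (e.g. from segmentation).
    Returns {object_id: nearest-point distance in metres}.
    """
    dists = np.linalg.norm(point_cloud, axis=1)  # range of every point
    return {int(obj): float(dists[labels == obj].min())
            for obj in np.unique(labels)}

# Toy usage: two objects around the shooting point.
pts = np.array([[1.0, 0.0, 0.0], [1.1, 0.1, 0.0], [0.0, 3.0, 0.5]])
lbl = np.array([0, 0, 1])
print(object_distances(pts, lbl))  # {0: 1.0, 1: ~3.04}
```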
The invention provides an indoor positioning method based on salient feature recognition, used for indoor positioning of a mobile smart device and comprising the following steps:
Step 101: the laser radar, in cooperation with the high-definition camera, shoots panoramic photos indoors; the laser radar collects point cloud data, and the distances between the shooting point and objects in the surrounding environment are recorded and labeled;
Step 102: the collected images, point cloud data, and distance data are uploaded to the cloud, preprocessed, and structured;
Step 103: superpixel segmentation is performed on the panoramic images shot multiple times to obtain an initial saliency map (a minimal sketch of this step follows the list);
Step 104: based on the panoramic data shot multiple times, changes in occlusion relationships and dynamic objects are fully considered, and salient static object targets are identified;
Step 105: the segmented image is fused with the feature regions of the static objects to determine the saliency map region;
Step 106: combining point cloud ranging, spatial structure information is extracted and the position is determined from the image resolution and shooting position;
Step 107: steps 103 to 106 are repeated to complete the model and rule base of the indoor panorama;
Step 108: the model and rule base are compressed and combined with the indoor panoramic map to form a mobile positioning App, which is placed in the cloud;
Step 109: the mobile smart device downloads the mobile positioning App from the cloud;
Step 110: the mobile smart device shoots several live scenes in different directions from a fixed point, preprocesses the obtained pictures, and extracts their shooting metadata, including aperture, shutter speed, focal length, pixel dimensions, shooting time, and the like;
Step 111 (optional): the mobile positioning App of the mobile smart device can connect to other positioning facilities such as WiFi and Bluetooth to obtain a coarse position range;
Step 112: the mobile positioning App of the mobile smart device extracts image features, performs image matching, and determines the position within the panoramic image;
Step 113: the photo is projected onto the panoramic image according to its shooting metadata; image spatial structure information is extracted for cross-comparison positioning, the position of the mobile smart device is determined, and it is displayed as a label through the App;
Step 114: the mobile smart device corrects the labeled position using the mobile positioning App and uploads the shot images to the cloud;
Step 115: the cloud receives the data uploaded by users and continuously optimizes each user's App recognition and positioning model, improving positioning accuracy.
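As a concrete illustration of steps 103 to 105, the sketch below derives an initial saliency map from superpixel segmentation by scoring each superpixel's global color contrast. This is one common formulation, offered under the assumption that scikit-image is available; the embodiment itself names the SLRC algorithm, which this does not reproduce.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.segmentation import slic

def initial_saliency(image: np.ndarray, n_segments: int = 400) -> np.ndarray:
    """Initial saliency map: superpixels scored by global color contrast.

    image: (H, W, 3) RGB array with values in [0, 1].
    Returns an (H, W) saliency map scaled to [0, 1].
    """
    segments = slic(image, n_segments=n_segments, start_label=0)
    lab = rgb2lab(image)
    ids = np.unique(segments)
    # Mean Lab color of each superpixel.
    means = np.array([lab[segments == i].mean(axis=0) for i in ids])
    # Saliency of a superpixel = summed Lab distance to all other superpixels.
    contrast = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=2).sum(axis=1)
    contrast = (contrast - contrast.min()) / (contrast.max() - contrast.min() + 1e-9)
    return contrast[segments]

# Toy usage: a red patch on a dark background should score as salient.
img = np.zeros((120, 160, 3))
img[40:80, 60:100] = [0.9, 0.2, 0.2]
print(initial_saliency(img, n_segments=100).shape)  # (120, 160)
```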
The invention has the following advantages:
The invention achieves positioning using computer vision technology, which largely solves the problem of weak indoor GPS positioning signals and eliminates the influence of dynamic object motion, enabling high-precision indoor positioning suitable for a variety of application scenes. Narrowing the search range by means of WiFi, Bluetooth, and the like speeds up matching and provides mutual verification with the live-action image positioning. In addition, the mobile App continuously collects live-scene data and corrects the positioning, which keeps the live-scene imagery current, improves the accuracy of image recognition, and thereby improves the precision of indoor positioning.
Drawings
FIG. 1 is a schematic diagram of an indoor positioning system;
fig. 2 is a flow chart of indoor positioning of a mobile smart device.
Detailed Description
The invention will be explained in more detail below with reference to the accompanying drawings:
As shown in fig. 1, a high-definition camera collects indoor panoramic images, and ranging is completed with the point cloud data of a laser radar. Salient feature recognition and image segmentation are performed based on the panoramic map; live-action images shot by actual mobile devices in different time periods are corrected and processed to form a panoramic positioning model and rule base; and the model and rule base are compressed and placed into a mobile smart device with a camera. The mobile smart device shoots several live scenes in different directions from a fixed point, performs image matching, and extracts image spatial structure information for cross-comparison positioning. In addition, the mobile smart device can narrow the search range by means of WiFi, Bluetooth, and the like, which is mutually verified against the live-action image positioning.
Wherein,
the high-definition camera collects indoor panoramic image data; the laser radar takes pictures in cooperation with the high-definition camera, records point cloud data, and determines the distances between the shooting point and objects in the surrounding environment, thereby positioning the shooting point of the panoramic image;
the positioning model and rule base are obtained through learning and analysis with image processing algorithms on computing resources aggregated at a cloud center, forming a mobile positioning App that is placed into the mobile smart device;
the mobile smart device has a camera and sufficient computing capability to run the mobile positioning App; and the WiFi and Bluetooth signal generators provide indoor wireless communication over WiFi and Bluetooth.
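Regarding the WiFi/Bluetooth narrowing of the search range, a standard way to turn a received signal strength into a coarse distance bound is the log-distance path-loss model. The sketch below is an assumption for illustration only; the patent does not specify a propagation model, and the calibration constants are hypothetical.

```python
def rssi_to_distance(rssi_dbm: float, tx_power_dbm: float = -59.0,
                     path_loss_exp: float = 2.0) -> float:
    """Coarse distance (metres) from a beacon via the log-distance model.

    tx_power_dbm:  calibrated RSSI at 1 m (hypothetical value here).
    path_loss_exp: environment-dependent exponent (~2 in free space,
                   typically 2.7-4 indoors).
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# A -75 dBm reading suggests the device is within roughly this radius of
# the beacon, which bounds the panorama regions to match against.
print(round(rssi_to_distance(-75.0), 1))  # ~6.3 m
```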
For clarity of description, the image feature recognition algorithm in the following examples adopts R-CNN, the saliency target detection algorithm adopts the SLRC algorithm, and feature extraction adopts HOG + SVM. Those skilled in the art will appreciate that embodiments of the present invention can also be applied with other algorithms.
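Since the embodiment names HOG + SVM for feature extraction, the following is a minimal sketch of that pairing using scikit-image and scikit-learn. The window size, training data, and labels are placeholders, not values taken from the patent.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(gray: np.ndarray) -> np.ndarray:
    """HOG descriptor for a grayscale image window."""
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

# Placeholder training set: 64x64 windows labelled salient-landmark vs. not.
rng = np.random.default_rng(0)
windows = rng.random((20, 64, 64))
labels = np.array([1] * 10 + [0] * 10)

X = np.stack([hog_features(w) for w in windows])
clf = LinearSVC().fit(X, labels)
print(clf.predict(X[:2]))  # classify candidate windows
```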
As illustrated in fig. 2, indoor positioning of a mobile smart device comprises the following steps:
Step 101: the laser radar, in cooperation with the high-definition camera, shoots panoramic photos indoors; the laser radar collects point cloud data, and the distances between the shooting point and objects in the surrounding environment are recorded and labeled;
Step 102: the collected images, point cloud data, and distance data are uploaded to the cloud, preprocessed, and structured;
Step 103: superpixel segmentation is performed on the panoramic images shot multiple times to obtain an initial saliency map;
Step 104: based on the panoramic data shot multiple times, changes in occlusion relationships and dynamic objects are fully considered, and salient static object targets are identified;
Step 105: the segmented image is fused with the feature regions of the static objects to determine the saliency map region;
Step 106: combining point cloud ranging, spatial structure information is extracted and the position is determined from the image resolution and shooting position;
Step 107: steps 103 to 106 are repeated to complete the model and rule base of the indoor panorama;
Step 108: the model and rule base are compressed and combined with the indoor panoramic map to form a mobile positioning App, which is placed in the cloud;
Step 109: the mobile smart device downloads the mobile positioning App from the cloud;
Step 110: the mobile smart device shoots several live scenes in different directions from a fixed point, preprocesses the obtained pictures, and extracts their shooting metadata, including aperture, shutter speed, focal length, pixel dimensions, shooting time, and the like;
Step 111: the mobile positioning App of the mobile smart device can connect to other positioning facilities such as WiFi and Bluetooth to obtain a coarse position range;
Step 112: the mobile positioning App of the mobile smart device extracts image features, performs image matching, and determines the position within the panoramic image (a minimal matching sketch follows this list);
Step 113: the photo is projected onto the panoramic image according to its shooting metadata; image spatial structure information is extracted for cross-comparison positioning, the position of the mobile smart device is determined, and it is displayed as a label through the App;
Step 114: the mobile smart device corrects the labeled position using the mobile positioning App and uploads the shot images to the cloud;
Step 115: the cloud receives the data uploaded by users and continuously optimizes each user's App recognition and positioning model, improving positioning accuracy.
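To make steps 112 and 113 concrete (as referenced in the list above), here is a minimal sketch that matches a live photo against the panorama with ORB features and a homography, then reads off where the photo projects in panorama pixel coordinates. ORB and OpenCV are stand-ins chosen for a self-contained example; the embodiment names R-CNN and HOG + SVM instead.

```python
import cv2
import numpy as np

def locate_on_panorama(photo_gray, pano_gray):
    """Match a live photo against the panorama; return the centre of its
    projected footprint in panorama pixel coordinates, or None."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(photo_gray, None)
    kp2, des2 = orb.detectAndCompute(pano_gray, None)
    if des1 is None or des2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]
    if len(matches) < 8:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    h, w = photo_gray.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    footprint = cv2.perspectiveTransform(corners, H)
    return footprint.reshape(-1, 2).mean(axis=0)  # (x, y) on the panorama
```

The returned panorama coordinate, combined with the point-cloud ranging and the photo metadata, would then feed the cross-comparison positioning of step 113.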
The above-described embodiment is only one specific embodiment of the present invention; ordinary changes and substitutions made by those skilled in the art within the technical scope of the present invention are included in the protection scope of the present invention.
Claims (9)
1. An indoor positioning method based on salient feature recognition, characterized in that
indoor panoramic map modeling is first completed; image saliency features are then extracted using computer vision technology, and a model is formed by training on, correcting, and processing live-action images; finally, accurate positioning is achieved by performing image matching and extracting spatial structure data from live scenes photographed by a smart device.
2. The method of claim 1, wherein
an indoor panoramic image is collected, and ranging is completed with the point cloud data of a laser radar; salient feature recognition and image segmentation are performed based on the panoramic map; live-action images shot by actual mobile devices in different time periods are corrected and processed to form a panoramic positioning model and rule base; the model and rule base are compressed and placed into a mobile smart device with a camera; and the mobile smart device shoots several live scenes in different directions from a fixed point, performs image matching, and extracts image spatial structure information for cross-comparison positioning.
3. The method of claim 2, wherein
the mobile smart device can additionally narrow the search range by means of WiFi and Bluetooth and mutually verify the result with the live-action image positioning.
4. The method of claim 2, wherein
a high-definition camera collects the indoor panoramic image data; the laser radar takes pictures in cooperation with the high-definition camera, records point cloud data, and determines the distances between the shooting point and objects in the surrounding environment, thereby positioning the shooting point of the panoramic image.
5. The method of claim 2, wherein
the positioning model and rule base are obtained through learning and analysis with image processing algorithms on computing resources aggregated at a cloud center, forming a mobile positioning App that is placed into the mobile smart device.
6. The method of claim 2, wherein
the mobile smart device has a camera and computing capability and can execute the mobile positioning App.
7. The method of claim 3, wherein
WiFi and Bluetooth signal generators provide the indoor wireless communication over WiFi and Bluetooth.
8. The method of claim 2, wherein
the specific operation steps are as follows:
Step 101: the laser radar, in cooperation with the high-definition camera, shoots panoramic photos indoors; the laser radar collects point cloud data, and the distances between the shooting point and objects in the surrounding environment are recorded and labeled;
Step 102: the collected images, point cloud data, and distance data are uploaded to the cloud, preprocessed, and structured;
Step 103: superpixel segmentation is performed on the panoramic images shot multiple times to obtain an initial saliency map;
Step 104: based on the panoramic data shot multiple times, changes in occlusion relationships and dynamic objects are fully considered, and salient static object targets are identified;
Step 105: the segmented image is fused with the feature regions of the static objects to determine the saliency map region;
Step 106: combining point cloud ranging, spatial structure information is extracted and the position is determined from the image resolution and shooting position;
Step 107: steps 103 to 106 are repeated to complete the model and rule base of the indoor panorama;
Step 108: the model and rule base are compressed and combined with the indoor panoramic map to form a mobile positioning App, which is placed in the cloud;
Step 109: the mobile smart device downloads the mobile positioning App from the cloud;
Step 110: the mobile smart device shoots several live scenes in different directions from a fixed point, preprocesses the obtained pictures, and extracts their shooting metadata, including aperture, shutter speed, focal length, pixel dimensions, shooting time, and the like;
Step 111: the mobile positioning App of the mobile smart device extracts image features, performs image matching, and determines the position within the panoramic image;
Step 112: the photo is projected onto the panoramic image according to its shooting metadata; image spatial structure information is extracted for cross-comparison positioning, the position of the mobile smart device is determined, and it is displayed as a label through the App;
Step 113: the mobile smart device corrects the labeled position using the mobile positioning App and uploads the shot images to the cloud;
Step 114: the cloud receives the data uploaded by users and continuously optimizes each user's App recognition and positioning model, improving positioning precision.
9. The method of claim 8, wherein
the mobile positioning App of the mobile smart device can connect to WiFi and Bluetooth to obtain a position range.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811364650.9A CN109357679B (en) | 2018-11-16 | 2018-11-16 | Indoor positioning method based on significance characteristic recognition |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811364650.9A CN109357679B (en) | 2018-11-16 | 2018-11-16 | Indoor positioning method based on significance characteristic recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109357679A true CN109357679A (en) | 2019-02-19 |
CN109357679B CN109357679B (en) | 2022-04-19 |
Family
ID=65345498
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811364650.9A Active CN109357679B (en) | 2018-11-16 | 2018-11-16 | Indoor positioning method based on significance characteristic recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109357679B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109916408A (en) * | 2019-02-28 | 2019-06-21 | 深圳市鑫益嘉科技股份有限公司 | Robot indoor positioning and air navigation aid, device, equipment and storage medium |
CN110070127A (en) * | 2019-04-19 | 2019-07-30 | 南京邮电大学 | The optimization method finely identified towards family product |
CN110298320A (en) * | 2019-07-01 | 2019-10-01 | 北京百度网讯科技有限公司 | A kind of vision positioning method, device and storage medium |
CN110427936A (en) * | 2019-07-04 | 2019-11-08 | 深圳市新潮酒窖文化传播有限公司 | A kind of the cellar management method and system in wine cellar |
CN112242002A (en) * | 2020-10-09 | 2021-01-19 | 同济大学 | Object identification and panoramic roaming method based on deep learning |
CN113137963A (en) * | 2021-04-06 | 2021-07-20 | 上海电科智能系统股份有限公司 | Passive indoor high-precision comprehensive positioning and navigation method for people and objects |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130222369A1 (en) * | 2012-02-23 | 2013-08-29 | Charles D. Huston | System and Method for Creating an Environment and for Sharing a Location Based Experience in an Environment |
CN103646420A (en) * | 2013-12-12 | 2014-03-19 | 浪潮电子信息产业股份有限公司 | Intelligent 3D scene reduction method based on self learning algorithm |
- WO2014073841A1 (en) * | 2012-11-07 | 2014-05-15 | Korea Institute of Science and Technology | Method for detecting image-based indoor position, and mobile terminal using same
CN105716609A (en) * | 2016-01-15 | 2016-06-29 | 浙江梧斯源通信科技股份有限公司 | Indoor robot vision positioning method |
CN106447585A (en) * | 2016-09-21 | 2017-02-22 | 武汉大学 | Urban area and indoor high-precision visual positioning system and method |
CN107131883A (en) * | 2017-04-26 | 2017-09-05 | 中山大学 | The full-automatic mobile terminal indoor locating system of view-based access control model |
CN107167144A (en) * | 2017-07-07 | 2017-09-15 | 武汉科技大学 | A kind of mobile robot indoor environment recognition positioning method of view-based access control model |
US20180061126A1 (en) * | 2016-08-26 | 2018-03-01 | Osense Technology Co., Ltd. | Method and system for indoor positioning and device for creating indoor maps thereof |
CN107833220A (en) * | 2017-11-28 | 2018-03-23 | 河海大学常州校区 | Fabric defect detection method based on depth convolutional neural networks and vision significance |
WO2018093438A1 (en) * | 2016-08-26 | 2018-05-24 | William Marsh Rice University | Camera-based positioning system using learning |
- 2018-11-16: application CN201811364650.9A filed in China; granted as patent CN109357679B (status: Active)
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130222369A1 (en) * | 2012-02-23 | 2013-08-29 | Charles D. Huston | System and Method for Creating an Environment and for Sharing a Location Based Experience in an Environment |
- WO2014073841A1 (en) * | 2012-11-07 | 2014-05-15 | Korea Institute of Science and Technology | Method for detecting image-based indoor position, and mobile terminal using same
CN103646420A (en) * | 2013-12-12 | 2014-03-19 | 浪潮电子信息产业股份有限公司 | Intelligent 3D scene reduction method based on self learning algorithm |
CN105716609A (en) * | 2016-01-15 | 2016-06-29 | 浙江梧斯源通信科技股份有限公司 | Indoor robot vision positioning method |
US20180061126A1 (en) * | 2016-08-26 | 2018-03-01 | Osense Technology Co., Ltd. | Method and system for indoor positioning and device for creating indoor maps thereof |
WO2018093438A1 (en) * | 2016-08-26 | 2018-05-24 | William Marsh Rice University | Camera-based positioning system using learning |
CN106447585A (en) * | 2016-09-21 | 2017-02-22 | 武汉大学 | Urban area and indoor high-precision visual positioning system and method |
CN107131883A (en) * | 2017-04-26 | 2017-09-05 | 中山大学 | The full-automatic mobile terminal indoor locating system of view-based access control model |
CN107167144A (en) * | 2017-07-07 | 2017-09-15 | 武汉科技大学 | A kind of mobile robot indoor environment recognition positioning method of view-based access control model |
CN107833220A (en) * | 2017-11-28 | 2018-03-23 | 河海大学常州校区 | Fabric defect detection method based on depth convolutional neural networks and vision significance |
Non-Patent Citations (2)
Title |
---|
HISATO KAWAJI et al.: "Image-based indoor positioning system: fast image matching using omnidirectional panoramic images", Proceedings of the 1st ACM International Workshop on Multimodal Pervasive Video Analysis *
CAO Tianyang et al.: "Robot visual navigation positioning and global map construction system combined with image content matching", Optics and Precision Engineering *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109916408A (en) * | 2019-02-28 | 2019-06-21 | 深圳市鑫益嘉科技股份有限公司 | Robot indoor positioning and air navigation aid, device, equipment and storage medium |
CN110070127A (en) * | 2019-04-19 | 2019-07-30 | 南京邮电大学 | The optimization method finely identified towards family product |
CN110298320A (en) * | 2019-07-01 | 2019-10-01 | 北京百度网讯科技有限公司 | A kind of vision positioning method, device and storage medium |
CN110427936A (en) * | 2019-07-04 | 2019-11-08 | 深圳市新潮酒窖文化传播有限公司 | A kind of the cellar management method and system in wine cellar |
CN110427936B (en) * | 2019-07-04 | 2022-09-30 | 深圳市新潮酒窖文化传播有限公司 | Wine storage management method and system for wine cellar |
CN112242002A (en) * | 2020-10-09 | 2021-01-19 | 同济大学 | Object identification and panoramic roaming method based on deep learning |
CN112242002B (en) * | 2020-10-09 | 2022-07-08 | 同济大学 | Object identification and panoramic roaming method based on deep learning |
CN113137963A (en) * | 2021-04-06 | 2021-07-20 | 上海电科智能系统股份有限公司 | Passive indoor high-precision comprehensive positioning and navigation method for people and objects |
Also Published As
Publication number | Publication date |
---|---|
CN109357679B (en) | 2022-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109357679B (en) | Indoor positioning method based on significance characteristic recognition | |
CN111415388B (en) | Visual positioning method and terminal | |
CN111199564B (en) | Indoor positioning method and device of intelligent mobile terminal and electronic equipment | |
JP7236565B2 (en) | POSITION AND ATTITUDE DETERMINATION METHOD, APPARATUS, ELECTRONIC DEVICE, STORAGE MEDIUM AND COMPUTER PROGRAM | |
CN102749072B (en) | Indoor positioning method, indoor positioning apparatus and indoor positioning system | |
CN109671119A (en) | A kind of indoor orientation method and device based on SLAM | |
CN112166459A (en) | Three-dimensional environment modeling based on multi-camera convolver system | |
US20150161441A1 (en) | Image location through large object detection | |
CN110858414A (en) | Image processing method and device, readable storage medium and augmented reality system | |
CN103500452A (en) | Scenic spot scenery moving augmented reality method based on space relationship and image analysis | |
CN111323024A (en) | Positioning method and device, equipment and storage medium | |
CN104573617A (en) | Video shooting control method | |
Castillo-Carrión et al. | SIFT optimization and automation for matching images from multiple temporal sources | |
CN115808170B (en) | Indoor real-time positioning method integrating Bluetooth and video analysis | |
CN109903308B (en) | Method and device for acquiring information | |
CN112313706A (en) | Method and system for calculating spatial coordinate points of a region of interest, and non-transitory computer-readable recording medium | |
Wang et al. | iNavigation: an image based indoor navigation system | |
CN113409358A (en) | Image tracking method, image tracking device, storage medium and electronic equipment | |
An et al. | Image-based positioning system using LED Beacon based on IoT central management | |
CN111383271B (en) | Picture-based direction marking method and device | |
CN109612455A (en) | A kind of indoor orientation method and system | |
Villarrubia et al. | Hybrid indoor location system for museum tourist routes in augmented reality | |
CN114026595A (en) | Image processing method, device and storage medium | |
Xiong et al. | SmartGuide: Towards single-image building localization with smartphone | |
CN114323013A (en) | Method for determining position information of a device in a scene |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
TA01 | Transfer of patent application right | |
Effective date of registration: 2022-03-24. Address after: Building S02, No. 1036 Langchao Road, High-tech Zone, Jinan, Shandong 250100. Applicant after: Shandong Inspur Scientific Research Institute Co., Ltd. Address before: First Floor, R&D Building, No. 2877 Kehang Road, Sun Village Town, Jinan High-tech Zone, Shandong 250100. Applicant before: JINAN INSPUR HIGH-TECH TECHNOLOGY DEVELOPMENT Co., Ltd.
GR01 | Patent grant | |