CN109357679A - Indoor positioning method based on salient feature recognition - Google Patents

Indoor positioning method based on salient feature recognition

Info

Publication number
CN109357679A
Authority
CN
China
Prior art keywords
image
indoor
intelligent movable
app
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811364650.9A
Other languages
Chinese (zh)
Other versions
CN109357679B (en)
Inventor
孙善宝
徐驰
于治楼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Inspur Scientific Research Institute Co Ltd
Original Assignee
Jinan Inspur Hi Tech Investment and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan Inspur Hi Tech Investment and Development Co Ltd filed Critical Jinan Inspur Hi Tech Investment and Development Co Ltd
Priority to CN201811364650.9A priority Critical patent/CN109357679B/en
Publication of CN109357679A publication Critical patent/CN109357679A/en
Application granted granted Critical
Publication of CN109357679B publication Critical patent/CN109357679B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/08 Systems determining position data of a target for measuring distance only

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Studio Devices (AREA)

Abstract

The present invention provides an indoor positioning method based on salient feature recognition, relating to the fields of salient object detection, deep learning and indoor positioning. The invention completes indoor panoramic map modeling using a high-definition camera in cooperation with a LiDAR, extracts image saliency features using computer vision techniques, and forms a model by training, correcting and processing a large number of real-scene images; finally, image matching is performed on real-scene photos taken by a smart device to extract spatial structure data and achieve precise positioning. The method solves the problem of weak indoor GPS signals, eliminates the influence of dynamic object motion, ensures that the real-scene images stay up to date, improves the accuracy of image recognition, and thereby improves the precision of indoor positioning.

Description

Indoor positioning method based on salient feature recognition
Technical field
The present invention relates to the fields of salient object detection, deep learning and indoor positioning, and in particular to an indoor positioning method based on salient feature recognition.
Background technique
With the development of applications such as precision marketing and indoor navigation services, indoor positioning has attracted increasing attention and has been widely applied in indoor scenes such as airports, railway stations, shopping malls, factories and schools. Indoor positioning refers to determining position within an indoor environment. Outdoors, a smartphone can easily obtain its position using a GPS sensor, but once indoors a smart device often cannot receive GPS signals, making precise positioning difficult.
At present, the conventional approach in industry uses wireless communication technologies such as indoor WiFi, Bluetooth and RFID for base-station positioning: a relative position is calculated from the signal strengths received from multiple wireless transmitters, and the position of a person or object in the indoor space is finally obtained. This traditional approach relies heavily on infrastructure construction and places high requirements on the stability of the WiFi or Bluetooth transmitter signal strength. Especially in scenes with dense pedestrian flow or many dynamic objects, signal stability cannot be guaranteed, so the estimated indoor position deviates considerably from the actual position and cannot meet practical application requirements.
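For context only, the following is a minimal sketch of the signal-strength positioning described above, assuming a log-distance path-loss model; the transmitter coordinates, reference power and path-loss exponent are illustrative values, not part of this disclosure:

    import numpy as np
    from scipy.optimize import least_squares

    # Known transmitter positions (meters) and RSSI readings (dBm); illustrative values only.
    tx_pos = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0]])
    rssi = np.array([-52.0, -61.0, -58.0])

    P0, n = -40.0, 2.2                      # assumed reference power at 1 m and path-loss exponent
    dist = 10 ** ((P0 - rssi) / (10 * n))   # log-distance model: RSSI = P0 - 10*n*log10(d)

    def residuals(p):
        # Difference between ranges implied by RSSI and ranges to the candidate point p.
        return np.linalg.norm(tx_pos - p, axis=1) - dist

    estimate = least_squares(residuals, x0=np.array([5.0, 4.0])).x
    print("estimated position:", estimate)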
In recent years, with the development of deep learning and computer vision, the accuracy and speed of machine image recognition have improved substantially, making it feasible to determine position by analyzing photographs taken on site. At the same time, mobile devices keep becoming more intelligent and their processing power can meet application requirements. Under these circumstances, how to make effective use of computer vision techniques, combined with multiple sensing technologies, to achieve precise indoor positioning has become an urgent problem to be solved.
Summary of the invention
To solve the above technical problems, the present invention proposes an indoor positioning method based on salient feature recognition, which solves the problem of weak indoor GPS signals and improves the precision of indoor positioning.
The technical solution of the present invention is as follows:
An indoor positioning method based on salient feature recognition first completes indoor panoramic map modeling, uses computer vision techniques to extract image saliency features, forms a model by training, correcting and processing a large number of real-scene images, and finally performs image matching on real-scene photos taken by a smart device to extract spatial structure data and achieve precise positioning.
A high-definition camera collects indoor panoramic images, and the point cloud data of a LiDAR is used for ranging. Salient feature recognition and image segmentation are performed on the panoramic map, which is then corrected and processed using on-site images taken by mobile photographing devices at different times to form a panoramic positioning model and rule base. The model and rule base are compressed and placed on mobile intelligent devices with photographing capability. A mobile intelligent device takes photos of the real scene in several different directions at a fixed point, performs image matching, extracts image spatial structure information and carries out cross-comparison positioning. In addition, the mobile intelligent device can also narrow the search range via WiFi, Bluetooth and similar means, which is cross-verified with the real-scene image positioning. Wherein,
The high-definition camera is used for collecting indoor panoramic image data; the LiDAR takes pictures in cooperation with the high-definition camera, records point cloud data, and determines the distance between the shooting point and surrounding objects, thereby locating the panoramic image shooting point. The positioning model and rule base are obtained by learning and analysis with image processing algorithms using computing resources aggregated at the cloud center, and are packaged into a mobile positioning App that is installed on the mobile intelligent device. The mobile intelligent device has a camera and sufficient computing capability to run the mobile positioning App. The WiFi and Bluetooth signal generators provide indoor WiFi and Bluetooth wireless connectivity.
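For illustration only (not part of the claimed method), a minimal sketch of deriving shooting-point-to-object distances from a LiDAR point cloud; the point cloud and the sector width are placeholder values:

    import numpy as np

    def nearest_distance_per_bearing(points, origin, sector_deg=10.0):
        # Nearest object distance in each horizontal bearing sector around the
        # shooting point, from an N x 3 LiDAR point cloud (meters).
        rel = points[:, :2] - origin[:2]                    # horizontal offsets from the shooting point
        dist = np.linalg.norm(rel, axis=1)
        bearing = np.degrees(np.arctan2(rel[:, 1], rel[:, 0])) % 360.0
        sector = (bearing // sector_deg).astype(int)
        return {s * sector_deg: dist[sector == s].min() for s in np.unique(sector)}

    # Placeholder cloud: random points around a shooting point at the origin.
    cloud = np.random.uniform(-5.0, 5.0, size=(2000, 3))
    print(nearest_distance_per_bearing(cloud, origin=np.zeros(3)))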
The present invention provides an indoor positioning method based on salient feature recognition for the indoor positioning of a mobile intelligent device, comprising:
Step 101: the LiDAR, in cooperation with the high-definition camera, takes panoramic photographs indoors; the LiDAR collects point cloud data and records the distance between the shooting point and surrounding objects;
Step 102: the collected images, point cloud data and ranging data are uploaded to the cloud, pre-processed and structured;
Step 103: superpixel segmentation is performed on the repeatedly shot panoramic images to obtain an initial saliency map;
Step 104: according to the repeatedly shot panoramic data, occlusion relationships and changes of dynamic objects are fully considered, and salient static object targets are identified;
Step 105: the segmented images and the static object feature regions are fused to determine the salient map regions;
Step 106: spatial structure information is extracted and the position is determined according to the image resolution and the shooting location, combined with point cloud ranging;
Step 107: steps 103 to 106 are repeated to complete the model and rule base of the indoor panorama;
Step 108: the model and rule base are compressed and, combined with the indoor panoramic map, packaged into the mobile positioning App, which is placed in the cloud;
Step 109: the mobile intelligent device downloads the mobile positioning App from the cloud;
Step 110: the mobile intelligent device takes photos of the real scene in several different directions at a fixed point, pre-processes the acquired pictures, and extracts their shooting metadata, including aperture, shutter, focal length, pixel count, shooting time, etc.;
Step 111 (optional): the mobile positioning App of the mobile intelligent device can connect to WiFi, Bluetooth and other positioning facilities to obtain a coarse position range;
Step 112: the mobile positioning App of the mobile intelligent device extracts image features and performs image matching to determine the position within the panoramic image;
Step 113: according to the shooting metadata, the photo is projected onto the panoramic image; image spatial structure information is extracted for cross-comparison positioning, the position of the mobile intelligent device is determined, and the App marks and displays it;
Step 114: the mobile intelligent device uses the mobile positioning App to correct the marked position and uploads it to the cloud together with the photos;
Step 115: the cloud receives the data uploaded by users and continuously optimizes the App's recognition and positioning model, improving positioning accuracy.
The beneficial effects of the invention are as follows:
The present invention uses computer vision to achieve positioning, which largely solves the problem of weak indoor GPS signals and eliminates the influence of dynamic object motion; it achieves higher-precision indoor positioning and is suitable for many application scenarios. Narrowing the search range via WiFi, Bluetooth and similar means improves matching speed and is cross-verified with the real-scene image positioning. In addition, the mobile App is effectively used to continuously collect real-scene data and correct positioning, which ensures that the real-scene images stay up to date, improves the accuracy of image recognition, and thereby improves the precision of indoor positioning.
Brief description of the drawings
Fig. 1 is a schematic diagram of the composition of the indoor positioning system;
Fig. 2 is a flow chart of indoor positioning by the mobile intelligent device.
Specific embodiments
The contents of the present invention are described in more detail below with reference to the accompanying drawings:
As shown in Fig. 1, a high-definition camera collects indoor panoramic images, and the point cloud data of a LiDAR is used for ranging. Salient feature recognition and image segmentation are performed on the panoramic map, which is then corrected and processed using on-site images taken by mobile photographing devices at different times to form a panoramic positioning model and rule base. The model and rule base are compressed and placed on mobile intelligent devices with photographing capability. A mobile intelligent device takes photos of the real scene in several different directions at a fixed point, performs image matching, extracts image spatial structure information and carries out cross-comparison positioning. In addition, the mobile intelligent device can also narrow the search range via WiFi, Bluetooth and similar means, which is cross-verified with the real-scene image positioning.
Wherein,
The high-definition camera is used for collecting indoor panoramic image data; the LiDAR takes pictures in cooperation with the high-definition camera, records point cloud data, and determines the distance between the shooting point and surrounding objects, thereby locating the panoramic image shooting point;
The positioning model and rule base are obtained by learning and analysis with image processing algorithms using computing resources aggregated at the cloud center, and are packaged into a mobile positioning App that is installed on the mobile intelligent device;
The mobile intelligent device has a camera and a certain computing capability and can run the mobile positioning App; the WiFi and Bluetooth signal generators provide indoor WiFi and Bluetooth wireless connectivity.
For clarity of description, the image feature recognition algorithm in the following example uses R-CNN, the salient object detection algorithm uses SLRC, and feature extraction uses HOG + SVM. Those skilled in the art will appreciate that, in addition to these algorithms, embodiments of the present invention can also be implemented with other algorithms.
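By way of illustration only, a minimal HOG + SVM sketch; the training patches, labels and parameters are placeholders, not data from this disclosure:

    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    def hog_descriptor(gray_patch):
        # HOG descriptor of a grayscale patch with values in [0, 1].
        return hog(gray_patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), block_norm="L2-Hys")

    # Placeholder training patches: 64 x 64 grayscale, two classes (e.g. landmark vs. background).
    rng = np.random.default_rng(0)
    patches = rng.random((40, 64, 64))
    labels = np.array([0] * 20 + [1] * 20)

    X = np.array([hog_descriptor(p) for p in patches])
    classifier = LinearSVC().fit(X, labels)
    print(classifier.predict(X[:5]))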
As shown in Fig. 2, indoor positioning of the mobile intelligent device comprises the following steps:
Step 101: the LiDAR, in cooperation with the high-definition camera, takes panoramic photographs indoors; the LiDAR collects point cloud data and records the distance between the shooting point and surrounding objects;
Step 102: the collected images, point cloud data and ranging data are uploaded to the cloud, pre-processed and structured;
Step 103: superpixel segmentation is performed on the repeatedly shot panoramic images to obtain an initial saliency map;
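For illustration only, a minimal sketch of superpixel segmentation with a simple per-superpixel color-contrast saliency score; this is a simplified stand-in for the SLRC algorithm named above, and the file name and parameters are assumed:

    import numpy as np
    from skimage import io, segmentation, color

    image = io.imread("panorama.jpg")                         # assumed file name
    lab = color.rgb2lab(image)
    labels = segmentation.slic(image, n_segments=400, compactness=10, start_label=0)

    # Mean LAB color of each superpixel; saliency = distance to the global mean color.
    means = np.array([lab[labels == i].mean(axis=0) for i in range(labels.max() + 1)])
    contrast = np.linalg.norm(means - lab.reshape(-1, 3).mean(axis=0), axis=1)
    saliency_map = contrast[labels] / contrast.max()          # initial saliency map in [0, 1]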
Step 104: according to the repeatedly shot panoramic data, occlusion relationships and changes of dynamic objects are fully considered, and salient static object targets are identified;
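For illustration only, a minimal sketch of suppressing dynamic objects by combining repeated, aligned shots of the same scene; the array shapes and the threshold are placeholder values:

    import numpy as np

    # Several aligned panoramas of the same scene taken at different times (placeholder data).
    shots = np.random.randint(0, 256, size=(5, 480, 960, 3), dtype=np.uint8)

    background = np.median(shots, axis=0).astype(np.uint8)    # moving objects largely drop out
    # Pixels that deviate strongly from the median in any shot are likely dynamic.
    deviation = np.abs(shots.astype(np.int16) - background.astype(np.int16)).max(axis=0)
    dynamic_mask = deviation.mean(axis=-1) > 30               # threshold is an assumed value
    static_mask = ~dynamic_mask                               # candidate regions for static salient objects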
Step 105: the segmented images and the static object feature regions are fused to determine the salient map regions;
Step 106: spatial structure information is extracted and the position is determined according to the image resolution and the shooting location, combined with point cloud ranging;
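For illustration only, a minimal pinhole-camera sketch relating the pixel extent of a matched object, the focal length and the LiDAR-measured object size to distance; all numeric values are assumed:

    # Pinhole relation: pixel_width = focal_px * real_width / distance
    focal_px = 1500.0        # focal length in pixels, derived from image resolution and metadata (assumed)
    real_width_m = 0.9       # real width of a matched salient object, known from LiDAR ranging (assumed)
    pixel_width = 120.0      # measured width of that object in the photo, in pixels (assumed)

    distance_m = focal_px * real_width_m / pixel_width
    print(f"estimated distance to object: {distance_m:.2f} m")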
Step 107: steps 103 to 106 are repeated to complete the model and rule base of the indoor panorama;
Step 108: the model and rule base are compressed and, combined with the indoor panoramic map, packaged into the mobile positioning App, which is placed in the cloud;
Step 109: the mobile intelligent device downloads the mobile positioning App from the cloud;
Step 110: the mobile intelligent device takes photos of the real scene in several different directions at a fixed point, pre-processes the acquired pictures, and extracts their shooting metadata, including aperture, shutter, focal length, pixel count, shooting time, etc.;
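For illustration only, a minimal sketch of reading shooting metadata from a photo's EXIF tags with Pillow; the file name is assumed, and which tags are present depends on the device:

    from PIL import Image, ExifTags

    image = Image.open("scene_photo.jpg")                     # assumed file name
    exif = image._getexif() or {}                             # flattened EXIF tags for JPEG files
    tags = {ExifTags.TAGS.get(k, k): v for k, v in exif.items()}

    metadata = {
        "aperture": tags.get("FNumber"),
        "shutter": tags.get("ExposureTime"),
        "focal_length": tags.get("FocalLength"),
        "pixels": image.size,                                 # (width, height)
        "shooting_time": tags.get("DateTimeOriginal"),
    }
    print(metadata)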
Step 111: the mobile positioning App of the mobile intelligent device can connect to WiFi, Bluetooth and other positioning facilities to obtain a coarse position range;
Step 112: the mobile positioning App of the mobile intelligent device extracts image features and performs image matching to determine the position within the panoramic image;
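For illustration only, a minimal feature-matching sketch using ORB descriptors in OpenCV as a simplified stand-in for the R-CNN and HOG + SVM pipeline named above; the file names are assumed:

    import cv2

    query = cv2.imread("scene_photo.jpg", cv2.IMREAD_GRAYSCALE)   # photo taken by the device
    panorama = cv2.imread("panorama.jpg", cv2.IMREAD_GRAYSCALE)   # stored panoramic image

    orb = cv2.ORB_create(nfeatures=2000)
    kq, dq = orb.detectAndCompute(query, None)
    kp, dp = orb.detectAndCompute(panorama, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(dq, dp), key=lambda m: m.distance)

    # The matched panorama keypoints indicate where in the panorama the photo falls.
    matched_points = [kp[m.trainIdx].pt for m in matches[:50]]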
Step 113: according to the shooting metadata, the photo is projected onto the panoramic image; image spatial structure information is extracted for cross-comparison positioning, the position of the mobile intelligent device is determined, and the App marks and displays it;
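For illustration only, a minimal sketch of cross-comparison positioning by intersecting the bearings to two matched landmarks whose map coordinates are known; the coordinates and bearings are assumed values:

    import numpy as np

    def device_position(l1, b1_deg, l2, b2_deg):
        # Device position from two landmarks with known map coordinates and the bearings
        # (degrees, map frame) measured from the device toward each landmark.
        d1 = np.array([np.cos(np.radians(b1_deg)), np.sin(np.radians(b1_deg))])
        d2 = np.array([np.cos(np.radians(b2_deg)), np.sin(np.radians(b2_deg))])
        # device = l1 - s1*d1 = l2 - s2*d2  ->  solve for s1, s2
        s = np.linalg.solve(np.column_stack([-d1, d2]), np.asarray(l2, float) - np.asarray(l1, float))
        return np.asarray(l1, float) - s[0] * d1

    # Assumed landmarks and bearings; the true device position here is (0, 0).
    print(device_position([1.0, 1.0], 45.0, [-1.0, 1.0], 135.0))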
Step 114: the mobile intelligent device uses the mobile positioning App to correct the marked position and uploads it to the cloud together with the photos;
Step 115: the cloud receives the data uploaded by users and continuously optimizes the App's recognition and positioning model, improving positioning accuracy.
The embodiment described above is only one specific embodiment of the present invention; usual variations and substitutions made by those skilled in the art within the scope of the technical solution of the present invention should all be included within the protection scope of the present invention.

Claims (9)

1. An indoor positioning method based on salient feature recognition, characterized in that:
indoor panoramic map modeling is first completed; computer vision techniques are then used to extract image saliency features; a model is formed by training, correcting and processing real-scene images; finally, image matching is performed on real-scene photos taken by a smart device to extract spatial structure data and achieve precise positioning.
2. The method according to claim 1, characterized in that:
indoor panoramic images are collected and the point cloud data of a LiDAR is used for ranging; salient feature recognition and image segmentation are performed on the panoramic map, which is then corrected and processed using on-site images taken by mobile photographing devices at different times to form a panoramic positioning model and rule base; the model and rule base are compressed and placed on a mobile intelligent device with photographing capability; the mobile intelligent device takes photos of the real scene in several different directions at a fixed point, performs image matching, and extracts image spatial structure information for cross-comparison positioning.
3. The method according to claim 2, characterized in that:
in addition, the mobile intelligent device can also narrow the search range via WiFi and Bluetooth, which is cross-verified with the real-scene image positioning.
4. The method according to claim 2, characterized in that:
a high-definition camera is used for collecting indoor panoramic image data; the LiDAR takes pictures in cooperation with the high-definition camera, records point cloud data, and determines the distance between the shooting point and surrounding objects, thereby locating the panoramic image shooting point.
5. The method according to claim 2, characterized in that:
the positioning model and rule base are obtained by learning and analysis with image processing algorithms using computing resources aggregated at the cloud center, and are packaged into a mobile positioning App that is installed on the mobile intelligent device.
6. The method according to claim 2, characterized in that:
the mobile intelligent device has a camera and computing capability and can run the mobile positioning App.
7. The method according to claim 3, characterized in that:
the WiFi and Bluetooth signal generators provide indoor WiFi and Bluetooth wireless connectivity.
8. The method according to claim 2, characterized in that:
the concrete operation steps include:
Step 101: the LiDAR, in cooperation with the high-definition camera, takes panoramic photographs indoors; the LiDAR collects point cloud data and records the distance between the shooting point and surrounding objects;
Step 102: the collected images, point cloud data and ranging data are uploaded to the cloud, pre-processed and structured;
Step 103: superpixel segmentation is performed on the repeatedly shot panoramic images to obtain an initial saliency map;
Step 104: according to the repeatedly shot panoramic data, occlusion relationships and changes of dynamic objects are fully considered, and salient static object targets are identified;
Step 105: the segmented images and the static object feature regions are fused to determine the salient map regions;
Step 106: spatial structure information is extracted and the position is determined according to the image resolution and the shooting location, combined with point cloud ranging;
Step 107: steps 103 to 106 are repeated to complete the model and rule base of the indoor panorama;
Step 108: the model and rule base are compressed and, combined with the indoor panoramic map, packaged into the mobile positioning App, which is placed in the cloud;
Step 109: the mobile intelligent device downloads the mobile positioning App from the cloud;
Step 110: the mobile intelligent device takes photos of the real scene in several different directions at a fixed point, pre-processes the acquired pictures, and extracts their shooting metadata, including aperture, shutter, focal length, pixel count, shooting time, etc.;
Step 111: the mobile positioning App of the mobile intelligent device extracts image features and performs image matching to determine the position within the panoramic image;
Step 112: according to the shooting metadata, the photo is projected onto the panoramic image; image spatial structure information is extracted for cross-comparison positioning, the position of the mobile intelligent device is determined, and the App marks and displays it;
Step 113: the mobile intelligent device uses the mobile positioning App to correct the marked position and uploads it to the cloud together with the photos;
Step 114: the cloud receives the data uploaded by users and continuously optimizes the App's recognition and positioning model, improving positioning accuracy.
9. The method according to claim 8, characterized in that:
the mobile positioning App of the mobile intelligent device can connect to WiFi and Bluetooth to obtain a coarse position range.
CN201811364650.9A 2018-11-16 2018-11-16 Indoor positioning method based on salient feature recognition Active CN109357679B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811364650.9A CN109357679B (en) 2018-11-16 2018-11-16 Indoor positioning method based on salient feature recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811364650.9A CN109357679B (en) 2018-11-16 2018-11-16 Indoor positioning method based on salient feature recognition

Publications (2)

Publication Number Publication Date
CN109357679A true CN109357679A (en) 2019-02-19
CN109357679B CN109357679B (en) 2022-04-19

Family

ID=65345498

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811364650.9A Active CN109357679B (en) Indoor positioning method based on salient feature recognition

Country Status (1)

Country Link
CN (1) CN109357679B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130222369A1 (en) * 2012-02-23 2013-08-29 Charles D. Huston System and Method for Creating an Environment and for Sharing a Location Based Experience in an Environment
WO2014073841A1 (en) * 2012-11-07 2014-05-15 한국과학기술연구원 Method for detecting image-based indoor position, and mobile terminal using same
CN103646420A (en) * 2013-12-12 2014-03-19 浪潮电子信息产业股份有限公司 Intelligent 3D scene reduction method based on self learning algorithm
CN105716609A (en) * 2016-01-15 2016-06-29 浙江梧斯源通信科技股份有限公司 Indoor robot vision positioning method
US20180061126A1 (en) * 2016-08-26 2018-03-01 Osense Technology Co., Ltd. Method and system for indoor positioning and device for creating indoor maps thereof
WO2018093438A1 (en) * 2016-08-26 2018-05-24 William Marsh Rice University Camera-based positioning system using learning
CN106447585A (en) * 2016-09-21 2017-02-22 武汉大学 Urban area and indoor high-precision visual positioning system and method
CN107131883A (en) * 2017-04-26 2017-09-05 中山大学 The full-automatic mobile terminal indoor locating system of view-based access control model
CN107167144A (en) * 2017-07-07 2017-09-15 武汉科技大学 A kind of mobile robot indoor environment recognition positioning method of view-based access control model
CN107833220A (en) * 2017-11-28 2018-03-23 河海大学常州校区 Fabric defect detection method based on depth convolutional neural networks and vision significance

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hisato Kawaji et al., "Image-based indoor positioning system: fast image matching using omnidirectional panoramic images", Proceedings of the 1st ACM International Workshop on Multimodal Pervasive Video Analysis *
Cao Tianyang et al., "Robot visual navigation, localization and global map construction combining image content matching", Optics and Precision Engineering *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109916408A (en) * 2019-02-28 2019-06-21 深圳市鑫益嘉科技股份有限公司 Robot indoor positioning and air navigation aid, device, equipment and storage medium
CN110070127A (en) * 2019-04-19 2019-07-30 南京邮电大学 The optimization method finely identified towards family product
CN110298320A (en) * 2019-07-01 2019-10-01 北京百度网讯科技有限公司 A kind of vision positioning method, device and storage medium
CN110427936A (en) * 2019-07-04 2019-11-08 深圳市新潮酒窖文化传播有限公司 A kind of the cellar management method and system in wine cellar
CN110427936B (en) * 2019-07-04 2022-09-30 深圳市新潮酒窖文化传播有限公司 Wine storage management method and system for wine cellar
CN112242002A (en) * 2020-10-09 2021-01-19 同济大学 Object identification and panoramic roaming method based on deep learning
CN112242002B (en) * 2020-10-09 2022-07-08 同济大学 Object identification and panoramic roaming method based on deep learning
CN113137963A (en) * 2021-04-06 2021-07-20 上海电科智能系统股份有限公司 Passive indoor high-precision comprehensive positioning and navigation method for people and objects

Also Published As

Publication number Publication date
CN109357679B (en) 2022-04-19

Similar Documents

Publication Publication Date Title
CN109357679A (en) A kind of indoor orientation method based on significant characteristics identification
US10740975B2 (en) Mobile augmented reality system
CN108965687B (en) Shooting direction identification method, server, monitoring method, monitoring system and camera equipment
US10664708B2 (en) Image location through large object detection
CN107133325B (en) Internet photo geographic space positioning method based on street view map
CN109520500B (en) Accurate positioning and street view library acquisition method based on terminal shooting image matching
CN104376118A (en) Panorama-based outdoor movement augmented reality method for accurately marking POI
CN112166459A (en) Three-dimensional environment modeling based on multi-camera convolver system
TW202208879A (en) Pose determination method, electronic device and computer readable storage medium
CN102157011A (en) Method for carrying out dynamic texture acquisition and virtuality-reality fusion by using mobile shooting equipment
CN105023266A (en) Method and device for implementing augmented reality (AR) and terminal device
CN112161618A (en) Storage robot positioning and map construction method, robot and storage medium
CN108955682A (en) Mobile phone indoor positioning air navigation aid
CN104573617A (en) Video shooting control method
Feng et al. Visual map construction using RGB-D sensors for image-based localization in indoor environments
CN109523499A (en) A kind of multi-source fusion full-view modeling method based on crowdsourcing
KR102072796B1 (en) Method, system and non-transitory computer-readable recording medium for calculating spatial coordinates of a region of interest
KR102300570B1 (en) Assembly for omnidirectional image capture and method performing by the same
WO2023160722A1 (en) Interactive target object searching method and system and storage medium
CN112348887A (en) Terminal pose determining method and related device
CN105245845A (en) Method for controlling camera to follow and shoot automatically based on gathering trend in match field
KR102029741B1 (en) Method and system of tracking object
Zhao et al. CrowdOLR: Toward object location recognition with crowdsourced fingerprints using smartphones
WO2023103883A1 (en) Automatic object annotation method and apparatus, electronic device and storage medium
CN110473256A (en) A kind of vehicle positioning method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20220324
Address after: 250100 building S02, No. 1036, Langchao Road, high tech Zone, Jinan City, Shandong Province
Applicant after: Shandong Inspur Scientific Research Institute Co.,Ltd.
Address before: 250100 First Floor of R&D Building 2877 Kehang Road, Sun Village Town, Jinan High-tech Zone, Shandong Province
Applicant before: JINAN INSPUR HIGH-TECH TECHNOLOGY DEVELOPMENT Co.,Ltd.
GR01 Patent grant