CN109357679B - Indoor positioning method based on significance characteristic recognition - Google Patents

Indoor positioning method based on significance characteristic recognition

Info

Publication number
CN109357679B
Authority
CN
China
Prior art keywords
image
positioning
mobile
indoor
panoramic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811364650.9A
Other languages
Chinese (zh)
Other versions
CN109357679A (en)
Inventor
孙善宝
徐驰
于治楼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Inspur Scientific Research Institute Co Ltd
Original Assignee
Shandong Inspur Scientific Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Inspur Scientific Research Institute Co Ltd filed Critical Shandong Inspur Scientific Research Institute Co Ltd
Priority to CN201811364650.9A priority Critical patent/CN109357679B/en
Publication of CN109357679A publication Critical patent/CN109357679A/en
Application granted granted Critical
Publication of CN109357679B publication Critical patent/CN109357679B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G01C21/206 - Instruments for performing navigational calculations specially adapted for indoor navigation
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 - Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 - Systems determining position data of a target
    • G01S17/08 - Systems determining position data of a target for measuring distance only

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides an indoor positioning method based on saliency feature recognition, relating to the technical fields of salient object detection, deep learning, and indoor positioning. The method overcomes the weakness of GPS positioning signals indoors, eliminates the influence of moving objects, keeps live-action scene imagery current, and thereby improves both image-recognition accuracy and indoor positioning accuracy.

Description

Indoor positioning method based on significance characteristic recognition
Technical Field
The invention relates to salient object detection, deep learning, and indoor positioning technologies, and in particular to an indoor positioning method based on saliency feature recognition.
Background
With the growth of applications such as precision marketing and indoor navigation services, indoor positioning has attracted increasing attention, with numerous deployments in indoor scenarios such as airports, railway stations, shopping malls, factories, and schools. Indoor positioning refers to determining position within an indoor environment. Outdoors, a smartphone can locate itself easily with its GPS sensor, but once indoors the device often cannot receive GPS signals, making accurate positioning difficult.
At present, the industry's traditional approach to indoor positioning relies on indoor wireless communication technologies such as WiFi, Bluetooth, and RFID: base stations and the signal strengths collected from multiple wireless transmitters are used to compute relative positions and thereby locate people or objects in the indoor space. This approach depends heavily on infrastructure construction and demands stable signal strength from the WiFi or Bluetooth transmitters; in crowded areas or scenes with many moving objects, signal stability cannot be guaranteed, so the computed position can deviate greatly from the actual position and fail to meet practical application requirements.
In recent years, with the development of deep learning and computer vision, the accuracy and speed of machine image recognition have improved greatly, making position location by analyzing live-action photographs feasible. Against this background, how to effectively use computer vision, combined with multiple sensing technologies, to achieve accurate indoor positioning has become a problem demanding an urgent solution.
Disclosure of Invention
To solve the above technical problems, the invention provides an indoor positioning method based on saliency feature recognition, which overcomes the weakness of indoor GPS positioning signals and improves indoor positioning precision.
The technical scheme of the invention is as follows:
An indoor positioning method based on saliency feature recognition first builds an indoor panoramic map model; computer vision is then used to extract image saliency features, and a large number of live-action images are used for training and correction to form a model; finally, a smart device takes live-action photographs for image matching and spatial-structure extraction, achieving accurate positioning.
An indoor panoramic image is acquired with a high-definition camera, and ranging is completed with the point cloud data of a lidar working alongside it. Salient-feature recognition and image segmentation are performed on the panoramic map, and live-action photographs taken by actual mobile devices in different time periods are used for correction, forming a panoramic positioning model rule base. The model and rule base are compressed and loaded onto a mobile smart device with a camera. The mobile smart device then photographs several live-action views in different directions from a fixed point, performs image matching, and extracts image spatial-structure information for cross-comparison positioning. In addition, the mobile smart device can narrow the search range by means of WiFi, Bluetooth, and the like, cross-validating against the live-action image positioning. Wherein:
the high-definition camera collects indoor panoramic image data; the lidar photographs alongside the camera, recording point cloud data and determining the distance from the shooting point to objects in the surrounding environment, thereby locating the panorama shooting point; the positioning model rule base is learned and analyzed with image-processing algorithms on computing resources pooled at the cloud center, packaged as a mobile positioning App, and deployed to the mobile smart device; the mobile smart device has a camera and sufficient computing capability to run the mobile positioning App; and the WiFi and Bluetooth signal generators provide the indoor WiFi and Bluetooth wireless connections.
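The patent does not specify how the compressed rule base is laid out on the device. Purely as an illustrative sketch (every name below is hypothetical, not from the patent), one record per panorama shooting point might pair the point's map position with its salient regions and their lidar-measured distances:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SalientRegion:
    """One salient static object in a panorama (hypothetical layout)."""
    bbox: Tuple[int, int, int, int]   # (x, y, width, height) in panorama pixels
    descriptor: List[float] = field(default_factory=list)  # e.g. a HOG vector
    distance_m: float = 0.0           # lidar-ranged distance from the shooting point

@dataclass
class PanoramaRecord:
    """One shooting point in the compressed rule base (hypothetical layout)."""
    point_id: str                     # identifier of the panorama shooting point
    position_m: Tuple[float, float]   # (x, y) on the indoor map, in meters
    regions: List[SalientRegion] = field(default_factory=list)
```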
The invention provides an indoor positioning method based on saliency feature recognition, used for indoor positioning of a mobile smart device and comprising the following steps:
Step 101: the lidar works with the high-definition camera to take indoor panoramic photographs; the lidar collects point cloud data and records, with identifying labels, the distance from the shooting point to objects in the surrounding environment;
Step 102: the collected images, point cloud data, and distance data are uploaded to the cloud, preprocessed, and structured;
Step 103: superpixel segmentation is performed on the repeatedly shot panoramic images to obtain an initial saliency map;
Step 104: from the repeatedly shot panoramic data, salient static object targets are identified, with full consideration of changing occlusion relations and moving objects;
Step 105: the segmented image is fused with the static objects' feature regions to determine the saliency map regions;
Step 106: spatial-structure information is extracted, and the position is determined from the image resolution and shooting position combined with point-cloud ranging;
Step 107: steps 103 to 106 are repeated to complete the indoor panorama model and rule base;
Step 108: the model and rule base are compressed and combined with the indoor panoramic map to form a mobile positioning APP, which is published to the cloud;
Step 109: the mobile smart device downloads the mobile positioning APP from the cloud;
Step 110: the mobile smart device photographs several live-action views in different directions from a fixed point, preprocesses the resulting pictures, and extracts their shooting metadata, including aperture, shutter, focal length, pixels, and shooting time (a metadata-extraction sketch follows this list);
Step 111 (optional): the mobile positioning APP on the mobile smart device can connect to other positioning facilities such as WiFi and Bluetooth to obtain a position range;
Step 112: the mobile positioning APP on the mobile smart device extracts image features, performs image matching, and determines the position on the panoramic image;
Step 113: the image is projected onto the panoramic image according to the shot's metadata; image spatial-structure information is extracted for cross-comparison positioning, the position of the mobile smart device is determined, and a label is displayed through the App;
Step 114: the mobile smart device corrects the marked position through the mobile positioning APP and uploads the shot images to the cloud;
Step 115: the cloud receives the user-uploaded data and continuously optimizes the App's recognition and positioning model, improving positioning accuracy.
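The metadata-extraction sketch referenced in step 110 follows. It is a minimal reading with Pillow, assuming the photos carry standard EXIF tags; the function name and the returned field names are illustrative, not from the patent:

```python
from PIL import Image
from PIL.ExifTags import TAGS

EXIF_SUB_IFD = 0x8769  # pointer to the Exif sub-IFD that holds exposure data

def shooting_metadata(path):
    """Read the step-110 shooting metadata from one live-action photo."""
    img = Image.open(path)
    exif = img.getexif()
    sub = exif.get_ifd(EXIF_SUB_IFD)  # FNumber, ExposureTime, etc. live here
    named = {TAGS.get(k, k): v for k, v in {**dict(exif), **dict(sub)}.items()}
    return {
        "aperture_f": named.get("FNumber"),
        "shutter_s": named.get("ExposureTime"),
        "focal_length_mm": named.get("FocalLength"),
        "pixels": img.size,               # (width, height)
        "shot_at": named.get("DateTimeOriginal"),
    }
```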
The advantages of the invention are as follows:
The invention achieves positioning with computer vision, largely solving the problem of weak indoor GPS positioning signals; it eliminates the influence of moving objects, achieves high-precision indoor positioning, and suits a variety of application scenarios. Narrowing the search range with WiFi, Bluetooth, and the like speeds up matching and cross-validates the live-action image positioning. In addition, the mobile APP continuously collects live-action data to correct positioning, keeping the live-action scene imagery current, raising image-recognition accuracy, and thus improving indoor positioning precision.
Drawings
FIG. 1 is a schematic diagram of an indoor positioning system;
FIG. 2 is a flow chart of indoor positioning of a mobile smart device.
Detailed Description
The invention will be explained in more detail below with reference to the accompanying drawings:
As shown in FIG. 1, a high-definition camera acquires indoor panoramic images, and ranging is completed with the point cloud data of a lidar working alongside it. Salient-feature recognition and image segmentation are performed on the panoramic map, and live-action photographs taken by actual mobile devices in different time periods are used for correction, forming a panoramic positioning model rule base. The model and rule base are compressed and loaded onto a mobile smart device with a camera. The mobile smart device then photographs several live-action views in different directions from a fixed point, performs image matching, and extracts image spatial-structure information for cross-comparison positioning. In addition, the mobile smart device can narrow the search range by means of WiFi, Bluetooth, and the like, cross-validating against the live-action image positioning.
Wherein:
the high-definition camera collects indoor panoramic image data; the lidar photographs alongside the camera, recording point cloud data and determining the distance from the shooting point to objects in the surrounding environment, thereby locating the panorama shooting point;
the positioning model rule base is learned and analyzed with image-processing algorithms on computing resources pooled at the cloud center, packaged as a mobile positioning App, and deployed to the mobile smart device;
the mobile smart device has a camera and sufficient computing capability to run the mobile positioning App; and the WiFi and Bluetooth signal generators provide the indoor WiFi and Bluetooth wireless connections.
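The description leaves open how the WiFi and Bluetooth signals narrow the search range. One common approach, assumed here rather than prescribed by the patent, is a log-distance path-loss estimate that bounds the distance to a beacon of known position and discards panorama shooting points outside that bound:

```python
import math

def rssi_to_distance_m(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.5):
    """Coarse distance estimate via the log-distance path-loss model.

    tx_power_dbm is the calibrated RSSI at 1 m and 2.5 a typical indoor
    exponent; both are assumed values, not parameters from the patent."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def narrow_candidates(points_xy, beacon_xy, rssi_dbm, margin_m=2.0):
    """Keep only panorama shooting points within the beacon's estimated radius."""
    radius = rssi_to_distance_m(rssi_dbm) + margin_m
    return [p for p in points_xy if math.dist(p, beacon_xy) <= radius]

# Example: three candidate shooting points, beacon at the origin heard at -72 dBm.
print(narrow_candidates([(0.0, 0.0), (3.0, 4.0), (20.0, 0.0)], (0.0, 0.0), -72.0))
```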
For clarity of description, the examples below use R-CNN for image feature recognition, the SLRC algorithm for salient object detection, and HOG+SVM for feature extraction. Those skilled in the art will appreciate that embodiments of the invention can also be applied with algorithms other than these.
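As one concrete reading of the HOG+SVM choice (a sketch on stand-in data, not the patent's trained model), scikit-image can compute HOG descriptors for salient-region crops and scikit-learn can fit a linear SVM over landmark classes:

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

def hog_descriptor(gray_crop):
    """HOG feature vector for one salient-region crop, normalized to 128x128."""
    patch = resize(gray_crop, (128, 128), anti_aliasing=True)
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Stand-in training data: random crops for two hypothetical landmark classes.
# In the method itself, the crops would come from the panorama saliency regions.
rng = np.random.default_rng(0)
X = np.stack([hog_descriptor(rng.random((160, 160))) for _ in range(20)])
y = np.repeat([0, 1], 10)

clf = LinearSVC(max_iter=5000).fit(X, y)
print(clf.predict(X[:3]))  # predicted landmark classes for the first crops
```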
As illustrated in FIG. 2, indoor positioning of a mobile smart device comprises the following steps:
Step 101: the lidar works with the high-definition camera to take indoor panoramic photographs; the lidar collects point cloud data and records, with identifying labels, the distance from the shooting point to objects in the surrounding environment;
Step 102: the collected images, point cloud data, and distance data are uploaded to the cloud, preprocessed, and structured;
Step 103: superpixel segmentation is performed on the repeatedly shot panoramic images to obtain an initial saliency map (a simplified sketch follows this list);
Step 104: from the repeatedly shot panoramic data, salient static object targets are identified, with full consideration of changing occlusion relations and moving objects;
Step 105: the segmented image is fused with the static objects' feature regions to determine the saliency map regions;
Step 106: spatial-structure information is extracted, and the position is determined from the image resolution and shooting position combined with point-cloud ranging;
Step 107: steps 103 to 106 are repeated to complete the indoor panorama model and rule base;
Step 108: the model and rule base are compressed and combined with the indoor panoramic map to form a mobile positioning APP, which is published to the cloud;
Step 109: the mobile smart device downloads the mobile positioning APP from the cloud;
Step 110: the mobile smart device photographs several live-action views in different directions from a fixed point, preprocesses the resulting pictures, and extracts their shooting metadata, including aperture, shutter, focal length, pixels, and shooting time;
Step 111 (optional): the mobile positioning APP on the mobile smart device can connect to other positioning facilities such as WiFi and Bluetooth to obtain a position range;
Step 112: the mobile positioning APP on the mobile smart device extracts image features, performs image matching, and determines the position on the panoramic image;
Step 113: the image is projected onto the panoramic image according to the shot's metadata; image spatial-structure information is extracted for cross-comparison positioning, the position of the mobile smart device is determined, and a label is displayed through the App;
Step 114: the mobile smart device corrects the marked position through the mobile positioning APP and uploads the shot images to the cloud;
Step 115: the cloud receives the user-uploaded data and continuously optimizes the App's recognition and positioning model, improving positioning accuracy.
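The sketch referenced in step 103 follows. The text names the SLRC algorithm for salient object detection; the version below is a deliberately simplified stand-in (not SLRC itself) that scores each SLIC superpixel by its Lab-color contrast against the global mean to form an initial saliency map:

```python
import numpy as np
from skimage import data
from skimage.color import rgb2lab
from skimage.segmentation import slic

def initial_saliency(image_rgb, n_segments=300):
    """Initial saliency map from superpixel color contrast (step 103 sketch).

    Simplified stand-in for SLRC: each SLIC superpixel is scored by the
    distance of its mean Lab color to the whole image's mean color, so large
    uniform background regions score low and distinctive objects score high."""
    lab = rgb2lab(image_rgb)
    labels = slic(image_rgb, n_segments=n_segments, start_label=0)
    global_mean = lab.reshape(-1, 3).mean(axis=0)
    saliency = np.zeros(labels.shape, dtype=float)
    for seg in np.unique(labels):
        mask = labels == seg
        saliency[mask] = np.linalg.norm(lab[mask].mean(axis=0) - global_mean)
    return saliency / saliency.max()

sal = initial_saliency(data.astronaut())  # sample image standing in for a panorama
print(sal.shape, float(sal.min()), float(sal.max()))
```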
The above-described embodiment is only one specific embodiment of the present invention, and general changes and substitutions by those skilled in the art within the technical scope of the present invention are included in the protection scope of the present invention.

Claims (8)

1. An indoor positioning method based on saliency feature recognition, characterized in that:
an indoor panoramic map is first modeled; image saliency features are extracted using computer vision, and live-action images are used for training and correction to form a model; finally, a smart device takes live-action photographs for image matching and spatial-structure extraction, achieving accurate positioning;
the method comprises the following specific operation steps:
Step 101: a lidar works with a high-definition camera to take indoor panoramic photographs; the lidar collects point cloud data and records, with identifying labels, the distance from the shooting point to objects in the surrounding environment;
Step 102: the collected images, point cloud data, and distance data are uploaded to the cloud, preprocessed, and structured;
Step 103: superpixel segmentation is performed on the repeatedly shot panoramic images to obtain an initial saliency map;
Step 104: from the repeatedly shot panoramic data, salient static object targets are identified, with full consideration of changing occlusion relations and moving objects;
Step 105: the segmented image is fused with the static objects' feature regions to determine the saliency map regions;
Step 106: spatial-structure information is extracted, and the position is determined from the image resolution and shooting position combined with point-cloud ranging;
Step 107: steps 103 to 106 are repeated to complete the indoor panorama model and rule base;
Step 108: the model and rule base are compressed and combined with the indoor panoramic map to form a mobile positioning APP, which is published to the cloud;
Step 109: the mobile smart device downloads the mobile positioning APP from the cloud;
Step 110: the mobile smart device photographs several live-action views in different directions from a fixed point, preprocesses the resulting pictures, and extracts their shooting metadata, including aperture, shutter, focal length, pixels, and shooting time;
Step 111: the mobile positioning APP on the mobile smart device extracts image features, performs image matching, and determines the position on the panoramic image;
Step 112: the image is projected onto the panoramic image according to the shot's metadata; image spatial-structure information is extracted for cross-comparison positioning, the position of the mobile smart device is determined, and a label is displayed through the App;
Step 113: the mobile smart device corrects the marked position through the mobile positioning APP and uploads the shot images to the cloud;
Step 114: the cloud receives the user-uploaded data and continuously optimizes the App's recognition and positioning model, improving positioning precision.
2. The method of claim 1, wherein
an indoor panoramic image is collected, and ranging is completed with the lidar's point cloud data; salient-feature recognition and image segmentation are performed on the panoramic map, and live-action photographs taken by actual mobile devices in different time periods are used for correction, forming a panoramic positioning model rule base; the model and rule base are compressed and loaded onto a mobile smart device with a camera; and the mobile smart device photographs several live-action views in different directions from a fixed point, performs image matching, and extracts image spatial-structure information for cross-comparison positioning.
3. The method of claim 2, wherein
the mobile smart device can further narrow the search range by means of WiFi and Bluetooth, cross-validating against the live-action image positioning.
4. The method of claim 2, wherein
the high-definition camera collects indoor panoramic image data; the lidar photographs alongside the camera, recording point cloud data and determining the distance from the shooting point to objects in the surrounding environment, thereby locating the panorama shooting point.
5. The method of claim 2, wherein
the positioning model rule base is learned and analyzed with image-processing algorithms on computing resources pooled at the cloud center, packaged as a mobile positioning App, and deployed to the mobile smart device.
6. The method of claim 2, wherein
the mobile smart device has a camera and computing capability and can run the mobile positioning App.
7. The method of claim 3, wherein
the WiFi and Bluetooth signal generators provide the indoor WiFi and Bluetooth wireless connections.
8. The method of claim 1, wherein
the mobile positioning APP of the mobile smart device can connect to WiFi and Bluetooth to obtain a position range.
CN201811364650.9A 2018-11-16 2018-11-16 Indoor positioning method based on significance characteristic recognition Active CN109357679B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811364650.9A CN109357679B (en) 2018-11-16 2018-11-16 Indoor positioning method based on significance characteristic recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811364650.9A CN109357679B (en) 2018-11-16 2018-11-16 Indoor positioning method based on significance characteristic recognition

Publications (2)

Publication Number Publication Date
CN109357679A CN109357679A (en) 2019-02-19
CN109357679B true CN109357679B (en) 2022-04-19

Family

ID=65345498

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811364650.9A Active CN109357679B (en) 2018-11-16 2018-11-16 Indoor positioning method based on significance characteristic recognition

Country Status (1)

Country Link
CN (1) CN109357679B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109916408A (en) * 2019-02-28 2019-06-21 深圳市鑫益嘉科技股份有限公司 Robot indoor positioning and air navigation aid, device, equipment and storage medium
CN110070127B (en) * 2019-04-19 2022-08-16 南京邮电大学 Household product fine identification oriented optimization method
CN110298320B (en) * 2019-07-01 2021-06-22 北京百度网讯科技有限公司 Visual positioning method, device and storage medium
CN110427936B (en) * 2019-07-04 2022-09-30 深圳市新潮酒窖文化传播有限公司 Wine storage management method and system for wine cellar
CN112242002B (en) * 2020-10-09 2022-07-08 同济大学 Object identification and panoramic roaming method based on deep learning
CN113137963B (en) * 2021-04-06 2023-05-05 上海电科智能系统股份有限公司 High-precision comprehensive positioning and navigation method for passive indoor people and objects

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104641399B (en) * 2012-02-23 2018-11-23 查尔斯·D·休斯顿 System and method for creating environment and for location-based experience in shared environment
US10169914B2 (en) * 2016-08-26 2019-01-01 Osense Technology Co., Ltd. Method and system for indoor positioning and device for creating indoor maps thereof

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014073841A1 (en) * 2012-11-07 2014-05-15 한국과학기술연구원 Method for detecting image-based indoor position, and mobile terminal using same
CN103646420A (en) * 2013-12-12 2014-03-19 浪潮电子信息产业股份有限公司 Intelligent 3D scene reduction method based on self learning algorithm
CN105716609A (en) * 2016-01-15 2016-06-29 浙江梧斯源通信科技股份有限公司 Indoor robot vision positioning method
WO2018093438A1 (en) * 2016-08-26 2018-05-24 William Marsh Rice University Camera-based positioning system using learning
CN106447585A (en) * 2016-09-21 2017-02-22 武汉大学 Urban area and indoor high-precision visual positioning system and method
CN107131883A (en) * 2017-04-26 2017-09-05 中山大学 The full-automatic mobile terminal indoor locating system of view-based access control model
CN107167144A (en) * 2017-07-07 2017-09-15 武汉科技大学 A kind of mobile robot indoor environment recognition positioning method of view-based access control model
CN107833220A (en) * 2017-11-28 2018-03-23 河海大学常州校区 Fabric defect detection method based on depth convolutional neural networks and vision significance

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hisato Kawaji et al.; Image-based indoor positioning system: fast image matching using omnidirectional panoramic images; Proceedings of the 1st ACM International Workshop on Multimodal Pervasive Video Analysis; October 2010; pp. 1-4 *
Cao Tianyang et al.; Robot visual navigation localization and global map construction system combined with image content matching [结合图像内容匹配的机器人视觉导航定位与全局地图构建系统]; Optics and Precision Engineering (光学精密工程); 2017-08-15; Vol. 25, No. 8; pp. 2221-2232 *

Also Published As

Publication number Publication date
CN109357679A (en) 2019-02-19

Similar Documents

Publication Publication Date Title
CN109357679B (en) Indoor positioning method based on significance characteristic recognition
CN111415388B (en) Visual positioning method and terminal
JP7236565B2 (en) POSITION AND ATTITUDE DETERMINATION METHOD, APPARATUS, ELECTRONIC DEVICE, STORAGE MEDIUM AND COMPUTER PROGRAM
CN112166459A (en) Three-dimensional environment modeling based on multi-camera convolver system
CN105830093A (en) Systems, methods, and apparatus for generating metadata relating to spatial regions of non-uniform size
Castillo-Carrión et al. SIFT optimization and automation for matching images from multiple temporal sources
CN111047622B (en) Method and device for matching objects in video, storage medium and electronic device
CN111323024A (en) Positioning method and device, equipment and storage medium
CN103426172A (en) Vision-based target tracking method and device
Steinhoff et al. How computer vision can help in outdoor positioning
CN107480580B (en) Image recognition method and image recognition device
CN112313706A (en) Method and system for calculating spatial coordinate points of a region of interest, and non-transitory computer-readable recording medium
Wang et al. iNavigation: an image based indoor navigation system
CN107193820A (en) Location information acquisition method, device and equipment
An et al. Image-based positioning system using LED Beacon based on IoT central management
CN111383271B (en) Picture-based direction marking method and device
CN109903308B (en) Method and device for acquiring information
CN109612455A (en) A kind of indoor orientation method and system
KR101806066B1 (en) Camera module with function of parking guidance
CN115767424A (en) Video positioning method based on RSS and CSI fusion
CN112651351B (en) Data processing method and device
Villarrubia et al. Hybrid indoor location system for museum tourist routes in augmented reality
Xiong et al. SmartGuide: Towards single-image building localization with smartphone
CN113409358A (en) Image tracking method, image tracking device, storage medium and electronic equipment
Ahn et al. Research of panoramic image generation using IoT device with camera for cloud computing environment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220324

Address after: 250100 building S02, No. 1036, Langchao Road, high tech Zone, Jinan City, Shandong Province

Applicant after: Shandong Inspur Scientific Research Institute Co.,Ltd.

Address before: 250100 First Floor of R&D Building 2877 Kehang Road, Sun Village Town, Jinan High-tech Zone, Shandong Province

Applicant before: JINAN INSPUR HIGH-TECH TECHNOLOGY DEVELOPMENT Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant