CN112261719A - Area positioning method combining SLAM technology with deep learning - Google Patents

Area positioning method combining SLAM technology with deep learning

Info

Publication number
CN112261719A
CN112261719A (application CN202011121186.8A)
Authority
CN
China
Prior art keywords
target
signal intensity
area
positioning
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011121186.8A
Other languages
Chinese (zh)
Other versions
CN112261719B (en)
Inventor
冷阳
牟海涛
唐琪
康斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Institute Of Artificial Intelligence Dalian University Of Technology
Dalian Sanli Technology Co ltd
Original Assignee
Dalian Institute Of Artificial Intelligence Dalian University Of Technology
Dalian Sanli Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Institute Of Artificial Intelligence Dalian University Of Technology, Dalian Sanli Technology Co ltd filed Critical Dalian Institute Of Artificial Intelligence Dalian University Of Technology
Publication of CN112261719A publication Critical patent/CN112261719A/en
Application granted granted Critical
Publication of CN112261719B publication Critical patent/CN112261719B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W64/00Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/023Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of image recognition and area positioning, and provides an area positioning method that combines SLAM technology with deep learning. The proposed method overcomes several shortcomings of existing indoor positioning technologies: by positioning the target with SLAM technology combined with deep learning, it achieves higher positioning accuracy, wider coverage, lower cost, and greater practical value. Moreover, although visual SLAM, as a positioning approach based on computer vision, normally requires a huge amount of computation and introduces a certain positioning delay, the method solves both problems well, since the vision pipeline is needed only during training and is removed once the signal-strength models are trained.

Description

Area positioning method combining SLAM technology with deep learning
Technical Field
The invention belongs to the technical field of image recognition and area positioning, and relates to a positioning method for obtaining positioning coordinates in a certain area by means of deep learning.
Background
SLAM is an abbreviation of Simultaneous Localization and Mapping. It refers to a technique in which, with the environmental information initially unknown, a moving platform continuously builds a map of its surroundings from a specified sensor while simultaneously estimating its own position; when a camera is employed as the sensor, the technique is referred to as "visual SLAM".
Deep learning is a newer field of machine learning research whose concepts derive from the study of artificial neural networks. By constructing neural networks that imitate the analysis and learning of the human brain and interpret data in a brain-like manner, it extracts features from the input data layer by layer, from "low-level" to "high-level", and thereby learns a good mapping from input to output. In terms of concrete research content, deep learning mainly involves three families of methods: convolutional neural networks, autoencoder networks, and deep belief networks. Deep learning can improve the accuracy of classification and prediction, overcome the limited capacity of traditional neural network algorithms to represent complex functions, and process nonlinear natural signals, with applications such as natural language processing, big-data feature extraction, image recognition, and speech recognition.
With the development of positioning technology, GPS and base-station positioning have achieved accurate positioning in most outdoor scenes. In a few outdoor scenes, however, and indoors, various obstacles cause GNSS signals to attenuate rapidly or disappear altogether, so accurate positioning cannot be achieved in these areas. At the same time, indoor scenes such as shopping malls have heavy foot traffic, relatively complex environments, and strong real-time requirements, which place higher demands on positioning. The technologies currently applied to indoor positioning mainly include Wi-Fi positioning, Bluetooth positioning, infrared positioning, ultrasonic positioning, geomagnetic positioning, RFID positioning, and ultra-wideband (UWB) positioning. Each of these, however, has shortcomings in positioning accuracy, coverage, reliability, power consumption, or cost. For example, Wi-Fi positioning hotspots are strongly affected by the surrounding environment, giving low positioning accuracy and high maintenance cost; RFID positioning covers only a small range and has no communication capability; and UWB positioning is expensive and its network deployment is complex. To address these shortcomings, the invention combines SLAM technology with positioning by means of deep learning, improving positioning accuracy in special indoor or outdoor areas, expanding coverage, and reducing cost.
Disclosure of Invention
The main purpose of the invention is to provide a positioning method, applied chiefly to indoor areas or to outdoor areas such as construction sites, that improves positioning accuracy, enlarges the coverage range, and reduces cost.
The technical scheme of the invention is as follows:
a region positioning method combining SLAM technology with deep learning comprises the following steps:
In the first step, a base station is installed in a specific area; the base station can receive the signal strength of a target within that area. The positioning area is first divided into m small areas according to the size of the actual scene and the distribution of obstacles, and sampling points are laid out at fixed intervals; the target to be located then wears an instrument capable of measuring signal strength. Finally, the signal strength is collected at the sampling points of each of the m areas, and the collected RSSI values are recorded and stored.
In the second step, positioning is performed with a binocular camera. The camera is mounted on the target to be located, which walks through the sampling points of the m small areas from the first step; the coordinates Y(x, y) of each sampling point of the target in each area are obtained with the classic visual SLAM framework, namely a sensor-data processing module, a front-end visual odometry module, a back-end optimization module, a loop-closure detection module, and a mapping module.
In the third step, a classification model is trained whose input is the signal strength X obtained in the first step and whose output is one of the m divided positioning areas; that is, given the filtered signal strength as input, the model determines that the positioning target is in the i-th area (i = 1, 2, …, m). Prediction models are then trained separately for the m positioning areas: taking the signal strength measured at the target's position in the i-th area as input X and the coordinate obtained in the second step as output Y, the target's signal strength and coordinate form a pair of corresponding data (X, Y) used to train the i-th prediction model.
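As a concrete illustration of the data arrangement just described, the following sketch (hypothetical Python; the tuple layout, field order, and function name are assumptions, not from the patent) assembles one classification dataset over all m areas and one regression dataset per area:

```python
import numpy as np

def build_datasets(samples):
    """samples: list of (rssi_vector, region_index, (x, y)) tuples, where
    rssi_vector holds the filtered RSSI values from the base stations,
    region_index is in 0..m-1, and (x, y) comes from the visual SLAM step."""
    X_cls = np.array([s[0] for s in samples])   # classifier input: RSSI
    y_cls = np.array([s[1] for s in samples])   # classifier label: area index
    per_region = {}                             # per-area regression data
    for rssi, region, coord in samples:
        per_region.setdefault(region, ([], []))
        per_region[region][0].append(rssi)      # regressor input X
        per_region[region][1].append(coord)     # regressor output Y(x, y)
    return X_cls, y_cls, {r: (np.array(a), np.array(b))
                          for r, (a, b) in per_region.items()}
```

Each per-area pair then trains that area's prediction model, while (X_cls, y_cls) trains the single classification model.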
A GA-BP neural network is introduced for the classification model: a genetic algorithm optimizes the initial weights and thresholds of the neural network, which effectively prevents the network from falling into a local optimum during training. In this method the classification model has 2 hidden layers, the output layer is connected to a softmax classifier, and the number of output nodes is m. An output of 0 indicates that the positioning target is in the 1st area, an output of 1 indicates the 2nd area, and so on up to an output of m-1 for the m-th area.
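The "GA" half of GA-BP can be sketched as a genetic search over a flat weight vector of a small two-hidden-layer softmax network: keep an elite, produce children by uniform crossover plus Gaussian mutation, and return the fittest individual as the initial weights. This is a minimal NumPy illustration under assumed hyperparameters (population size, mutation scale, layer widths are not specified in the patent); the subsequent backpropagation fine-tuning that would start from the returned weights is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)

def unpack(theta, sizes):
    """Slice a flat parameter vector into (weight, bias) pairs per layer."""
    params, i = [], 0
    for a, b in zip(sizes[:-1], sizes[1:]):
        W = theta[i:i + a * b].reshape(a, b); i += a * b
        c = theta[i:i + b]; i += b
        params.append((W, c))
    return params

def forward(theta, X, sizes):
    """Two tanh hidden layers followed by a softmax output, as in the text."""
    h, params = X, unpack(theta, sizes)
    for W, c in params[:-1]:
        h = np.tanh(h @ W + c)
    W, c = params[-1]
    z = h @ W + c
    z -= z.max(axis=1, keepdims=True)          # numerically stable softmax
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

def loss(theta, X, y, sizes):
    p = forward(theta, X, sizes)
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

def ga_init(X, y, sizes, pop=30, gens=40):
    """Genetic search for good initial weights: elitist selection,
    uniform crossover, Gaussian mutation."""
    dim = sum(a * b + b for a, b in zip(sizes[:-1], sizes[1:]))
    pop_w = rng.normal(0, 0.5, (pop, dim))
    for _ in range(gens):
        fit = np.array([loss(w, X, y, sizes) for w in pop_w])
        elite = pop_w[np.argsort(fit)[:pop // 2]]   # keep the best half
        kids = []
        for _ in range(pop - len(elite)):
            a, b = elite[rng.integers(len(elite), size=2)]
            mask = rng.random(dim) < 0.5            # uniform crossover
            child = np.where(mask, a, b)
            child += rng.normal(0, 0.05, dim)       # mutation
            kids.append(child)
        pop_w = np.vstack([elite, kids])
    fit = np.array([loss(w, X, y, sizes) for w in pop_w])
    return pop_w[fit.argmin()]
```

The returned vector would then seed ordinary gradient training, the "BP" half of GA-BP, which the sketch omits.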
The prediction model introduces denoising autoencoders and a stacked autoencoder network. Denoising adds Gaussian noise of a certain magnitude to the input signal strength during encoding, which gives the model stronger generalization; stacking allows the network to be deepened to any number of layers, yielding a better fit.
The prediction model is provided with n layers of networks, and the specific network model is as follows:
(1) Layer 1 of the model: first train an autoencoder to obtain the first-order feature h^(1) of the signal strength.
(2) Layer i of the model: take the feature h^(i-1) output by layer i-1 as input, self-encode it, and obtain the feature h^(i).
(3) Layer n of the model: produce a linear output from the feature h^(n-1) of the previous step to obtain a coordinate. Combining the n layers yields a prediction network model that obtains coordinates from the input signal strength.
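The layer-wise construction above can be sketched as greedy pretraining of denoising autoencoders, each one encoding the features produced by the previous layer, with the final features feeding a linear coordinate output. This NumPy sketch uses an assumed activation (tanh), noise level, and learning rate, none of which the patent specifies:

```python
import numpy as np

rng = np.random.default_rng(1)

class DenoisingAE:
    """One denoising autoencoder layer: corrupt the input with Gaussian
    noise, encode with tanh, decode linearly, minimise squared
    reconstruction error by plain gradient descent."""
    def __init__(self, n_in, n_hid, noise=0.1, lr=0.01):
        self.W = rng.normal(0, 0.1, (n_in, n_hid))
        self.b = np.zeros(n_hid)
        self.V = rng.normal(0, 0.1, (n_hid, n_in))
        self.c = np.zeros(n_in)
        self.noise, self.lr = noise, lr

    def encode(self, X):
        return np.tanh(X @ self.W + self.b)

    def train(self, X, epochs=200):
        for _ in range(epochs):
            Xn = X + rng.normal(0, self.noise, X.shape)   # corrupt input
            H = np.tanh(Xn @ self.W + self.b)
            E = H @ self.V + self.c - X                   # reconstruction error
            dV = H.T @ E / len(X); dc = E.mean(0)
            dH = E @ self.V.T * (1 - H**2)                # tanh derivative
            dW = Xn.T @ dH / len(X); db = dH.mean(0)
            self.V -= self.lr * dV; self.c -= self.lr * dc
            self.W -= self.lr * dW; self.b -= self.lr * db
        return self

def pretrain_stack(X, layer_sizes):
    """Greedy layer-wise pretraining: layer i self-encodes the features
    h^(i-1) output by layer i-1, exactly as in steps (1) and (2) above."""
    layers, H = [], X
    for n_hid in layer_sizes:
        ae = DenoisingAE(H.shape[1], n_hid).train(H)
        layers.append(ae)
        H = ae.encode(H)                # h^(i) becomes the next input
    return layers, H                    # H feeds the final linear layer
```

The final linear layer mapping H to the coordinate (step (3)) would be fitted afterwards, for example by least squares on the SLAM coordinates.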
In the fourth step, once the classification and prediction models are trained, the binocular camera is removed. Thereafter, the signal strength of the target to be located is first passed to the classification model, which determines that the target is in the i-th area; the signal strength is then fed to the i-th prediction model to obtain the target coordinate. Because the environment within the area is not constant (the positions of static objects may shift, parts of the building may be remodeled, people move about), and such changes affect signal strength, the network models are retrained periodically to adapt to environmental change within the area and to maintain positioning accuracy.
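The fourth step's two-stage inference, classifying first and then regressing within the chosen area, reduces to a few lines. Here `classifier` and `predictors` are placeholders standing in for the trained GA-BP model and the m trained prediction models, not the patent's own code:

```python
def locate(rssi, classifier, predictors):
    """Two-stage inference: the classifier picks the area index i, then
    area i's prediction model maps the same RSSI vector to a coordinate.
    `classifier` is a callable rssi -> area index in 0..m-1;
    `predictors` maps each area index to a callable rssi -> (x, y)."""
    i = classifier(rssi)
    return i, predictors[i](rssi)
```

Keeping one regressor per area, rather than a single global one, lets each model specialise in the local signal-propagation conditions of its area.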
In the first step, the number of base stations installed grows with the size of the area, and symmetrical placement should be ensured as far as possible. When the positioning area is large, the whole area can be divided into a grid: the large area to be covered is split into several sub-areas, and each sub-area is then divided at fixed intervals. During signal-strength acquisition, the sub-area containing the object to be located and several adjacent sub-areas are determined, and the signal values received by the base stations of those areas serve as the representative values for the point. To improve the accuracy of the received signal values, the acquired data can be Gaussian-filtered before being recorded, reducing the problems caused by signal fluctuation.
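The patent says only that the acquired RSSI data "can be Gaussian-filtered" before recording. One conventional reading of that phrase, sketched here as an assumption rather than the patent's definition, is to fit a Gaussian to the repeated readings at a sampling point, discard readings more than k standard deviations from the mean, and average the survivors:

```python
import numpy as np

def gaussian_filter_rssi(samples, k=1.0):
    """Keep RSSI readings within k standard deviations of the sample mean
    and average the survivors, suppressing outlier fluctuations."""
    x = np.asarray(samples, dtype=float)
    mu, sigma = x.mean(), x.std()
    kept = x[np.abs(x - mu) <= k * sigma] if sigma > 0 else x
    return kept.mean()
```

A burst of multipath fading then barely moves the stored value, whereas a raw average would be dragged toward the outlier.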
In the second step, information is read from the vision sensor. The distance between the camera and an object is determined by comparing the differing image information in the left and right views: each landmark in the field of view is imaged by the camera as pixels in the imaging plane, and the distance from each pixel to the camera can be obtained through the imaging model of the binocular camera. The data acquired by the camera are then passed to the front-end visual odometry, which estimates the change of motion pose between adjacent images, recovers local environment information, and extracts and matches feature points in the video stream from the camera to obtain the approximate motion trajectory of the target to be located. At the same time, the result is preliminarily optimized to obtain an optimal coordinate solution and passed to the back end of the visual SLAM, which eliminates the accumulated error of the visual odometry and further computes and analyzes the data the front end omitted; a nonlinear graph-optimization algorithm then produces a map and trajectory that better match reality. Loop-closure detection is then used to check the front and back ends for omissions, that is, to perform regional error correction on parts that were not optimized, further improving positioning accuracy. Finally, a real-time map is built: a real-time map model of the small area is established using monocular dense reconstruction. The map is constructed according to the tasks the target must execute, namely positioning each sampling point from the first step to obtain the coordinates Y(x, y) of every sampling point in the area.
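The binocular ranging in this step rests on the standard pinhole stereo model: for a matched landmark, depth Z = f * b / d, where f is the focal length in pixels, b the baseline between the two cameras, and d the disparity (left-image x minus right-image x of the match) in pixels. A minimal sketch of that relation (the numeric values in the test are made up for illustration):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo model: depth Z = f * b / d.
    f_px: focal length in pixels; baseline_m: camera baseline in metres;
    disparity_px: horizontal offset of a matched point between views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px
```

Because depth is inversely proportional to disparity, nearby landmarks (large d) are measured much more precisely than distant ones, which is one reason the back-end optimization and loop closure described above are needed.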
The invention has the following beneficial effects. Because shielding and interference from buildings severely attenuate satellite signals, effective positioning is impossible in some areas; with this method, targets in such areas can be positioned effectively. The method also overcomes several shortcomings of existing indoor positioning technologies: positioning the target with SLAM technology combined with deep learning gives higher positioning accuracy, wider coverage, lower cost, and greater practical value. And although visual SLAM, as a computer-vision-based positioning approach, normally requires a huge amount of computation and introduces a certain positioning delay, this method solves both problems well, since the vision pipeline is needed only during training.
Drawings
Fig. 1 is a schematic flow chart of the area location method combining SLAM technology with deep learning according to the present invention.
Fig. 2 is a flow chart of the model algorithm.
Fig. 3 is a visual SLAM classic framework.
Detailed Description
The following further describes a specific embodiment of the present invention with reference to the drawings and technical solutions.
An area positioning method combining SLAM technology and deep learning is applied to specific examples in factories:
In the first step, a base station is installed in a factory 90 m long and 30 m wide; the base station can receive the signal strength of a target within the area. The positioning area is divided into three small areas according to the size and actual conditions of the factory, sampling points are laid out at fixed intervals, and the target to be located wears an instrument capable of measuring signal strength. Finally, the signal strength is collected at the sampling points of each of the three areas, and the collected RSSI values are recorded and stored.
In the second step, the first of the three small areas is selected and a binocular camera is mounted on the target to be located. The distance between the camera and an object is determined by comparing the differing image information in the left and right views: each landmark in the field of view is imaged as pixels in the imaging plane, and the distance from each pixel to the camera is obtained through the imaging model of the binocular camera. The data acquired by the camera are then passed to the front-end visual odometry, which estimates the change of motion pose between adjacent images, recovers local environment information, and extracts and matches feature points in the video stream to obtain the approximate motion trajectory of the target. At the same time, the result is preliminarily optimized to obtain an optimal coordinate solution and passed to the back end of the visual SLAM, which eliminates the accumulated error of the visual odometry and further computes and analyzes the data the front end omitted; a nonlinear graph-optimization algorithm then produces a map and trajectory that better match reality. Loop-closure detection is used to check the front and back ends for omissions, performing regional error correction on parts that were not optimized and further improving positioning accuracy. Finally, a real-time map model of the small area is established using monocular dense reconstruction, constructed according to the tasks the target must execute, namely positioning each sampling point from the first step to obtain the coordinates Y(x, y) of every sampling point in the area.
The same procedure is applied to the second and third small areas.
In the third step, a classification model is trained whose input is the signal strength X obtained in the first step and whose output is one of the 3 divided positioning areas; that is, given the filtered signal strength as input, the model determines that the positioning target is in the i-th area (i = 1, 2, 3). Prediction models are then trained separately for the 3 positioning areas: taking the signal strength measured at the target's position in the i-th area as input X and the coordinate obtained in the second step as output Y, the target's signal strength and coordinate form a pair of corresponding data (X, Y) used to train the i-th prediction model.
A GA-BP neural network is introduced for the classification model: a genetic algorithm optimizes the initial weights and thresholds of the neural network, effectively preventing it from falling into a local optimum during training. With the factory divided into 3 positioning areas, the classification model has 2 hidden layers, the output layer is connected to a softmax classifier, and the number of output nodes is 3. An output of 0 indicates that the target is in the first area, an output of 1 the second area, and an output of 2 the third area.
The prediction model introduces denoising autoencoders and a stacked autoencoder network. It uses a 10-layer network, and coordinates are obtained directly from the input signal strength.
In the fourth step, once the classification and prediction models are trained, the binocular camera is removed. Thereafter, the signal strength of the target to be located is first passed to the classification model, which determines that the target is in the i-th area; the signal strength is then fed to the i-th prediction model to obtain the target coordinate. The network models are retrained every six months to adapt to environmental change within the area and to maintain positioning accuracy.

Claims (1)

1. A region positioning method combining SLAM technology with deep learning is characterized by comprising the following steps:
firstly, installing a base station in a certain specific area, and requiring the base station to receive the signal intensity of a target in the area; firstly, dividing a positioning area into m small areas according to the size of an actual scene and the condition of an obstacle, dividing sampling points according to a certain interval, and then wearing an instrument capable of obtaining signal intensity by a target to be measured; finally, collecting each signal intensity X at sampling points in m small areas respectively, and recording and storing each collected RSSI value;
secondly, positioning by using a binocular camera, installing the binocular camera on the target to be detected, and obtaining coordinates Y (x, Y) of each sampling point of the target to be detected in each small area by using a classic frame of a visual SLAM, namely a sensor data processing module, a front-end visual odometer module, a rear-end optimization module, a loop detection module and a drawing module;
in the third step, training a classification model whose input is the signal strength X obtained in the first step and whose output is one of the m divided small areas; that is, given the filtered signal strength as input, it is determined that the target to be detected is in the i-th small area, where i = 1, 2, …, m; prediction models are trained separately for the m small areas: taking the signal strength at the position of the target to be detected in the i-th small area as input X and the coordinate obtained in the second step as output Y, the i-th prediction model is trained with X and Y as corresponding training data;
introducing a GA-BP neural network into the classification model; optimizing the initial weight and the threshold of the GA-BP neural network by utilizing a genetic algorithm, and effectively avoiding the GA-BP neural network from falling into local optimum during training; the hidden layer of the classification model is 2 layers, the output layer is connected with a softmax classifier, and the number of output nodes is m; outputting 0 to represent that the target to be detected is located in the 1 st area, outputting 1 to represent that the target to be detected is located in the 2 nd area, and so on, and outputting m-1 to represent that the target to be detected is located in the mth area;
introducing denoising self-coding and stacked self-coding networks into the prediction model;
the prediction model is provided with n layers of networks, and the specific network model is as follows:
(1) Layer 1 of the prediction model: first train an autoencoder to obtain the first-order feature h^(1) of the signal strength;
(2) Layer i of the prediction model: take the feature h^(i-1) output by layer i-1 as input, self-encode it, and obtain the feature h^(i);
(3) Layer n of the prediction model: produce a linear output from the feature h^(n-1) obtained in step (2) to obtain a coordinate;
(4) Combining the n layers forms the prediction model, which obtains coordinates from the input signal strength;
fourthly, after the classification model and the prediction model are trained, removing the binocular camera; firstly, the signal intensity of the target to be detected is transmitted into the classification model, the target to be detected is judged to be located in the ith area, then the signal intensity is input into the ith prediction model, and the target coordinate xyz is obtained.
CN202011121186.8A 2020-07-24 2020-10-20 Area positioning method combining SLAM technology with deep learning Active CN112261719B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010720502 2020-07-24
CN202010720502.7 2020-07-24

Publications (2)

Publication Number Publication Date
CN112261719A true CN112261719A (en) 2021-01-22
CN112261719B CN112261719B (en) 2022-02-11

Family

ID=74245170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011121186.8A Active CN112261719B (en) 2020-07-24 2020-10-20 Area positioning method combining SLAM technology with deep learning

Country Status (1)

Country Link
CN (1) CN112261719B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113573232A (en) * 2021-07-13 2021-10-29 深圳优地科技有限公司 Robot roadway positioning method, device, equipment and storage medium
CN114125698A (en) * 2021-05-07 2022-03-01 南京邮电大学 Positioning method based on channel state information and depth image
CN114222240A (en) * 2021-10-29 2022-03-22 中国石油大学(华东) Multi-source fusion positioning method based on particle filtering

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103209417A (en) * 2013-03-05 2013-07-17 北京邮电大学 Method and device for predicting spectrum occupancy state based on neural network
US20160025498A1 (en) * 2014-07-28 2016-01-28 Google Inc. Systems and Methods for Performing a Multi-Step Process for Map Generation or Device Localizing
WO2018176195A1 (en) * 2017-03-27 2018-10-04 中国科学院深圳先进技术研究院 Method and device for classifying indoor scene
CN109341691A (en) * 2018-09-30 2019-02-15 百色学院 Intelligent indoor positioning system and its localization method based on icon-based programming
US20190150006A1 (en) * 2017-11-15 2019-05-16 Futurewei Technologies, Inc. Predicting received signal strength in a telecommunication network using deep neural networks
CN110264154A (en) * 2019-05-28 2019-09-20 南京航空航天大学 A kind of crowdsourcing signal map constructing method based on self-encoding encoder
CN110866140A (en) * 2019-11-26 2020-03-06 腾讯科技(深圳)有限公司 Image feature extraction model training method, image searching method and computer equipment

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103209417A (en) * 2013-03-05 2013-07-17 北京邮电大学 Method and device for predicting spectrum occupancy state based on neural network
US20160025498A1 (en) * 2014-07-28 2016-01-28 Google Inc. Systems and Methods for Performing a Multi-Step Process for Map Generation or Device Localizing
WO2018176195A1 (en) * 2017-03-27 2018-10-04 中国科学院深圳先进技术研究院 Method and device for classifying indoor scene
US20190150006A1 (en) * 2017-11-15 2019-05-16 Futurewei Technologies, Inc. Predicting received signal strength in a telecommunication network using deep neural networks
CN109341691A (en) * 2018-09-30 2019-02-15 百色学院 Intelligent indoor positioning system and its localization method based on icon-based programming
CN110264154A (en) * 2019-05-28 2019-09-20 南京航空航天大学 A kind of crowdsourcing signal map constructing method based on self-encoding encoder
CN110866140A (en) * 2019-11-26 2020-03-06 腾讯科技(深圳)有限公司 Image feature extraction model training method, image searching method and computer equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Guoliang et al., "A Fast Binocular SLAM Algorithm Fusing Direct and Feature-Based Methods", Peking University Core Journals (《北大核心》) *
Li Jiajun, "Research on WiFi Indoor Positioning Algorithms Based on Deep Learning", China Masters' Theses Full-text Database *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114125698A (en) * 2021-05-07 2022-03-01 南京邮电大学 Positioning method based on channel state information and depth image
CN114125698B (en) * 2021-05-07 2024-05-17 南京邮电大学 Positioning method based on channel state information and depth image
CN113573232A (en) * 2021-07-13 2021-10-29 深圳优地科技有限公司 Robot roadway positioning method, device, equipment and storage medium
CN113573232B (en) * 2021-07-13 2024-04-19 深圳优地科技有限公司 Robot roadway positioning method, device, equipment and storage medium
CN114222240A (en) * 2021-10-29 2022-03-22 中国石油大学(华东) Multi-source fusion positioning method based on particle filtering

Also Published As

Publication number Publication date
CN112261719B (en) 2022-02-11

Similar Documents

Publication Publication Date Title
CN112261719B (en) Area positioning method combining SLAM technology with deep learning
CN110675418B (en) Target track optimization method based on DS evidence theory
EP3633615A1 (en) Deep learning network and average drift-based automatic vessel tracking method and system
CN107967473B (en) Robot autonomous positioning and navigation based on image-text recognition and semantics
CN111123257B (en) Radar moving target multi-frame joint detection method based on graph space-time network
CN101576384B (en) Indoor movable robot real-time navigation method based on visual information correction
CN106584451B (en) automatic transformer substation composition robot and method based on visual navigation
CN111174781B (en) Inertial navigation positioning method based on wearable device combined target detection
CN109961460A (en) A kind of multiple target method for inspecting based on improvement YOLOv3 model
CN111060924B (en) SLAM and target tracking method
CN114419825B (en) High-speed rail perimeter intrusion monitoring device and method based on millimeter wave radar and camera
CN105225482A (en) Based on vehicle detecting system and the method for binocular stereo vision
CN104023228A (en) Self-adaptive indoor vision positioning method based on global motion estimation
CN105760846A (en) Object detection and location method and system based on depth data
CN115032651A (en) Target detection method based on fusion of laser radar and machine vision
CN114240868A (en) Unmanned aerial vehicle-based inspection analysis system and method
CN117029840A (en) Mobile vehicle positioning method and system
CN115979250B (en) Positioning method based on UWB module, semantic map and visual information
CN116978009A (en) Dynamic object filtering method based on 4D millimeter wave radar
CN111726535A (en) Smart city CIM video big data image quality control method based on vehicle perception
CN116645645A (en) Coal Mine Transportation Safety Determination Method and Coal Mine Transportation Safety Determination System
CN111144279A (en) Method for identifying obstacle in intelligent auxiliary driving
CN114152955A (en) High-precision obstacle identification system based on SLAM technology
CN114842660A (en) Unmanned lane track prediction method and device and electronic equipment
CN115035470A (en) Low, small and slow target identification and positioning method and system based on mixed vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant