CN112689234B - Indoor vehicle positioning method, device, computer equipment and storage medium - Google Patents

Indoor vehicle positioning method, device, computer equipment and storage medium

Info

Publication number
CN112689234B
CN112689234B (application CN202011581538.8A)
Authority
CN
China
Prior art keywords
positioning data
vehicle
fusion
positioning
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011581538.8A
Other languages
Chinese (zh)
Other versions
CN112689234A (en)
Inventor
卫璁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Aibee Technology Co Ltd
Original Assignee
Beijing Aibee Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Aibee Technology Co Ltd
Priority to CN202011581538.8A
Publication of CN112689234A
Application granted
Publication of CN112689234B

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Abstract

The application relates to an indoor vehicle positioning method, an indoor vehicle positioning device, computer equipment and a storage medium. The method comprises the following steps: acquiring an image to be detected acquired at the current moment, extracting vehicle characteristics from the image to be detected, and calculating first positioning data from the vehicle characteristics; acquiring a Bluetooth signal uploaded by a mobile terminal, and determining second positioning data from the Bluetooth signal; acquiring the fusion positioning data and speed of the vehicle at the previous moment, and predicting from them the predicted positioning data of the vehicle at the current moment; and performing fusion calculation on the predicted positioning data and either the first positioning data or the second positioning data to obtain the fusion positioning data at the current moment. This fusion improves positioning precision, shortens positioning delay and provides stronger stability.

Description

Indoor vehicle positioning method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of indoor positioning technologies, and in particular, to a method and apparatus for positioning an indoor vehicle, a computer device, and a storage medium.
Background
As passenger-car ownership grows, the shortage of parking spaces keeps widening, and as much as 30% of driving time is spent parking and searching for a space. A vehicle positioning and navigation system can help drivers park more efficiently, instead of wasting precious time searching for a parking space.
Current vehicle positioning mainly relies on GPS (Global Positioning System). However, GPS signals are weak in indoor parking lots, which leads to large positioning delay and large positioning error.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an indoor vehicle positioning method, apparatus, computer device, and storage medium that can reduce positioning errors and shorten positioning time delay.
In a first aspect, there is provided an indoor vehicle positioning method, the method comprising:
acquiring an image to be detected acquired at the current moment, extracting vehicle characteristics according to the image to be detected, and calculating according to the vehicle characteristics to obtain first positioning data;
acquiring a Bluetooth signal uploaded by a mobile terminal, and determining second positioning data according to the Bluetooth signal; the Bluetooth signal is a signal transmitted by the transmitting device at the current moment and received by the mobile terminal;
acquiring fusion positioning data and speed of a vehicle at the previous moment, and predicting according to the fusion positioning data and the speed to obtain predicted positioning data of the vehicle at the current moment; wherein the previous moment is the moment immediately preceding the current moment;
and carrying out fusion calculation according to any one of the predicted positioning data and the first positioning data or the second positioning data to obtain fusion positioning data at the current moment.
In one embodiment, performing fusion calculation on the predicted positioning data and either the first positioning data or the second positioning data to obtain the fusion positioning data at the current moment includes:
comparing the time when the Bluetooth signal is received with the time when the image to be detected is received;
if the time of receiving the image to be detected is earlier than the time of receiving the Bluetooth signal, carrying out fusion calculation according to the predicted positioning data and the first positioning data to obtain fusion positioning data of the current time;
if the time of receiving the Bluetooth signal is earlier than the time of receiving the image to be detected, carrying out fusion calculation according to the predicted positioning data and the second positioning data to obtain fusion positioning data of the current time.
In one embodiment, the Bluetooth signal comprises an identifier of the transmitting device and a channel identifier;
determining second positioning data from the bluetooth signal, comprising:
and determining signal strength according to the Bluetooth signal, and searching from the pre-established mapping relation of the signal strength, the equipment identification, the channel identification and the position according to the signal strength, the identification of the transmitting equipment and the channel identification to determine second positioning data.
In one embodiment, the vehicle characteristic includes at least one of a vehicle logo, a vehicle model, a vehicle body color;
extracting the vehicle features according to the image to be detected, including:
inputting the image to be detected into a vehicle feature recognition model to obtain vehicle features; the vehicle feature recognition model is used for extracting at least one feature of a vehicle identifier, a vehicle model or a vehicle body color.
In one embodiment, the calculating according to the vehicle feature to obtain the first positioning data includes:
Identifying a location of the vehicle feature in the image to be detected;
projecting the position of the vehicle feature in the image to be detected into a three-dimensional coordinate system corresponding to a preset projection matrix, and obtaining the three-dimensional coordinate of the position in the three-dimensional coordinate system;
and positioning the vehicle according to the three-dimensional coordinates to obtain first positioning data.
In one embodiment, acquiring the speed of the vehicle at the previous moment includes:
acquiring track data of the vehicle at the previous moment; the track data comprises at least two pieces of fusion positioning data, including the fusion positioning data of the previous moment and the fusion positioning data of the moment before the previous moment;
determining a moving distance from the fusion positioning data of the previous moment and of the moment before it, and calculating the speed from the moving distance and the time interval;
predicting according to the fusion positioning data and the speed to obtain the predicted positioning data of the vehicle at the current moment, wherein the method comprises the following steps:
and calculating according to the fusion positioning data and speed of the last moment and the time interval between the last moment and the current moment to obtain the predicted positioning data of the vehicle at the current moment.
In one embodiment, the method further comprises:
if the first positioning data or the second positioning data is the first measurement received, the first positioning data or the second positioning data is used directly as the fusion positioning data at the current moment.
In a second aspect, there is provided an indoor vehicle positioning apparatus comprising:
the first positioning module is used for acquiring an image to be detected acquired at the current moment, extracting vehicle characteristics according to the image to be detected, and calculating according to the vehicle characteristics to obtain first positioning data;
the second positioning module is used for acquiring a Bluetooth signal uploaded by the mobile terminal and determining second positioning data according to the Bluetooth signal; the Bluetooth signal is a signal which is transmitted by the transmitting equipment at the current moment and received by the mobile terminal;
the acquisition module is used for acquiring the fusion positioning data and the speed of the vehicle at the previous moment, and predicting according to the fusion positioning data and the speed to obtain the predicted positioning data of the vehicle at the current moment; wherein the previous moment is the moment immediately preceding the current moment;
and the fusion positioning module is used for carrying out fusion calculation according to any one of the predicted positioning data and the first positioning data or the second positioning data to obtain fusion positioning data at the current moment.
In a third aspect, a computer device is provided, comprising a memory storing a computer program and a processor implementing the following steps when the computer program is executed:
acquiring an image to be detected acquired at the current moment, extracting vehicle characteristics according to the image to be detected, and calculating according to the vehicle characteristics to obtain first positioning data;
acquiring a Bluetooth signal uploaded by a mobile terminal, and determining second positioning data according to the Bluetooth signal; the Bluetooth signal is a signal which is transmitted by the transmitting equipment at the current moment and received by the mobile terminal;
acquiring fusion positioning data and speed of a vehicle at the previous moment, and predicting according to the fusion positioning data and the speed to obtain predicted positioning data of the vehicle at the current moment; wherein the previous moment is the moment immediately preceding the current moment;
and carrying out fusion calculation according to any one of the predicted positioning data and the first positioning data or the second positioning data to obtain fusion positioning data at the current moment.
In a fourth aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring an image to be detected acquired at the current moment, extracting vehicle characteristics according to the image to be detected, and calculating according to the vehicle characteristics to obtain first positioning data;
acquiring a Bluetooth signal uploaded by a mobile terminal, and determining second positioning data according to the Bluetooth signal; the Bluetooth signal is a signal which is transmitted by the transmitting equipment at the current moment and received by the mobile terminal;
acquiring fusion positioning data and speed of a vehicle at the previous moment, and predicting according to the fusion positioning data and the speed to obtain predicted positioning data of the vehicle at the current moment; wherein the previous moment is the moment immediately preceding the current moment;
and carrying out fusion calculation according to any one of the predicted positioning data and the first positioning data or the second positioning data to obtain fusion positioning data at the current moment.
According to the indoor vehicle positioning method, device, computer equipment and storage medium, vehicle characteristics are extracted from the image to be detected acquired at the current moment, and first positioning data are calculated from the vehicle characteristics; a Bluetooth signal uploaded by the mobile terminal is acquired, and second positioning data are determined from the Bluetooth signal; the fusion positioning data and speed of the vehicle at the previous moment are acquired, and the predicted positioning data of the vehicle at the current moment are obtained by prediction; fusion calculation is then performed on the predicted positioning data and either the first positioning data (calculated from the image data) or the second positioning data (calculated from the Bluetooth signal) to obtain the fusion positioning data at the current moment, which improves positioning precision, shortens positioning delay and provides stronger stability.
Drawings
FIG. 1 is an application environment diagram of a method of locating an indoor vehicle in one embodiment;
FIG. 2 is a flow chart of a method of locating an indoor vehicle in one embodiment;
FIG. 3 is a flow chart of an indoor vehicle positioning step according to another embodiment;
FIG. 4 is a flow chart of a method for locating an indoor vehicle in another embodiment;
FIG. 5 is a flow chart of a method for locating an indoor vehicle in another embodiment;
FIG. 6 is a block diagram of an indoor vehicle positioning device in one embodiment;
fig. 7 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
At present, mobile phone Bluetooth positioning is achieved with iBeacon signals that a smartphone can receive: the phone is placed in the vehicle and moves with it, so the vehicle position is located through the Bluetooth of the phone the owner has placed in the vehicle. Mobile phone Bluetooth positioning can reach an accuracy of 1-3 meters, but it depends on the phone's Bluetooth function being turned on, its initial positioning is slow, Bluetooth performance differs widely between phone models, and the position can deviate under certain conditions and needs correction.
Camera visual positioning installs cameras over the lanes of a parking lot, or in other application scenes such as car washes, and uses computer vision technology to extract and recognize vehicle features so as to locate the vehicle in real time. Camera visual positioning is fast and, since the camera acquires data passively, it is unaffected by the mobile phone hardware in the vehicle; however, it is easily affected by ambient light, and vehicle feature recognition may suffer, for example, in dim lighting.
Therefore, the application combines mobile phone Bluetooth positioning with camera visual positioning. Through a multi-sensor fusion filtering algorithm, the data of the sensors complement one another, enabling high-precision, high-stability forward parking-space guidance in parking lots. This provides a comprehensive indoor positioning solution with high precision, fast response, good stability and strong adaptability, and gives vehicle owners a better parking-space guidance experience.
The indoor vehicle positioning method provided by the application can be applied to the application environment shown in FIG. 1, in which the mobile terminal 102 and the image acquisition device 104 communicate with the server 106 through a network. The mobile terminal 102 collects Bluetooth signals transmitted by the transmitting device and sends them to the server 106, and the video stream captured by the image acquisition device 104 is uploaded to the server 106. The mobile terminal 102 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer or portable wearable device; the image acquisition device 104 may be a black-and-white camera, a color camera, an infrared camera, etc.; and the server 106 may be implemented by a standalone server or a server cluster formed by a plurality of servers.
In one embodiment, as shown in fig. 2, there is provided an indoor vehicle positioning method, which is illustrated by taking an example that the method is applied to the server in fig. 1, and includes the following steps:
step 202, acquiring an image to be detected acquired at the current moment, extracting vehicle characteristics according to the image to be detected, and calculating according to the vehicle characteristics to obtain first positioning data.
The image to be detected may be an original frame of the video stream acquired by the image acquisition device 104, or an image obtained by applying image processing methods to that original frame. The vehicle characteristics may include the vehicle logo (i.e., license plate number), the vehicle model, the vehicle body color, and the like. The first positioning data refers to the vehicle position data calculated from the image.
Specifically, the server takes a frame of the original image from the video stream acquired by the image acquisition device 104 at the current moment as the image to be detected, and inputs the image to be detected into a pre-trained recognition model to extract the vehicle features. According to the position of the vehicle features in the image to be detected, the features are mapped into the corresponding three-dimensional coordinate system, the first positioning data is calculated, and the first positioning data is stored in the first position matrix.
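The vision positioning step described above can be sketched as follows. This is a minimal illustration, assuming a hypothetical feature model that returns the feature's pixel position and a 3x3 image-to-ground homography standing in for the "preset projection matrix"; neither name comes from the patent.

```python
import numpy as np

def locate_from_image(frame, feature_model, H_ground):
    """Sketch of the vision positioning step: extract vehicle features
    from one frame, then project the feature's pixel position onto the
    ground plane to obtain the first positioning data.  `feature_model`
    and `H_ground` are hypothetical stand-ins for the trained
    recognition model and the preset projection matrix."""
    features = feature_model(frame)       # e.g. logo, model, body colour
    u, v = features["pixel_position"]     # feature position in the image
    x, y, w = H_ground @ np.array([u, v, 1.0])
    return np.array([x / w, y / w])       # first positioning data (ground coords)
```

With an identity homography the pixel position maps straight onto ground coordinates, which makes the projection step easy to sanity-check.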
Step 204, acquiring a Bluetooth signal uploaded by the mobile terminal, and determining second positioning data according to the Bluetooth signal; the bluetooth signal is a signal that the transmitting device transmits at the current time and is received by the mobile terminal.
The transmitting device is an iBeacon beacon, and the Bluetooth signal is based on the Bluetooth signal broadcast by the iBeacon beacon. The second positioning data refers to vehicle position data calculated based on the bluetooth signal.
Specifically, when the mobile device turns on the bluetooth function and enters the broadcasting range of the iBeacon beacon, the mobile device can receive the bluetooth signal and send the bluetooth signal to the server. And after receiving the Bluetooth signal, the server extracts the strength of the Bluetooth signal and the carried useful information, further calculates to obtain second positioning data, and stores the second positioning data in a second position matrix.
Step 206, acquiring the fusion positioning data and speed of the vehicle at the previous moment, and predicting according to the fusion positioning data and speed to obtain the predicted positioning data of the vehicle at the current moment; wherein the previous moment is the moment immediately preceding the current moment.
The fused positioning data refers to optimal positioning data obtained by optimizing the estimated positioning data, the first positioning data and the second positioning data by using a filtering algorithm.
Specifically, the fusion positioning data of the previous moment, i.e. the optimal positioning state of the previous moment, and the speed state of the vehicle are obtained from the state matrix; the displacement is calculated from the speed state, and the predicted positioning data of the vehicle at the current moment is obtained from the optimal positioning state of the previous moment plus the displacement.
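The prediction step above amounts to a constant-velocity extrapolation. A minimal sketch, assuming plain (x, y) tuples rather than the patent's state matrices:

```python
def estimate_velocity(fused_prev, fused_prev2, dt):
    """Speed at the previous moment, from the fused positions of the
    previous moment and the moment before it (the trajectory data)."""
    return ((fused_prev[0] - fused_prev2[0]) / dt,
            (fused_prev[1] - fused_prev2[1]) / dt)

def predict_position(fused_prev, velocity, dt):
    """Constant-velocity prediction used as the prior for fusion:
    displacement = velocity * dt, added to the last fused position."""
    return (fused_prev[0] + velocity[0] * dt,
            fused_prev[1] + velocity[1] * dt)
```

The predicted position then serves as the predicted value in the fusion step of step 208.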
And step 208, performing fusion calculation according to the predicted positioning data and any one of the first positioning data or the second positioning data to obtain fusion positioning data at the current moment.
Specifically, whether the measurement data received at the current moment is first positioning data or second positioning data is determined, then filtering is carried out by utilizing a filtering algorithm according to the measurement data received at the current moment and the predicted positioning data, the optimal positioning state at the current moment is obtained, and the optimal positioning state at the current moment is stored in a positioning state matrix. The positioning state is optimized through continuous iteration of the filtering algorithm, and the positioning state is more and more close to a true value, so that the positioning precision is improved.
In the indoor vehicle positioning method, the image to be detected acquired at the current moment is acquired, vehicle characteristics are extracted from it, and first positioning data are calculated from the vehicle characteristics; a Bluetooth signal uploaded by the mobile terminal is acquired, and second positioning data are determined from the Bluetooth signal; the fusion positioning data and speed of the vehicle at the previous moment are acquired, and the predicted positioning data at the current moment are obtained by prediction; fusion calculation is then performed on the predicted positioning data and either the first positioning data (calculated from the image data) or the second positioning data (calculated from the Bluetooth signal) to obtain the fusion positioning data at the current moment, which improves positioning precision, shortens positioning delay and provides stronger stability.
In one embodiment, performing fusion calculation on the predicted positioning data and either the first positioning data or the second positioning data to obtain the fusion positioning data at the current time includes:
comparing the time when the Bluetooth signal is received with the time when the image to be detected is received;
if the time of receiving the image to be detected is earlier than the time of receiving the Bluetooth signal, carrying out fusion calculation according to the predicted positioning data and the first positioning data to obtain fusion positioning data of the current time;
if the time of receiving the Bluetooth signal is earlier than the time of receiving the image to be detected, fusion calculation is carried out according to the predicted positioning data and the second positioning data to obtain fusion positioning data of the current time.
Specifically, when the received Bluetooth signal and the image to be detected both carry time information, the time when the Bluetooth signal is received is compared with the time when the image to be detected is received. If the image to be detected is received earlier than the Bluetooth signal, filtering is performed with a filtering algorithm on the predicted positioning data and the first positioning data, and the fusion positioning data at the current time is calculated. The filtering algorithm may be a Kalman filtering algorithm: the predicted positioning data serves as the predicted value and the first positioning data as the measured value; since both the measured value and the predicted value contain errors, an error coefficient is introduced for adjustment. Because the positioning state value and the speed state value follow a normal distribution, the error coefficient can be obtained from their means and variances, and the fusion positioning data = p × first positioning data + (1 − p) × predicted positioning data, where p is the error coefficient.
If the Bluetooth signal is received earlier than the image to be detected, filtering is performed with the filtering algorithm on the predicted positioning data and the second positioning data, and the fusion positioning data at the current time is calculated. The filtering algorithm may be a Kalman filtering algorithm, with the predicted positioning data as the predicted value and the second positioning data as the measured value, so that the fusion positioning data = p × second positioning data + (1 − p) × predicted positioning data, where p is the error coefficient.
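The two branches above can be sketched together: pick whichever measurement arrived earlier and blend it with the prediction using the error coefficient p. In a full Kalman filter p would be derived from the state means and variances; here it is a fixed assumed constant for illustration:

```python
def fuse(predicted, image_meas, bt_meas, t_image, t_bluetooth, p):
    """Fusion step sketched from the description above: whichever
    measurement arrived earlier (camera or Bluetooth) is blended with
    the predicted position as
        fused = p * measurement + (1 - p) * predicted.
    All positions are (x, y) tuples; p is an assumed fixed error
    coefficient standing in for a variance-derived gain."""
    meas = image_meas if t_image < t_bluetooth else bt_meas
    return tuple(p * m + (1.0 - p) * q for m, q in zip(meas, predicted))
```

For example, with p = 0.5 the fused position lies halfway between the chosen measurement and the prediction.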
Alternatively, another implementation of this embodiment may use a time flag to determine whether the measurement data at the current moment is the first positioning data or the second positioning data. The time flag takes the value 0 or 1.
If the time flag at the current moment is 1, filtering is performed with the filtering algorithm on the predicted positioning data and the first positioning data to calculate the fusion positioning data at the current moment, and the time flag is then set to 0 so that the second positioning data is used at the next moment.
If the time flag at the current moment is 0, filtering is performed with the filtering algorithm on the predicted positioning data and the second positioning data to calculate the fusion positioning data at the current moment, and the time flag is then set to 1 so that the first positioning data is used at the next moment.
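The alternating time-flag variant can be illustrated with a small helper; the class and method names are hypothetical:

```python
class MeasurementSelector:
    """Alternative described above: a 0/1 time flag alternates which
    source is fused at each step (1 = image-based first positioning
    data, 0 = Bluetooth-based second positioning data), then the flag
    is flipped for the next moment.  A minimal illustrative sketch."""
    def __init__(self, flag=1):
        self.flag = flag

    def pick(self, first_data, second_data):
        chosen = first_data if self.flag == 1 else second_data
        self.flag = 1 - self.flag   # use the other source next time
        return chosen
```

Successive calls therefore alternate between the two positioning sources.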
In this embodiment, Kalman filtering comprehensively exploits the advantages of the first positioning data, the second positioning data and the predicted positioning data, so that the calculated fusion positioning data approaches the true value ever more closely, improving positioning precision and shortening positioning delay. Meanwhile, the indoor positioning method provided by the application can switch between the first positioning data and the second positioning data: if either source becomes abnormal, the method automatically switches to the other, ensuring high availability, stability and stronger adaptability.
In one embodiment, the Bluetooth signal includes an identification of the transmitting device and a channel identification;
determining second positioning data from the bluetooth signal, comprising:
and determining signal strength according to the Bluetooth signal, and searching from the pre-established mapping relation of the signal strength, the equipment identification, the channel identification and the position according to the signal strength, the identification of the transmitting equipment and the channel identification to determine second positioning data.
Specifically, the Bluetooth signal includes an identification of the transmitting device and a channel identification, which represents the transmitting frequency point used by the transmitting device when transmitting the signal. For example, the frequency points corresponding to Bluetooth Low Energy may be 2.402GHz, 2.426GHz and 2.480GHz, and in application these three transmitting frequency points may be represented by different channel identifications. The Bluetooth signal may further include a universally unique identifier, a primary identifier and a secondary identifier, i.e. a 128-bit Universally Unique Identifier (UUID), a 16-bit primary identifier (Major) and a 16-bit secondary identifier (Minor); the channel identification may be carried in any one of the universally unique identifier, the primary identifier or the secondary identifier.
The signal strength (RSSI) is determined from the received Bluetooth signal; the pre-established mapping relation between signal strength, device identification, channel identification and position is then searched according to the signal strength, the identification of the transmitting device and the channel identification, and the matched position in the mapping relation is taken as the second positioning data.
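The fingerprint lookup can be sketched as follows; the table layout (a dict keyed by device and channel identifiers, holding (RSSI, position) survey samples) is an assumption for illustration, not the patent's storage format:

```python
def lookup_position(fingerprints, device_id, channel_id, rssi):
    """Fingerprint lookup sketched from the description: the offline
    survey produces a mapping from (device identification, channel
    identification) to (RSSI, position) samples; online, the sample
    whose recorded RSSI is closest to the measured one gives the
    second positioning data."""
    samples = fingerprints[(device_id, channel_id)]
    _, position = min(samples, key=lambda s: abs(s[0] - rssi))
    return position
```

Keying the table by channel identification as well as device identification is what avoids the per-channel differences in received strength mentioned below.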
As shown in fig. 3, in the offline data acquisition stage, a plurality of acquisition devices collect, at different positions, the Bluetooth signals transmitted by different transmitting devices; each acquisition device determines its own position data, and a mapping relation is established between the Bluetooth signal strengths and the positions. In the offline acquisition stage, the position data obtained from the Bluetooth signals can also be calibrated and fused to generate ground point cloud data; tracks and vector data are drawn in the ground point cloud data and field attributes of the data are edited; the edited data is then symbolized and the drawing is finished and shaded to obtain a high-precision semantic map. The high-precision semantic map and the current position of the vehicle are used to navigate the vehicle.
In the online data fusion positioning stage, the server acquires a frame of image to be detected from the video stream captured by a camera and extracts vehicle visual features from it to obtain the first positioning data (i.e., the vehicle track). The server also obtains the Bluetooth signal uploaded by the mobile terminal, determines the signal strength from it, and then searches the pre-established mapping between signal strength, device identification, channel identification, and position according to the signal strength, the identification of the transmitting device, and the channel identification (i.e., the Bluetooth features) to determine the second positioning data (i.e., online fingerprint matching/learning). The time at which the Bluetooth signal is received is compared with the time at which the image to be detected is received. If the image to be detected is received earlier, fusion calculation is performed on the predicted positioning data and the first positioning data to obtain the fused positioning data at the current moment; the fused positioning data (i.e., the vehicle position coordinates) is then output and sent to the vehicle owner's mobile phone. If the Bluetooth signal is received earlier, fusion calculation is instead performed on the predicted positioning data and the second positioning data, and the result is likewise output and sent to the vehicle owner's mobile phone, so that the position is updated in real time during navigation.
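The choice of which measurement enters the fusion at the current moment can be sketched as follows; the timestamp representation and the tie-breaking rule (the image wins on a tie) are assumptions for illustration:

```python
def select_measurement(first_data, image_time, second_data, bt_time):
    """Choose the measurement to fuse at the current moment based on
    which input arrived earlier, as described above.

    first_data  -- first positioning data, from the image to be detected
    second_data -- second positioning data, from the Bluetooth signal
    image_time / bt_time -- receipt timestamps in seconds
    """
    if image_time <= bt_time:   # image received earlier (or tie, by assumption)
        return ("camera", first_data)
    return ("bluetooth", second_data)

source, measurement = select_measurement((3.0, 4.0), 10.0, (3.2, 4.1), 10.4)
```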
In this embodiment, the second positioning data is determined from the Bluetooth signal and the corresponding mapping relationship. Since the Bluetooth signal carries the identification of the transmitting device and the channel identification, differences in received strength caused by different channels can be avoided, thereby improving positioning accuracy.
In one embodiment, the vehicle characteristic includes at least one of a vehicle logo, a vehicle model, or a vehicle body color;
extracting the vehicle features according to the image to be detected, including:
inputting the image to be detected into a vehicle feature recognition model to obtain vehicle features; the vehicle feature recognition model is used for extracting at least one feature of vehicle identification, vehicle type and vehicle body color.
Specifically, the vehicle feature recognition model is a pre-trained model capable of simultaneously recognizing features such as the vehicle logo, vehicle model, and vehicle body color. A convolutional neural network capable of learning multiple vehicle features simultaneously is designed in advance and trained on vehicle image samples annotated with features such as the vehicle logo, vehicle model, and vehicle body color, yielding the trained vehicle feature recognition model.
The image to be detected is input into the trained vehicle feature recognition model to obtain the recognized features, such as the vehicle logo, vehicle model, and vehicle body color.
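The shared-backbone, multi-head structure of such a model can be illustrated with trivial stand-ins; the "backbone" and threshold "heads" below are placeholders for illustration only, not a trained convolutional network:

```python
def backbone(image):
    """Stand-in shared feature extractor: the mean pixel value of a
    2D grayscale image given as a list of rows. A real model would
    compute a learned convolutional representation here."""
    flat = [p for row in image for p in row]
    return sum(flat) / len(flat)

def recognize_vehicle_features(image):
    """Placeholder multi-task recognizer: one shared backbone feeding
    three heads (logo, model, body color), mirroring the structure of
    the multi-feature network described above. The threshold rules are
    illustrative stand-ins for trained classification heads."""
    f = backbone(image)
    return {
        "logo": "brand-A" if f > 128 else "brand-B",
        "model": "sedan" if f > 64 else "suv",
        "color": "light" if f > 128 else "dark",
    }

features = recognize_vehicle_features([[200, 220], [210, 230]])
```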
In one embodiment, the calculating according to the vehicle characteristic to obtain the first positioning data includes:
identifying a location of the vehicle feature in the image to be detected;
projecting the position of the vehicle feature in the image to be detected into a three-dimensional coordinate system corresponding to a preset projection matrix, and obtaining the three-dimensional coordinate of the position in the three-dimensional coordinate system;
and positioning the vehicle according to the three-dimensional coordinates to obtain first positioning data.
Specifically, the position of the vehicle feature in the image to be detected is calculated from the identified vehicle feature. A three-dimensional coordinate system can be predefined; the position and boundary of the actual application scene in this coordinate system are then determined, and a projection matrix for projecting the image to be detected into the three-dimensional coordinate system is set according to the viewing angle of the image in the actual application scene. The actual application scene may be an indoor parking lot, a 4S dealership, a car wash, and so on; the embodiment of the invention does not limit the actual application scene.
The projection matrix may be composed of external parameters and internal parameters corresponding to the image to be detected. Optionally, the external parameters describe the position and orientation, in the three-dimensional coordinate system, of the device that captures the image to be detected. Optionally, the internal parameters approximate the physical characteristics of the device or apparatus that captures the image to be detected. For example: P = K[R|t], where P is the projection matrix, K is the internal parameter matrix approximating the physical characteristics of the capture device, R is the rotation describing the orientation of the capture device in the three-dimensional coordinate system, and t is the translation describing its position in that coordinate system.
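A numeric sketch of composing P = K[R|t] and applying it; the intrinsic and extrinsic values below are illustrative, not calibrated. Note that the positioning step described above applies the inverse of this mapping, taking an image position back into the three-dimensional coordinate system:

```python
def mat_mul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# Illustrative intrinsics K (focal lengths and principal point) and
# extrinsics [R|t] (identity rotation, camera offset 2 m along z).
K = [[800.0, 0.0, 320.0],
     [0.0, 800.0, 240.0],
     [0.0, 0.0, 1.0]]
Rt = [[1.0, 0.0, 0.0, 0.0],
      [0.0, 1.0, 0.0, 0.0],
      [0.0, 0.0, 1.0, 2.0]]

P = mat_mul(K, Rt)  # 3x4 projection matrix P = K[R|t]

def project(point3d):
    """Project a 3D point to pixel coordinates via P, then divide by
    the homogeneous coordinate."""
    x, y, z = point3d
    u, v, w = (sum(row[i] * c for i, c in enumerate((x, y, z, 1.0)))
               for row in P)
    return (u / w, v / w)

pixel = project((1.0, 0.5, 3.0))
```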
In this embodiment, the internal parameters of the projection matrix may be calibrated by a checkerboard calibration method. Specifically, a checkerboard of known, fixed size is placed in the field of view of the device or apparatus that captures the image to be detected, and the mapping matrix of the device is calculated by detecting the corner positions of the checkerboard in the two-dimensional image; this mapping matrix is the internal parameter matrix of the image acquisition device.
In this embodiment, the external parameters of the projection matrix may be calibrated by a point cloud matching method. Specifically, for two devices installed at adjacent positions in the actual application scene, the point positions corresponding to the point clouds of the overlapping area captured by the two devices are calibrated, thereby determining the relative position of the two devices and, in turn, the relative positions of all devices in the actual application scene. Therefore, after the three-dimensional coordinate system of the actual application scene is determined, the position and orientation in that coordinate system of each device that captures an image to be detected can be determined.
In this embodiment, the vehicle key points in the image are identified, and their three-dimensional coordinates in the three-dimensional coordinate system are determined, thereby accurately positioning the vehicle.
In one embodiment, the acquiring the speed of the vehicle at the last time includes:
acquiring track data of the vehicle at the previous moment; the track data comprises at least two pieces of fused positioning data, including the fused positioning data of the previous moment and the fused positioning data of the moment before the previous moment;
determining a moving distance according to the fused positioning data of the previous moment and of the moment before it, and calculating a speed according to the moving distance and the time interval;
predicting according to the fusion positioning data and the speed to obtain the predicted positioning data of the vehicle at the current moment, wherein the method comprises the following steps:
and calculating the predicted positioning data of the vehicle at the current moment according to the fused positioning data and speed of the previous moment and the time interval between the previous moment and the current moment.
Specifically, the track data of the vehicle at the previous moment is acquired; it includes the fused positioning data of the previous moment and of the moment before the previous moment. The straight-line distance between these two fused positions is calculated as the moving distance (i.e., the displacement), and the speed is obtained by dividing the moving distance by the time interval between those two moments.
The predicted positioning data of the vehicle at the current moment is then obtained as: predicted positioning data at the current moment = fused positioning data of the previous moment + speed × the time interval between the previous moment and the current moment.
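The speed estimate and constant-velocity prediction above can be sketched as follows; the positions, timestamps, and two-dimensional coordinate form are illustrative:

```python
def estimate_velocity(pos_prev, pos_prev2, dt):
    """Velocity from the two most recent fused positions: the
    displacement between them divided by their time interval dt."""
    return tuple((a - b) / dt for a, b in zip(pos_prev, pos_prev2))

def predict_position(pos_prev, velocity, dt):
    """Constant-velocity prediction, as stated above:
    predicted = fused position at the previous moment + velocity * dt,
    where dt is the interval from the previous to the current moment."""
    return tuple(p + v * dt for p, v in zip(pos_prev, velocity))

v = estimate_velocity((6.0, 4.0), (4.0, 3.0), 1.0)
predicted = predict_position((6.0, 4.0), v, 0.5)
```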
In one embodiment, the method further comprises:
if the first positioning data or the second positioning data are the first measured position data, the first positioning data or the second positioning data are directly used as fusion positioning data at the current moment.
Specifically, the first positioning data is stored in a first positioning matrix and the second positioning data in a second positioning matrix. If both the first positioning matrix and the second positioning matrix are currently empty, that is, the first or second positioning data is the first measured position data, then the first or second positioning data is used directly as the fused positioning data at the current moment, i.e., as the initialization of the fused positioning data. As shown in fig. 4, if the first positioning data (i.e., the camera positioning data) or the second positioning data (i.e., the Bluetooth positioning data) is the value of the first position measurement, it is used as the initial fused positioning data (i.e., the initialization state). If the first or second positioning matrix is not empty, that is, the first or second positioning data is not the first measured position data, then the time at which the Bluetooth signal was received is compared with the time at which the image to be detected was received: if the image to be detected was received earlier, fusion calculation is performed on the predicted positioning data and the first positioning data to obtain the fused positioning data at the current moment; if the Bluetooth signal was received earlier, fusion calculation is performed on the predicted positioning data and the second positioning data instead. Finally, the fused positioning data at the current moment is stored in the positioning state matrix (i.e., the update state).
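The initialization and update flow can be sketched as below. The simple weighted average used in the update step is an assumption for illustration; the embodiment only requires a fusion calculation (for example, a Kalman-style update) without fixing a particular formula:

```python
class FusionTracker:
    """Sketch of the state handling described above: the first measured
    position initializes the fused state directly; later measurements
    are fused with the prediction."""

    def __init__(self):
        self.state = None  # fused positioning data; None until initialized

    def update(self, measurement, predicted=None, weight=0.5):
        if self.state is None:
            # First measured position data: use it directly (initialization).
            self.state = measurement
        else:
            # Fuse prediction and measurement; a plain weighted average
            # stands in for the unspecified fusion calculation.
            self.state = tuple(weight * p + (1 - weight) * m
                               for p, m in zip(predicted, measurement))
        return self.state

tracker = FusionTracker()
tracker.update((1.0, 1.0))                                # initialization
fused = tracker.update((2.0, 2.0), predicted=(3.0, 1.0))  # update state
```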
To make the technical solution provided by the embodiments of the present application easy to understand, the indoor vehicle positioning method provided by the embodiments of the present application is briefly described below through a complete indoor vehicle positioning process, in conjunction with fig. 5:
(1) A vehicle enters the parking lot. The image to be detected captured at the current moment is acquired, and vehicle features are extracted from it; the vehicle features include at least one of the vehicle logo, vehicle model, and vehicle body color. The position of the vehicle feature in the image to be detected is identified; this position is projected into the three-dimensional coordinate system corresponding to the preset projection matrix to obtain its three-dimensional coordinates; and the vehicle is positioned according to the three-dimensional coordinates to obtain the first positioning data.
The Bluetooth signal uploaded by the mobile terminal is acquired, the signal strength is determined from it, and the pre-established mapping between signal strength, device identification, channel identification, and position is searched according to the signal strength, the identification of the transmitting device, and the channel identification to determine the second positioning data; the Bluetooth signal is the signal transmitted by the transmitting device at the current moment and received by the mobile terminal.
(2) The fused positioning data and the track data of the vehicle at the previous moment are acquired; the track data comprises at least two pieces of fused positioning data: the fused positioning data of the previous moment and the fused positioning data of the moment before the previous moment.
A moving distance is determined from the fused positioning data of the previous moment and of the moment before it, and the speed is calculated from the moving distance and the time interval;
and calculating according to the fusion positioning data and the speed of the last moment and the time interval between the last moment and the current moment to obtain the predicted positioning data of the vehicle at the current moment.
(3) Comparing the time sequence of the received Bluetooth signal and the image to be detected, and determining positioning data used at the current moment;
if the image to be detected is received first, fusion calculation is carried out according to the predicted positioning data and the first positioning data to obtain fusion positioning data at the current moment;
and if the Bluetooth signal is received first, carrying out fusion calculation according to the predicted positioning data and the second positioning data to obtain fusion positioning data at the current moment.
(4) If the first positioning data or the second positioning data are the first measured position data, the first positioning data or the second positioning data are directly used as fusion positioning data at the current moment; according to the fusion positioning data of the vehicle at the current moment and the vacant position in the parking lot, the optimal parking space is allocated for the vehicle;
the optimal navigation path is generated for the vehicle according to the fusion positioning data of the optimal parking space and the vehicle at the current moment;
and after the vehicle reaches the optimal parking space, the navigation is finished.
It should be understood that, although the steps in the flowcharts of figs. 2-5 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2-5 may include multiple sub-steps or stages that are not necessarily completed at the same moment but may be performed at different moments; nor must these sub-steps or stages be performed in sequence, as they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided an indoor vehicle positioning apparatus including: a first positioning module 602, a second positioning module 604, an acquisition module 606, and a fusion positioning module 608, wherein:
the first positioning module 602 is configured to obtain an image to be detected acquired at a current moment, extract a vehicle feature according to the image to be detected, and calculate according to the vehicle feature to obtain first positioning data;
the second positioning module 604 is configured to obtain a bluetooth signal uploaded by the mobile terminal, and determine second positioning data according to the bluetooth signal; the Bluetooth signal is a signal which is transmitted by the transmitting equipment at the current moment and received by the mobile terminal;
the acquiring module 606 is configured to acquire fused positioning data and a speed of the vehicle at a previous time, and predict according to the fused positioning data and the speed to obtain predicted positioning data of the vehicle at a current time; wherein the previous time is the next previous time to the current time;
and the fusion positioning module 608 is configured to perform fusion calculation according to the predicted positioning data and any one of the first positioning data or the second positioning data to obtain fusion positioning data at the current moment.
In one embodiment, the indoor vehicle positioning device further comprises a comparing module, configured to compare the time when the Bluetooth signal is received with the time when the image to be detected is received;
if the time of receiving the image to be detected is earlier than the time of receiving the bluetooth signal, the fusion positioning module 608 is configured to perform fusion calculation according to the predicted positioning data and the first positioning data to obtain fusion positioning data at the current time;
if the time of receiving the bluetooth signal is earlier than the time of receiving the image to be detected, the fusion positioning module 608 is configured to perform fusion calculation according to the predicted positioning data and the second positioning data to obtain fusion positioning data at the current time.
In one embodiment, the Bluetooth signal includes an identification of the transmitting device and a channel identification; the second positioning module 604 is further configured to determine signal strength according to the bluetooth signal, and then search from a mapping relationship between the signal strength, the device identifier, the channel identifier and the location, which are pre-established, according to the signal strength, the identifier of the transmitting device, and the channel identifier, so as to determine second positioning data.
In one embodiment, the vehicle characteristic includes at least one of a vehicle logo, a vehicle model, or a vehicle body color; the first positioning module 602 is further configured to input the image to be detected into a vehicle feature recognition model to obtain a vehicle feature; the vehicle feature recognition model is used for extracting at least one feature of vehicle identification, vehicle type and vehicle body color.
In one embodiment, the first positioning module 602 is further configured to identify a location of the vehicle feature in the image to be detected;
projecting the position of the vehicle feature in the image to be detected into a three-dimensional coordinate system corresponding to a preset projection matrix, and obtaining the three-dimensional coordinate of the position in the three-dimensional coordinate system;
and positioning the vehicle according to the three-dimensional coordinates to obtain first positioning data.
In one embodiment, the obtaining module 606 is further configured to obtain track data of the vehicle at the previous moment; the track data comprises at least two pieces of fused positioning data, including the fused positioning data of the previous moment and the fused positioning data of the moment before the previous moment;
determine a moving distance according to the fused positioning data of the previous moment and of the moment before it, and calculate a speed according to the moving distance and the time interval;
and calculate the predicted positioning data of the vehicle at the current moment according to the fused positioning data and speed of the previous moment and the time interval between the previous moment and the current moment.
In one embodiment, the fused positioning module 608 is further configured to directly use the first positioning data or the second positioning data as the fused positioning data at the current time if the first positioning data or the second positioning data is the first measured position data.
For specific limitations on the indoor vehicle positioning device, reference may be made to the limitations on the indoor vehicle positioning method above, which are not repeated here. Each module in the indoor vehicle positioning device described above may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or may be stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing a first positioning matrix, a second positioning matrix and a positioning state matrix. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of indoor vehicle localization.
It will be appreciated by those skilled in the art that the structure shown in FIG. 7 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may comprise the steps of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, and the like. Volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features involves no contradiction, it should be considered to be within the scope of this specification.
The above examples express only a few embodiments of the application, which are described specifically and in detail but are not therefore to be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the application, all of which fall within the protection scope of the application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (10)

1. A method of locating an indoor vehicle, the method comprising:
acquiring an image to be detected, which is acquired at the current moment, extracting vehicle characteristics according to the image to be detected, and calculating according to the vehicle characteristics to obtain first positioning data, wherein the image to be detected carries time information;
acquiring a Bluetooth signal uploaded by a mobile terminal, and determining second positioning data according to the Bluetooth signal; the Bluetooth signal is a signal which is transmitted by the transmitting equipment at the current moment and received by the mobile terminal, and carries time information;
Acquiring fusion positioning data and speed of a vehicle at the previous moment, and predicting according to the fusion positioning data and the speed to obtain predicted positioning data of the vehicle at the current moment; wherein the previous time is the time adjacent to the current time;
and carrying out fusion calculation according to any one of the predicted positioning data and the first positioning data or the second positioning data to obtain fusion positioning data at the current moment.
2. The method of claim 1, wherein the performing the fusion calculation according to the predicted positioning data and any one of the first positioning data or the second positioning data to obtain the fusion positioning data at the current time includes:
comparing the time when the Bluetooth signal is received with the time when the image to be detected is received;
if the time of receiving the image to be detected is earlier than the time of receiving the Bluetooth signal, fusion calculation is carried out according to the predicted positioning data and the first positioning data to obtain fusion positioning data at the current time;
and if the time of receiving the Bluetooth signal is earlier than the time of receiving the image to be detected, carrying out fusion calculation according to the predicted positioning data and the second positioning data to obtain fusion positioning data at the current time.
3. The method of claim 1, wherein the bluetooth signal includes an identification of a transmitting device and a channel identification;
the determining second positioning data according to the bluetooth signal includes:
and determining signal strength according to the Bluetooth signal, and searching from the mapping relation of the pre-established signal strength, the equipment identification, the channel identification and the position according to the signal strength, the identification of the transmitting equipment and the channel identification to determine second positioning data.
4. The method of claim 1, wherein the vehicle characteristics include at least one of a vehicle logo, a vehicle model, a vehicle body color;
the extracting the vehicle features according to the image to be detected comprises the following steps:
inputting the image to be detected into a vehicle feature recognition model to obtain vehicle features; the vehicle feature recognition model is used for extracting at least one feature of a vehicle identifier, a vehicle model or a vehicle body color.
5. The method of claim 1, wherein the calculating according to the vehicle characteristic to obtain the first positioning data comprises:
identifying a position of the vehicle feature in an image to be detected;
Projecting the position of the vehicle feature in the image to be detected into a three-dimensional coordinate system corresponding to a preset projection matrix, and obtaining a three-dimensional coordinate of the position in the three-dimensional coordinate system;
and positioning the vehicle according to the three-dimensional coordinates to obtain first positioning data.
6. The method of claim 1, wherein the acquiring the speed of the vehicle at the last time comprises:
acquiring track data of the vehicle at the previous moment; the track data comprises at least two pieces of fused positioning data, including the fused positioning data of the previous moment and the fused positioning data of the moment before the previous moment;
determining a moving distance according to the fused positioning data of the previous moment and of the moment before it, and calculating a speed according to the moving distance and the time interval;
the predicting according to the fused positioning data and the speed to obtain the predicted positioning data of the vehicle at the current moment comprises the following steps:
and calculating according to the fusion positioning data and the speed of the last moment and the time interval between the last moment and the current moment to obtain the predicted positioning data of the vehicle at the current moment.
7. The method according to claim 1, wherein the method further comprises:
and if the first positioning data or the second positioning data are the first measured position data, directly taking the first positioning data or the second positioning data as fusion positioning data at the current moment.
8. An indoor vehicle locating apparatus, the apparatus comprising:
the first positioning module is used for acquiring an image to be detected, which is acquired at the current moment, extracting vehicle characteristics according to the image to be detected, and calculating according to the vehicle characteristics to obtain a first positioning data, wherein the image to be detected carries time information;
the second positioning module is used for acquiring Bluetooth signals uploaded by the mobile terminal and determining second positioning data according to the Bluetooth signals; the Bluetooth signal is a signal which is transmitted by the transmitting equipment at the current moment and received by the mobile terminal, and carries time information;
the acquisition module is used for acquiring the fusion positioning data and the speed of the vehicle at the previous moment, and predicting according to the fusion positioning data and the speed to obtain the predicted positioning data of the vehicle at the current moment; wherein the previous time is the time adjacent to the current time;
And the fusion positioning module is used for carrying out fusion calculation according to any one of the predicted positioning data and the first positioning data or the second positioning data to obtain fusion positioning data at the current moment.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202011581538.8A 2020-12-28 2020-12-28 Indoor vehicle positioning method, device, computer equipment and storage medium Active CN112689234B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011581538.8A CN112689234B (en) 2020-12-28 2020-12-28 Indoor vehicle positioning method, device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112689234A CN112689234A (en) 2021-04-20
CN112689234B true CN112689234B (en) 2023-10-17

Family

ID=75453681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011581538.8A Active CN112689234B (en) 2020-12-28 2020-12-28 Indoor vehicle positioning method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112689234B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340877B * 2020-03-25 2023-10-27 Beijing Aibee Technology Co., Ltd. Vehicle positioning method and device
CN113327344B * 2021-05-27 2023-03-21 Beijing Baidu Netcom Science and Technology Co., Ltd. Fusion positioning method, device, equipment, storage medium and program product

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107024216A (en) * 2017-03-14 2017-08-08 Chongqing University of Posts and Telecommunications Intelligent vehicle fusion positioning system and method incorporating a panoramic map
CN107845287A (en) * 2017-10-11 2018-03-27 Anhui Tewang Network Technology Co., Ltd. Building parking management platform
CN108495251A (en) * 2018-02-27 2018-09-04 Zhuhai Hengqin Huace Optical Communication Technology Co., Ltd. Combined positioning method based on Bluetooth and LED light
CN108955673A (en) * 2018-06-27 2018-12-07 Sichuan Phicomm Information Technology Co., Ltd. Head-mounted intelligent wearable device, positioning system and positioning method
WO2019095849A1 (en) * 2017-11-15 2019-05-23 Alibaba Group Holding Ltd. Vehicle positioning method and apparatus
CN109905847A (en) * 2019-03-05 2019-06-18 Chang'an University Cooperative accumulated-error correction system and method for an intelligent vehicle aided positioning system in GNSS blind areas
CN110473256A (en) * 2019-07-18 2019-11-19 China FAW Co., Ltd. Vehicle positioning method and system
CN111462226A (en) * 2020-01-19 2020-07-28 Hangzhou Hikvision System Technology Co., Ltd. Positioning method, system, device, electronic equipment and storage medium
CN111935644A (en) * 2020-08-10 2020-11-13 Tencent Technology (Shenzhen) Co., Ltd. Positioning method and device based on fused information, and terminal device
CN112037565A (en) * 2020-08-31 2020-12-04 State Grid Corporation of China Parking space navigation method and system based on Bluetooth positioning

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI534764B (en) * 2014-01-10 2016-05-21 財團法人工業技術研究院 Apparatus and method for vehicle positioning
CN108845289B * 2018-07-03 2021-08-03 BOE Technology Group Co., Ltd. Positioning method and system for a shopping cart, and shopping cart
CN109141451B * 2018-07-13 2023-02-10 BOE Technology Group Co., Ltd. Shopping positioning system and method, smart shopping cart and electronic device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Vehicle Positioning Based on Fusion of BeiDou Satellites and On-board Sensors; Geng Hua et al.; Automotive Engineering; 2007-11-25 (No. 11); full text *
Localization and Velocity Estimation of Autonomous Vehicles Based on Multi-sensor Information Fusion; Peng Wenzheng et al.; Chinese Journal of Sensors and Actuators; 2020-08-15 (No. 08); full text *

Also Published As

Publication number Publication date
CN112689234A (en) 2021-04-20

Similar Documents

Publication Publication Date Title
EP3505869B1 (en) Method, apparatus, and computer readable storage medium for updating electronic map
CN108303103B (en) Method and device for determining target lane
CN111436216B (en) Method and system for color point cloud generation
CN110148185B (en) Method and device for determining coordinate system conversion parameters of imaging equipment and electronic equipment
JP7025276B2 (en) Positioning in urban environment using road markings
US10949712B2 (en) Information processing method and information processing device
AU2018282302A1 (en) Integrated sensor calibration in natural scenes
CN102567449A (en) Vision system and method of analyzing an image
CN109029444A (en) One kind is based on images match and sterically defined indoor navigation system and air navigation aid
CN108594244B (en) Obstacle recognition transfer learning method based on stereoscopic vision and laser radar
CN112689234B (en) Indoor vehicle positioning method, device, computer equipment and storage medium
JPWO2012046671A1 (en) Positioning system
US20200341150A1 (en) Systems and methods for constructing a high-definition map based on landmarks
JP2018077162A (en) Vehicle position detection device, vehicle position detection method and computer program for vehicle position detection
JP2020193954A (en) Position correction server, position management device, moving object position management system and method, position information correction method, computer program, onboard device, and vehicle
JP2017181476A (en) Vehicle location detection device, vehicle location detection method and vehicle location detection-purpose computer program
CN111159459B (en) Landmark positioning method, landmark positioning device, computer equipment and storage medium
CN115629386B (en) High-precision positioning system and method for automatic parking
CN110988795A (en) Mark-free navigation AGV global initial positioning method integrating WIFI positioning
CN111060110A (en) Robot navigation method, robot navigation device and robot
KR20160128967A (en) Navigation system using picture and method of cotnrolling the same
CN107545760B (en) Method for providing positioning information for positioning a vehicle at a positioning location and method for providing information for positioning a vehicle by means of another vehicle
CN113503883B (en) Method for collecting data for constructing map, storage medium and electronic equipment
CN111754388A (en) Picture construction method and vehicle-mounted terminal
CN113554711A (en) Camera online calibration method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant