CN116772832A - Vehicle positioning and searching method, system, electronic equipment and storage medium - Google Patents

Vehicle positioning and searching method, system, electronic equipment and storage medium

Info

Publication number
CN116772832A
CN116772832A (Application CN202310519514.7A)
Authority
CN
China
Prior art date
Legal status
Pending
Application number
CN202310519514.7A
Other languages
Chinese (zh)
Inventor
张上鑫
Current Assignee
Zhidao Network Technology Beijing Co Ltd
Original Assignee
Zhidao Network Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhidao Network Technology Beijing Co Ltd
Priority claimed from CN202310519514.7A
Publication of CN116772832A
Legal status: Pending


Landscapes

  • Navigation (AREA)

Abstract

The application provides a vehicle positioning and searching method, system, electronic device, and storage medium, and relates to the field of vehicles. The vehicle positioning method comprises the following steps: acquiring a first real-time image and vehicle IMU data of the target vehicle, wherein the first real-time image characterizes the location of the target vehicle; acquiring basic position information of the target vehicle according to the first real-time image and a semantic map of a target area; and determining the target vehicle position based on the basic position information and the vehicle IMU data. With the vehicle positioning method provided by the embodiments of the application, the target vehicle can be accurately positioned within the target area without depending on a vehicle positioning signal, so that the position of the target vehicle is determined.

Description

Vehicle positioning and searching method, system, electronic equipment and storage medium
Technical Field
The present application relates to the field of vehicles, and in particular, to a vehicle positioning and searching method, system, electronic device, and storage medium.
Background
Positioning signals (such as GPS signals, Beidou signals, and the like) are widely used in navigation and addressing scenarios because they are largely unaffected by weather, offer high coverage, provide accurate three-dimensional position, velocity, and timing fixes, and achieve high positioning efficiency.
However, in underground parking lots and other areas where positioning signals and/or communication signals are weak, closed reinforced-concrete structures or other signal-shielding structures readily degrade or block the positioning and communication signals at both the mobile phone end and the vehicle end. With poor or missing positioning and communication signals, neither the mobile phone end nor the in-vehicle end can be located, so when a vehicle owner forgets where the vehicle is parked after parking, navigating to find the vehicle is difficult, and a long search easily wastes time.
Disclosure of Invention
The embodiments of the application aim to provide a vehicle positioning and searching method, system, electronic device, and storage medium, in which the position information of a vehicle is determined from an image acquired by the target vehicle and the vehicle IMU (Inertial Measurement Unit) data; the position information of the mobile terminal is determined from the image acquired by the mobile terminal and the mobile terminal IMU data, based on the semantic map of the parking lot; and navigation is performed from the mobile terminal position to the vehicle position within the semantic map of the parking lot, thereby positioning and finding the vehicle. The vehicle positioning and searching method provided by the embodiments of the application does not depend on positioning signals (such as GPS signals, Beidou signals, and the like), and can position and successfully find the vehicle in places where positioning signals and/or communication signals are poor or missing.
In a first aspect, an embodiment of the present application provides a vehicle positioning method, including: acquiring a first real-time image and vehicle IMU data of a target vehicle; acquiring basic position information of a target vehicle according to the first real-time image and a semantic map of the target area; determining a target vehicle position based on the base position information and the vehicle IMU data; wherein the first real-time image characterizes a location of the target vehicle.
In the implementation process, the vehicle positioning and searching method provided by the embodiment of the application determines the position of the target vehicle by acquiring the first real-time image representing the position of the target vehicle and the IMU data of the target vehicle. That is, the vehicle positioning method provided by the embodiment of the application can realize accurate positioning of the target vehicle in the target area without depending on the vehicle positioning signal, and determine the position of the target vehicle.
Optionally, in an embodiment of the present application, determining the target vehicle position based on the basic position information and the vehicle IMU data includes: judging whether the basic position information comprises a plurality of initial vehicle positions; if the basic position information is judged to comprise a plurality of initial vehicle positions, acquiring initial position information of the target vehicle entering the target area; the target vehicle position is screened among a plurality of vehicle initial positions based on the vehicle IMU data and the initial position information.
In the implementation process, the vehicle positioning method provided by the embodiments of the application obtains the basic position information of the target vehicle according to the semantic map of the target area and the first real-time image acquired by the vehicle; this step can be understood as preliminary matching. In areas like parking lots there are often multiple visually similar locations, so the basic position information may contain a plurality of initial vehicle positions, and further screening among them is performed to determine the target vehicle position. In this way, the vehicle positioning method provided by the embodiments of the application determines the target vehicle position accurately and efficiently through coarse positioning followed by fine positioning, avoiding an erroneous determination of the target vehicle position.
Optionally, in an embodiment of the present application, acquiring basic location information of the target vehicle according to the first real-time image and the semantic map of the target area includes: extracting a plurality of first feature points and a plurality of first feature point descriptors of the semantic map, and a plurality of second feature points and a plurality of second feature point descriptors of the first real-time image; and matching the plurality of second feature points with the plurality of first feature points, and matching the plurality of second feature point descriptors with the plurality of first feature point descriptors to obtain basic position information.
In the implementation process, in order to obtain the basic position information of the target vehicle, the vehicle positioning method provided by the embodiments of the application extracts the feature points and feature point descriptors of both the semantic map of the target area and the first real-time image, and matches them to find the same locations on the semantic map and in the first real-time image, thereby accurately determining the basic position information of the target vehicle.
Optionally, in an embodiment of the present application, determining the target vehicle position based on the basic position information and the vehicle IMU data further includes: if it is determined that the basic position information includes only one initial vehicle position, the initial vehicle position is determined as the target vehicle position.
In the implementation process, if, after the basic position information is acquired, it is determined that the basic position information includes only one initial vehicle position, that initial position is determined as the target vehicle position. In this case there is no need for secondary screening with the vehicle IMU data, so the target vehicle position is determined accurately and efficiently.
Optionally, in an embodiment of the present application, the vehicle IMU data includes part or all of the IMU data recorded from when the target vehicle enters the target area until it parks.
In the implementation process, the IMU data acquired in the embodiments of the application is part or all of the IMU data recorded from the time the target vehicle enters the target area, so that the IMU data needed to determine the vehicle position can be selected precisely, and an excess of data is prevented from degrading the efficiency of acquiring the target vehicle position.
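The selection above amounts to a simple timestamp window over the recorded samples. A minimal Python illustration, not from the patent; all names (`select_imu_window`, the tuple layout) are hypothetical:

```python
from bisect import bisect_left, bisect_right

def select_imu_window(samples, t_enter, t_park):
    """Keep only the IMU samples recorded between area entry and parking.

    `samples` is a list of (timestamp, measurement) tuples sorted by
    timestamp; `t_enter` / `t_park` bound the window inclusively.
    """
    times = [t for t, _ in samples]
    lo = bisect_left(times, t_enter)   # first sample at or after entry
    hi = bisect_right(times, t_park)   # first sample strictly after parking
    return samples[lo:hi]
```

Passing the entry and parking timestamps trims the recording to exactly the span the positioning step needs, discarding data from before the vehicle reached the target area.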
In a second aspect, an embodiment of the present application provides a vehicle searching method, including: acquiring a second real-time image shot by the mobile terminal, and determining the mobile terminal position according to the second real-time image, the mobile terminal IMU data, and the semantic map of the target area; acquiring the target vehicle position of the target vehicle; and navigating from the mobile terminal position to the target vehicle position based on the semantic map to find the target vehicle; wherein the vehicle position and the mobile terminal position are located within the target area, and the target vehicle position is obtained according to the vehicle positioning method of any implementation of the first aspect, or by acquiring a first real-time image representing the vehicle position and deriving the target vehicle position from the first real-time image and the semantic map of the target area.
In the implementation process, on the basis of acquiring the position of the target vehicle, the position of the mobile terminal can be determined according to the real-time image acquired by the mobile terminal and the IMU data of the mobile terminal, and navigation is performed from the position of the mobile terminal to the position of the target vehicle based on the semantic map of the target area, so that the purpose of searching the target vehicle is achieved; therefore, the vehicle searching method provided by the embodiment can efficiently search the target vehicle without depending on the positioning signal on the basis of the vehicle positioning method.
Optionally, in an embodiment of the present application, navigating from the mobile terminal location to the target vehicle location based on the semantic map to find the target vehicle includes: traversing the semantic map, and searching the position of the mobile terminal and the position of the target vehicle; and planning a vehicle searching path according to the position of the mobile terminal and the position of the target vehicle by a preset navigation algorithm, and displaying the vehicle searching path on the mobile terminal.
In the implementation process, after the target vehicle position and the mobile terminal position are acquired, path planning can be performed through a preset navigation algorithm, and the planned path is displayed on the mobile terminal; therefore, a user can realize vehicle searching according to the vehicle searching path displayed on the mobile terminal.
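The "preset navigation algorithm" is not specified further; as one plausible stand-in, a breadth-first search over an occupancy-grid view of the semantic map yields a shortest vehicle-searching path between the two positions. A minimal Python sketch under that assumption, with all names illustrative:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest path on an occupancy grid (0 = free, 1 = blocked) by BFS.

    `start` and `goal` are (row, col) cells; returns the cell sequence
    from start to goal, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}            # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:            # walk predecessors back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

The returned cell sequence is what would be rendered on the mobile terminal as the vehicle-searching path; a weighted algorithm such as A* would be a drop-in refinement.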
Optionally, in an embodiment of the present application, determining, according to the second real-time image, the IMU data of the mobile terminal, and the semantic map of the target area, the mobile terminal position of the mobile terminal includes: establishing a local positioning image according to the second real-time image and the IMU data of the mobile terminal; extracting a plurality of third feature points and a plurality of third feature point descriptors of the local positioning image; matching the plurality of third feature points with the plurality of first feature points, and matching the plurality of third feature point descriptors with the plurality of first feature point descriptors to obtain the initial position of the mobile terminal; and determining the position of the mobile terminal according to the initial position of the mobile terminal and the semantic map.
In the implementation process, the local positioning image can be built on the mobile terminal based on the second real-time image acquired by the mobile terminal and the IMU data of the mobile terminal; after the local positioning image is established, the feature points and feature point descriptors of the local positioning image and the semantic map are matched, so that preliminary determination of the position of the mobile terminal is completed, and the initial position of the mobile terminal is obtained.
Optionally, in an embodiment of the present application, determining the mobile terminal position according to the mobile terminal initial position and the semantic map includes: performing image segmentation on the semantic map to obtain a feature position data set, where the feature positions corresponding to the feature position data set include arrows of the target area, pillars of the target area, and/or lane lines of the target area; performing semantic segmentation on the second real-time image to obtain a mobile terminal real-time feature position data set, where the feature positions corresponding to the mobile terminal real-time feature position data set include arrows, pillars, and/or lane lines within the mobile terminal initial position; and matching the mobile terminal real-time feature position data set with the feature position data set to determine the mobile terminal position.
In the implementation process, data sets of arrows, pillars, and/or lane lines are obtained by performing image segmentation on the second real-time image and on the semantic map; by matching these data sets against each other, the final mobile terminal position can be obtained accurately and rapidly from the mobile terminal initial position.
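One way such data-set matching could work is to score each candidate mobile terminal initial position by how well the observed arrow/pillar/lane-line points, shifted to that candidate, overlap the map's feature points. This is an illustrative sketch, not the patent's algorithm; the mean nearest-neighbour cost and all names are assumptions:

```python
import math

def match_point_sets(map_points, observed_points, candidates):
    """Pick the candidate 2D offset whose translated observation best
    overlaps the map feature points (arrow/pillar/lane-line point sets).

    Cost of a candidate = mean distance from each translated observed
    point to its nearest map point; the lowest-cost candidate wins.
    """
    def cost(offset):
        ox, oy = offset
        total = 0.0
        for px, py in observed_points:
            total += min(math.hypot(px + ox - mx, py + oy - my)
                         for mx, my in map_points)
        return total / len(observed_points)
    return min(candidates, key=cost)
```

In practice a full point-set registration method (e.g. ICP) with rotation as well as translation would be used, but the scoring idea is the same.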
Optionally, in an embodiment of the present application, performing image segmentation to obtain the feature position data set includes: segmenting the feature positions within the mobile terminal initial position in the second real-time image; and projecting the feature positions onto the local positioning image to obtain the feature position data set.
In the implementation process, in order to select the mobile terminal position from among the mobile terminal initial positions, the vehicle searching method provided by the embodiments of the application performs image segmentation on the second real-time image acquired by the mobile terminal and on the semantic map, obtaining a series of point sets representing feature positions, such as a lane-line point set and an arrow point set; the mobile terminal initial positions are then screened based on these point sets to obtain the final mobile terminal position.
In a third aspect, an embodiment of the present application provides a vehicle positioning system, including: a vehicle information acquisition module and a vehicle position calculation module; the vehicle information acquisition module is used for acquiring the first real-time image and the vehicle IMU data of the target vehicle; the vehicle position calculation module is used for acquiring basic position information of the target vehicle according to the first real-time image and the semantic map of the target area; the vehicle position calculation module is also used for determining the position of the target vehicle based on the basic position information and the vehicle IMU data; wherein the first real-time image characterizes a location of a target vehicle.
In a fourth aspect, an embodiment of the present application provides a vehicle finding system, including: the mobile terminal comprises a mobile terminal information acquisition module, a mobile terminal position calculation module, a vehicle position acquisition module and a path planning module; the mobile terminal information acquisition module is used for acquiring a second real-time image shot by the mobile terminal; the mobile terminal position calculation module is used for determining the mobile terminal position of the mobile terminal according to the second real-time image, the mobile terminal IMU data and the semantic map of the target area; the vehicle position acquisition module is used for acquiring a target vehicle position of a target vehicle; the path planning module is used for navigating from the mobile terminal position to the target vehicle position based on the semantic map so as to find the target vehicle; wherein the vehicle position and the mobile end position are located within the target area; the target vehicle position of the target vehicle is obtained according to the vehicle positioning system provided by the third aspect of the application.
In a fifth aspect, an embodiment of the present application provides an electronic device, where the electronic device includes a memory and a processor, where the memory stores program instructions, and where the processor executes steps in any implementation manner of the first aspect and the second aspect when reading and executing the program instructions.
In a sixth aspect, embodiments of the present application further provide a computer readable storage medium having stored therein computer program instructions which, when read and executed by a processor, perform the steps in any implementation of the first and second aspects.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and should not be considered as limiting the scope, and other related drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a first flow chart of vehicle positioning provided by an embodiment of the present application;
FIG. 2 is a second flow chart of vehicle positioning provided by an embodiment of the present application;
FIG. 3 is a flowchart for determining basic position information of a target vehicle according to an embodiment of the present application;
FIG. 4 is a first flow chart of vehicle finding provided by an embodiment of the present application;
FIG. 5 is a second flow chart of target vehicle finding provided by an embodiment of the present application;
FIG. 6 is a first flowchart of mobile terminal location finding according to an embodiment of the present application;
FIG. 7 is a second flowchart of mobile terminal location finding according to an embodiment of the present application;
FIG. 8 is a flow chart of feature location dataset acquisition provided by an embodiment of the present application;
FIG. 9 is a schematic block diagram of a vehicle positioning system according to an embodiment of the present application;
FIG. 10 is a schematic block diagram of a vehicle finding system according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application. For example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. In addition, functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The applicant found during the course of the study that the signal in a parking lot is often poor or even absent; after parking, users often forget where the vehicle is parked; and since the positioning signal is frequently unavailable because of these signal problems, mobile navigation cannot solve the above problems.
Based on the above, the embodiments of the application provide a vehicle positioning and searching method that determines the position information of a vehicle from an image acquired by the target vehicle and the vehicle IMU data; determines the position information of the mobile terminal from the image acquired by the mobile terminal and the mobile terminal IMU data, based on the semantic map of the parking lot; and navigates from the mobile terminal position to the vehicle position within the semantic map of the parking lot, thereby positioning and finding the vehicle. The vehicle positioning and searching method provided by the embodiments of the application does not depend on positioning signals and can find the vehicle successfully in places with poor signals.
Referring to fig. 1, fig. 1 is a first flowchart of vehicle positioning according to an embodiment of the present application; the vehicle positioning method comprises the following steps:
step S100: and acquiring a first real-time image and vehicle IMU data of the target vehicle.
In the above step S100, the first real-time image and the vehicle IMU data of the target vehicle are acquired when the vehicle enters the target area, such as a parking lot. It will be appreciated that the first real-time image characterizes the position information of the vehicle; it may be an image acquired by a camera configured on the target vehicle (i.e., the vehicle camera's field of view), an image of the vehicle's location captured by a user holding a mobile terminal, or an image acquired by the surveillance system of the target area or another device capable of capturing the vehicle's parking location.
The IMU is a sensor that measures the acceleration and angular velocity of the carrier's three-dimensional motion in real time; by solving the inertial navigation equations with these measurements, the vehicle's attitude, position, and velocity in the navigation coordinate system can be obtained.
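As a rough illustration of how IMU measurements integrate into a pose, the following simplified 2D strapdown update accumulates yaw rate and forward acceleration. Real inertial navigation works in 3D with bias and gravity compensation, so this is only a hedged sketch with hypothetical names:

```python
import math

def dead_reckon(pose, imu_samples, dt):
    """Integrate planar IMU readings into a pose.

    `pose` is (x, y, heading, speed); each sample is (yaw_rate, accel),
    i.e. angular velocity about the vertical axis and forward
    acceleration; `dt` is the fixed sample interval in seconds.
    """
    x, y, heading, speed = pose
    for yaw_rate, accel in imu_samples:
        heading += yaw_rate * dt          # integrate angular velocity
        speed += accel * dt               # integrate forward acceleration
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return (x, y, heading, speed)
```

Driving straight (zero yaw rate, zero acceleration) at 1 m/s for 1 s advances the vehicle 1 m along its heading, which is how the IMU trace ties the entrance fix to the parking spot.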
Step S101: and acquiring basic position information of the target vehicle according to the first real-time image and the semantic map of the target area.
In step S101, the basic position information of the target vehicle is acquired based on the first real-time image and the semantic map of the target area. It should be noted that there are two ways of obtaining the semantic map of the target area. One builds on a large amount of vehicle data: after each vehicle leaves the target area, the images acquired by the vehicle and the area map built from them are uploaded to the cloud, and the semantic map of the target area is gradually perfected as data accumulate. The other uses dedicated acquisition equipment to traverse the whole target area and builds the semantic map of the target area in the cloud from the images acquired by the acquisition vehicle.
As will be appreciated by those skilled in the art, corresponding feature points and feature point descriptors should be generated during the creation of the semantic map of the target area; in the case of a parking lot, after the pillars, arrows, lane lines, and the like are image-segmented, there is a series of point sets representing their projections onto the map.
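Projecting segmented pixels onto the map plane is commonly done with a ground-plane homography; the sketch below assumes such a 3x3 homography `H` is available from camera calibration and is purely illustrative:

```python
def project_to_map(pixels, H):
    """Project segmented pixel coordinates (arrow/lane-line/pillar
    points) onto the map ground plane with a 3x3 homography H, given as
    row-major nested lists. Each pixel (u, v) maps to (x/w, y/w).
    """
    projected = []
    for u, v in pixels:
        x = H[0][0] * u + H[0][1] * v + H[0][2]
        y = H[1][0] * u + H[1][1] * v + H[1][2]
        w = H[2][0] * u + H[2][1] * v + H[2][2]
        projected.append((x / w, y / w))
    return projected
```

The resulting point sets are what gets stored alongside the semantic map and later matched against real-time observations.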
Step S102: the target vehicle location is determined based on the base location information and the vehicle IMU data.
In the above step S102, after the basic position information of the target vehicle and the vehicle IMU data are acquired, the position of the target vehicle is determined based on both. It should be appreciated that matching the first real-time image acquired by the vehicle against the semantic map of the target area may yield multiple pieces of basic position information, from which the final position of the target vehicle cannot yet be determined; therefore, further screening is required before the target vehicle position can be fixed.
As can be seen from fig. 1, the vehicle positioning and searching method provided by the embodiment of the application determines the target vehicle position by acquiring the first real-time image representing the target vehicle position and the IMU data of the target vehicle. That is, the vehicle positioning method provided by the embodiment of the application can realize accurate positioning of the target vehicle in the target area without depending on the vehicle positioning signal, and determine the position of the target vehicle.
Referring to fig. 2, fig. 2 is a second flowchart of vehicle positioning according to an embodiment of the present application; in an alternative implementation of the embodiment of the present application, determining the target vehicle position based on the basic position information and the vehicle IMU data includes the steps of:
Step S200: it is determined whether the basic position information includes a plurality of initial vehicle positions.
In the above step S200, following on from the above, after the basic position information is acquired, it is determined whether the basic position information includes a plurality of initial vehicle positions. It will be appreciated that the process of obtaining the basic position information of the target vehicle from the semantic map of the target area and the first real-time image acquired by the vehicle may be understood as preliminary matching. In this preliminary matching, areas similar to parking lots often contain multiple visually similar locations; thus, there may be a plurality of initial vehicle positions as described above.
Step S201: if the basic position information is determined to include a plurality of initial vehicle positions, initial position information of the target vehicle entering the target area is acquired.
In the above step S201, after it is determined that the basic position information includes a plurality of initial vehicle positions, the initial position information of the target vehicle entering the target area is acquired. It should be appreciated that if the target vehicle enters, for example, an underground parking garage, a positioning signal is still available at the parking lot entrance, and the initial position information of the target vehicle is acquired based on the information at that entrance.
Step S202: the target vehicle position is screened among a plurality of vehicle initial positions based on the vehicle IMU data and the initial position information.
In the above step S202, after obtaining the initial position information of the vehicle entering the target area, the target vehicle position is selected from the acquired plurality of vehicle initial positions according to the initial position information of the target vehicle and the vehicle IMU data.
As can be seen from fig. 2, the vehicle positioning method provided by the embodiments of the application obtains the basic position information of the target vehicle according to the semantic map of the target area and the first real-time image acquired by the vehicle; this preliminary matching may yield a plurality of initial vehicle positions, since areas like parking lots often contain multiple visually similar locations, and further screening among these initial vehicle positions is performed to determine the target vehicle position. In this way, the method determines the target vehicle position accurately and efficiently through coarse positioning followed by fine positioning, avoiding an erroneous determination of the target vehicle position.
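The screening in step S202 can be pictured as picking, among the candidate initial vehicle positions, the one closest to the position predicted from the entrance fix plus the IMU-integrated displacement. The patent does not specify this exact rule, so the sketch below is an assumption with hypothetical names:

```python
import math

def screen_candidates(entry_position, imu_displacement, candidates):
    """Choose the initial-position candidate closest to the position
    predicted by adding the IMU-integrated displacement to the known
    entry position of the target area.

    `entry_position`, `imu_displacement`, and each candidate are (x, y)
    tuples in map coordinates.
    """
    ex, ey = entry_position
    dx, dy = imu_displacement
    predicted = (ex + dx, ey + dy)
    return min(candidates,
               key=lambda c: math.hypot(c[0] - predicted[0],
                                        c[1] - predicted[1]))
```

The entrance fix anchors the coarse estimate, and the dead-reckoned displacement disambiguates between the visually similar candidate positions.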
Referring to fig. 3, fig. 3 is a flowchart illustrating a determination of basic location information of a target vehicle according to an embodiment of the present application; in an alternative implementation manner of the embodiment of the present application, the basic position information of the target vehicle is obtained according to the first real-time image and the semantic map of the target area, and may be obtained by the following manner:
Step S300: extracting a plurality of first feature points and a plurality of first feature point descriptors of the semantic map, and a plurality of second feature points and a plurality of second feature point descriptors of the first real-time image.
In the above step S300, after the semantic map of the target area is acquired, a plurality of feature points on the semantic map of the target area are acquired, and in the embodiment of the present application, the feature points extracted from the semantic map of the target area are referred to as first feature points. It may be understood that the feature points have corresponding feature point descriptors, and in the embodiment of the present application, the feature point descriptors corresponding to the first feature point are referred to as first feature point descriptors. Similarly, the feature points extracted from the first real-time image acquired from the target vehicle and the descriptors corresponding to the feature points are the second feature points and the second feature point descriptors, respectively.
Step S301: and matching the plurality of second feature points with the plurality of first feature points, and matching the plurality of second feature point descriptors with the plurality of first feature point descriptors to obtain basic position information.
In the above step S301, after the first feature points and first feature point descriptors, and the second feature points and second feature point descriptors are obtained, in order to match the first real-time image with the semantic map of the target area, the embodiment of the present application matches the second feature points with the first feature points, and matches the second feature point descriptors with the first feature point descriptors. It will be appreciated that, through this matching, the points or positions that the semantic map of the target area and the first real-time image have in common can be obtained.
As can be seen from fig. 3, in order to obtain the basic position information of the target vehicle, the vehicle positioning method provided by the embodiment of the present application extracts the feature points and feature point descriptors of both the semantic map of the target area and the first real-time image, and matches the two to find the positions shared by the semantic map and the first real-time image, so as to accurately determine the basic position information of the target vehicle.
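The extraction-and-matching flow of steps S300 and S301 can be illustrated with a short sketch. The embodiment does not fix a particular feature or descriptor type, so the following treats descriptors as plain numeric vectors (as ORB or SIFT descriptors would be) and matches each image descriptor to its nearest map descriptor, using Lowe's ratio test to reject ambiguous matches; the function name and the ratio threshold are illustrative assumptions, not details of the application:

```python
import numpy as np

def match_descriptors(desc_map, desc_image, ratio=0.75):
    """Brute-force match image descriptors against map descriptors.

    For each image descriptor, find its two nearest map descriptors and keep
    the match only if the best distance is clearly smaller than the second
    best (Lowe's ratio test).  Returns (image_index, map_index) pairs.
    """
    matches = []
    for i, d in enumerate(desc_image):
        # Euclidean distance from this descriptor to every map descriptor.
        dists = np.linalg.norm(desc_map - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

In a real system the matched pairs would then typically be passed to a geometric verification step (e.g. RANSAC over a pose model) before the basic position information is reported.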
In an alternative embodiment, determining the target vehicle location based on the base location information and the vehicle IMU data further comprises: if it is determined that the basic position information includes only one initial vehicle position, the initial vehicle position is determined as the target vehicle position.
It can be understood that, if, after the basic position information is acquired, it is determined that the basic position information includes only one initial vehicle position, that initial vehicle position is directly determined as the target vehicle position; in this case, there is no need to perform secondary screening in combination with the vehicle IMU data, so that the target vehicle position is determined accurately and efficiently.
In an alternative embodiment, the vehicle IMU data includes some or all of the IMU data recorded after the target vehicle enters the target area and before the target vehicle parks.
Illustratively, the vehicle IMU data includes part or all of the IMU data recorded from the moment the target vehicle enters the target area until the moment it parks. As the starting point for acquiring the IMU data, the moment the target vehicle enters the target area may be determined from the positioning signal last received when entering the target area, or by combining the first real-time image with its timestamp information.
Therefore, the vehicle IMU data acquired by the embodiment of the present application is part or all of the IMU data collected from the time the target vehicle enters the target area; the IMU data required for determining the vehicle position can thus be selected accurately, and an excess of data is prevented from reducing the efficiency of acquiring the target vehicle position.
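The windowing described above can be sketched as a simple timestamp filter. How the entry time is obtained (last positioning fix, or the first real-time image's timestamp) and the sample layout are assumptions for illustration:

```python
def select_imu_window(imu_samples, t_enter, t_park):
    """Keep only the IMU samples recorded after the target vehicle entered
    the target area (t_enter, e.g. taken from the last positioning fix or
    the first real-time image's timestamp) and before it parked (t_park).

    Each sample is assumed to be a (timestamp, measurement) pair.
    """
    return [s for s in imu_samples if t_enter <= s[0] <= t_park]
```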
Referring to fig. 4, fig. 4 is a first flowchart of vehicle searching according to an embodiment of the present application; the method comprises the following steps:
step S400: and acquiring a second real-time image shot by the mobile terminal, and determining the position of the mobile terminal according to the second real-time image, the IMU data of the mobile terminal and the semantic map of the target area.
In the above step S400, in order to find the target vehicle, a second real-time image captured by the mobile terminal is acquired, and the mobile terminal position is determined according to the second real-time image, the IMU data of the mobile terminal and the semantic map of the target area. It will be appreciated that the mobile terminal, for example a device running a mobile APP or applet, is held by the target user; that is, the position of the mobile terminal represents the position of the user.
Step S401: a target vehicle position of a target vehicle is acquired.
In the above step S401, the target vehicle position may be obtained by the vehicle positioning method provided in the first aspect of the present application; after the target vehicle position has been obtained in this way, the vehicle searching method provided by the embodiment of the present application acquires it for subsequent navigation.
Step S402: based on the semantic map, navigating from the mobile terminal position to the target vehicle position to find the target vehicle.
In the above step S402, after the mobile terminal position and the target vehicle position are acquired, navigation is performed from the mobile terminal position to the target vehicle position on the semantic map of the target area, thereby finding the target vehicle. The target vehicle position may be obtained by the method provided in the first aspect of the present application; alternatively, a first real-time image representing the vehicle position may be acquired, and the target vehicle position obtained from the first real-time image and the semantic map of the target area.
As can be seen from fig. 4, on the basis of acquiring the position of the target vehicle, the position of the mobile terminal can be determined according to the real-time image acquired by the mobile terminal and the IMU data of the mobile terminal, and navigation is performed from the position of the mobile terminal to the position of the target vehicle based on the semantic map of the target area, so as to achieve the purpose of searching the target vehicle; therefore, the vehicle searching method provided by the embodiment can efficiently search the target vehicle without depending on the positioning signal on the basis of the vehicle positioning method.
Referring to fig. 5, fig. 5 is a second flowchart of target vehicle searching according to an embodiment of the present application; in an alternative implementation of the embodiment of the present application, based on a semantic map, navigating from a mobile terminal position to a target vehicle position to find a target vehicle includes the following steps:
step S500: traversing the semantic map, and searching the position of the mobile terminal and the position of the target vehicle.
In the above step S500, in order to find the target vehicle, after the target vehicle position and the mobile terminal position are acquired, a semantic map of the target area is traversed, and the target vehicle position and the mobile terminal position are found on the semantic map.
Step S501: and planning a vehicle searching path according to the position of the mobile terminal and the position of the target vehicle by a preset navigation algorithm, and displaying the vehicle searching path on the mobile terminal.
In the above step S501, after the target vehicle position and the mobile terminal position are found on the semantic map, a vehicle-searching path from the mobile terminal position to the target vehicle position is planned by a preset navigation algorithm, and the vehicle-searching path is displayed on the mobile terminal. Illustratively, the navigation algorithm may be the A* algorithm, a PRM path planning algorithm, the D* algorithm, a path smoothing algorithm, or the like. The algorithm can be selected according to actual conditions, and the specific navigation algorithm does not limit the protection scope of the vehicle searching method provided by the embodiment of the present application.
As can be seen from fig. 5, after the target vehicle position and the mobile terminal position are obtained, a path planning can be performed through a preset navigation algorithm, and the planned path is displayed on the mobile terminal; therefore, a user can realize vehicle searching according to the vehicle searching path displayed on the mobile terminal.
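Among the candidate navigation algorithms named above, A* is the most common choice. A minimal sketch on a 4-connected occupancy grid of the target area follows; the grid representation and 4-connectivity are assumptions for illustration, not details of the application:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = blocked).
    Returns the list of cells from start to goal, or None if unreachable."""
    def h(c):  # Manhattan-distance heuristic (admissible on a 4-grid)
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start), 0, start)]   # (f, g, cell)
    came_from, g = {}, {start: 0}
    while open_set:
        _, cost, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came_from:     # walk back to the start
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cost > g.get(cur, float("inf")):
            continue                     # stale queue entry
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[cur] + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cur
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None
```

The returned cell sequence would then be converted into map coordinates and rendered as the vehicle-searching path on the mobile terminal.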
Referring to fig. 6, fig. 6 is a first flowchart of mobile terminal location finding according to an embodiment of the present application; in an alternative implementation manner of the embodiment of the present application, determining the mobile terminal position of the mobile terminal according to the second real-time image, the mobile terminal IMU data and the semantic map of the target area may include the following steps:
step S600: and establishing a local positioning image according to the second real-time image and the IMU data of the mobile terminal.
In the step S600, in order to obtain the position of the mobile terminal, the IMU data of the mobile terminal and the second real-time image captured by the mobile terminal are obtained from the mobile terminal; establishing a local positioning image according to the second real-time image and the IMU data of the mobile terminal; it should be noted that, algorithms such as SLAM may be used to create the local positioning image.
Step S601: and extracting a plurality of third feature points and a plurality of third feature point descriptors of the local positioning image.
In the step S601, after the local positioning image is established, feature point extraction is performed on the local positioning image, and feature point descriptors corresponding to features are acquired and respectively recorded as a third feature point and a third feature point descriptor.
Step S602: and matching the plurality of third feature points with the plurality of first feature points, and matching the plurality of third feature point descriptors with the plurality of first feature point descriptors to obtain the initial position of the mobile terminal.
In the step S602, a plurality of third feature points and a plurality of first feature points are matched, and a plurality of third feature point descriptors and a plurality of first feature point descriptors are matched; after matching, the same characteristic points of the second real-time image and the semantic map of the target area can be obtained, so that the initial position of the mobile terminal is obtained.
Step S603: and determining the position of the mobile terminal according to the initial position of the mobile terminal and the semantic map.
In the above step S603, similarly to the vehicle positioning case, the acquired initial position of the mobile terminal may correspond to a plurality of similar positions; to screen the mobile terminal position out of these similar positions, the semantic map and the initial position are combined for further screening.
As can be seen from fig. 6, the local positioning image can be built at the mobile terminal based on the second real-time image acquired by the mobile terminal and the IMU data of the mobile terminal; after the local positioning image is established, the feature points and feature point descriptors of the local positioning image and the semantic map are matched, so that preliminary determination of the position of the mobile terminal is completed, and the initial position of the mobile terminal is obtained.
Referring to fig. 7, fig. 7 is a second flowchart of mobile terminal location finding according to an embodiment of the present application; in an alternative implementation manner of the embodiment of the present application, determining the mobile terminal position according to the mobile terminal initial position and the semantic map may include the following steps:
step S700: image segmentation is performed on the semantic map to obtain a feature location dataset.
In the above step S700, image segmentation is performed on the semantic map, thereby obtaining a feature position dataset; the feature position corresponding to the feature position data set includes an arrow of the target area, a pillar of the target area, and/or a lane line of the target area.
Step S701: and carrying out semantic segmentation on the second real-time image to obtain a mobile terminal real-time characteristic position dataset.
In the step S701, further, semantic segmentation is performed on the second real-time image, so as to obtain a real-time feature position dataset of the mobile terminal; the feature positions corresponding to the feature position data set of the mobile terminal include an arrow in the initial position of the mobile terminal, a pillar in the initial position of the mobile terminal, and/or a lane line in the initial position of the mobile terminal.
It will be appreciated by those skilled in the art that after processing the arrows, pillars and/or lane lines in the initial position or target area, a series of data sets can be obtained regarding the position.
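The segmentation outputs of steps S700 and S701 can be turned into the point-set form used later for matching. A minimal sketch, assuming the segmentation has already produced a per-pixel class-label image; the class ids and names are hypothetical:

```python
import numpy as np

# Hypothetical class ids for the segmentation output; the real label scheme
# depends on whichever segmentation model is actually used.
CLASSES = {"lane_line": 1, "arrow": 2, "pillar": 3}

def extract_point_sets(label_map):
    """Convert a per-pixel class-label image into one (row, col) point set
    per feature class, i.e. a 'feature position dataset'."""
    return {name: np.argwhere(label_map == cid) for name, cid in CLASSES.items()}
```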
Step S702: and matching the real-time characteristic position data set of the mobile terminal with the characteristic position data set to determine the position of the mobile terminal.
In the above step S702, after the images are processed, the mobile terminal real-time feature position dataset and the feature position dataset are obtained respectively; the mobile terminal real-time feature position dataset is then matched with the feature position dataset to determine the mobile terminal position. For example, the ICP algorithm may be used to register the datasets: the matched lane-line and arrow positions on the map are screened out, and correspondences are found according to the ground points of the pillar segmentation, so as to obtain the final position of the mobile terminal.
As can be seen from fig. 7, a dataset about arrows, pillars and/or lane lines is obtained by image segmentation of the second real-time image and the semantic map; the final mobile terminal position can be accurately and rapidly obtained through matching analysis from the initial position of the mobile terminal through matching among the data sets.
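The ICP registration mentioned for step S702 can be sketched as a standard point-to-point ICP in 2-D: alternate nearest-neighbour correspondence with a closed-form rigid transform (Kabsch/SVD). This is a generic textbook ICP, not the specific implementation of the application:

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Minimal 2-D point-to-point ICP aligning src onto dst.
    Returns (R, t) such that aligned = src @ R.T + t."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # Nearest neighbour in dst for every point of cur.
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = dst[np.argmin(d, axis=1)]
        # Best rigid transform for these correspondences (Kabsch / SVD).
        mu_s, mu_d = cur.mean(0), nn.mean(0)
        H = (cur - mu_s).T @ (nn - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step  # compose the total transform
    return R, t
```

With perfect correspondences this converges in one iteration; in practice the initial position estimate obtained from the feature-point matching keeps the point sets close enough for the nearest-neighbour step to find correct correspondences.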
Referring to fig. 8, fig. 8 is a flowchart of feature location dataset acquisition according to an embodiment of the present application; in an alternative implementation of the embodiment of the present application, image segmentation is performed on a semantic map to obtain a feature location dataset, comprising the steps of:
Step S800: feature positions in the mobile end initial position in the second real-time image are segmented.
In the above step S800, the arrows in the mobile terminal initial position, the pillars in the mobile terminal initial position, and/or the lane lines in the mobile terminal initial position are segmented. The image segmentation method may be a threshold-based segmentation method, a region-based segmentation method, an edge-based segmentation method, or the like.
Step S801: the feature locations are projected onto the local positioning image to obtain a feature location dataset.
In the above step S801, the feature positions are projected onto the local positioning image, thereby obtaining the feature position dataset. For example, according to the transformation relation between the feature positions and the second real-time image calculated by SLAM, the arrows and lane lines are projected onto the local map that the mobile terminal has just built for positioning; on this map, the lane lines, arrows and the like become a series of point sets, namely the feature position dataset.
As can be seen from fig. 8, in the vehicle searching method provided by the embodiment of the present application, in order to select the mobile terminal position from the mobile terminal initial positions, the second real-time image and the semantic map acquired by the mobile terminal are subjected to image segmentation; obtaining a series of point sets representing special positions, such as a lane line point set, an arrow point set and the like; and screening the initial position of the mobile terminal based on the point set, so as to obtain the final position of the mobile terminal.
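The projection of step S801 can be sketched with a pinhole camera model under a flat-ground assumption: each segmented ground pixel (lane line, arrow) is back-projected along its viewing ray until it meets the z = 0 ground plane of the local map. The intrinsics K and the world-to-camera pose (R, t) would come from the SLAM estimate; all names here are illustrative assumptions:

```python
import numpy as np

def pixel_to_map(pixels, K, R, t):
    """Back-project ground pixels onto the z = 0 ground plane of the local
    map, given camera intrinsics K and world-to-camera pose (R, t), i.e.
    x_cam = R @ x_world + t.  Returns an Nx2 array of ground coordinates."""
    K_inv = np.linalg.inv(K)
    out = []
    for u, v in pixels:
        ray_cam = K_inv @ np.array([u, v, 1.0])  # viewing ray, camera frame
        ray_w = R.T @ ray_cam                    # rotate ray into world frame
        origin = -R.T @ t                        # camera centre in world frame
        s = -origin[2] / ray_w[2]                # scale to reach z = 0 plane
        p = origin + s * ray_w
        out.append(p[:2])
    return np.array(out)
```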
Referring to fig. 9, fig. 9 is a schematic block diagram of a vehicle positioning system according to an embodiment of the application; the vehicle positioning system 100 includes: the vehicle information acquisition module 110 and the vehicle position calculation module 120.
The vehicle information acquisition module 110 is configured to acquire a first real-time image and vehicle IMU data of a target vehicle; wherein the first real-time image characterizes a location of a target vehicle.
The vehicle position calculation module 120 is configured to obtain basic position information of the target vehicle according to the first real-time image and the semantic map of the target area; the vehicle location calculation module 120 is also configured to determine a target vehicle location based on the base location information and the vehicle IMU data.
In an alternative embodiment, the vehicle position calculation module 120 includes a vehicle initial position calculation module 121; the vehicle position calculation module 120 determining the target vehicle position based on the basic position information and the vehicle IMU data includes: the vehicle initial position calculation module 121 determines whether the basic position information includes a plurality of initial vehicle positions; if the vehicle initial position calculation module 121 determines that the basic position information includes a plurality of initial vehicle positions, it acquires the initial position information of the target vehicle entering the target area; and the vehicle position calculation module 120 screens the target vehicle position from the plurality of initial vehicle positions based on the vehicle IMU data and the initial position information.
In an alternative embodiment, the vehicle position calculation module 120 obtains the basic position information of the target vehicle according to the first real-time image and the semantic map of the target area, including: the vehicle initial position calculation module 121 extracts a plurality of first feature points and a plurality of first feature point descriptors of the semantic map, and a plurality of second feature points and a plurality of second feature point descriptors of the first real-time image; the vehicle initial position calculation module 121 matches the plurality of second feature points with the plurality of first feature points, and matches the plurality of second feature point descriptors with the plurality of first feature point descriptors to obtain the basic position information.
In an alternative embodiment, the vehicle location calculation module 120 determines the target vehicle location based on the base location information and the vehicle IMU data, further comprising: if the vehicle initial position calculation module 121 determines that the basic position information includes only one initial vehicle position, the vehicle position calculation module 120 determines the initial vehicle position as the target vehicle position.
In an alternative embodiment, the vehicle positioning system 100 further includes a data acquisition module 130; the vehicle IMU data acquired by the data acquisition module 130 includes part or all of the IMU data recorded after the target vehicle enters the target area and before the target vehicle parks.
Referring to fig. 10, fig. 10 is a schematic block diagram of a vehicle searching system according to an embodiment of the application; the vehicle finding system 200 includes: a mobile end information acquisition module 210, a mobile end position calculation module 220, a vehicle position acquisition module 230, and a path planning module 240.
The mobile terminal information acquisition module 210 is configured to acquire a second real-time image captured by the mobile terminal.
The mobile terminal position obtaining module 220 is configured to determine a mobile terminal position of the mobile terminal according to the second real-time image, the mobile terminal IMU data, and the semantic map of the target area.
The vehicle position acquisition module 230 is configured to acquire a target vehicle position of a target vehicle.
The path planning module 240 is configured to navigate from the mobile terminal location to the target vehicle location based on the semantic map to find the target vehicle.
It should be noted that the vehicle position and the moving end position are located in the target area; wherein a target vehicle position of a target vehicle is obtained according to the vehicle positioning system provided by the first aspect of the application; or acquiring a first real-time image representative of the vehicle location; and acquiring the position of the target vehicle according to the first real-time image and the semantic map of the target area.
In an alternative embodiment, path planning module 240 navigates from the mobile terminal location to the target vehicle location based on the semantic map to find the target vehicle, including: the path planning module 240 traverses the semantic map to find the mobile terminal location and the target vehicle location; the path planning module 240 plans a vehicle-searching path according to the mobile terminal position and the target vehicle position by using a preset navigation algorithm, and displays the vehicle-searching path on the mobile terminal.
In an alternative embodiment, the mobile end position acquisition module 220 includes a local positioning image creation module 221 and a feature matching module 222; the mobile terminal position obtaining module 220 determines a mobile terminal position of the mobile terminal according to the second real-time image, the mobile terminal IMU data and the semantic map of the target area, including: the local positioning image establishing module 221 establishes a local positioning image according to the second real-time image and the IMU data of the mobile terminal; the feature matching module 222 extracts a plurality of third feature points and a plurality of third feature point descriptors of the locally-positioned image; matching the plurality of third feature points with the plurality of first feature points, and matching the plurality of third feature point descriptors with the plurality of first feature point descriptors by the feature matching module 222 to obtain a mobile terminal initial position of the mobile terminal; the mobile terminal position obtaining module 220 determines the mobile terminal position according to the mobile terminal initial position and the semantic map.
In an alternative embodiment, the mobile end location acquisition module 220 further includes an image segmentation module 223 and a dataset matching module 224; the mobile terminal position obtaining module 220 determines a mobile terminal position according to the mobile terminal initial position and the semantic map, including: the image segmentation module 223 performs image segmentation on the semantic map to obtain a feature location dataset; the characteristic positions corresponding to the characteristic position data sets comprise arrows of the target area, pillars of the target area and/or lane lines of the target area; the image segmentation module 223 performs semantic segmentation on the second real-time image to obtain a mobile terminal real-time feature position dataset; the characteristic positions corresponding to the characteristic position data set of the mobile terminal comprise an arrow in the initial position of the mobile terminal, a pillar in the initial position of the mobile terminal and/or a lane line in the initial position of the mobile terminal; the data set matching module 224 matches the mobile end real-time feature location data set with the feature location data set to determine a mobile end location.
In an alternative embodiment, the image segmentation module 223 performs image segmentation on the semantic map to obtain a feature location dataset, comprising: the image segmentation module 223 segments the feature position in the mobile terminal initial position in the second real-time image; the image segmentation module 223 projects the feature locations to the locally positioned image to obtain a feature location dataset.
Referring to fig. 11, fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application. An electronic device 300 provided in an embodiment of the present application includes: a processor 301 and a memory 302, the memory 302 storing machine-readable instructions executable by the processor 301, which when executed by the processor 301 perform the method as described above.
Based on the same inventive concept, the embodiments of the present application also provide a computer readable storage medium, in which computer program instructions are stored, which when read and run by a processor, perform the steps in any of the above implementations.
The computer readable storage medium may be any of various media capable of storing program code, such as Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The storage medium is used for storing a program; the processor executes the program after receiving an execution instruction, and the method performed by the electronic terminal defined by the process disclosed in any embodiment of the present application may be applied to the processor or implemented by the processor.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
Further, the units described as separate units may or may not be physically separate, and units displayed as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Furthermore, functional modules in various embodiments of the present application may be integrated together to form a single portion, or each module may exist alone, or two or more modules may be integrated to form a single portion.
Alternatively, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present invention, in whole or in part.
The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in or transmitted from one computer-readable storage medium to another, for example, by wired (e.g., coaxial cable, optical fiber, digital Subscriber Line (DSL)), or wireless (e.g., infrared, wireless, microwave, etc.).
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (14)

1. A vehicle positioning method, the method comprising:
acquiring a first real-time image and vehicle IMU data of the target vehicle; wherein the first real-time image characterizes a location of a target vehicle;
acquiring basic position information of the target vehicle according to the first real-time image and a semantic map of a target area;
and determining a target vehicle position based on the basic position information and the vehicle IMU data.
2. The method of claim 1, wherein the determining a target vehicle location based on the base location information and the vehicle IMU data comprises:
judging whether the basic position information comprises a plurality of initial vehicle positions or not;
if the basic position information is judged to comprise the plurality of initial vehicle positions, acquiring initial position information of the target vehicle entering the target area;
and screening the target vehicle position from the plurality of initial vehicle positions based on the vehicle IMU data and the initial position information.
3. The method according to claim 1, wherein the obtaining basic location information of the target vehicle according to the first real-time image and the semantic map of the target area includes:
extracting a plurality of first feature points and a plurality of first feature point descriptors of the semantic map, and a plurality of second feature points and a plurality of second feature point descriptors of the first real-time image;
and matching the plurality of second feature points with the plurality of first feature points, and matching the plurality of second feature point descriptors with the plurality of first feature point descriptors to acquire basic position information.
4. The method of claim 2, wherein the determining a target vehicle location based on the base location information and the vehicle IMU data further comprises:
and if the basic position information is judged to only comprise one initial vehicle position, judging the initial vehicle position as the target vehicle position.
5. The method of claim 1, wherein the vehicle IMU data comprises part or all of the IMU data recorded after the target vehicle enters the target area and before the target vehicle parks.
6. A vehicle finding method, the method comprising:
acquiring a second real-time image shot by a mobile terminal, and determining the position of the mobile terminal according to the second real-time image, the IMU data of the mobile terminal and a semantic map of a target area;
acquiring a target vehicle position of a target vehicle;
navigating from the mobile terminal position to a target vehicle position based on the semantic map to find the target vehicle;
wherein the vehicle position and the mobile end position are located within the target area;
the method for acquiring the target vehicle position of the target vehicle comprises the following steps: the vehicle positioning method according to any one of claims 1 to 5; or (b)
Acquiring a first real-time image representing a vehicle position; and acquiring the position of the target vehicle according to the first real-time image and the semantic map of the target area.
7. The method of claim 6, wherein navigating from the mobile terminal position to the target vehicle position based on the semantic map to find the target vehicle comprises:
traversing the semantic map to locate the mobile terminal position and the target vehicle position;
and planning a vehicle-searching path from the mobile terminal position to the target vehicle position with a preset navigation algorithm, and displaying the vehicle-searching path on the mobile terminal.
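Claim 7 leaves the "preset navigation algorithm" open. A common choice for planning a walkable path over a map is A* search; the sketch below assumes an occupancy-grid map representation and a Manhattan-distance heuristic, neither of which the patent specifies:

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on an occupancy grid (0 = free, 1 = blocked).

    start, goal: (row, col) cells. Returns the path as a list of cells,
    or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # heuristic
    open_set = [(h(start), 0, start, [start])]  # (f, g, cell, path)
    seen = set()
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = cur[0] + dr, cur[1] + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None
```

The returned cell sequence would then be rendered on the mobile terminal as the vehicle-searching path.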
8. The method of claim 6, wherein determining the mobile terminal position of the mobile terminal from the second real-time image, the mobile terminal IMU data, and the semantic map of the target area comprises:
establishing a local positioning image according to the second real-time image and the IMU data of the mobile terminal;
extracting a plurality of third feature points and a plurality of third feature point descriptors of the local positioning image;
matching the plurality of third feature points with the plurality of first feature points, and matching the plurality of third feature point descriptors with the plurality of first feature point descriptors to obtain a mobile terminal initial position of the mobile terminal;
and determining the position of the mobile terminal according to the initial position of the mobile terminal and the semantic map.
9. The method of claim 8, wherein the determining the mobile terminal position from the mobile terminal initial position and the semantic map comprises:
performing image segmentation on the semantic map to obtain a feature position dataset, wherein the feature positions corresponding to the feature position dataset comprise arrows of the target area, pillars of the target area, and/or lane lines of the target area;
performing semantic segmentation on the second real-time image to obtain a mobile terminal real-time feature position dataset, wherein the feature positions corresponding to the mobile terminal real-time feature position dataset comprise arrows, pillars, and/or lane lines within the mobile terminal initial position;
and matching the mobile terminal real-time feature position dataset with the feature position dataset to determine the mobile terminal position.
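Claim 9 matches class-labelled landmarks (arrows, pillars, lane lines) between the live image and the semantic map, but does not give the matching rule. One hedged interpretation: correct the initial position by the mean offset between each observed landmark and its nearest same-class map landmark. The alignment rule and the name `refine_position` are illustrative assumptions:

```python
import numpy as np

def refine_position(initial_pos, observed, map_features):
    """Refine a mobile-terminal position from semantic landmark matches.

    observed / map_features: dicts mapping a class label ('arrow',
    'pillar', 'lane_line') to lists of (x, y) landmark positions.
    Returns the corrected (x, y) position.
    """
    offsets = []
    for cls, obs_pts in observed.items():
        map_pts = map_features.get(cls)
        if not map_pts:
            continue
        for p in obs_pts:
            # nearest map landmark of the same semantic class
            q = min(map_pts, key=lambda m: np.linalg.norm(np.subtract(m, p)))
            offsets.append(np.subtract(q, p))
    if not offsets:
        return tuple(initial_pos)
    correction = np.mean(offsets, axis=0)  # average observed-to-map shift
    return tuple(np.asarray(initial_pos, dtype=float) + correction)
```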
10. The method of claim 9, wherein said image segmentation of the semantic map to obtain a feature location dataset comprises:
segmenting the feature positions within the mobile terminal initial position from the second real-time image;
and projecting the feature positions onto the local positioning image to obtain the feature position dataset.
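Claim 10's projection step is not specified. If the segmented feature positions are available as 3-D points in the camera frame, a standard pinhole model projects them onto the local positioning image; the intrinsics `fx, fy, cx, cy` are assumed to come from camera calibration, which the claim does not mention:

```python
import numpy as np

def project_points(points_3d, fx, fy, cx, cy):
    """Pinhole projection of 3-D feature positions (camera frame,
    z pointing forward) onto the image plane. Returns (N, 2) pixels."""
    pts = np.asarray(points_3d, dtype=float)
    u = fx * pts[:, 0] / pts[:, 2] + cx
    v = fy * pts[:, 1] / pts[:, 2] + cy
    return np.stack([u, v], axis=1)
```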
11. A vehicle positioning system, the vehicle positioning system comprising: a vehicle information acquisition module and a vehicle position calculation module;
the vehicle information acquisition module is used for acquiring a first real-time image and vehicle IMU data of the target vehicle; wherein the first real-time image characterizes a location of a target vehicle;
The vehicle position calculation module is used for acquiring basic position information of the target vehicle according to the first real-time image and a semantic map of the target area;
the vehicle location calculation module is further configured to determine a target vehicle location based on the base location information and the vehicle IMU data.
12. A vehicle finding system, characterized in that the vehicle finding system comprises: the mobile terminal comprises a mobile terminal information acquisition module, a mobile terminal position calculation module, a vehicle position acquisition module and a path planning module;
the mobile terminal information acquisition module is used for acquiring a second real-time image shot by the mobile terminal;
the mobile terminal position calculation module is used for determining the mobile terminal position of the mobile terminal according to the second real-time image, the mobile terminal IMU data and the semantic map of the target area;
the vehicle position acquisition module is used for acquiring a target vehicle position of a target vehicle;
the path planning module is used for navigating from the mobile terminal position to a target vehicle position based on the semantic map so as to find the target vehicle;
wherein the target vehicle position and the mobile terminal position are located within the target area; and wherein the target vehicle position of the target vehicle is obtained by the vehicle positioning system of claim 11.
13. An electronic device comprising a memory and a processor, the memory having stored therein program instructions which, when executed by the processor, perform the steps of the method of any of claims 1-10.
14. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein computer program instructions which, when executed by a processor, perform the steps of the method of any of claims 1-10.
CN202310519514.7A 2023-05-09 2023-05-09 Vehicle positioning and searching method, system, electronic equipment and storage medium Pending CN116772832A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310519514.7A CN116772832A (en) 2023-05-09 2023-05-09 Vehicle positioning and searching method, system, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116772832A true CN116772832A (en) 2023-09-19

Family

ID=87993863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310519514.7A Pending CN116772832A (en) 2023-05-09 2023-05-09 Vehicle positioning and searching method, system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116772832A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117109592A (en) * 2023-10-18 2023-11-24 北京集度科技有限公司 Vehicle navigation method, device, computer equipment and storage medium
CN117109592B (en) * 2023-10-18 2024-01-12 北京集度科技有限公司 Vehicle navigation method, device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108303103B (en) Method and device for determining target lane
CN112116654B (en) Vehicle pose determining method and device and electronic equipment
CN113034960B (en) Object change detection system for updating precise route map and method thereof
EP3355027A1 (en) Map updating method and vehicle-mounted terminal
JP5901779B2 (en) How to move data from image database map service into assist system
CN102362156B (en) Map data update system, map data updating method
CN101275841B (en) Feature information collecting apparatus and feature information collecting method
EP2920954B1 (en) Automatic image capture
CN101469998B (en) Feature information collecting apparatus, and own vehicle position recognition apparatus and navigation apparatus
CN102893129B (en) Terminal location certainty annuity, mobile terminal and terminal position identification method
CN106652533A (en) Reverse vehicle search method and apparatus thereof
CN113034566A (en) High-precision map construction method and device, electronic equipment and storage medium
CN112652186A (en) Parking lot vehicle searching method, client and storage medium
CN116772832A (en) Vehicle positioning and searching method, system, electronic equipment and storage medium
CN111323004B (en) Initial position determining method and vehicle-mounted terminal
CN109670003A (en) Electronic map parking lot update method, device and equipment
CN106779174A (en) Route planning method, apparatus and system
KR20150095365A (en) Distance measuring method using vision sensor database
CN111323029B (en) Navigation method and vehicle-mounted terminal
CN115420275A (en) Loop path prediction method and device, nonvolatile storage medium and processor
CN112689234B (en) Indoor vehicle positioning method, device, computer equipment and storage medium
CN114554391A (en) Parking lot vehicle searching method, device, equipment and storage medium
CN111651547B (en) Method and device for acquiring high-precision map data and readable storage medium
CN111754388B (en) Picture construction method and vehicle-mounted terminal
KR100221401B1 (en) Method for supporting and displaying the moving picture on computer numerical map using satellite navigation system and moving picture supporting system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination