CN116229753A - Navigation method and device for seeking vehicle - Google Patents

Navigation method and device for seeking vehicle

Info

Publication number
CN116229753A
CN116229753A (application CN202111466652.0A)
Authority
CN
China
Prior art keywords
point cloud
vehicle
cloud map
looking
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111466652.0A
Other languages
Chinese (zh)
Inventor
刘锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Rockwell Technology Co Ltd
Original Assignee
Beijing Rockwell Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Rockwell Technology Co Ltd filed Critical Beijing Rockwell Technology Co Ltd
Priority to CN202111466652.0A
Publication of CN116229753A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/123 - Traffic control systems for road vehicles indicating the position of vehicles, e.g. scheduled vehicles; Managing passenger vehicles circulating according to a fixed timetable, e.g. buses, trains, trams

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

The present disclosure provides a navigation method and device for searching for a vehicle, and relates to the field of computer vision. The method comprises: acquiring a target point cloud map of a vehicle parking trajectory, wherein the target point cloud map comprises looking-around images acquired during the parking process of the vehicle and the parking position of the vehicle; acquiring a captured image from a terminal device and matching the captured image with the looking-around images; in response to the existence of a similar looking-around image matching the captured image, determining the positioning position of the terminal device on the target point cloud map according to the similar looking-around image; and generating a vehicle-searching navigation path on the target point cloud map according to the positioning position and the parking position, and sending the vehicle-searching navigation path to the terminal device. Because the point cloud map is generated by the vehicle itself, the driver can find the vehicle in an unfamiliar environment more efficiently, which improves the driver's experience.

Description

Navigation method and device for seeking vehicle
Technical Field
The present disclosure relates to the field of computer vision, and in particular, to a navigation method and apparatus for searching for a vehicle.
Background
With the improvement of people's living standards, automobiles have become an everyday means of transportation. As the number of vehicles grows, small underground garages can no longer meet demand, so modern underground garages are built at a very large scale. However, the larger a garage becomes, the harder it is for people to find their way around it, and after leaving the car, finding it again becomes a real problem for the driver.
BRIEF SUMMARY OF THE PRESENT DISCLOSURE
The present disclosure aims to solve, at least to some extent, one of the technical problems in the related art.
To this end, an object of the present disclosure is to propose a navigation method for vehicle finding.
A second object of the present disclosure is to provide a navigation device for searching for a vehicle.
A third object of the present disclosure is to propose an electronic device.
A fourth object of the present disclosure is to propose a non-transitory computer readable storage medium.
A fifth object of the present disclosure is to propose a computer program product.
In order to achieve the above object, an embodiment of a first aspect of the present disclosure provides a navigation method for searching for a vehicle, including: acquiring a target point cloud map of a vehicle parking trajectory, wherein the target point cloud map comprises a looking-around image acquired during the vehicle parking process and a parking position of the vehicle; acquiring a captured image from a terminal device, and matching the captured image with the looking-around image; in response to the existence of a similar looking-around image matching the captured image, determining the positioning position of the terminal device on the target point cloud map according to the similar looking-around image; and generating a vehicle-searching navigation path on the target point cloud map according to the positioning position and the parking position, and sending the vehicle-searching navigation path to the terminal device.
According to one embodiment of the present disclosure, matching the captured image with the looking-around image includes: comparing features of the captured image and the looking-around image to obtain a similarity between the captured image and the looking-around image; and in response to the presence of a looking-around image whose similarity is greater than a set threshold, determining the looking-around image whose similarity is greater than the set threshold as the similar looking-around image.
According to one embodiment of the present disclosure, the acquiring a target point cloud map of a vehicle parking trajectory includes: acquiring a looking-around image collected during the parking process of the vehicle; constructing a SLAM map and generating an original point cloud map of the vehicle parking trajectory; and binding the looking-around image acquired at the same sampling moment with the original point cloud map to generate the target point cloud map.
According to one embodiment of the present disclosure, the generating the original point cloud map of the vehicle parking trajectory includes: acquiring measurement data of a sensor on the vehicle; determining a three-dimensional reconstruction scale factor based on the measurement data; and acquiring positioning information and point cloud data of the vehicle based on SLAM, and performing three-dimensional reconstruction based on the three-dimensional reconstruction scale factors, the positioning information and the point cloud data to generate the original point cloud map.
According to one embodiment of the present disclosure, the navigation method for seeking vehicles further includes: sending the original point cloud map to the terminal equipment; acquiring a position query request sent by the terminal equipment, wherein the query request comprises a queried target position on the original point cloud map; and acquiring a target looking-around image corresponding to the target position, and sending the target looking-around image to the terminal equipment.
According to one embodiment of the present disclosure, the navigation method for seeking vehicles further includes: and sending the target point cloud map to the terminal equipment.
According to one embodiment of the disclosure, the determining, according to the similar looking-around image, a positioning position of the terminal device on the target point cloud map includes: and acquiring the sampling time of the similar looking-around image, acquiring the binding position of the similar looking-around image from the target point cloud map based on the sampling time, and determining the binding position of the similar looking-around image as the positioning position.
An embodiment of a first aspect of the present disclosure provides a navigation method for searching for a vehicle, including: shooting the current environment to obtain a shooting image; the shot image is sent to a server, wherein the shot image is used for determining the positioning position of the current environment on a target point cloud map, and the target point cloud map comprises the parking position of a vehicle; and receiving the vehicle searching navigation path sent by the server.
According to one embodiment of the present disclosure, the navigation method for seeking vehicles further includes: receiving the original point cloud map; responding to a position selection operation, generating a position query request based on the position selection operation, and sending the position query request to the server, wherein the position query request comprises a queried target position on the original point cloud map; and receiving a target looking-around image corresponding to the target position sent by the server, and displaying the target looking-around image at the target position.
According to one embodiment of the present disclosure, the navigation method for seeking vehicles further includes: receiving the target point cloud map; responding to the position selection operation, and determining a queried target position on the target point cloud map based on the position selection operation; and acquiring the looking-around image bound by the target position and displaying the looking-around image at the target position.
According to one embodiment of the present disclosure, the navigation method for seeking vehicles further includes: and highlighting the car searching navigation path on the original point cloud map or the target point cloud map.
To achieve the above object, an embodiment of a second aspect of the present disclosure provides a navigation device for searching for a vehicle, including: an acquisition module, configured to acquire a target point cloud map of a vehicle parking trajectory, wherein the target point cloud map comprises a looking-around image acquired during the parking process of the vehicle and a parking position of the vehicle; a matching module, configured to acquire a captured image and match the captured image with the looking-around image; a positioning module, configured to, in response to the existence of a similar looking-around image matching the captured image, determine the positioning position of the terminal device on the target point cloud map according to the similar looking-around image; and a generation module, configured to generate a vehicle-searching navigation path on the target point cloud map according to the positioning position and the parking position, and send the vehicle-searching navigation path to the terminal device.
An embodiment of a second aspect of the present disclosure provides a terminal device for seeking a vehicle, including: the shooting module is used for shooting the current environment to obtain a shooting image; the sending module is used for sending the shot image to a server, wherein the shot image is used for determining the positioning position of the current environment on a target point cloud map, and the target point cloud map comprises the parking position of a vehicle; and the receiving module is used for receiving the car searching navigation path sent by the server.
To achieve the above object, an embodiment of a third aspect of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to implement a method of navigation for a vehicle search according to an embodiment of the first aspect of the present disclosure.
To achieve the above object, a fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions for implementing a navigation method for seeking a vehicle according to the first aspect of the present disclosure.
To achieve the above object, an embodiment of a fifth aspect of the present disclosure proposes a computer program product comprising a computer program for implementing a navigation method for a vehicle finding according to an embodiment of the first aspect of the present disclosure when being executed by a processor.
Drawings
FIG. 1 is a schematic diagram of a method of navigating a vehicle in accordance with one embodiment of the present disclosure;
FIG. 2 is a schematic diagram of another method of navigation for a vehicle search in accordance with one embodiment of the present disclosure;
FIG. 3 is a schematic diagram of another method of navigation for a vehicle search in accordance with one embodiment of the present disclosure;
FIG. 4 is a schematic diagram of another method of navigation for a vehicle search in accordance with one embodiment of the present disclosure;
FIG. 5 is a schematic diagram of another method of navigation for a vehicle search in accordance with one embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a method of navigation for a vehicle search in accordance with one embodiment of the present disclosure;
FIG. 7 is a schematic diagram of another method of navigation for a vehicle search in accordance with one embodiment of the present disclosure;
FIG. 8 is a block diagram of a navigation device for locating a vehicle in accordance with one embodiment of the present disclosure;
FIG. 9 is a block diagram of another vehicle-seeking navigation device in accordance with one embodiment of the present disclosure;
fig. 10 is a block diagram of an electronic device of one embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present disclosure and are not to be construed as limiting the present disclosure.
Fig. 1 is a schematic diagram of a vehicle-searching navigation method according to an exemplary embodiment of the present disclosure. As shown in fig. 1, the vehicle-searching navigation method includes the following steps:
s101, acquiring a target point cloud map of a vehicle parking track, wherein the target point cloud map comprises an looking-around image acquired in the parking process of the vehicle and the parking position of the vehicle.
In reverse engineering, the set of points measured on a product's surface by a measuring instrument is called a point cloud. The number of points obtained with a three-coordinate measuring machine is usually small and the points are widely spaced, so the result is called a sparse point cloud; the point cloud obtained with a three-dimensional laser scanner or a photographic scanner contains far more, densely spaced points and is called a dense point cloud.
In the embodiment of the present disclosure, the parking track may refer to the travel trajectory of the vehicle from entering the parking lot to reaching its parking space. The driver can turn on the navigation function through the vehicle. After the vehicle-finding navigation function is enabled, driving data and looking-around images collected during travel can be obtained through on-board devices, for example an on-board sensor, an on-board camera, or a gyroscope sensor.
Further, the target point cloud map of the vehicle parking track may be generated by processing the driving data and the looking-around images. Alternatively, the driving data and the looking-around images may be processed by a processor of the vehicle to generate the target point cloud map of the vehicle parking track.
Optionally, the driving data and the looking-around image can be uploaded to a cloud server for processing, so as to generate a point cloud map.
Further, after the point cloud map is generated, a navigation map Application (APP) is updated through the point cloud map.
It should be noted that a looking-around image captures 360 degrees of horizontal and 180 degrees of vertical imaging information at one time, following the bionic principle of transmission and reflection of a spherical mirror in physical optics, and is then processed by a program installed on the vehicle so that the picture is displayed in a way that matches the viewing habits of the human eye. In the embodiment of the disclosure, the point cloud map uses looking-around images, so that the driver can understand the surrounding environment more intuitively.
S102, acquiring a shooting image of the terminal equipment, and matching the shooting image with the looking-around image.
In the embodiment of the disclosure, a driver can shoot an image of the surrounding environment through terminal equipment in the process of searching for the vehicle, and the shot image is uploaded to a server and matched with an looking-around image in a cloud map of a target point through the server.
It should be noted that the terminal device may be a mobile phone, a tablet computer, a wearable device with a camera function, or the like, which is not limited here.
And S103, in response to the existence of a similar looking-around image matching the captured image, determining the positioning position of the terminal device on the target point cloud map according to the similar looking-around image.
In the embodiment of the disclosure, any point on the target point cloud map corresponds to a looking-around image taken when the vehicle passes through the point. Therefore, the shooting image and the looking-around image can be matched to determine whether the shooting image is located on the vehicle running path, and meanwhile, if the shooting image is determined to be located on the vehicle running path, the locating position of the terminal equipment on the target point cloud map can be determined through the looking-around image corresponding to the shooting image.
And S104, generating a vehicle searching navigation path on the target point cloud map according to the positioning position and the parking position, and sending the vehicle searching navigation path to the terminal equipment.
In the embodiment of the disclosure, after the positioning position is acquired, the server can generate a vehicle searching navigation path according to the positioning position and the parking position so as to conveniently carry out navigation reminding on a driver. Optionally, the vehicle searching navigation path is issued to the terminal device of the driver, and then the vehicle searching navigation path can be displayed on the navigation map APP of the terminal device. The navigation map APP includes a point cloud map corresponding to a vehicle parking track, which may be a target point cloud map or a point cloud map that does not include a looking-around image.
Optionally, displaying the vehicle running tracks from the driver positioning point to the vehicle positioning point on a point cloud map of the navigation map APP in other colors, and drawing arrows on the tracks in order of a time axis through which the vehicle passes, so as to guide the driver to search the vehicle according to the navigation path. Further, if the driver deviates from the navigation path, the navigation map APP may also alert the driver of the deviation according to the travel path of the driver.
In the embodiment of the disclosure, a target point cloud map of a vehicle parking track is firstly obtained, wherein the target point cloud map comprises a looking-around image acquired in a parking process of the vehicle and a parking position of the vehicle, then a shooting image of a terminal device is obtained, the shooting image is matched with the looking-around image, then a positioning position of the terminal device on the target point cloud map is determined according to a similar looking-around image matched with the shooting image in response to the existence of the similar looking-around image, and finally a vehicle searching navigation path is generated on the target point cloud map according to the positioning position and the parking position and is sent to the terminal device. Therefore, the point cloud map is generated by the vehicle, so that a driver can find the vehicle in a strange environment more conveniently, and the use experience of the driver is improved.
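As an illustrative sketch only (the disclosure does not prescribe a particular implementation, and all function and field names below are hypothetical), the vehicle-searching path of S104 can be obtained by taking the segment of the recorded parking trajectory that lies between the map point bound to the matched looking-around image and the parking position, ordered along the time axis as described above:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MapPoint:
    x: float          # map coordinates (scaled to metres)
    y: float
    timestamp: float  # sampling time during the parking process

def find_car_path(trajectory: List[MapPoint],
                  locate_ts: float,
                  parking_ts: float) -> List[MapPoint]:
    """Return the part of the parking trajectory between the map point
    bound to the driver's matched looking-around image (locate_ts) and
    the parking position (parking_ts)."""
    lo, hi = sorted((locate_ts, parking_ts))
    segment = [p for p in trajectory if lo <= p.timestamp <= hi]
    segment.sort(key=lambda p: p.timestamp)  # guidance arrows drawn in time order
    return segment
```

A production system would additionally smooth this segment and check that it stays on walkable space before issuing it to the terminal device.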
Matching the captured image with the looking-around image in the above embodiment can be further explained by fig. 2, and as shown in fig. 2, the method includes:
s201, comparing the shot image with the feature of the looking-around image to acquire the similarity between the shot image and the looking-around image.
In the embodiment of the present disclosure, the number of shot images is not limited to one and is set according to the actual situation. For example, in an underground garage scene, the driver may be required to upload 3 shot images to compare with the features of the looking-around image; in an open parking lot scene, the driver may be required to upload 2 shot images for the comparison.
Alternatively, the shot image and the looking-around image may be input into an image processing model, which outputs the similarity between the two. It should be noted that the image processing model may be trained in advance and pre-stored in a storage space of the vehicle processor or the cloud server, so that it can be invoked when needed.
Optionally, the shot image and the looking-around image can be processed through image processing software, feature information of the two images is extracted, target detection is performed based on the feature information, and finally the similarity of the two images is determined.
S202, in response to the presence of the looking-around image with the similarity larger than the set threshold, determining the looking-around image with the similarity larger than the set threshold as the similar looking-around image.
It will be appreciated that the threshold value may be different according to different situations, and is not limited in any way, and is specifically set according to the actual environment. For example, when in a dim scene, the set threshold may be 0.7; in bright scenes, the set threshold may be 0.9.
In the embodiment of the disclosure, the photographed image is first compared with the looking-around image features to obtain the similarity between the photographed image and the looking-around image, and then in response to the presence of the looking-around image with the similarity greater than the set threshold, the looking-around image with the similarity greater than the set threshold is determined to be the similar looking-around image. Therefore, a driver can match the looking-around image through the surrounding environment, the current position is accurately positioned, and the success rate of vehicle searching is greatly increased.
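The disclosure does not specify a particular feature descriptor or matching algorithm. The following is only a minimal sketch, assuming ORB features with Lowe's ratio test as provided by OpenCV; the scoring scale and the way it relates to the threshold above are illustrative assumptions rather than the patented method:

```python
import cv2

def image_similarity(photo_path: str, looking_around_path: str) -> float:
    """Rough similarity score between a phone photo and one
    looking-around image, based on ORB feature matching."""
    photo = cv2.imread(photo_path, cv2.IMREAD_GRAYSCALE)
    pano = cv2.imread(looking_around_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(photo, None)
    kp2, des2 = orb.detectAndCompute(pano, None)
    if des1 is None or des2 is None:
        return 0.0

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    # Fraction of the photo's keypoints with a confident match; this score
    # would be compared against the scene-dependent threshold discussed above.
    return len(good) / max(len(kp1), 1)
```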
In the embodiment of the disclosure, the position bound with the similar looking-around image can be obtained from the point cloud map based on the sampling time, and the position bound with the similar looking-around image is determined as the positioning position.
In the above embodiment, the method for obtaining the point cloud map of the parking track of the vehicle may be further explained by fig. 3, as shown in fig. 3, and includes:
s301, acquiring a looking-around image acquired in the process of parking the vehicle.
Specific steps may refer to the above embodiments, and are not repeated here.
S302, constructing a SLAM map and generating an original point cloud map of the vehicle parking track.
It should be noted that simultaneous localization and mapping (SLAM) allows an agent that starts moving from an unknown position in an unknown environment to localize itself from its pose and the map while it moves, and to build an incremental map on the basis of this self-localization, thereby enabling autonomous positioning and navigation. Using the computing power of the cloud, the on-board sensors can achieve more accurate vehicle localization and point clouds; the point cloud map is built using the on-board speed sensor, the gyroscope sensor, and steering-wheel angle information, and the path of the vehicle is drawn on the map. Specifically, the SLAM map may be constructed based on the instantaneous location of the vehicle and the point cloud information collected at that location.
Specifically, the vehicle can be made to traverse preset waypoints in the SLAM map in sequence through the vehicle's map building program and a multi-point navigation program, so that the map building program builds the map with a preset starting point as the coordinate origin, yielding the original point cloud map.
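As a simplified illustration of how the on-board speed sensor and gyroscope sensor mentioned above can yield the vehicle trajectory before visual refinement, the following dead-reckoning sketch integrates speed and yaw rate into 2D poses; the sampling format and variable names are assumptions, not the disclosed implementation:

```python
import math
from typing import Iterable, List, Tuple

def integrate_odometry(samples: Iterable[Tuple[float, float, float]]
                       ) -> List[Tuple[float, float, float]]:
    """Integrate (dt_seconds, speed_mps, yaw_rate_rad_s) samples from the
    wheel-speed and gyroscope sensors into 2D poses (x, y, heading),
    taking the preset starting point as the coordinate origin."""
    x = y = heading = 0.0
    poses = [(x, y, heading)]
    for dt, speed, yaw_rate in samples:
        heading += yaw_rate * dt
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        poses.append((x, y, heading))
    return poses
```

In the embodiment these poses would be further refined by the SLAM back end, and the point cloud collected along the way would be attached to them to form the original point cloud map.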
S303, binding the looking-around image acquired at the same sampling moment with the original point cloud map to generate a target point cloud map.
In the embodiment of the disclosure, after the looking-around image and the original point cloud map are acquired, the looking-around image and the point positions of the corresponding original point cloud map are bound, so that the looking-around image of any point of the point cloud map can be seen, and the environmental information near the point can be determined based on the looking-around image.
Alternatively, the target point cloud map may be generated by matching the coordinate position of the looking-around image with the coordinate position of the original point cloud map and embedding the looking-around image into the original point cloud map according to the matching result. Specifically, the coordinate position of the looking-around image can be converted into the coordinate system of the original point cloud map, and the converted coordinates are matched against the coordinates in the point cloud map, thereby generating the target point cloud map.
Optionally, matching can also be performed based on the shooting time of the looking-around image and the generation time of the corresponding point on the original point cloud map, and the looking-around image is embedded into the original point cloud map according to the matching result to generate the target point cloud map.
In the embodiment of the disclosure, a looking-around image acquired during the parking process of the vehicle is acquired first, a SLAM map is then constructed to generate an original point cloud map of the vehicle parking track, and finally the looking-around image acquired at the same sampling moment is bound with the original point cloud map to generate the target point cloud map. Therefore, the coordinate points of the point cloud map are bound with looking-around images, which provides a basis for the driver to later find the vehicle by taking photos and greatly improves the driver's experience.
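A minimal sketch of the time-based binding of S303, assuming each map point carries the sampling time at which it was generated and each looking-around image carries its capture time (the tolerance value and data layout are assumptions):

```python
from bisect import bisect_left
from typing import Dict, List, Tuple

def bind_frames_to_map(pose_times: List[float],
                       frames: List[Tuple[float, str]],
                       tolerance_s: float = 0.05) -> Dict[int, str]:
    """Bind each looking-around image to the map point whose sampling
    time is closest to the image's capture time.

    pose_times: sorted sampling times of the original point cloud map points
    frames:     (capture_time, image_id) pairs
    Returns {map_point_index: image_id}, i.e. the target point cloud map bindings.
    """
    bindings: Dict[int, str] = {}
    if not pose_times:
        return bindings
    for capture_time, image_id in frames:
        i = bisect_left(pose_times, capture_time)
        # Pick the neighbouring map point with the smaller time gap.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(pose_times)]
        best = min(candidates, key=lambda j: abs(pose_times[j] - capture_time))
        if abs(pose_times[best] - capture_time) <= tolerance_s:
            bindings[best] = image_id
    return bindings
```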
The above process of obtaining the original point cloud map of the vehicle parking track may be further explained by fig. 4; as shown in fig. 4, the method includes:
s401, acquiring measurement data of sensors on a vehicle.
Specific steps may refer to the above embodiments, and are not described herein.
S402, determining a three-dimensional reconstruction scale factor based on the measurement data.
In the disclosed embodiments, the three-dimensional reconstruction scale factor may be determined from the vehicle speed and the elapsed time measured by the sensors. For example, when the vehicle speed is V and the sensors measure an elapsed time T, the scale factor is L = V × T, i.e. the true distance travelled over that interval.
S403, positioning information and point cloud data for the vehicle are acquired based on SLAM, and three-dimensional reconstruction is performed based on the three-dimensional reconstruction scale factors, the positioning information and the point cloud data, so that an original point cloud map is generated.
The size of the point cloud map is defined through the scale factor, so that the length of a road in the point cloud map, the size of an object in a looking-around image, and the like can be described accurately.
In the embodiment of the disclosure, firstly, measurement data of a sensor on a vehicle is acquired, then a three-dimensional reconstruction scale factor is determined based on the measurement data, finally positioning information and point cloud data which are acquired for the vehicle based on SLAM are acquired, and three-dimensional reconstruction is performed based on the three-dimensional reconstruction scale factor, the positioning information and the point cloud data, so that an original point cloud map is generated. Therefore, by adding the scale factors into the point cloud map, a driver can have more visual dimension knowledge on the point cloud map, and meanwhile, the accuracy of the map is increased.
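The disclosure gives the scale factor as L = V × T. One common way to apply such a factor, stated here as an assumption rather than the disclosed implementation, is to divide the metric distance by the unitless distance recovered by the visual reconstruction over the same interval:

```python
def reconstruction_scale(speed_mps: float, elapsed_s: float,
                         visual_distance_units: float) -> float:
    """Convert the unitless scale of a monocular reconstruction to metres.

    The wheel-speed sensor gives the true distance L = V * T travelled
    during the interval; dividing by the distance the reconstruction
    reports for the same interval yields metres per map unit."""
    true_distance_m = speed_mps * elapsed_s      # L = V * T
    return true_distance_m / visual_distance_units

# Example: 2 m/s for 5 s while the reconstruction spans 3.7 map units
# gives a scale of roughly 2.7 m per map unit.
```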
In practice, there are situations where the driver is uncertain whether the current location is correct, and there may be a need to compare with the looking-around image.
After the original point cloud map is generated in the above embodiment, the method may be further extended as shown in fig. 5, and includes:
s501, an original point cloud map is sent to a terminal device.
After acquiring the original point cloud map, the server may send the original point cloud map to the terminal device.
It should be noted that, the terminal device may be provided with a navigation map APP, and the server may download or update the generated original point cloud map to the navigation map APP, so as to facilitate operations such as navigation and positioning by the user.
S502, acquiring a position query request sent by a terminal device, wherein the query request comprises a queried target position on an original point cloud map.
The driver selects a position on the original point cloud map, the position is a target position where the driver wants to view the looking-around image, and accordingly, the terminal device can acquire the target position and generate a position query request based on the target position and send the position query request to the server. The server may then obtain the location query request and obtain the queried target location from the location query request.
S503, acquiring a target looking-around image corresponding to the target position, and sending the target looking-around image to the terminal equipment.
The server stores the looking-around image of each position, and after the target position is determined, the looking-around image bound with the target position can be obtained, namely the target looking-around image. The server sends the target looking-around image to the terminal equipment, and the terminal equipment receives the target looking-around image and can display the looking-around image on the display screen so as to facilitate the position confirmation of the driver.
Therefore, the driver can send a request to the server through the navigation map APP so as to acquire the looking-around image corresponding to the target position, so that the driver can know the surrounding environment of the target position more conveniently, and the vehicle can be found conveniently.
Further, once it is determined that a looking-around image matching the captured image exists, it can be determined that the terminal device is located on the driving path of the vehicle, so that the server can issue the vehicle-searching navigation path to the terminal device.
As another possible case, when it is determined that the photographed image does not have a matching looking-around image, the server generates a reminder and transmits it to the terminal device.
Fig. 6 is a schematic diagram of an exemplary embodiment of a vehicle-seeking navigation method according to the present disclosure, as shown in fig. 6, the vehicle-seeking navigation method includes the following steps:
s601, shooting the current environment to obtain a shooting image.
It should be noted that, the execution body of the embodiment of the present disclosure is a client device, and the client device may include a driver mobile phone, a tablet computer, and the like.
It can be understood that at least one shot image is used in the embodiment of the disclosure, the exact number being set according to actual needs. After the shot image is acquired, the navigation map APP may first process it, for example by enlarging it or improving its brightness.
As another possible case, if the photographed image does not meet the requirement, the navigation map APP may also remind the driver to photograph the environment again.
And S602, sending a shooting image to a server, wherein the shooting image is used for determining the positioning position of the current environment on a target point cloud map, and the target point cloud map comprises the parking position of the vehicle.
In the embodiment of the disclosure, the shooting image is matched with the looking-around image on the target point cloud map, and if the shooting image is matched with the similar looking-around image, the positioning position of the terminal equipment on the target point cloud map can be determined through the similar looking-around image.
Optionally, the positioning position of the terminal device on the target point cloud map can be determined through the coordinate information of the similar looking-around image.
Optionally, the location point generated at the same time on the point cloud map can be determined through the generation time of the similar looking-around image, so that the location position of the terminal device on the target point cloud map can be determined.
S603, receiving the car searching navigation path sent by the server.
After the positioning position and the parking position have been determined, the server may generate a vehicle-searching navigation path and issue it to the terminal device.
In the embodiment of the disclosure, firstly, shooting is performed on the current environment of the terminal equipment to obtain a shooting image, then the shooting image is sent to a server, wherein the shooting image is used for determining the positioning position of the terminal equipment on a target point cloud map, the target point cloud map comprises the parking position of a vehicle, and finally, a vehicle searching navigation path determined by the server according to the positioning position and the parking position is received. Therefore, a driver can determine the position of the terminal equipment on the point cloud map through shooting the image and acquire the vehicle searching navigation path, so that the vehicle searching efficiency of the driver is greatly improved.
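On the terminal side, steps S601 to S603 amount to uploading one or more photos and receiving a path. The sketch below shows this with the `requests` library; the server address, endpoint, and response fields are purely hypothetical:

```python
import requests

SERVER = "https://example.com/api"  # hypothetical server address

def request_find_car_path(photo_paths):
    """Upload photos of the current environment (S601-S602) and return
    the vehicle-searching navigation path issued by the server (S603)."""
    files = [("images", open(p, "rb")) for p in photo_paths]
    try:
        resp = requests.post(f"{SERVER}/locate", files=files, timeout=10)
        resp.raise_for_status()
    finally:
        for _, f in files:
            f.close()
    # Hypothetical payload: a list of map coordinates for the navigation
    # map APP to highlight on the point cloud map.
    return resp.json().get("navigation_path", [])
```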
To further explain the method described in the above embodiment, the method can be further extended by fig. 7, as shown in fig. 7:
s701, receiving an original point cloud map.
In the embodiment of the disclosure, the original point cloud map issued by the server can be received through the terminal. The terminal may include a driver's cell phone, tablet computer, etc.
S702, responding to the position selection operation, generating a position query request based on the position selection operation, and sending the position query request to a server, wherein the position query request comprises a queried target position on an original point cloud map.
In the embodiment of the disclosure, the user's selection operation may be monitored, and further, the user's position may also be monitored and displayed on the original point cloud map, and the latest navigation path may be determined according to the query request and the target position of the query.
It will be appreciated that the query request is not limited to querying the vehicle parking position; it may also query a position point on the vehicle travel path. For example, an elevator located on the vehicle travel path may be the object of the query request.
S703, receiving the target looking-around image corresponding to the target position sent by the server, and displaying the target looking-around image at the target position.
In the embodiment of the disclosure, an original point cloud map issued by a server is received first, then a terminal device monitors selection operation of the original point cloud map, a position query request is generated based on the selection operation and sent to the server, wherein the query request comprises a queried target position on the original point cloud map, and finally a target looking-around image corresponding to the target position sent by the server is received and displayed at the target position. Therefore, the driver can search the target position according to the actual needs of the driver, and the use experience of the driver and the practicability of the map are greatly improved.
Further, the method can also receive the target point cloud map issued by the server, monitor the selection operation of the terminal equipment on the target point cloud map, determine the inquired target position on the target point cloud map based on the selection operation, finally acquire the looking-around image bound by the target position and display the looking-around image at the target position.
Further, the navigation path can be adjusted in real time by monitoring the terminal device and the target position. Alternatively, the driver may be reminded or alerted when the terminal device deviates from the navigation path.
Therefore, by monitoring the driver and the target position, the position information can be displayed on the point cloud map, giving the driver a more three-dimensional sense of the scene; at the same time, the navigation path can be adjusted in real time, which greatly improves the efficiency of vehicle searching and navigation.
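A simple way to implement the deviation reminder mentioned above (the 5-metre threshold is an assumed value) is to check the walker's distance to the nearest point of the vehicle-searching path:

```python
import math
from typing import List, Tuple

def off_route(current_xy: Tuple[float, float],
              path_points: List[Tuple[float, float]],
              max_deviation_m: float = 5.0) -> bool:
    """Return True when the terminal device has strayed more than
    max_deviation_m metres from every point of the navigation path,
    in which case the navigation map APP can remind the driver."""
    if not path_points:
        return False
    nearest = min(math.hypot(current_xy[0] - x, current_xy[1] - y)
                  for x, y in path_points)
    return nearest > max_deviation_m
```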
Further, the vehicle searching navigation path is highlighted on the original point cloud map or the target point cloud map received by the terminal equipment.
In the disclosed embodiment, when the captured image has no matching looking-around image, the server can send reminder information, where the reminder information is used to indicate that no similar looking-around image matching the captured image exists on the target point cloud map.
Fig. 8 is a schematic diagram of a car-seeking navigation device according to the present disclosure, as shown in fig. 8, the car-seeking navigation device 800 includes: an acquisition module 810, a matching module 820, a positioning module 830, a generation module 840.
The acquiring module 810 is configured to acquire a target point cloud map of a parking track of the vehicle, where the target point cloud map includes a looking-around image acquired during the parking process of the vehicle and a parking position of the vehicle.
And the matching module 820 is used for acquiring the shooting image and matching the shooting image with the looking-around image.
And the positioning module 830 is configured to, in response to the presence of a similar looking-around image that matches the captured image, determine the positioning position of the terminal device on the target point cloud map according to the similar looking-around image.
The generating module 840 is configured to generate a vehicle-searching navigation path on the target point cloud map according to the positioning position and the parking position, and send the vehicle-searching navigation path to the terminal device.
In one embodiment of the present disclosure, the matching module 820 is further configured to: comparing the features of the shooting image and the looking-around image to obtain the similarity between the shooting image and the looking-around image; in response to there being a looking-around image with a similarity greater than the set threshold, then the looking-around image with a similarity greater than the set threshold is determined to be a similar looking-around image.
In one embodiment of the present disclosure, the obtaining module 810 is further configured to: acquire a looking-around image collected during the parking process of the vehicle; construct a SLAM map and generate an original point cloud map of the vehicle parking track; and bind the looking-around image acquired at the same sampling moment with the original point cloud map to generate the target point cloud map.
In one embodiment of the present disclosure, the obtaining module 810 is further configured to: acquiring measurement data of a sensor on a vehicle; determining a three-dimensional reconstruction scale factor based on the measurement data; and acquiring positioning information and point cloud data for the vehicle based on SLAM, and performing three-dimensional reconstruction based on the three-dimensional reconstruction scale factor, the positioning information and the point cloud data to generate an original point cloud map.
In one embodiment of the present disclosure, the obtaining module 810 is further configured to: transmitting the original point cloud map to the terminal equipment; acquiring a position query request sent by terminal equipment, wherein the query request comprises a queried target position on an original point cloud map; and acquiring a target looking-around image corresponding to the target position, and sending the target looking-around image to the terminal equipment.
In one embodiment of the present disclosure, the obtaining module 810 is further configured to: and sending the target point cloud map to the terminal equipment.
In one embodiment of the present disclosure, the positioning module 830 is further configured to: and acquiring the sampling time of the similar looking-around image, acquiring the binding position of the similar looking-around image from the point cloud map based on the sampling time, and determining the binding position of the similar looking-around image as the positioning position.
In the embodiment of the present disclosure, the navigation device 800 for seeking a car is further configured to: and generating reminding information and sending the reminding information to the terminal equipment in response to the fact that the similar looking-around image matched with the shooting image does not exist.
In the embodiment of the present disclosure, the navigation device 800 for seeking a car is further configured to: and issuing a vehicle searching navigation path to the terminal equipment.
Fig. 9 is a schematic diagram of a terminal device for seeking vehicle according to the present disclosure, as shown in fig. 9, the terminal device 900 for seeking vehicle includes: a shooting module 910, a sending module 920, and a receiving module 930.
The shooting module 910 is configured to shoot the current environment, and obtain a shooting image.
The sending module 920 is configured to send a shot image to the server, where the shot image is used to determine a location of the current environment on a target point cloud map, and the target point cloud map includes a parking position of the vehicle.
And the receiving module 930 is configured to receive the car-seeking navigation path sent by the server.
In one embodiment of the present disclosure, the terminal device 900 is further configured to: receiving an original point cloud map; responding to the position selection operation, generating a position query request based on the position selection operation, and sending the position query request to a server, wherein the position query request comprises a queried target position on an original point cloud map; and receiving a target looking-around image corresponding to the target position sent by the server, and displaying the target looking-around image at the target position.
In one embodiment of the present disclosure, the terminal device 900 is further configured to: receiving a target point cloud map; responding to the position selection operation, and determining the queried target position on the target point cloud map based on the position selection operation; and acquiring the looking-around image bound by the target position and displaying the looking-around image at the target position.
Because the target point cloud map contains looking-around images, the driver can take a picture of the surrounding environment, the captured image can be matched against the looking-around images of the target point cloud map to determine the driver's position, and the position can be displayed through the terminal device 900. In other words, the driver can locate himself or herself from images of the surrounding environment by means of the target point cloud map issued by the server.
In one embodiment of the present disclosure, the terminal device 900 is further configured to: and receiving reminding information, wherein the reminding information is used for reminding that similar looking-around images matched with the shooting images do not exist on the target point cloud map.
In one embodiment of the present disclosure, the terminal device 900 is further configured to: and highlighting the car searching navigation path on the received original point cloud map or the received target point cloud map.
In order to implement the above embodiments, the embodiments of the present disclosure further provide an electronic device 1000. As shown in fig. 10, the electronic device 1000 includes at least one processor 1001 and a memory 1002 communicatively coupled to the processor 1001; the memory 1002 stores instructions executable by the at least one processor 1001, and the instructions are executed by the at least one processor 1001 to implement the vehicle-searching navigation method of the embodiment of the first aspect of the present disclosure.
To achieve the above-described embodiments, the embodiments of the present disclosure also propose a non-transitory computer-readable storage medium storing computer instructions for causing a computer to implement a navigation method for seeking a vehicle as in the embodiments of the first aspect of the present disclosure.
To achieve the above embodiments, the embodiments of the present disclosure also propose a computer program product comprising a computer program which, when executed by a processor, implements a navigation method of seeking a vehicle as in the embodiments of the first aspect of the present disclosure.
In the description of the present disclosure, it should be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial", "circumferential", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings are merely for convenience in describing the present disclosure and simplifying the description, and do not indicate or imply that the device or element being referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present disclosure.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present disclosure, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Although embodiments of the present disclosure have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the present disclosure, and that variations, modifications, alternatives, and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the present disclosure.

Claims (16)

1. A navigation method for locating a vehicle, comprising:
acquiring a target point cloud map of a vehicle parking track, wherein the target point cloud map comprises a looking-around image acquired in the vehicle parking process and a parking position of the vehicle;
acquiring a shooting image of a terminal device, and matching the shooting image with the looking-around image;
in response to the existence of a similar looking-around image matching the shooting image, determining a positioning position of the terminal device on the target point cloud map according to the similar looking-around image;
and generating a vehicle searching navigation path on the target point cloud map according to the positioning position and the parking position, and sending the vehicle searching navigation path to the terminal equipment.
2. The method of claim 1, wherein the matching the shooting image with the looking-around image comprises:
comparing features of the shooting image and the looking-around image to obtain the similarity between the shooting image and the looking-around image;
and in response to the presence of the looking-around image with the similarity larger than the set threshold, determining the looking-around image with the similarity larger than the set threshold as the similar looking-around image.
3. The method of claim 1, wherein the acquiring the cloud map of target points of the vehicle parking trajectory comprises:
acquiring a looking-around image acquired in the parking process of the vehicle;
constructing a SLAM map and generating an original point cloud map of the vehicle parking track;
binding the looking-around image acquired at the same sampling moment with the original point cloud map to generate the target point cloud map.
4. The method of claim 3, wherein the generating the original point cloud map of the vehicle parking trajectory comprises:
acquiring measurement data of a sensor on the vehicle;
determining a three-dimensional reconstruction scale factor based on the measurement data;
and acquiring positioning information and point cloud data of the vehicle based on the SLAM, and performing three-dimensional reconstruction based on the three-dimensional reconstruction scale factors, the positioning information and the point cloud data to generate the original point cloud map.
5. The method of claim 3 or 4, wherein after generating the original point cloud map of the vehicle parking trajectory, further comprising:
sending the original point cloud map to the terminal equipment;
acquiring a position query request sent by the terminal equipment, wherein the query request comprises a queried target position on the original point cloud map;
and acquiring a target looking-around image corresponding to the target position, and sending the target looking-around image to the terminal equipment.
6. The method of any of claims 1-4, further comprising:
and sending the target point cloud map to the terminal equipment.
7. The method according to any one of claims 1-4, wherein the determining the positioning position of the terminal device on the target point cloud map according to the similar looking-around image comprises:
and acquiring the sampling time of the similar looking-around image, acquiring the binding position of the similar looking-around image from the target point cloud map based on the sampling time, and determining the binding position of the similar looking-around image as the positioning position.
8. A navigation method for locating a vehicle, comprising:
shooting the current environment to obtain a shooting image;
the shot image is sent to a server, wherein the shot image is used for determining the positioning position of the current environment on a target point cloud map, and the target point cloud map comprises the parking position of a vehicle;
and receiving the vehicle searching navigation path sent by the server.
9. The method of claim 8, wherein the method further comprises:
receiving the original point cloud map;
responding to a position selection operation, generating a position query request based on the position selection operation, and sending the position query request to the server, wherein the position query request comprises a queried target position on the original point cloud map;
and receiving a target looking-around image corresponding to the target position sent by the server, and displaying the target looking-around image at the target position.
10. The method of claim 8, wherein the method further comprises:
receiving the target point cloud map;
responding to the position selection operation, and determining a queried target position on the target point cloud map based on the position selection operation;
and acquiring the looking-around image bound by the target position and displaying the looking-around image at the target position.
11. The method according to claim 9 or 10, characterized in that the method further comprises:
and highlighting the car searching navigation path on the original point cloud map or the target point cloud map.
12. A navigation device for locating a vehicle, comprising:
an acquisition module, configured to acquire a target point cloud map of a vehicle parking track, wherein the target point cloud map comprises a looking-around image acquired in the parking process of the vehicle and a parking position of the vehicle;
the matching module is used for acquiring a shooting image and matching the shooting image with the looking-around image;
the positioning module is used for, in response to the existence of a similar looking-around image matching the shooting image, determining the positioning position of the terminal device on the target point cloud map according to the similar looking-around image;
and the generation module is used for generating a vehicle searching navigation path on the target point cloud map according to the positioning position and the parking position and sending the vehicle searching navigation path to the terminal equipment.
13. A terminal device for locating a vehicle, comprising:
the shooting module is used for shooting the current environment to obtain a shooting image;
the sending module is used for sending the shot image to a server, wherein the shot image is used for determining the positioning position of the current environment on a target point cloud map, and the target point cloud map comprises the parking position of a vehicle;
and the receiving module is used for receiving the car searching navigation path sent by the server.
14. An electronic device, comprising a memory and a processor;
wherein the processor reads executable program code stored in the memory and runs a program corresponding to the executable program code, so as to implement the method according to any one of claims 1-11.
15. A computer readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-11.
16. A computer program product, comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-11.
CN202111466652.0A 2021-12-03 2021-12-03 Navigation method and device for seeking vehicle Pending CN116229753A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111466652.0A CN116229753A (en) 2021-12-03 2021-12-03 Navigation method and device for seeking vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111466652.0A CN116229753A (en) 2021-12-03 2021-12-03 Navigation method and device for seeking vehicle

Publications (1)

Publication Number Publication Date
CN116229753A true CN116229753A (en) 2023-06-06

Family

ID=86571787

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111466652.0A Pending CN116229753A (en) 2021-12-03 2021-12-03 Navigation method and device for seeking vehicle

Country Status (1)

Country Link
CN (1) CN116229753A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103606301A (en) * 2013-11-27 2014-02-26 南通芯迎设计服务有限公司 Parking and vehicle locating method based on intelligent terminal
CN108922237A (en) * 2018-07-29 2018-11-30 合肥市智信汽车科技有限公司 Vehicle positioning method and car searching method in a kind of parking lot
CN109166344A (en) * 2018-09-27 2019-01-08 盯盯拍(深圳)云技术有限公司 Parking lot car searching method and parking lot car searching device
CN110570449A (en) * 2019-09-16 2019-12-13 电子科技大学 positioning and mapping method based on millimeter wave radar and visual SLAM
CN111276007A (en) * 2020-01-20 2020-06-12 深圳市廿年科技有限公司 Method for positioning and navigating automobile in parking lot through camera
CN111724621A (en) * 2020-06-23 2020-09-29 上海擎感智能科技有限公司 Vehicle searching system, method, computer readable storage medium and client
CN112349127A (en) * 2020-09-16 2021-02-09 深圳市顺易通信息科技有限公司 Method, terminal and analysis system for searching vehicle in parking lot
CN112464796A (en) * 2020-11-25 2021-03-09 迪蒙智慧交通科技有限公司 Vehicle searching method, vehicle searching system and computer readable storage medium
CN112585659A (en) * 2020-11-27 2021-03-30 华为技术有限公司 Navigation method, device and system
CN112652186A (en) * 2020-12-22 2021-04-13 广州小鹏自动驾驶科技有限公司 Parking lot vehicle searching method, client and storage medium
CN113012464A (en) * 2021-02-20 2021-06-22 腾讯科技(深圳)有限公司 Vehicle searching guiding method, device, equipment and computer readable storage medium
CN113252051A (en) * 2020-02-11 2021-08-13 北京图森智途科技有限公司 Map construction method and device
CN113256804A (en) * 2021-06-28 2021-08-13 湖北亿咖通科技有限公司 Three-dimensional reconstruction scale recovery method and device, electronic equipment and storage medium
CN113450591A (en) * 2020-03-25 2021-09-28 阿里巴巴集团控股有限公司 Parking lot vehicle finding method, parking position determining system and related equipment

Similar Documents

Publication Publication Date Title
CN108413975B (en) Map acquisition method and system, cloud processor and vehicle
KR20200125667A (en) In-vehicle camera self-calibration method and device, and vehicle driving method and device
JP2020030204A (en) Distance measurement method, program, distance measurement system and movable object
JP4672190B2 (en) Video navigation device
CN109887053A (en) A kind of SLAM map joining method and system
US20220274588A1 (en) Method for automatically parking a vehicle
US20180136666A1 (en) Method and system for providing data for a first and second trajectory
JP6943988B2 (en) Control methods, equipment and systems for movable objects
JP2007080060A (en) Object specification device
JP2013154730A (en) Apparatus and method for processing image, and parking support system
KR101880185B1 (en) Electronic apparatus for estimating pose of moving object and method thereof
JP6048246B2 (en) Inter-vehicle distance measuring device and inter-vehicle distance measuring method
CN110293965A (en) Method of parking and control device, mobile unit and computer-readable medium
CN115053279B (en) Driving support device, vehicle, and driving support method
US10825250B2 (en) Method for displaying object on three-dimensional model
CN112447058B (en) Parking method, parking device, computer equipment and storage medium
JP2018072069A (en) Map data structure, transmitter, and map display device
JP2017120238A (en) Navigation information providing system and navigation information providing device
JP4800252B2 (en) In-vehicle device and traffic information presentation method
CN116229753A (en) Navigation method and device for seeking vehicle
JP2020166673A (en) Parking position guiding system
WO2019181839A1 (en) Data structure, terminal device, data communication method, program, and storage medium
KR20170059352A (en) System for Reminding Parking Location, and Vehicle Information Collection Device Suitable for the Same
CN113379850B (en) Mobile robot control method, device, mobile robot and storage medium
CN116762094A (en) Data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination