CN115342811A - Path planning method, device, equipment and storage medium - Google Patents

Path planning method, device, equipment and storage medium

Info

Publication number
CN115342811A
CN115342811A
Authority
CN
China
Prior art keywords
image
target
matching
determining
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210706988.8A
Other languages
Chinese (zh)
Inventor
张强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Geely Holding Group Co Ltd
Zhejiang Zeekr Intelligent Technology Co Ltd
Original Assignee
Zhejiang Geely Holding Group Co Ltd
Zhejiang Zeekr Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Geely Holding Group Co Ltd, Zhejiang Zeekr Intelligent Technology Co Ltd filed Critical Zhejiang Geely Holding Group Co Ltd
Priority to CN202210706988.8A
Publication of CN115342811A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present application relates to the field of vehicle technologies, and in particular to a path planning method, apparatus, device, and storage medium. The method comprises the following steps: acquiring a preset target image and a video image of the environment around a vehicle; determining a target object based on the preset target image; determining objects to be recognized based on the environment video image; when a matching object that matches the target object exists among the objects to be recognized, determining the position of the matching object; and planning a path according to the current position of the vehicle and the position of the matching object. By acquiring images of the environment around the vehicle, comparing the objects in those images with the target object to determine the position of the target object, and planning a path according to that position, the target object can be quickly located even in crowded, complex environments, which saves searching time and improves pick-up efficiency.

Description

Path planning method, device, equipment and storage medium
Technical Field
The present application relates to the field of vehicle technologies, and in particular, to a method, an apparatus, a device, and a storage medium for path planning.
Background
At present, with the development of science and technology, many fields and industries are moving toward intelligent solutions. As an important means of transportation, the degree of intelligence of a vehicle has a great influence on people's way of life. Most current applications of automotive intelligence are associated with automated driving. In fact, true vehicle intelligence includes not only driving automation but also intelligent connectivity. A truly driverless vehicle should not only release people from the dynamic driving task with the help of intelligent connectivity, but also help them complete everyday tasks such as picking up people.
In a pick-up scenario, especially when picking up a stranger, if there are many people around the appointed location, the target person cannot be quickly found in the crowd. In addition, if there are many obstructions around the predetermined location, the satellite positioning signal may be weak and the vehicle cannot accurately reach that location. Therefore, it is desirable to provide a solution that enables a vehicle to help people complete a pick-up quickly.
Disclosure of Invention
The present application provides a path planning method, apparatus, device, and storage medium, which realize rapid positioning of a target object based on images of the environment around a vehicle, so that the vehicle can plan a path according to the position of the target object and thereby achieve a quick pick-up.
In a first aspect, an embodiment of the present application discloses a path planning method, including:
acquiring a preset target image and an environment video image around a vehicle;
determining a target object based on a preset target image;
determining an object to be identified based on the environment video image;
when a matching object matched with the target object exists in the object to be identified, determining the position of the matching object;
and planning a path according to the current position of the vehicle and the position of the matching object.
Further, determining a target object based on the preset target image comprises:
carrying out image segmentation on a preset target image to obtain a target contour image containing a target object;
and performing feature extraction on the target contour image to obtain a target object.
Further, the method for determining the object to be recognized based on the environment video image comprises the following steps:
performing frame division processing on the environment video image to obtain a plurality of frame environment images;
carrying out image segmentation on the multi-frame environment image to obtain an identification contour image containing an object to be identified;
and performing feature extraction on the identification contour image to obtain an object to be identified.
Further, when a matching object matching with the target object exists in the objects to be recognized, determining the position of the matching object includes:
determining an image with similarity greater than a similarity threshold value with the target contour image in the identification contour image to obtain a similar contour image;
determining the characteristics of the object to be identified in the similar contour image and the characteristics of the target object;
comparing the characteristics of the object to be identified with the characteristics of the target object to obtain a matching result between the object to be identified and the target object;
and when the matching result is used for indicating that a matching object matched with the target object exists in the objects to be recognized, determining the position of the matching object.
Further, when a matching object matching with the target object exists in the objects to be recognized, determining the position of the matching object includes:
when an object matched with the target object exists in the object to be recognized, determining the position of the matched object in an image coordinate system;
determining the position of the matching object in the vehicle body coordinate system according to the position of the matching object in the image coordinate system;
and taking the position of the matching object in the vehicle body coordinate system as the position of the matching object.
Further, after the path planning is performed according to the current position of the vehicle and the position of the matching object, the method further includes:
and controlling the vehicle to move to the position of the matched object based on the navigation path obtained by the path planning.
Further, the navigation path obtained by the path planning comprises a driving navigation path and a pedestrian navigation path; after the path planning is performed according to the current position of the vehicle and the position of the matching object, the method further includes:
and when the difference between the distance of the driving navigation path and the distance of the pedestrian navigation path is greater than the threshold value, sending the pedestrian navigation path to the target terminal.
In a second aspect, an embodiment of the present application discloses a path planning apparatus, which includes:
the acquisition module is used for acquiring a preset target image and an environment video image around the vehicle;
the target object determining module is used for determining a target object based on a preset target image;
the to-be-identified object determining module is used for determining an object to be identified based on the environment video image;
the position determining module is used for determining the position of a matched object when the matched object matched with the target object exists in the object to be identified;
and the path planning module is used for planning a path according to the current position of the vehicle and the position of the matching object.
In some optional embodiments, the target object determination module comprises:
the target contour image determining unit is used for carrying out image segmentation on a preset target image to obtain a target contour image containing a target object;
and the target object determining unit is used for extracting the characteristics of the target contour image to obtain a target object.
In some optional embodiments, the to-be-recognized object determination module includes:
the environment image determining unit is used for performing framing processing on an environment video image to obtain a multi-frame environment image;
the identification contour image determining unit is used for carrying out image segmentation on the multi-frame environment image to obtain an identification contour image containing an object to be identified;
and the object to be recognized determining unit is used for extracting the characteristics of the recognition contour image to obtain the object to be recognized.
In some optional embodiments, the apparatus further comprises:
the similar contour image determining module is used for determining an image with similarity greater than a similarity threshold value with the target contour image in the identification contour image to obtain a similar contour image;
the characteristic point determining module is used for determining the characteristics of the object to be identified in the similar contour image and the characteristics of the target object;
the matching module is used for comparing the characteristics of the object to be recognized with the characteristics of the target object to obtain a matching result between the object to be recognized and the target object;
the position determining module is used for determining the position of the matching object when the matching result is used for indicating that the matching object matched with the target object exists in the objects to be identified.
In some optional embodiments, the position determination module comprises:
the image coordinate system processing unit is used for determining the position of the matched object in the image coordinate system when the object matched with the target object exists in the objects to be identified;
the vehicle body coordinate system processing unit is used for determining the position of the matching object in the vehicle body coordinate system according to the position of the matching object in the image coordinate system;
and a matching object position determination unit for determining the position of the matching object in the vehicle body coordinate system as the position of the matching object.
In some optional embodiments, the apparatus further comprises:
and the vehicle control module is used for controlling the vehicle to move to the position of the matched object based on the navigation path obtained by the path planning.
In some optional embodiments, the navigation path obtained by the path planning includes a driving navigation path and a pedestrian navigation path; the device also includes:
and the navigation path sending module is used for sending the pedestrian navigation path to the target terminal when the difference between the distance of the driving navigation path and the distance of the pedestrian navigation path is greater than the threshold value.
In a third aspect, an embodiment of the present application discloses an electronic device, where the device includes a processor and a memory, where at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the path planning method described above.
In a fourth aspect, an embodiment of the present application discloses a computer-readable storage medium, in which at least one instruction or at least one program is stored, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the path planning method described above.
The technical scheme provided by the embodiment of the application has the following technical effects:
according to the path planning method, the position of the target object is determined by obtaining the environment image around the vehicle and comparing the object in the environment image with the target object, and the path planning is carried out according to the position of the target object, so that the target object to be searched can be quickly locked in the environment with many complicated crowds, the searching time is saved, and the arrival efficiency of the receiver is improved.
Drawings
In order to more clearly illustrate the technical solutions and advantages of the embodiments of the present application or the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative effort.
Fig. 1 is a schematic diagram of an application environment of a path planning method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a path planning method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a method for matching a target object with an object to be recognized according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a path planning apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the embodiments of the present application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in other sequences than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover non-exclusive inclusions, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In order to make the objects, technical solutions and advantages disclosed in the embodiments of the present application more clearly understood, the embodiments of the present application are described in further detail below with reference to the accompanying drawings and the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the embodiments of the application and are not intended to limit the embodiments of the application.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present embodiment, "a plurality" means two or more unless otherwise specified.
Picking people up by car is a common scenario in daily life, for example picking up children from school or a ride-hailing driver picking up passengers. Taking ride-hailing as an example, it can be very difficult for the driver to finally identify the passenger in an airport or railway-station parking lot, or during rush hour. Meanwhile, in these scenarios the ride-hailing vehicle often cannot find a parking spot close to the passenger, and the passenger is not easy to spot even at short range. For example, if the vehicle and the passenger happen to be separated by a bus, the passenger cannot find the vehicle at first glance; or, at a large intersection or in a parking lot with many vehicles nearby, the passenger cannot easily identify the target vehicle among them by license plate number alone.
In view of this, the present application provides a path planning method that quickly locates a target object from images of the environment around the vehicle, so that the vehicle can plan a path according to the position of the target object and thereby achieve a quick pick-up.
The application scenario of the present application can be any case in which the vehicle needs to meet a person within a predetermined range, such as picking up children from school, meeting customers or experts at an airport or railway station, or a ride-hailing vehicle picking up a passenger. In these scenarios, the vehicle may reach the predetermined position range either by being driven by a driver or by driving autonomously. The method can also be applied to other scenarios, for example a vehicle driving autonomously to a predetermined position to pick up its owner, or an owner directing passengers to the vehicle.
Referring to fig. 1, fig. 1 is a schematic diagram of an application environment of a path planning method provided in an embodiment of the present application, and as shown in fig. 1, the application environment may include a user terminal 101 and a vehicle 103.
In the embodiment of the present application, the user terminal 101 is a terminal device owned by the passenger to be picked up and is used to communicate with the vehicle 103. Optionally, the user terminal 101 may include, but is not limited to, a vehicle key, a smart phone, a desktop computer, a tablet computer, a laptop computer, a smart speaker, a digital assistant, an Augmented Reality (AR)/Virtual Reality (VR) device, a smart wearable device, or another type of electronic device. The user terminal may also be software running on such an electronic device, such as an application program or an applet. Optionally, the operating system running on the electronic device may include, but is not limited to, Android, iOS, Linux, Windows, Unix, and the like.
In the embodiment of the present application, the vehicle 103 is a vehicle that travels to a predetermined position to pick up a passenger. The vehicle 103 is provided with a path planning apparatus for planning a path according to the position of the passenger to be picked up, so as to achieve a quick pick-up. Optionally, the vehicle 103 may be controlled by a driver or drive autonomously. The vehicle 103 is provided with data acquisition sensors. Optionally, the vehicle-mounted data acquisition sensors include radar sensors, image acquisition sensors, and the like. The image acquisition sensor may be a monocular vision sensor, a binocular stereo vision sensor, a panoramic vision sensor, an infrared camera sensor, and so on. The radar sensor may be a laser radar sensor, a millimeter-wave radar sensor, an ultrasonic radar sensor, and so on. The vehicle-mounted data acquisition sensors are arranged around or inside the vehicle body according to their respective working characteristics. As an example, the vehicle-mounted data acquisition sensors include a left camera, a right camera, and a front camera, arranged on the left side, the right side, and the front of the vehicle body, respectively. Optionally, each of the left, right, and front cameras comprises one or more sub-cameras. Because the field of view covered by a single image acquisition sensor is limited, multiple cameras are used to locate and recognize the target object in order to improve detection accuracy. The vehicle-mounted data acquisition sensors may further include a left radar, a right radar, a left front radar, and a right front radar, arranged on the left side, the right side, the front-left, and the front-right of the vehicle body, respectively. In some embodiments, the sensors further include a left rear radar and a right rear radar arranged at the rear-left and rear-right of the vehicle, respectively.
In the embodiment of the present application, the user terminal 101 communicates with the vehicle 103 via a wireless communication link.
A specific embodiment of a path planning method according to the present application is described below. Fig. 2 is a schematic flow chart of a path planning method provided in an embodiment of the present application. The present specification provides the method steps according to the embodiment or the flow chart, but more or fewer steps may be included based on conventional or non-inventive effort. The order of steps recited in the embodiments is merely one of many possible orders of execution and does not represent the only order. In practice, a system or server product may execute the steps sequentially or in parallel (for example in a parallel-processor or multi-threaded environment) according to the embodiments or the methods shown in the drawings. As shown in fig. 2, the path planning method can be applied to a vehicle and may comprise the following steps:
s201: and acquiring a preset target image and an environment video image around the vehicle.
In the embodiment of the present application, when the vehicle goes to pick up a passenger, an image from which the passenger or the passenger's position can be determined is acquired first. Optionally, the preset target image may be an image containing biological characteristics of the passenger to be picked up, such as a face image or a whole-body image that contains the face. The preset target image may also be an image of the passenger's location, such as an image of a large landmark like a street lamp, a signboard, or a landscape stone. The preset target image may be imported into the path planning apparatus in the vehicle in advance, or may be sent to the path planning apparatus from the user terminal of the passenger to be picked up. Optionally, the user terminal may send the preset target image to the path planning apparatus before or after the vehicle reaches the predetermined position. When the vehicle is about to arrive at or has arrived at the predetermined position, the vehicle-mounted data acquisition sensors collect environmental data around the vehicle in order to determine the position of the passenger to be picked up. Specifically, the vehicle-mounted image acquisition sensors collect video images of the environment around the vehicle and send them to the path planning apparatus for processing, so that the vehicle can quickly lock onto the passenger to be picked up in a crowded environment and obtain the passenger's position.
S203: the target object is determined based on a preset target image.
In the embodiment of the present application, after the path planning apparatus in the vehicle acquires the preset target image, it performs image processing on the preset target image to extract the target object. The target object is an image of human-body characteristics containing biological features of the passenger to be picked up, or a characteristic image of a landmark. Optionally, determining the target object based on the preset target image may include: performing image segmentation on the preset target image to obtain a target contour image containing the target object, and performing feature extraction on the target contour image to obtain the target object. Specifically, image segmentation is performed on the preset target image based on image recognition technology, the contour of the target person or target object in the image is extracted, and feature extraction is then performed on that contour. Taking feature extraction of a target person as an example, facial features in the image can be extracted to obtain a target face image containing those features. Face feature extraction may extract key facial feature points through a Deep Alignment Network (DAN). Feature extraction may also be performed with a deep-learning Convolutional Neural Network (CNN) to obtain a feature vector of the person.
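The step in S203 can be illustrated with a minimal Python/OpenCV sketch. This is not the implementation described in the patent: a Haar cascade stands in for the image-segmentation step, and an L2-normalized grayscale patch stands in for the DAN/CNN feature vector; the function name and parameter values are illustrative only.

```python
import cv2
import numpy as np

def extract_target_feature(target_image_path: str) -> np.ndarray:
    """Illustrative stand-in for S203: locate the target face and build a feature vector."""
    image = cv2.imread(target_image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Face detection as a proxy for extracting the target contour image
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no target face found in the preset target image")

    # Take the largest detection as the target object
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    patch = cv2.resize(gray[y:y + h, x:x + w], (64, 64)).astype(np.float32)

    # L2-normalized flattened patch as a placeholder for a CNN/DAN feature vector
    vec = patch.flatten()
    return vec / (np.linalg.norm(vec) + 1e-8)
```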
S205: an object to be recognized is determined based on the environmental video image.
In the embodiment of the present application, the path planning apparatus processes the acquired environment video image to extract the objects to be recognized in the vehicle's surroundings. The objects to be recognized are matched against the target object in order to determine the position of the target object relative to the vehicle. Optionally, determining the objects to be recognized based on the environment video image may include: splitting the environment video image into multiple frames of environment images; performing image segmentation on these environment images to obtain recognition contour images containing objects to be recognized; and performing feature extraction on the recognition contour images to obtain the objects to be recognized. Specifically, when determining the objects to be recognized, the path planning apparatus first splits the environment video image into multiple frames of environment images. Optionally, every frame of the environment video may be processed as an environment image, or frames may be sampled from the video, for example one frame taken every preset number of frames. The preset number can be set by jointly considering the resolution of the vehicle-mounted image acquisition sensor and the computing capacity of the path planning apparatus, and is preferably a fixed value. After obtaining the environment images, the path planning apparatus performs image processing on each frame to extract the objects to be recognized: it segments the objects contained in the environment image based on image recognition technology to extract the contours of persons or objects in the image, that is, recognition contour images containing the objects to be recognized, and then performs feature extraction on the recognition contour images to obtain the objects to be recognized.
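A minimal sketch of the frame-splitting step in S205, assuming OpenCV is available; the sampling interval of 10 frames is an illustrative placeholder for the "preset number" mentioned above.

```python
import cv2

def split_environment_video(video_path: str, frame_interval: int = 10):
    """Illustrative framing step from S205: keep one frame every `frame_interval` frames."""
    cap = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % frame_interval == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```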
It should be noted that, when extracting the objects to be recognized, the type of object to be recognized may be chosen according to the type of the target object, so as to reduce the computational load of the path planning apparatus. For example, if the target object is a facial feature image of the passenger to be picked up, the path planning apparatus may first judge, when processing the environment image, whether each object is a human face, exclude the non-face objects, keep only the face objects, and then perform further feature extraction on the images containing face objects.
S207: when a matching object matched with the target object exists in the objects to be recognized, the position of the matching object is determined.
In the embodiment of the present application, after the path planning apparatus obtains the target object and the objects to be recognized through image processing, it compares the target object with the objects to be recognized and determines, based on image recognition technology, whether an object matching the target object exists among them. Fig. 3 is a schematic flowchart of a method for matching the target object with the objects to be recognized according to an embodiment of the present application. As shown in fig. 3, matching the target object with the objects to be recognized includes:
s301: and determining the image with the similarity larger than the similarity threshold value with the target contour image in the identification contour image to obtain a similar contour image.
In the embodiment of the present application, when comparing the target object with the objects to be recognized, the target contour image containing the target object can be compared one by one with the recognition contour images containing the objects to be recognized, and the features of the target object can be compared with those of the objects to be recognized, which improves the accuracy of recognizing the target object.
In the embodiment of the present application, when comparing the target object with the objects to be recognized, the target contour image may first be compared with the recognition contour images to determine which recognition contour images have contours similar to the target contour image, yielding similar contour images; the objects to be recognized in the similar contour images are then compared with the target object at the feature level. In this way, recognition contour images that differ greatly from the target contour image can be excluded, which reduces the computational load of the path planning apparatus and improves comparison efficiency.
It should be noted that, in the processing of the environment images described above, the environment images may first be only segmented to obtain recognition contour images; the recognition contour images are then compared with the target contour image to obtain similar contour images; feature extraction is performed only on the similar contour images to obtain the objects to be recognized; and these objects are then compared with the target object. This further reduces the computational load of the path planning apparatus and improves image processing efficiency.
In some embodiments, a single frame of the environment image may not contain an object to be recognized that matches the target object. Therefore, the recognition contour images obtained from multiple frames of environment images can be combined into one comparison set, with each recognition contour image labeled with the identifier of the frame it came from, and the recognition contour images in the set are then compared with the target contour image one by one.
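The contour pre-filtering of S301 could, for example, be sketched with Hu-moment shape matching. cv2.matchShapes returns a distance (smaller means more similar), so the "similarity greater than a threshold" condition becomes a distance below a threshold here; the threshold value and the (frame id, contour) data layout are assumptions, not taken from the patent.

```python
import cv2

def prefilter_contours(target_contour, candidate_contours, max_distance: float = 0.3):
    """Sketch of S301: keep candidate contours whose Hu-moment shape distance to the
    target contour is small, i.e. whose contour similarity is high."""
    similar = []
    for frame_id, contour in candidate_contours:   # assumed (frame id, contour) pairs
        distance = cv2.matchShapes(target_contour, contour,
                                   cv2.CONTOURS_MATCH_I1, 0.0)
        if distance < max_distance:                # illustrative threshold
            similar.append((frame_id, contour, distance))
    return similar
```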
S303: and determining the characteristics of the object to be identified in the similar contour image and the characteristics of the target object.
In the embodiment of the application, the characteristics of the object to be recognized in the similar contour image and the characteristics of the target object in the target contour image are obtained, so that the characteristics of the object to be recognized and the characteristics of the target object can be compared to determine whether the object to be recognized in the similar contour image is matched with the target object.
S305: and comparing the characteristics of the object to be recognized with the characteristics of the target object to obtain a matching result between the object to be recognized and the target object.
In the embodiment of the present application, the similarity between an object to be recognized and the target object is determined by comparing their features, yielding the matching result between them. As an optional implementation, the features of the object to be recognized and the features of the target object may both be represented as feature vectors; a feature vector is therefore generated for each, and the similarity between the object to be recognized and the target object is determined from these two feature vectors to obtain the matching result. When the similarity between an object to be recognized and the target object is greater than a threshold, the object to be recognized may be determined to match the target object.
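As a sketch of the feature-vector comparison in S305, cosine similarity with an illustrative threshold can be used; the patent does not specify the similarity measure or the threshold value, so both are assumptions here.

```python
import numpy as np

MATCH_THRESHOLD = 0.8  # illustrative value; the patent only states "greater than the threshold"

def is_match(candidate_vec: np.ndarray, target_vec: np.ndarray,
             threshold: float = MATCH_THRESHOLD) -> bool:
    """Sketch of S305: cosine similarity between feature vectors decides the match."""
    cos_sim = float(np.dot(candidate_vec, target_vec) /
                    (np.linalg.norm(candidate_vec) * np.linalg.norm(target_vec) + 1e-8))
    return cos_sim > threshold
```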
In some embodiments, the features of the object to be recognized and the features of the target object may also be represented as feature points. The feature points of the object to be recognized are compared one by one with the feature points of the target object to determine the similarity between them and thereby obtain the matching result.
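The feature-point variant could look like the following ORB-based sketch; ORB keypoints and brute-force Hamming matching are substitutions chosen for illustration and are not named in the patent, and the distance cutoff is an assumption.

```python
import cv2

def feature_point_match_score(target_img, candidate_img, max_hamming: int = 40) -> int:
    """Sketch of the feature-point comparison: count ORB descriptor matches whose Hamming
    distance is small; a larger count means the candidate is more similar to the target."""
    orb = cv2.ORB_create()
    _, des_t = orb.detectAndCompute(target_img, None)
    _, des_c = orb.detectAndCompute(candidate_img, None)
    if des_t is None or des_c is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_t, des_c)
    return sum(1 for m in matches if m.distance < max_hamming)
```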
S209: and planning a path according to the current position of the vehicle and the position of the matching object.
In the embodiment of the present application, when the matching result indicates that a matching object matching the target object exists among the objects to be recognized, the position of the matching object is determined. Optionally, when such an object exists, the image frame in which the matching object appears, that is, the environment image containing the matching object, may be determined first. The position of the matching object in the image coordinate system of that frame is then determined, the position of the matching object in the vehicle body coordinate system is determined from its position in the image coordinate system, and that position in the vehicle body coordinate system is taken as the position of the matching object. The path can then be planned according to the current position of the vehicle and the position of the matching object.
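The patent does not spell out how the image-coordinate position is converted into the vehicle body coordinate system. One common way to do this for a calibrated camera is to back-project the pixel ray and intersect it with a flat-ground plane; the sketch below assumes known camera intrinsics K and camera-to-body extrinsics, which are not given in the patent.

```python
import numpy as np

def pixel_to_body_frame(u: float, v: float,
                        K: np.ndarray,      # 3x3 camera intrinsics (assumed calibrated)
                        R_bc: np.ndarray,   # 3x3 rotation: camera frame -> body frame
                        t_bc: np.ndarray,   # camera position in the body frame, shape (3,)
                        ground_z: float = 0.0) -> np.ndarray:
    """Sketch of the S209 coordinate conversion: back-project the pixel ray and
    intersect it with a flat ground plane z = ground_z in the vehicle body frame."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray in camera coordinates
    ray_body = R_bc @ ray_cam                            # same ray expressed in the body frame
    s = (ground_z - t_bc[2]) / ray_body[2]               # scale the ray to reach the ground plane
    return t_bc + s * ray_body                           # 3D point in body coordinates
```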
In some embodiments, the location of the matching object may also be determined based on-board data acquisition sensors. Specifically, after the path planning device determines the matching object, the matching object is locked by calling the image acquisition sensor, and then the position of the matching object is determined based on the radar sensor.
In other embodiments, the location of the matching object may also be obtained based on the user terminal. Specifically, after the path planning apparatus determines the matching object, the position obtaining information is sent to the user terminal of the matching object, so as to obtain the position of the matching object.
In this embodiment, after the path planning apparatus plans the path according to the current position of the vehicle and the position of the matching object, the vehicle can be controlled to move to the position of the matching object based on the navigation path obtained from the path planning. Optionally, the vehicle movement may be controlled by the vehicle's automated driving system.
In some optional embodiments, the position of the passenger to be picked up is usually not far from the position of the vehicle. Sometimes, however, the passenger's location is not suitable for parking and boarding, and it is difficult for the vehicle to drive right up to the passenger. It may also happen that the passenger is on one side of the road and the vehicle on the other; in that case the vehicle may need a long detour to turn around and reach the passenger, while it would be more convenient for the passenger to simply cross the road. Therefore, when the path planning apparatus plans the path, the planned navigation path may include both a driving navigation path and a pedestrian navigation path. When the difference between the distance of the driving navigation path and the distance of the pedestrian navigation path is greater than a threshold, the pedestrian navigation path is sent to the target terminal, that is, the user terminal of the passenger to be picked up, so that the passenger moves toward the vehicle and the pick-up is completed.
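The dual-path decision described above can be summarized in a small sketch; the 200 m threshold and the returned labels are purely illustrative, since the patent only speaks of "a threshold" and sending the pedestrian path to the target terminal.

```python
def choose_pickup_strategy(driving_path_m: float, walking_path_m: float,
                           threshold_m: float = 200.0) -> str:
    """Sketch of the dual-path decision: if the driving route is much longer than the
    walking route (e.g. a U-turn across a divided road), push the walking route to the
    passenger's terminal instead of moving the vehicle."""
    if driving_path_m - walking_path_m > threshold_m:
        return "send_pedestrian_navigation_to_passenger"
    return "drive_vehicle_to_passenger"
```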
In this embodiment, when the passenger moves toward the vehicle, or the vehicle moves toward the passenger, the path planning apparatus can lock onto the passenger through the image acquisition sensor in order to determine, in real time, the passenger's direction of movement relative to the vehicle and the distance to the vehicle.
The path planning method described above thus collects images of the environment around the vehicle with the vehicle-mounted data acquisition sensors, compares the objects in those images with the target object to determine the position of the target object, and plans a path according to that position, so that the target object to be found can be quickly locked even in crowded environments, saving search time and improving pick-up efficiency.
The embodiment of the present application further discloses a path planning apparatus, and fig. 4 is a schematic structural diagram of the path planning apparatus provided in the embodiment of the present application, and as shown in fig. 4, the apparatus includes:
the acquiring module 401 is configured to acquire a preset target image and an environment video image around the vehicle.
A target object determining module 403, configured to determine a target object based on a preset target image.
And an object to be recognized determining module 405, configured to determine an object to be recognized based on the environment video image.
The position determining module 407 is configured to determine a position of a matching object when the matching object matching the target object exists in the objects to be recognized.
And a path planning module 409, configured to plan a path according to the current position of the vehicle and the position of the matching object.
In some optional embodiments, the target object determination module comprises:
and the target contour image determining unit is used for carrying out image segmentation on the preset target image to obtain a target contour image containing the target object.
And the target object determining unit is used for extracting the characteristics of the target contour image to obtain a target object.
In some optional embodiments, the to-be-recognized object determination module includes:
and the environment image determining unit is used for performing framing processing on the environment video image to obtain a multi-frame environment image.
And the identification contour image determining unit is used for carrying out image segmentation on the multi-frame environment image to obtain an identification contour image containing the object to be identified.
And the to-be-recognized object determining unit is used for extracting the characteristics of the recognition contour image to obtain the to-be-recognized object.
In some optional embodiments, the apparatus further comprises:
and the similar contour image determining module is used for determining an image with similarity greater than a similarity threshold value with the target contour image in the identified contour image to obtain a similar contour image.
And the characteristic point determining module is used for determining the characteristics of the object to be identified in the similar contour image and the characteristics of the target object.
And the matching module is used for comparing the characteristics of the object to be recognized with the characteristics of the target object to obtain a matching result between the object to be recognized and the target object.
The position determining module is used for determining the position of the matching object when the matching result is used for indicating that the matching object matched with the target object exists in the objects to be identified.
In some optional embodiments, the position determination module comprises:
and the image coordinate system processing unit is used for determining the position of the matched object in the image coordinate system when the object matched with the target object exists in the objects to be recognized.
And the vehicle body coordinate system processing unit is used for determining the position of the matching object in the vehicle body coordinate system according to the position of the matching object in the image coordinate system.
And a matching object position determination unit for determining the position of the matching object in the vehicle body coordinate system as the position of the matching object.
In some optional embodiments, the apparatus further comprises:
and the vehicle control module is used for controlling the vehicle to move to the position of the matched object based on the navigation path obtained by the path planning.
In some optional embodiments, the navigation path obtained by the path planning includes a driving navigation path and a pedestrian navigation path. The device also includes:
and the navigation path sending module is used for sending the pedestrian navigation path to the target terminal when the difference between the distance of the driving navigation path and the distance of the pedestrian navigation path is greater than the threshold value.
The embodiments of the path planning apparatus and the path planning method in the present application are based on the same concept; for the specific implementation of the path planning apparatus, reference is made to the implementation of the path planning method, which is not repeated here.
The embodiment of the present application also discloses an electronic device, which includes a processor and a memory, where at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the path planning method described above.
In the embodiment of the present application, the memory may be used to store software programs and modules, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, application programs needed for functions, and the like, and the data storage area may store data created according to the use of the terminal, and the like. Further, the memory may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory may also include a memory controller to provide the processor with access to the memory. As an example, the device is an on-board computer, such as an Electronic Control Unit (ECU).
The embodiment of the application also discloses a computer-readable storage medium, in which at least one instruction or at least one program is stored, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the path planning method described above.
In an embodiment of the present application, the computer storage medium may be located in at least one network server of a plurality of network servers of a computer network. Optionally, the computer-readable storage medium may include: Read-Only Memory (ROM), Random Access Memory (RAM), a Solid State Drive (SSD), an optical disc, and the like. The random access memory may include Resistive Random Access Memory (ReRAM) and Dynamic Random Access Memory (DRAM).
It should be noted that the ordering of the embodiments of the present application is only for description and does not indicate the relative merits of the embodiments. Specific embodiments of the present specification have been described above; other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or advantageous.
All the embodiments in the present specification are described in a progressive manner, and portions similar to each other in the embodiments may be referred to each other, and each embodiment focuses on differences from other embodiments. In particular, for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A method of path planning, the method comprising:
acquiring a preset target image and an environment video image around a vehicle;
determining a target object based on the preset target image;
determining an object to be recognized based on the environment video image;
when a matching object matched with the target object exists in the objects to be recognized, determining the position of the matching object;
and planning a path according to the current position of the vehicle and the position of the matching object.
2. The method of claim 1, wherein determining a target object based on the preset target image comprises:
carrying out image segmentation on the preset target image to obtain a target contour image containing the target object;
and performing feature extraction on the target contour image to obtain the target object.
3. The method of claim 2, wherein determining an object to be identified based on the environmental video image comprises:
performing frame division processing on the environment video image to obtain a plurality of frame environment images;
carrying out image segmentation on the multi-frame environment image to obtain an identification contour image containing the object to be identified;
and performing feature extraction on the identification contour image to obtain the object to be identified.
4. The method according to claim 3, wherein when there is a matching object matching the target object in the objects to be recognized, determining the position of the matching object comprises:
determining an image with the similarity of the target contour image being greater than a similarity threshold value in the identification contour image to obtain a similar contour image;
determining the characteristics of the object to be identified in the similar contour image and the characteristics of the target object;
comparing the characteristics of the object to be recognized with the characteristics of the target object to obtain a matching result of the object to be recognized and the target object;
and when the matching result is used for indicating that a matching object matched with the target object exists in the objects to be identified, determining the position of the matching object.
5. The method of claim 1, wherein when a matching object matched with the target object exists in the objects to be recognized, determining the position of the matching object comprises:
when an object matched with the target object exists in the objects to be recognized, determining the position of the matched object in an image coordinate system;
determining the position of the matching object in the vehicle body coordinate system according to the position of the matching object in the image coordinate system;
and taking the position of the matching object in the vehicle body coordinate system as the position of the matching object.
6. The method of claim 1, wherein after the path planning based on the current location of the vehicle and the location of the matching object, the method further comprises:
and controlling the vehicle to move to the position of the matching object based on the navigation path obtained by path planning.
7. The method according to claim 6, wherein the navigation path obtained by path planning comprises a driving navigation path and a pedestrian navigation path; after the path planning is performed according to the current position of the vehicle and the position of the matching object, the method further includes:
and when the difference between the distance of the driving navigation path and the distance of the pedestrian navigation path is greater than a threshold value, the pedestrian navigation path is sent to a target terminal.
8. A path planning apparatus, the apparatus comprising:
the acquisition module is used for acquiring a preset target image and an environment video image around the vehicle;
the target object determining module is used for determining a target object based on the preset target image;
the to-be-recognized object determining module is used for determining an object to be recognized based on the environment video image;
the position determining module is used for determining the position of a matched object matched with the target object when the matched object exists in the objects to be identified;
and the path planning module is used for planning a path according to the current position of the vehicle and the position of the matching object.
9. An electronic device, characterized in that the device comprises a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the path planning method according to any one of claims 1-7.
10. A computer-readable storage medium, in which at least one instruction or at least one program is stored, which is loaded and executed by a processor to implement the path planning method according to any one of claims 1 to 7.
CN202210706988.8A 2022-06-21 2022-06-21 Path planning method, device, equipment and storage medium Pending CN115342811A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210706988.8A CN115342811A (en) 2022-06-21 2022-06-21 Path planning method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210706988.8A CN115342811A (en) 2022-06-21 2022-06-21 Path planning method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115342811A true CN115342811A (en) 2022-11-15

Family

ID=83947800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210706988.8A Pending CN115342811A (en) 2022-06-21 2022-06-21 Path planning method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115342811A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861958A (en) * 2023-02-23 2023-03-28 中科大路(青岛)科技有限公司 Vehicle-mounted FOD identification method, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112184818B (en) Vision-based vehicle positioning method and parking lot management system applying same
CN110045729B (en) Automatic vehicle driving method and device
EP2450667B1 (en) Vision system and method of analyzing an image
CN111292352B (en) Multi-target tracking method, device, equipment and storage medium
CN111611853A (en) Sensing information fusion method and device and storage medium
DE112018004953T5 (en) INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING PROCESS, PROGRAM AND MOVING BODY
US11105639B2 (en) Map refinement using feature extraction from images
US20200200545A1 (en) Method and System for Determining Landmarks in an Environment of a Vehicle
Aryal Object detection, classification, and tracking for autonomous vehicle
CN113256731A (en) Target detection method and device based on monocular vision
CN113537047A (en) Obstacle detection method, obstacle detection device, vehicle and storage medium
CN115342811A (en) Path planning method, device, equipment and storage medium
US11377125B2 (en) Vehicle rideshare localization and passenger identification for autonomous vehicles
CN113076896A (en) Standard parking method, system, device and storage medium
CN110781730B (en) Intelligent driving sensing method and sensing device
CN109344776B (en) Data processing method
CN114998861A (en) Method and device for detecting distance between vehicle and obstacle
JP2019152976A (en) Image recognition control device and image recognition control program
US11461944B2 (en) Region clipping method and recording medium storing region clipping program
Charaya LiDAR for Object Detection in Self Driving Cars
CN115635955A (en) Method for generating parking information, electronic device, storage medium, and program product
Al Baghdadi et al. Unmanned aerial vehicles and machine learning for detecting objects in real time
Unnisa et al. Obstacle detection for self driving car in Pakistan's perspective
WO2020073268A1 (en) Snapshot image to train roadmodel
WO2020073270A1 (en) Snapshot image of traffic scenario

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination