CN112964265A - Obstacle area marking method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112964265A
CN112964265A (application CN202110232459.4A)
Authority
CN
China
Prior art keywords
obstacle
vehicle
information
image information
lane
Prior art date
Legal status
Pending
Application number
CN202110232459.4A
Other languages
Chinese (zh)
Inventor
谢兆夫
周泽斌
Current Assignee
Evergrande New Energy Automobile Investment Holding Group Co Ltd
Original Assignee
Evergrande New Energy Automobile Investment Holding Group Co Ltd
Priority date
Application filed by Evergrande New Energy Automobile Investment Holding Group Co Ltd
Priority to CN202110232459.4A
Publication of CN112964265A
Pending legal status

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation specially adapted for navigation in a road network
    • G01C21/28: Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30: Map- or contour-matching
    • G01C21/32: Structuring or formatting of map data
    • G01C21/34: Route searching; Route guidance
    • G01C21/3407: Route searching; Route guidance specially adapted for specific applications
    • G01C21/343: Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips


Abstract

The invention provides an obstacle area marking method and device, an electronic device, and a storage medium. The obstacle area marking method comprises the following steps: acquiring an obstacle area marking instruction; sending a control signal to an unmanned aerial vehicle according to the obstacle area marking instruction, where the control signal is used to control the unmanned aerial vehicle to acquire image information of an obstacle lane ahead of the vehicle on which an obstacle sign is present; acquiring the image information returned by the unmanned aerial vehicle; determining, from the image information, geographical range information of the obstacle area corresponding to the obstacle sign; and marking the obstacle area in the vehicle navigation map based on the geographical range information. With the technical scheme of the embodiments of the invention, the geographical range information of the obstacle area is obtained from image information of the obstacle lane acquired by the unmanned aerial vehicle, and the obstacle area is marked in the vehicle navigation map, so that an obstacle area caused by an irregular event occurring ahead of the vehicle can be perceived and the accuracy of perceiving the road condition ahead of the vehicle is improved.

Description

Obstacle area marking method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of automobiles, and in particular, to a method and an apparatus for marking an obstacle area, an electronic device, and a storage medium.
Background
The high-precision map can provide over-the-horizon map information. It is widely applied in automatic driving systems, can effectively compensate for the limitations of current on-board sensors, and provides long-range driving decision information for the driving system.
However, a high-precision map can only provide information about the static environment collected by the map collection vehicle. When an irregular event occurs in the lane ahead, such as a sudden traffic accident, or part of the lane is closed off as a construction site, it is difficult for the vehicle to perceive, through the high-precision map, the obstacle area ahead caused by the accident or the construction site. The prior art therefore faces the problem of how to improve the accuracy of perceiving the road condition ahead of the vehicle.
Disclosure of Invention
An embodiment of the present application provides a method, an apparatus, an electronic device and a storage medium for marking an obstacle area, so as to solve the problem of how to improve accuracy of sensing a road condition ahead of a vehicle.
To solve the above technical problem, an embodiment of the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a method for marking an obstacle area, where the method includes:
acquiring an obstacle area marking instruction;
sending a control signal to the unmanned aerial vehicle according to the barrier area marking instruction; the control signal is used for controlling the unmanned aerial vehicle to collect image information of an obstacle lane with obstacle signs in front of the vehicle;
acquiring image information returned by the unmanned aerial vehicle;
determining the geographical range information of the obstacle area corresponding to the obstacle sign according to the image information;
an obstacle area is indicated in the vehicle navigation map based on the geographic range information.
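The five steps of the first aspect can be sketched end to end. The following is a minimal illustration only, with each step supplied as a callable; all function and parameter names are hypothetical and do not appear in the patent:

```python
def mark_obstacle_area(get_instruction, send_control_signal,
                       receive_image_info, determine_geo_range, mark_on_map):
    """Sketch of the first-aspect method: each argument is a callable
    standing in for one step of the claimed flow."""
    instruction = get_instruction()              # S102: acquire marking instruction
    send_control_signal(instruction)             # S104: dispatch the drone
    image_info = receive_image_info()            # S106: image info returned by drone
    geo_range = determine_geo_range(image_info)  # S108: geographical range info
    return mark_on_map(geo_range)                # mark area in the navigation map
```

Wiring the callables with stubs shows the data flowing through the five steps in order.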
In a second aspect, another embodiment of the present application provides an obstacle area indicating device, including:
the instruction acquisition module is used for acquiring an obstacle area marking instruction;
the signal sending module is used for sending a control signal to the unmanned aerial vehicle according to the barrier area marking instruction; the control signal is used for controlling the unmanned aerial vehicle to acquire image information of an obstacle lane in front of the vehicle; the obstacle lane is provided with an obstacle area;
the image acquisition module is used for acquiring image information returned by the unmanned aerial vehicle;
the information determining module is used for determining the geographical range information of the obstacle area according to the image information;
and the map marking module is used for marking the obstacle area in the vehicle navigation map according to the geographic range information.
In a third aspect, another embodiment of the present application provides an electronic device, including: a memory, a processor, and computer-executable instructions stored on the memory and executable on the processor, where the computer-executable instructions, when executed by the processor, implement the steps of the obstacle area marking method described in the first aspect above.
In a fourth aspect, a further embodiment of the present application provides a storage medium storing computer-executable instructions which, when executed by a processor, implement the steps of the obstacle area marking method described in the first aspect above.
According to the technical scheme of the embodiments of the invention: first, an obstacle area marking instruction is acquired; second, a control signal is sent to the unmanned aerial vehicle according to the obstacle area marking instruction, the control signal controlling the unmanned aerial vehicle to acquire image information of an obstacle lane with an obstacle sign ahead of the vehicle; third, the image information returned by the unmanned aerial vehicle is acquired; fourth, the geographical range information of the obstacle area corresponding to the obstacle sign is determined from the image information; finally, the obstacle area is marked in the vehicle navigation map according to the geographical range information. In this way, the geographical range information of the obstacle area can be obtained using the image information of the obstacle lane acquired by the unmanned aerial vehicle, and the obstacle area can be marked in the vehicle navigation map, so that an obstacle area caused by an irregular event occurring ahead of the vehicle is perceived and the accuracy of perceiving the road condition ahead of the vehicle is improved.
Drawings
In order to more clearly illustrate the technical solutions in one or more embodiments of the present application, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments described in the present application; a person skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart illustrating a method for marking an obstacle area according to an embodiment of the present application;
fig. 2 is a schematic view of an application scenario of a method for marking an obstacle area according to an embodiment of the present application;
fig. 3 is a schematic flow chart of another obstacle area indication method according to an embodiment of the present application;
fig. 4 is a schematic view of another application scenario of a method for marking an obstacle area according to an embodiment of the present application;
fig. 5 is an interaction diagram of a vehicle end and an unmanned aerial vehicle end in a method for marking an obstacle area according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an obstacle area indicating device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions in one or more embodiments of the present application, these solutions are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments that a person skilled in the art can derive from one or more embodiments of the present application without inventive effort fall within the scope of protection of this document.
Fig. 1 is a flowchart illustrating a method for marking an obstacle area according to an embodiment of the present application.
Referring to fig. 1, a method for marking an obstacle area in the exemplary embodiment of fig. 1 will be described in detail.
In one or more embodiments as shown in fig. 1, the obstacle area indicating method may be applied to a vehicle, which may be an autonomous vehicle or a manually driven vehicle, and the present application does not specifically limit the kind of the vehicle.
The automatic driving vehicle applying the obstacle area marking method can be loaded with a high-precision map. High-precision maps are widely used in current autonomous driving systems at L2+ and beyond, because they can provide over-the-horizon map information. The high-precision map can effectively make up the defects of the current sensor and provide remote driving decision information for a driving system.
However, the high-precision map has the limitation that it can only provide information about the static environment collected by the map collection vehicle. Because of the limited update frequency of the high-precision map, when an obstacle area caused by an irregular event, such as construction or a traffic accident, appears in an area that has not yet been updated, the high-precision map can hardly provide the dynamic information generated there. The road condition ahead of the vehicle as perceived through the high-precision map is then inaccurate, and the vehicle can hardly cope with the imminent emergency.
For a manually driven vehicle, an ordinary map is used for navigation; its problems are the same as or similar to those of the autonomous vehicle loaded with a high-precision map, and are not repeated here.
Step S102, acquiring an obstacle area marking instruction.
When an emergency occurs in front of the vehicle, for example, when a navigation system loaded in the vehicle sends congestion prompt information, the vehicle acquires an obstacle area marking instruction. The congestion prompt information is used for prompting the vehicle to run slowly when encountering the traffic flow in front of the vehicle.
The obstacle area marking command may be sent by the navigation system according to the congestion prompt information, or sent by other control modules of the vehicle after receiving the congestion prompt information sent by the navigation system, or sent manually.
Generating the obstacle area marking instruction from the congestion prompt information means that the unmanned aerial vehicle is launched to acquire image information of the obstacle lane only when abnormal congestion ahead of the vehicle is clearly sensed, which improves the working efficiency of the unmanned aerial vehicle.
The obstacle area marking instruction is used for indicating the vehicle to mark an obstacle area in a vehicle navigation map, wherein the geographical range information of the obstacle area is determined by the image information of the obstacle lane collected and returned by the unmanned aerial vehicle.
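The trigger logic of step S102 can be sketched as follows. This is a minimal illustration under assumptions; the `CongestionPrompt` record shape and all names are hypothetical, not defined by the patent:

```python
from dataclasses import dataclass

@dataclass
class CongestionPrompt:
    """Hypothetical shape of the congestion prompt information emitted
    by the navigation system."""
    slow_traffic_ahead: bool
    distance_m: float

def make_marking_instruction(prompt):
    """Issue an obstacle-area marking instruction only when abnormal
    congestion ahead is actually sensed, so the drone is not launched
    needlessly (the efficiency point the text makes)."""
    if prompt.slow_traffic_ahead:
        return {"type": "MARK_OBSTACLE_AREA", "trigger": "congestion_prompt"}
    return None
```

The same instruction could equally come from another vehicle control module or be issued manually, as the text notes; only the congestion-prompt path is sketched here.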
Step S104, sending a control signal to the unmanned aerial vehicle according to the obstacle area marking instruction; the control signal is used for controlling the unmanned aerial vehicle to acquire image information of an obstacle lane in front of the vehicle; the obstacle lane has an obstacle area.
The drone may be an on-board drone. The image information may be video information shot in real time or picture information obtained by continuous shooting, and the form of the image information is not particularly limited in the present application.
The lane may be a road on which the vehicle is traveling, for example, a motor lane, a pedestrian road which may affect the traveling of the vehicle, for example, a crosswalk, or a hybrid road through which both the vehicle and the pedestrian pass, for example, an intersection.
The obstacle lane with obstacle sign in front of the vehicle may be a partial lane with one or more obstacle signs in the lane in front of the vehicle.
For example, a construction sign for alerting passing vehicles or pedestrians stands a meters ahead of the vehicle, and a construction area lies behind the sign when facing it. The shortest distance between the obstacle sign and the construction area may be b meters, and the construction area may be c meters long and d meters wide. The full lane may be e meters wide. The distance from the vehicle to the far boundary of the construction area is then f meters, where f = a + b + c.
In the foregoing example, the obstacle lane with the obstacle sign may be the lane of width e meters extending from a meters ahead of the vehicle to f meters ahead of the vehicle. Alternatively, a preset obstacle lane length of g meters may be set in advance, in which case the obstacle lane with the obstacle sign may be the lane of width e meters extending from a meters to (a + g) meters ahead of the vehicle.
The foregoing examples of the plurality of obstacle lanes are only examples, as long as there is an obstacle sign in the obstacle lane, and the present application does not specifically limit the definition of the obstacle lane.
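The worked example above reduces to simple arithmetic; a minimal sketch (the function name is ours, not the patent's):

```python
def obstacle_lane_extent(a, b, c, g=None):
    """Distances in meters, per the patent's worked example:
    a: vehicle to construction sign, b: sign to near edge of the
    construction area, c: construction area length.
    Returns (near, far) bounds of the obstacle lane ahead of the
    vehicle. If a preset lane length g is given, the far bound is
    a + g instead of f = a + b + c."""
    f = a + b + c            # far boundary of the construction area
    far = a + g if g is not None else f
    return a, far
```

For instance, with the sign 10 m ahead, a 5 m gap, and a 20 m construction area, the obstacle lane spans 10 m to 35 m ahead of the vehicle.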
It should be noted that the phrase "image information of the obstacle lane" is used here to describe more precisely the content of the image information acquired by the unmanned aerial vehicle. This image information includes not only the obstacle sign but also the obstacle area corresponding to it in the lane; for example, it includes a construction obstacle sign for alerting passing vehicles or pedestrians together with the construction area that, when facing the sign, lies behind it, i.e., the construction area corresponding to that construction obstacle sign.
In addition, the obstacle lane further includes one or more lane edges. For example, when a construction area exists in the obstacle lane, one of the two original lane edges may be located in the construction area, and at this time, only one visible lane edge exists in the obstacle lane; alternatively, the obstacle lane may include a junction, in which case the obstacle lane may include more than two lane edges.
It should be emphasized that, although the obstacle lane is described and illustrated by examples here, the unmanned aerial vehicle does not itself perform an operation of designating a section of lane as the obstacle lane. Rather, when the unmanned aerial vehicle, while flying forward, detects an object feature matching a preset sign feature, it is determined to be above the obstacle lane and enters an image-information return mode. At this point the unmanned aerial vehicle may stop flying forward, collect image information of the lane, treat it as image information of the obstacle lane, and wirelessly return it to the vehicle in real time. Starting to collect and return image information of the obstacle lane only after detecting an object feature matching a preset sign feature saves the power consumed by the unmanned aerial vehicle, so that its operating time lasts longer.
Although the unmanned aerial vehicle continuously checks, while flying forward, whether the lane contains object features matching the preset sign features, it does not need to return the image information collected during this process. The unmanned aerial vehicle detects the obstacle sign on its own, without relying on interaction with the vehicle, which saves the power that returning image information to the vehicle would consume.
Alternatively, when an object feature matching a preset sign feature is detected during forward flight, the unmanned aerial vehicle is determined to be above the obstacle lane and enters the image-information return mode, but may continue flying forward while collecting image information of the current lane; the image information collected while continuing forward is treated as image information of the obstacle lane and wirelessly returned to the vehicle in real time. Returning the image information while flying forward allows it to reflect the entire obstacle area ahead of the vehicle, so that the length of the obstacle area marked in the vehicle navigation map in the subsequent step is more accurate. Meanwhile, starting to collect and return image information only after a matching object feature is detected likewise saves power, so that the unmanned aerial vehicle's operating time lasts longer.
The specific steps of the unmanned aerial vehicle acquiring the image information of the obstacle lane will be discussed in detail later.
It should be noted that "ahead of the vehicle" in the present application may refer to the area ahead along the straight line of the vehicle's current driving direction, or to the area ahead along the vehicle's preset driving route.
For example, the preset driving route of the vehicle is to reach the intersection after the vehicle is driven straight x meters from the starting point of the vehicle, turn right, and then drive straight y meters, so that the lane in front of the vehicle includes both the x meter lane before the right turn and the y meter lane after the right turn.
When the vehicle has no preset driving route, or the preset driving route has been changed, the image information of the obstacle lane ahead along the straight line of the vehicle's current driving direction can be used to mark the obstacle area ahead of the vehicle in the vehicle navigation map, improving the accuracy of perceiving the road condition ahead.
When the vehicle has a preset driving route, the image information of the obstacle lane ahead on that route can be used to mark the obstacle area ahead on the preset driving route in the vehicle navigation map, improving the accuracy of perceiving the road condition ahead of the vehicle on the preset driving route.
The obstacle lane has an obstacle area. The obstacle area may be a first obstacle area caused by a traffic accident. Vehicles or pedestrians involved in the traffic accident, for example two vehicles that have collided, temporarily stay in the first obstacle area, waiting for traffic police, insurance claims personnel, or a tow truck to handle the accident further. In the first obstacle area there may be one or more traffic accident obstacle signs placed in front of, behind, or in other directions around the accident vehicles, indicating that a traffic accident has occurred there.
The obstacle area may also be a second obstacle area caused by construction. The second obstacle area includes a construction site on which work is in progress, for example a newly laid asphalt road. In one embodiment, the second obstacle area includes a fence isolating the construction site, with several construction obstacle signs arranged at the corners of the fence's outer edge to warn passers-by that the area enclosed by the fence is a construction area. The second obstacle area then includes not only the construction site surrounded by the fence but also the area occupied by the construction obstacle signs.
It should be noted that the obstacle area is mentioned here to help understand the content of the image information acquired by the unmanned aerial vehicle: the image information may contain the complete construction area, or only part of it. The unmanned aerial vehicle itself does not determine the obstacle area; the obstacle area is determined by the vehicle from the image information returned by the unmanned aerial vehicle.
In one embodiment, after acquiring the image information of the obstacle lane closest to the vehicle, the unmanned aerial vehicle may be regarded as having completed the marking task corresponding to the obstacle area marking instruction; for example, it returns to the vehicle after wirelessly receiving from the vehicle a confirmation signal that the obstacle area marking instruction has been completed.
Optionally, the unmanned aerial vehicle carries lane information within a second predetermined distance in front of the vehicle; the lane information is used for the unmanned aerial vehicle to determine the flight path.
In one embodiment, after the unmanned aerial vehicle receives the control signal, the unmanned aerial vehicle determines a flight path according to lane information within a second predetermined distance in front of the carried vehicle.
The second predetermined distance is the preset farthest flight distance of the unmanned aerial vehicle from its starting point; in ordinary circumstances, acquiring image information of the obstacle lane near the vehicle is sufficient to meet the need of perceiving the road condition ahead of the vehicle.
The lane information includes, but is not limited to: lane parameters, for example detailed information on each lane and the connection relationships between lanes; road components, such as traffic signs and gantries; and road attributes, such as curvature, heading, and grade.
The lane information may be read by the vehicle from a pre-stored electronic map after the vehicle acquires the obstacle area marking instruction and actively sent to the unmanned aerial vehicle; or requested by the unmanned aerial vehicle from the pre-stored electronic map after it receives the control signal; or pre-stored in the unmanned aerial vehicle itself. The lane information may be delivered through offline or online updates.
The pre-stored electronic map can be a high-precision map or a common electronic map.
In one embodiment, the unmanned aerial vehicle stores in advance the lane information ahead of the vehicle along the preset driving route, and the stored lane information is updated at regular intervals. For example, before the vehicle sets off, a preset driving route is set and the lane information within x kilometers ahead along that route is stored in the unmanned aerial vehicle in advance; after the vehicle has driven for 10 minutes, the stored lane information is updated from the high-precision map, and after 20 minutes it is updated from the high-precision map again.
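Selecting the lane information the drone carries, restricted to the second predetermined distance ahead, can be sketched as a filter. A minimal illustration; the `(offset_m, info)` record shape is an assumption of ours:

```python
def lane_info_within(route_records, second_predetermined_distance):
    """Keep only the pre-stored lane records whose along-route offset
    (meters ahead of the vehicle) falls within the drone's farthest
    flight distance. Each record is a hypothetical (offset_m, info)
    pair."""
    return [info for offset_m, info in route_records
            if 0 <= offset_m <= second_predetermined_distance]
```

A periodic refresh, as in the 10-minute and 20-minute updates above, would simply rebuild this list from the newest high-precision map data.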
Optionally, the image information is acquired by the unmanned aerial vehicle in the following manner: the unmanned aerial vehicle flies forward along the flight path, and the image information of the lane is shot in the flight process; the unmanned aerial vehicle extracts object features in the image information of the shot lane and matches the object features with preset mark features; when the unmanned aerial vehicle determines that the object characteristics are matched with the preset mark characteristics, the unmanned aerial vehicle stops flying forwards and collects the image information of the lane, and the collected image information is used as the image information of the obstacle lane.
From the perspective of the drone:
After receiving the control signal, the unmanned aerial vehicle determines a flight path and flies forward along it at a preset specific height, with its flight distance not exceeding the second predetermined distance.
While flying forward, the unmanned aerial vehicle captures image information of the lanes it passes. By means of an image acquisition and processing unit mounted on the drone body, it performs recognition and feature extraction on this image information, obtaining one or more object features, for example features of traffic signs on the lane, and matches each obtained object feature against the preset sign features. The preset sign features include, but are not limited to, construction obstacle signs, traffic cones, safety helmets worn by construction workers, and traffic accident obstacle signs.
When the obtained object characteristics are determined to be matched with the preset mark characteristics, the unmanned aerial vehicle stops flying forwards, collects image information on the current lane and takes the collected image information as image information of the obstacle lane.
Here it can be understood that, when the unmanned aerial vehicle determines that an object feature matches a preset sign feature, the object containing that feature is an obstacle sign detected by the unmanned aerial vehicle; once the obstacle sign is detected, the unmanned aerial vehicle is considered to have reached the airspace above the obstacle lane. It therefore stops flying forward at this point, and the image information collected on the current lane is the image information of the obstacle lane having the obstacle area.
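The matching step reduces to comparing extracted object features against the preset sign features. The sketch below uses plain string labels as stand-in features (a real system would match image descriptors); the labels and function name are illustrative only, covering the sign categories the text lists:

```python
# Categories named in the text; the string labels themselves are illustrative.
PRESET_SIGN_FEATURES = {
    "construction_sign", "traffic_cone", "safety_helmet", "accident_sign",
}

def detect_obstacle_sign(object_features):
    """Match features extracted from a lane image against the preset
    sign features; any hit means the drone has reached the airspace
    above the obstacle lane. Returns the matched labels, sorted for
    determinism."""
    return sorted(PRESET_SIGN_FEATURES.intersection(object_features))
```

An empty result means the drone keeps flying forward; a non-empty result triggers the image-information return mode described above.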
For example, after receiving the control signal, the unmanned aerial vehicle flies ahead of the vehicle and recognizes objects on the lanes it passes, searching for objects that possess a preset sign feature. Once such an object is detected, for example a construction obstacle sign or an area where a large number of traffic cones are placed, the unmanned aerial vehicle locks the range for collecting image information, collects image information of the lane at the preset specific height, and returns this image information to the vehicle through wireless communication.
In another embodiment, when the obtained object feature is determined to match a preset sign feature, the unmanned aerial vehicle may instead enter the image-information return mode and continue flying forward while collecting image information of the current lane; the image information collected while continuing forward in this mode is treated as image information of the obstacle lane and wirelessly returned to the vehicle in real time. Returning the image information while flying forward allows it to reflect the entire obstacle area ahead of the vehicle, so that the length of the obstacle area marked in the vehicle navigation map in the subsequent step is more accurate. Meanwhile, starting to collect and return image information of the obstacle lane only after detecting a matching object feature also saves power, so that the unmanned aerial vehicle's operating time lasts longer.
In one embodiment, in the image information return mode the unmanned aerial vehicle can fly forward at a constant speed according to a preset flying speed corresponding to that mode; flying at a constant speed makes the image collection work of the unmanned aerial vehicle more stable and more power-saving.
If, in the course of taking off from the vehicle and flying until its flying distance reaches the second preset distance, none of the object features the unmanned aerial vehicle obtains matches a preset mark feature, it is considered that there is no obstacle area in front of the vehicle that the unmanned aerial vehicle can detect. At this point the unmanned aerial vehicle can wirelessly send the vehicle prompt information that there is no obstacle area ahead, and can also return to the vehicle.
And step S106, acquiring the image information returned by the unmanned aerial vehicle.
The vehicle acquires the image information of the obstacle lane returned by the unmanned aerial vehicle in real time in a wireless manner.
It should be noted that the unmanned aerial vehicle does not need to return the image information of the lanes it passes over during the forward flight to the vehicle. The unmanned aerial vehicle only needs to return to the vehicle the image information of the obstacle lane collected when it stops flying forward.
In one embodiment, the vehicle may obtain the position information of the drone returned by the drone in the terrestrial coordinate system.
And step S108, determining the geographical range information of the obstacle area according to the image information.
The geographical range information includes, but is not limited to, size information and location information of the obstacle area.
Optionally, the obstacle signs include a first obstacle sign, a second obstacle sign and a third obstacle sign, and determining the geographical range information of the obstacle area corresponding to the obstacle signs according to the image information includes the following steps: extracting first image information of the first obstacle sign, second image information of the second obstacle sign and third image information of the third obstacle sign from the image information, and acquiring first relative position information of the unmanned aerial vehicle relative to the vehicle, where the first obstacle sign is the obstacle sign closest to a designated edge in the obstacle lane, the second obstacle sign is the obstacle sign farthest from the designated edge, and the third obstacle sign is the obstacle sign closest to the vehicle; determining size information of the obstacle area based on the first image information and the second image information, and determining position information of the obstacle area based on the third image information and the first relative position information; and determining the geographical range information of the obstacle area according to the size information and the position information.
The number of obstacle signs is two or more. The first obstacle sign and the third obstacle sign may be the same obstacle sign, and the second obstacle sign and the third obstacle sign may also be the same obstacle sign; alternatively, the first, second and third obstacle signs may be three different obstacle signs.
Obstacle signs include, but are not limited to, construction obstacle signs, traffic cones, construction workers and traffic accident obstacle signs. Accordingly, the first obstacle sign, the second obstacle sign and the third obstacle sign each include, but are not limited to, a construction obstacle sign, a traffic cone, a construction worker and a traffic accident obstacle sign.
The first obstacle sign is an obstacle sign closest to a specified edge in the obstacle lane, and the second obstacle sign is an obstacle sign farthest from the specified edge.
The designated edge is described as follows: in an obstacle lane, facing the vehicle traveling direction, the lane has a left lane edge and a right lane edge, where a lane edge, i.e. the road edge of the lane, may be the boundary between the lane and the non-lane area. The designated edge is simply a fixed reference provided for determining the width of the obstacle area; it may therefore be either the left lane edge or the right lane edge of the obstacle lane, and it may be determined in advance.
The first obstacle sign and the second obstacle sign are illustrated by the following examples.
For example, suppose the designated edge is the left lane edge of the obstacle lane, there are two obstacle signs in the obstacle lane, and the shortest distance between the first traffic cone and the left lane edge is smaller than the shortest distance between the second traffic cone and the left lane edge. It can be understood that, facing the vehicle traveling direction in the obstacle lane, the first cone is to the left of the second cone. The first cone may then be the first obstacle sign and the second cone the second obstacle sign.
In another example, suppose the designated edge is the right lane edge of the obstacle lane and there are four obstacle signs in the obstacle lane: the shortest distance between the first traffic accident obstacle sign and the right lane edge is smaller than the shortest distance between the first traffic cone and the right lane edge; the shortest distance between the first cone and the right lane edge is smaller than the shortest distance between the second cone and the right lane edge; and the shortest distance between the second cone and the right lane edge is smaller than the shortest distance between the second traffic accident obstacle sign and the right lane edge. It can be understood that, facing the vehicle traveling direction in the obstacle lane, the first traffic accident obstacle sign is on the rightmost side, the first cone is to its left, the second cone is to the left of the first cone, and the second traffic accident obstacle sign is on the leftmost side. The first traffic accident obstacle sign may then be the first obstacle sign, and the second traffic accident obstacle sign the second obstacle sign, in the obstacle lane.
The third obstacle sign is the obstacle sign closest to the vehicle. For example, suppose the obstacle lane has three traffic cones, the shortest distance between the first cone and the vehicle is smaller than the shortest distance between the second cone and the vehicle, and the shortest distance between the second cone and the vehicle is smaller than the shortest distance between the third cone and the vehicle. It can be understood that, facing the vehicle traveling direction in the obstacle lane, the first cone is in front of the second cone, which is in front of the third cone. The first cone may then be the third obstacle sign. The example here is merely illustrative.
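Once each detected sign's distance to the designated edge and to the vehicle is known, selecting the three roles reduces to simple min/max picks. A minimal sketch, in which the dictionary layout and sample distances are assumptions for illustration:

```python
def classify_signs(signs):
    """Pick the first, second and third obstacle signs from detected signs.
    Each sign is a dict with 'edge_dist' (shortest distance to the designated
    edge) and 'vehicle_dist' (shortest distance to the vehicle).
    The same sign may fill more than one role."""
    first = min(signs, key=lambda s: s["edge_dist"])     # closest to the edge
    second = max(signs, key=lambda s: s["edge_dist"])    # farthest from the edge
    third = min(signs, key=lambda s: s["vehicle_dist"])  # closest to the vehicle
    return first, second, third

cones = [
    {"id": "cone-1", "edge_dist": 0.5, "vehicle_dist": 31.0},
    {"id": "cone-2", "edge_dist": 1.8, "vehicle_dist": 29.5},
    {"id": "cone-3", "edge_dist": 3.0, "vehicle_dist": 33.2},
]
first, second, third = classify_signs(cones)
print(first["id"], second["id"], third["id"])  # cone-1 cone-3 cone-2
```

As the document notes, the first and third (or second and third) roles can land on the same physical sign; the function naturally allows that.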
In one embodiment, first image information of a first obstacle sign and second image information of a second obstacle sign are extracted from image information; the first obstacle sign is an obstacle sign closest to a specified edge in the obstacle lane; the second obstacle sign is the obstacle sign farthest from the specified edge.
For example, the image information corresponds to an obstacle lane that contains a rectangular construction area. Two traffic cones serve as two obstacle signs and are located at two adjacent corners of the construction area, the line connecting those corners being almost perpendicular to the straight-line direction of the lane. Facing the obstacle lane from the vehicle, the lane has a left edge and a right edge, and one of them may be designated in advance as the designated edge. The vehicle processes the image information to obtain the distance between each cone and the designated edge: for example, the distance between the first cone and the designated edge is a first distance d1, the distance between the second cone and the designated edge is a second distance d2, and d2 is greater than d1. The first cone is then the first obstacle sign and the second cone is the second obstacle sign.
The first relative position information of the unmanned aerial vehicle relative to the vehicle is obtained as follows: the coordinate origin of a vehicle coordinate system is determined in advance and the vehicle coordinate system is constructed; the vehicle position information of the vehicle in the terrestrial coordinate system is acquired; the position information, in the terrestrial coordinate system, returned by the unmanned aerial vehicle is acquired; and the first relative position information of the unmanned aerial vehicle relative to the vehicle is then calculated through coordinate conversion. The coordinate origin of the vehicle coordinate system may be the center point of the vehicle's rear axle. In this step, fourth relative position information of the vehicle relative to the unmanned aerial vehicle can also be calculated in a similar manner; its effect is similar to that of the first relative position information and is not described again here.
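A minimal sketch of this coordinate conversion, under the illustrative assumptions of a planar east–north frame, a heading measured counter-clockwise from east, and the rear-axle center as the vehicle-frame origin:

```python
import math

def relative_position(target_xy, vehicle_xy, vehicle_heading_rad):
    """Express a target's terrestrial (east, north) position in a vehicle
    coordinate system whose origin is the rear-axle center and whose
    x-axis points along the vehicle heading."""
    dx = target_xy[0] - vehicle_xy[0]
    dy = target_xy[1] - vehicle_xy[1]
    c, s = math.cos(vehicle_heading_rad), math.sin(vehicle_heading_rad)
    # rotate the terrestrial-frame offset into the vehicle frame
    return (c * dx + s * dy, -s * dx + c * dy)

# drone 30 m due north of a north-facing vehicle -> 30 m straight ahead
x, y = relative_position((0.0, 30.0), (0.0, 0.0), math.pi / 2)
print(round(x, 6), round(y, 6))  # 30.0 0.0
```

The same function with the arguments swapped yields the fourth relative position information (vehicle relative to drone) mentioned above.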
Optionally, determining the size information of the obstacle area based on the first image information and the second image information includes: determining a first distance between the first obstacle sign and the designated edge based on the first image information, and determining a second distance between the second obstacle sign and the designated edge based on the second image information; and determining the difference between the second distance and the first distance as the width of the obstacle area, the width being taken as the size information.
For example, the first image information is the image information of the first traffic cone, the second image information is the image information of the second traffic cone, and the designated edge is the left edge of the obstacle lane as seen from the vehicle facing the obstacle lane. From the first image information the vehicle can calculate that the distance between the first cone and the left edge is a first distance d1, and from the second image information that the distance between the second cone and the left edge is a second distance d2, with d2 greater than d1. The difference between d2 and d1 is then calculated; this difference may be used as the width of the obstacle area corresponding to the two cones, or the sum of the difference and a preset value may be used as that width.
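The width computation just described can be sketched as follows; the sample distances and the optional preset padding value are illustrative assumptions:

```python
def obstacle_area_width(d1: float, d2: float, preset_pad: float = 0.0) -> float:
    """Width of the obstacle area: the distance of the farthest sign to the
    designated edge (d2) minus that of the nearest sign (d1), optionally
    enlarged by a preset value."""
    if d2 < d1:
        d1, d2 = d2, d1  # make d2 the farther sign
    return (d2 - d1) + preset_pad

print(obstacle_area_width(0.5, 3.0))                   # 2.5
print(obstacle_area_width(0.5, 3.0, preset_pad=0.25))  # 2.75
```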
Whether the vehicle can pass the obstacle area while driving is greatly affected by the width of the obstacle area, and whether the vehicle can avoid the obstacle area can be determined from that width. In this embodiment, the determined width is used as the size information of the obstacle area.
It should be noted that, in the foregoing embodiment, when it is determined that an obtained object feature matches a preset mark feature, the unmanned aerial vehicle stops flying forward, collects image information on the current lane and uses it as the image information of the obstacle lane; the consideration is that the vehicle can determine the width of the obstacle area from the collected image information and thereby decide whether it can avoid the obstacle area. Even if the image information of the obstacle lane collected when the unmanned aerial vehicle stops flying forward covers only part of the obstacle area, the width of the obstacle area can still be determined from it, and stopping the forward flight also saves the power consumption of the unmanned aerial vehicle, so that its operating time lasts longer. Therefore, the technical scheme in this embodiment saves the power consumption of the unmanned aerial vehicle while still allowing the impassable area to be determined, thereby optimizing behavior decisions and path planning.
After the width of the obstacle area is determined, the length of the obstacle area can be calculated in a similar manner, or a preset value can be used directly as the length of the obstacle area. The length of the obstacle area may also serve as size information of the obstacle area.
In one embodiment, the third image information of a third obstacle sign is extracted from the image information, the third obstacle sign being the obstacle sign closest to the vehicle. For example, the image information corresponds to an obstacle lane having a rectangular obstacle area having a plurality of traffic accident obstacle signs, and the vehicle processes the image information to determine the traffic accident obstacle sign closest to the vehicle. It should be noted that, when the image information is processed to determine the traffic accident obstacle sign closest to the vehicle, it is not necessary to obtain the actual distance between each traffic accident obstacle sign and the vehicle, but it is only necessary to compare the distances between each traffic accident obstacle sign and the specific position extracted from the image information.
The obstacle sign closest to the vehicle may be an obstacle sign closest to a specified edge in the obstacle lane, and the obstacle sign closest to the vehicle may also be an obstacle sign farthest from the specified edge in the obstacle lane. That is, the third obstacle flag may be the same obstacle flag as the first obstacle flag; the third obstacle flag and the second obstacle flag may be the same obstacle flag.
Optionally, determining the position information of the obstacle area based on the third image information and the first relative position information, includes: determining second relative position information of the third obstacle sign relative to the unmanned aerial vehicle based on the third image information; and determining third relative position information of the third obstacle sign relative to the vehicle according to the first relative position information and the second relative position information, and taking the third relative position information as the position information of the obstacle area.
In one embodiment, the third image information is processed to determine second relative position information of the third obstacle sign with respect to the drone, where the second relative position information may be sign position information of the third obstacle sign in a rectangular coordinate system established with the drone as a coordinate origin.
From the determined first relative position information of the unmanned aerial vehicle relative to the vehicle and the second relative position information of the third obstacle sign relative to the unmanned aerial vehicle, after the coordinate systems are unified, the third relative position information of the third obstacle sign relative to the vehicle can be determined; in this embodiment it serves as the position information of the obstacle area. The third relative position information may also be understood as the closest distance between the obstacle area and the vehicle.
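Once the two frames are unified so that their axes are parallel, the combination reduces to vector addition. A minimal sketch, with illustrative coordinates:

```python
def sign_relative_to_vehicle(drone_rel_vehicle, sign_rel_drone):
    """Third relative position = first relative position (drone w.r.t. the
    vehicle) + second relative position (sign w.r.t. the drone), assuming
    both offsets are already expressed in frames with parallel axes."""
    return tuple(a + b for a, b in zip(drone_rel_vehicle, sign_rel_drone))

# drone 30 m ahead of the vehicle; nearest sign 5 m ahead of the drone and
# 1.2 m to its left -> sign is 35 m ahead and 1.2 m left of the vehicle
print(sign_relative_to_vehicle((30.0, 0.0), (5.0, 1.2)))  # (35.0, 1.2)
```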
In one embodiment, geographic range information for the obstacle area is determined based on the size information and the location information. It is understood that the geographic location and geographic extent of the obstacle area relative to the vehicle can be determined based on the closest distance between the obstacle area and the vehicle, and the width of the obstacle area.
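Bringing the size and position information together, the geographical range could be represented as a simple record; the axis-aligned rectangle, the field layout and the sample values are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class ObstacleAreaRange:
    """Geographical range of the obstacle area relative to the vehicle."""
    ahead_m: float   # closest distance to the vehicle (position information)
    left_m: float    # lateral offset of the near corner from the designated edge
    width_m: float   # size information derived from the first/second signs
    length_m: float  # measured similarly, or a preset value

    def far_edge_m(self) -> float:
        """Longitudinal distance from the vehicle to the far end of the area."""
        return self.ahead_m + self.length_m

area = ObstacleAreaRange(ahead_m=35.0, left_m=0.5, width_m=2.5, length_m=10.0)
print(area.far_edge_m())  # 45.0
```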
And step S110, marking an obstacle area in the vehicle navigation map according to the geographical range information.
The obstacle area is marked in the vehicle navigation map according to the geographic position and the geographic range of the obstacle area relative to the vehicle.
It should be noted that the vehicle navigation map differs from a high-precision map. In general, an individual vehicle does not have permission to modify a high-precision map; it can only mark the obstacle area on the vehicle navigation map it uses itself, and that map cannot be used by other vehicles unless it is downloaded by them or its data is transmitted to them.
Optionally, in step S110, before the step of marking the obstacle area in the vehicle navigation map according to the geographical range information is executed, the obstacle area marking method further includes: reading lane information within a first preset distance in front of the vehicle from a pre-stored electronic map; a vehicle navigation map is generated based on the lane information.
The pre-stored electronic map may be a high-precision map. The first preset distance may be a distance ahead of the vehicle specified in advance according to the user's driving strategy requirements, and it may be the same as or different from the second preset distance. The lane information here includes, but is not limited to, lane parameters, for example detailed information on each lane and the connection relationships between lanes; road components such as traffic signs and gantries; and road properties such as curvature, heading and grade.
In one embodiment, lane information within a first predetermined distance in front of the vehicle is read from a high-precision map, and a vehicle navigation map in the form of an image is generated from the lane information, in which an obstacle area is indicated. Here, it is understood that the lane information provided by the high-precision map is combined with the geographical range information of the obstacle area determined by the drone.
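One way such an image-form navigation map could be sketched is as a local occupancy grid built from the lane information and then stamped with the obstacle rectangle; the grid layout, resolution and sample values are assumptions for illustration:

```python
def build_nav_grid(first_preset_distance_m, lane_width_m, resolution_m=0.5):
    """Blank local navigation grid: rows cover the road ahead of the vehicle,
    columns cover the lane width; 0 = passable."""
    rows = int(first_preset_distance_m / resolution_m)
    cols = int(lane_width_m / resolution_m)
    return [[0] * cols for _ in range(rows)]

def mark_obstacle_area(grid, resolution_m, ahead_m, left_m, length_m, width_m):
    """Mark a rectangular obstacle area (1 = impassable) whose near corner is
    ahead_m in front of the vehicle and left_m from the designated edge."""
    r0, c0 = int(ahead_m / resolution_m), int(left_m / resolution_m)
    r1 = min(len(grid), r0 + int(length_m / resolution_m))
    c1 = min(len(grid[0]), c0 + int(width_m / resolution_m))
    for r in range(r0, r1):
        for c in range(c0, c1):
            grid[r][c] = 1
    return grid

grid = build_nav_grid(100.0, 3.5)  # 200 x 7 cells at 0.5 m resolution
mark_obstacle_area(grid, 0.5, ahead_m=30.0, left_m=0.5,
                   length_m=10.0, width_m=2.0)
print(sum(map(sum, grid)))  # 80 impassable cells (20 rows x 4 columns)
```

A downstream path planner would then treat the marked cells as the impassable area when optimizing behavior decisions and path planning.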
The obstacle area marked in the vehicle navigation map can be used for providing high-instantaneity and high-effectiveness reference information for a decision module of an automatic driving task of an automatic driving vehicle, determining an impassable area, further optimizing behavior decision and path planning, avoiding unpredictable safety problems caused by accidental intrusion of the vehicle into a construction area, and ensuring the comfort and safety of the vehicle in executing the automatic driving task.
In another embodiment, lane information within a first preset distance in front of a vehicle is read from a high-precision map, a vehicle navigation map in an image form is not generated, the lane information and the determined geographical range information of an obstacle area are combined to be used as a part of input parameters of a decision module of an automatic driving task of the automatic driving vehicle, then reference information with high real-time performance and high effectiveness is provided for the decision module, an impassable area is determined, behavior decision and path planning are optimized, unpredictable safety problems caused by accidental vehicle intrusion into a construction area can be avoided, and comfort and safety of the vehicle in executing the automatic driving task are guaranteed.
According to the obstacle area marking method in the exemplary embodiment of fig. 1, first, an obstacle area marking instruction is obtained; second, a control signal is sent to the unmanned aerial vehicle according to the obstacle area marking instruction, the control signal being used to control the unmanned aerial vehicle to collect image information of an obstacle lane in front of the vehicle, the obstacle lane having an obstacle area; then, the image information returned by the unmanned aerial vehicle is acquired; next, the geographical range information of the obstacle area is determined according to the image information; and finally, the obstacle area is marked in the vehicle navigation map according to the geographical range information. According to the technical scheme of the embodiment of the invention, the geographical range information of the obstacle area can be obtained from the image information of the obstacle lane collected by the unmanned aerial vehicle, and the obstacle area can be marked in the vehicle navigation map, so that an obstacle area caused by an unconventional event occurring in front of the vehicle is perceived and the accuracy of sensing the road condition in front of the vehicle is improved.
Fig. 2 is a schematic view of an application scenario of a method for marking an obstacle area according to an embodiment of the present application.
Referring to fig. 2, a vehicle 202 is in wireless communication with an unmanned aerial vehicle 204 in front of it. The unmanned aerial vehicle 204 flies at a designated height, collects image information of the obstacle lane, which has an obstacle area with a plurality of obstacle signs, namely a first obstacle sign 208 (the construction sign 1 shown in fig. 2), a construction sign 2 and a second obstacle sign 206 (the construction sign 3 shown in fig. 2), and returns it to the vehicle 202 in real time.
In one embodiment, the drone 204 may also return the drone's position information to the vehicle 202 in a terrestrial coordinate system.
In another embodiment, the drone 204 may also return the drone's attitude information to the vehicle 202. The attitude information may be the shooting angle at which the drone 204 collects the image information of the obstacle lane. The vehicle 202 may process the acquired image information using the attitude information.
The obstacle area marking method in the embodiment shown in fig. 2 can implement each process in the foregoing embodiment of the obstacle area marking method, and achieve the same effect and function, which is not described herein again.
Fig. 3 is a flowchart illustrating another obstacle area marking method according to an embodiment of the present application.
And step S302, the unmanned aerial vehicle flies towards the front of the vehicle.
After receiving the control signal, the unmanned aerial vehicle determines a flight path and flies forwards along the flight path.
Step S304, the unmanned aerial vehicle shoots the image information of the passing lane in the flying process and identifies the object characteristics in the image information.
The unmanned aerial vehicle flies forward at the preset specific height, and its flying distance does not exceed the second preset distance, which is the farthest the unmanned aerial vehicle may fly from its takeoff point.
The unmanned aerial vehicle captures image information of the lanes it passes over while flying forward. Through an image acquisition and processing unit mounted on its body, it performs recognition and feature extraction on this image information to obtain one or more object features, for example the object features of a traffic sign on the lane, and uses the same unit to match each obtained object feature against the preset mark features. The preset mark features include, but are not limited to, construction obstacle signs, traffic cones, safety helmets worn by construction workers, and traffic accident obstacle signs.
And step S306, judging whether the object characteristics are matched with the preset mark characteristics.
If yes, go to step S308; if not, the process returns to step S304.
And step S308, stopping the forward flight of the unmanned aerial vehicle, and acquiring the image information of the object characteristics matched with the preset mark characteristics.
When the obtained object features are determined to be matched with the preset mark features, the unmanned aerial vehicle stops flying forwards, and acquires image information on the current lane, namely the image information on the lane where the object features matched with the preset mark features are located. The unmanned aerial vehicle takes the acquired image information as the image information of the obstacle lane.
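Steps S304 to S308 can be sketched as a detection loop on the drone side; the feature names and the frame tuples are assumptions for illustration:

```python
PRESET_MARK_FEATURES = {"construction_sign", "traffic_cone",
                        "safety_helmet", "accident_sign"}

def patrol(frames, second_preset_distance_m):
    """Fly forward frame by frame. Stop at the first frame whose extracted
    object features match a preset mark feature and return its image as the
    obstacle-lane image; return None once the maximum flying distance is
    reached without a match. `frames` yields (distance_m, features, image)."""
    for distance_m, features, image in frames:
        if distance_m > second_preset_distance_m:
            return None                   # no detectable obstacle area ahead
        if PRESET_MARK_FEATURES & set(features):
            return image                  # stop flying forward; use this image
    return None

frames = [(50.0, ["lane_marking"], "img-a"),
          (120.0, ["traffic_cone", "safety_helmet"], "img-b")]
print(patrol(iter(frames), 300.0))  # img-b
```

In the real system the returned image would then be transmitted wirelessly to the vehicle, as described in step S310.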
Step S310, the vehicle receives the image information returned by the unmanned aerial vehicle and the position information of the unmanned aerial vehicle.
The vehicle wirelessly acquires in real time the image information of the obstacle lane returned by the unmanned aerial vehicle, together with the position information of the unmanned aerial vehicle in the terrestrial coordinate system.
In step S312, the vehicle processes the image information and the position information to obtain the geographical range information of the obstacle area.
First image information of the first obstacle sign, second image information of the second obstacle sign and third image information of the third obstacle sign are extracted from the image information, and first relative position information of the unmanned aerial vehicle relative to the vehicle is acquired. The first obstacle sign is the obstacle sign closest to the designated edge in the obstacle lane, the second obstacle sign is the obstacle sign farthest from the designated edge, and the third obstacle sign is the obstacle sign closest to the vehicle. Size information of the obstacle area is determined based on the first image information and the second image information, position information of the obstacle area is determined based on the third image information and the first relative position information, and the geographical range information of the obstacle area is then determined according to the size information and the position information.
Wherein, unmanned aerial vehicle's positional information can be used for acquireing unmanned aerial vehicle first relative positional information relative to the vehicle.
Determining the size information of the obstacle area based on the first image information and the second image information includes: determining a first distance between the first obstacle sign and the designated edge based on the first image information, and determining a second distance between the second obstacle sign and the designated edge based on the second image information; and determining the difference between the second distance and the first distance as the width of the obstacle area, the width being taken as the size information.
Determining position information of the obstacle area based on the third image information and the first relative position information, including: determining second relative position information of the third obstacle sign relative to the unmanned aerial vehicle based on the third image information; and determining third relative position information of the third obstacle sign relative to the vehicle according to the first relative position information and the second relative position information, and taking the third relative position information as the position information of the obstacle area.
In step S314, the vehicle marks an obstacle area in the vehicle navigation map generated from the high-precision map.
Reading lane information within a first preset distance in front of the vehicle from a high-precision map; a vehicle navigation map is generated based on the lane information. The first predetermined distance may be a specified distance ahead of the vehicle that is predetermined according to the driving strategy requirements of the user. The first predetermined distance and the second predetermined distance may be the same or different.
And the vehicle marks the obstacle area in the vehicle navigation map according to the geographical range information of the obstacle area. This process may be considered to combine lane information provided by the high-precision map with geographical range information of the obstacle area determined by the image information.
In step S316, the obstacle area indicating task is completed.
After the unmanned aerial vehicle has collected the image information of the obstacle area closest to the vehicle, the obstacle area marking task corresponding to the obstacle area marking instruction can be regarded as completed, for example upon receiving a confirmation signal, sent wirelessly by the vehicle, that the obstacle area marking instruction has been completed.
Step S318, the drone returns to the vehicle.
The unmanned aerial vehicle returns to the vehicle after finishing the obstacle area marking task.
In another embodiment, if, in the course of taking off from the vehicle and flying until its flying distance reaches the second preset distance, none of the object features the unmanned aerial vehicle obtains matches a preset mark feature, it is considered that there is no obstacle area in front of the vehicle that the unmanned aerial vehicle can detect. At this point the unmanned aerial vehicle can wirelessly send the vehicle prompt information that there is no obstacle area ahead, and then return to the vehicle.
The obstacle area marking method in the embodiment shown in fig. 3 can implement each process in the foregoing embodiment of the obstacle area marking method, and achieve the same effect and function, which is not described herein again.
Fig. 4 is a schematic view of another application scenario of a method for marking an obstacle area according to an embodiment of the present application.
Referring to fig. 4, the obstacle lane has an obstacle area, and the obstacle area contains a first traffic cone 404 and a second traffic cone 406. The first cone 404 is the obstacle sign closest to the designated edge 402 of the obstacle lane, i.e. the first obstacle sign; the second cone 406 is the obstacle sign farthest from the designated edge 402, i.e. the second obstacle sign.
The distance between the designated edge 402 of the obstacle lane and the first cone 404 is the first distance, and the distance between the designated edge 402 and the second cone 406 is the second distance. The width of the obstacle area is the difference between the second distance and the first distance.
In another embodiment, the difference between the second distance and the first distance may be calculated first, and the sum of this difference and a preset length may then be used as the width of the obstacle area.
The obstacle area marking method in the embodiment shown in fig. 4 can implement each process in the foregoing embodiment of the obstacle area marking method, and achieve the same effect and function, which is not described herein again.
Fig. 5 is an interaction diagram of a vehicle end and an unmanned aerial vehicle end in an obstacle area marking method according to an embodiment of the present application.
The vehicle end 502 is the end where the vehicle is located, and the drone end 504 is the end where the drone is located. Referring to fig. 5, the vehicle end 502 sends lane information to the unmanned aerial vehicle end 504, and the unmanned aerial vehicle end 504 returns image information and position information of the unmanned aerial vehicle to the vehicle end 502.
The lane information may be the lane information within the second preset distance in front of the vehicle, the second preset distance being the maximum flying distance of the unmanned aerial vehicle from its takeoff point. The lane information includes, but is not limited to, lane parameters, for example detailed information on each lane and the connection relationships between lanes; road components such as traffic signs and gantries; and road properties such as curvature, heading and grade. The lane information may come from a high-precision map in the vehicle end 502.
The vehicle end 502 may also send vehicle state information to the drone end 504. The vehicle state information includes, but is not limited to, a vehicle congestion state. It can be understood that after the navigation system in the vehicle end 502 issues congestion prompt information, the vehicle state information includes the vehicle congestion state; the vehicle end 502 then sends a control signal carrying the vehicle state information to the drone end 504, where the control signal is used to control the drone end 504 to collect image information of the obstacle lane in front of the vehicle.
In the embodiment shown in fig. 5, the image information may be image information of an obstacle lane in front of the vehicle collected by the drone; the obstacle lane has an obstacle area. The position information of the unmanned aerial vehicle can be the position information of the unmanned aerial vehicle under the terrestrial coordinate system.
The drone end 504 may also send the attitude information of the drone to the vehicle end 502.
The attitude information of the drone can be understood as the shooting angle of the drone relative to the obstacle area when the drone acquires the image information at a predetermined flying height. The vehicle end 502 may process the acquired image information according to the attitude information.
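The exchange in fig. 5 can be sketched as plain data containers; the Python field names below are illustrative assumptions rather than part of the embodiment:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class LaneInfo:
    """Lane information within the second predetermined distance ahead of
    the vehicle, e.g. read from the high-precision map at the vehicle end."""
    lane_parameters: Dict[str, object]   # per-lane details and lane connections
    road_components: List[str]           # traffic signs, gantries, ...
    road_attributes: Dict[str, float]    # curvature, heading, grade, ...

@dataclass
class DroneReport:
    """Data returned by the drone end to the vehicle end."""
    image: bytes                          # image of the obstacle lane
    position: Tuple[float, float, float]  # drone position, terrestrial frame
    attitude: Tuple[float, float, float]  # shooting angles at the flying height
```

The vehicle end would send a `LaneInfo` (and optionally vehicle state) to the drone end, and receive a `DroneReport` in return; the exact serialization and transport are outside the scope of this sketch.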
The obstacle area marking method in the embodiment shown in fig. 5 can implement each process in the foregoing embodiment of the obstacle area marking method, and achieve the same effect and function, which is not described herein again.
Fig. 6 is a schematic structural diagram of an obstacle area marking device according to an embodiment of the present application.
Referring to fig. 6, the obstacle area marking device includes:
an instruction obtaining module 602, configured to obtain an obstacle area marking instruction;
the signal sending module 604 is configured to send a control signal to the unmanned aerial vehicle according to the obstacle area marking instruction; the control signal is used for controlling the unmanned aerial vehicle to collect image information of an obstacle lane with obstacle signs in front of the vehicle;
an image obtaining module 606, configured to obtain image information returned by the unmanned aerial vehicle;
an information determining module 608, configured to determine, according to the image information, geographical range information of an obstacle area corresponding to the obstacle sign;
and the map marking module 610 is used for marking the obstacle area in the vehicle navigation map according to the geographic range information.
In some embodiments of the present invention, based on the above scheme, the obstacle signs include a first obstacle sign, a second obstacle sign, and a third obstacle sign; and the information determination module 608 includes:
the image information extraction unit is used for extracting first image information of a first obstacle sign, second image information of a second obstacle sign and third image information of a third obstacle sign from the image information and acquiring first relative position information of the unmanned aerial vehicle relative to the vehicle;
wherein the first obstacle sign is an obstacle sign closest to a designated edge in the obstacle lane; the second obstacle sign is an obstacle sign farthest from the designated edge; the third obstacle sign is an obstacle sign closest to the vehicle;
a first information determination unit configured to determine size information of the obstacle area based on the first image information and the second image information, and determine position information of the obstacle area based on the third image information and the first relative position information;
and the second information determining unit is used for determining the geographical range information of the obstacle area according to the size information and the position information.
In some embodiments of the present invention, based on the foregoing scheme, the first information determining unit is specifically configured to:
determining a first distance between the first obstacle sign and the designated edge based on the first image information, and determining a second distance between the second obstacle sign and the designated edge based on the second image information;
determining the difference between the second distance and the first distance as the width of the obstacle area, the width serving as the size information.
In some embodiments of the present invention, based on the above scheme, the first information determining unit is further configured to:
determining second relative position information of the third obstacle sign relative to the unmanned aerial vehicle based on the third image information;
and determining third relative position information of the third obstacle sign relative to the vehicle according to the first relative position information and the second relative position information, and taking the third relative position information as the position information of the obstacle area.
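Treating the relative positions as displacement vectors in a common ground-aligned frame (an assumption of this sketch), the third relative position is the vector sum of the first and second:

```python
def position_relative_to_vehicle(drone_rel_vehicle, sign_rel_drone):
    """Compose the vehicle-to-drone displacement (first relative position)
    with the drone-to-sign displacement (second relative position) into
    the vehicle-to-sign displacement (third relative position), assuming
    all vectors are expressed in the same ground-aligned frame."""
    return tuple(a + b for a, b in zip(drone_rel_vehicle, sign_rel_drone))
```

For example, a drone 120 m ahead of the vehicle observing the third obstacle sign 15 m ahead of itself and 1.5 m to the side would place the near edge of the obstacle area 135 m ahead of the vehicle.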
In some embodiments of the present invention, based on the above solution, the obstacle area marking device further includes:
the lane information reading module is used for reading lane information in a first preset distance in front of the vehicle from a prestored electronic map;
and the navigation map generation module is used for generating the vehicle navigation map based on the lane information.
In some embodiments of the invention, based on the above scheme, the unmanned aerial vehicle carries lane information within a second predetermined distance in front of the vehicle; the lane information is used for the unmanned aerial vehicle to determine a flight path.
In some embodiments of the present invention, based on the above scheme, the image information is obtained by the drone through the following manner:
the unmanned aerial vehicle flies forwards along the flight path, and the image information of the lane is shot in the flying process;
the unmanned aerial vehicle extracts object features in the image information of the shot lane and matches the object features with preset mark features;
when the unmanned aerial vehicle determines that the object characteristics are matched with the preset mark characteristics, the unmanned aerial vehicle stops flying forwards and collects the image information of the lane, and the collected image information is used as the image information of the obstacle lane.
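The acquisition loop above can be sketched as follows; `capture`, `extract_features`, and `matches_sign_features` are hypothetical callables standing in for the drone's camera and vision pipeline, not functions defined by this application:

```python
def acquire_obstacle_lane_image(flight_path, capture, extract_features,
                                matches_sign_features):
    """Fly forward along the flight path, shooting a lane image at each
    waypoint; stop at the first frame whose object features match the
    preset sign features and return it as the obstacle-lane image."""
    for waypoint in flight_path:
        frame = capture(waypoint)                # shoot during flight
        features = extract_features(frame)       # extract object features
        if matches_sign_features(features):      # preset sign features matched
            return frame                         # stop flying forward
    return None                                  # no obstacle sign found
```

In practice the feature extraction and matching might use a detector such as ORB with a descriptor matcher, but any pipeline satisfying the three callables fits this sketch.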
In one embodiment of the application, an obstacle area marking instruction is first obtained; second, a control signal is sent to the unmanned aerial vehicle according to the obstacle area marking instruction, where the control signal is used to control the unmanned aerial vehicle to collect image information of an obstacle lane with obstacle signs in front of the vehicle; then, the image information returned by the unmanned aerial vehicle is acquired; next, the geographical range information of the obstacle area corresponding to the obstacle signs is determined according to the image information; and finally, the obstacle area is marked in the vehicle navigation map according to the geographical range information. According to the technical scheme of the embodiment of the invention, the geographical range information of the obstacle area can be obtained by using the image information of the obstacle lane collected by the unmanned aerial vehicle, and the obstacle area is marked in the vehicle navigation map, so that an obstacle area caused by an unconventional event occurring in front of the vehicle can be perceived and the accuracy of perceiving the road conditions in front of the vehicle is improved.
The obstacle area marking device in fig. 6 may implement each process in the foregoing embodiment of the obstacle area marking method, and achieve the same effect and function, which is not described herein again.
Further, an embodiment of the present application further provides an electronic device. Fig. 7 is a schematic structural diagram of the electronic device provided in an embodiment of the present application. As shown in fig. 7, the electronic device includes: a memory 701, a processor 702, a bus 703, and a communication interface 704. The memory 701, the processor 702, and the communication interface 704 communicate via the bus 703. The communication interface 704 may include input and output interfaces, including but not limited to a keyboard, a mouse, a display, a microphone, and the like.
In fig. 7, the memory 701 has stored thereon computer-executable instructions executable on the processor 702, and when executed by the processor 702, the following processes can be implemented:
acquiring an obstacle area marking instruction;
sending a control signal to the unmanned aerial vehicle according to the obstacle area marking instruction; the control signal is used for controlling the unmanned aerial vehicle to collect image information of an obstacle lane with obstacle signs in front of the vehicle;
acquiring the image information returned by the unmanned aerial vehicle;
determining the geographical range information of the obstacle area corresponding to the obstacle sign according to the image information;
and marking the obstacle area in a vehicle navigation map according to the geographic range information.
Optionally, when the computer-executable instructions are executed by the processor 702, the obstacle signs include a first obstacle sign, a second obstacle sign, and a third obstacle sign; and determining the geographical range information of the obstacle area corresponding to the obstacle signs according to the image information includes:
extracting first image information of a first obstacle sign, second image information of a second obstacle sign and third image information of a third obstacle sign from the image information, and acquiring first relative position information of the unmanned aerial vehicle relative to the vehicle;
wherein the first obstacle sign is an obstacle sign closest to a designated edge in the obstacle lane; the second obstacle sign is an obstacle sign farthest from the designated edge; the third obstacle sign is an obstacle sign closest to the vehicle;
determining size information of the obstacle area based on the first image information and the second image information, and determining position information of the obstacle area based on the third image information and the first relative position information;
and determining the geographical range information of the obstacle area according to the size information and the position information.
Optionally, when the computer-executable instructions are executed by the processor 702, determining the size information of the obstacle area based on the first image information and the second image information includes:
determining a first distance between the first obstacle sign and the designated edge based on the first image information, and determining a second distance between the second obstacle sign and the designated edge based on the second image information;
determining the difference between the second distance and the first distance as the width of the obstacle area, the width serving as the size information.
Optionally, when the computer-executable instructions are executed by the processor 702, determining the position information of the obstacle area based on the third image information and the first relative position information includes:
determining second relative position information of the third obstacle sign relative to the unmanned aerial vehicle based on the third image information;
and determining third relative position information of the third obstacle sign relative to the vehicle according to the first relative position information and the second relative position information, and taking the third relative position information as the position information of the obstacle area.
Optionally, when executed by the processor 702, the computer-executable instructions further implement the following before the step of marking the obstacle area in the vehicle navigation map according to the geographical range information:
reading lane information within a first preset distance in front of the vehicle from a pre-stored electronic map;
generating the vehicle navigation map based on the lane information.
Optionally, when the computer executable instructions are executed by the processor 702, the drone carries lane information within a second predetermined distance in front of the vehicle; the lane information is used for the unmanned aerial vehicle to determine a flight path.
Optionally, the image information is obtained by the unmanned aerial vehicle in the following manner:
the unmanned aerial vehicle flies forwards along the flight path, and the image information of the lane is shot in the flying process;
the unmanned aerial vehicle extracts object features in the image information of the shot lane and matches the object features with preset mark features;
when the unmanned aerial vehicle determines that the object characteristics are matched with the preset mark characteristics, the unmanned aerial vehicle stops flying forwards and collects the image information of the lane, and the collected image information is used as the image information of the obstacle lane.
In one embodiment of the application, an obstacle area marking instruction is first obtained; second, a control signal is sent to the unmanned aerial vehicle according to the obstacle area marking instruction, where the control signal is used to control the unmanned aerial vehicle to collect image information of an obstacle lane with obstacle signs in front of the vehicle; then, the image information returned by the unmanned aerial vehicle is acquired; next, the geographical range information of the obstacle area corresponding to the obstacle signs is determined according to the image information; and finally, the obstacle area is marked in the vehicle navigation map according to the geographical range information. According to the technical scheme of the embodiment of the invention, the geographical range information of the obstacle area can be obtained by using the image information of the obstacle lane collected by the unmanned aerial vehicle, and the obstacle area is marked in the vehicle navigation map, so that an obstacle area caused by an unconventional event occurring in front of the vehicle can be perceived and the accuracy of perceiving the road conditions in front of the vehicle is improved.
The electronic device provided by the embodiment of the application can realize each process in the embodiment of the obstacle area marking method, and achieve the same functions and effects, which are not repeated here.
Further, another embodiment of the present application also provides a storage medium, in which computer-executable instructions are stored, and when the computer-executable instructions are executed by the processor 702, the following process can be implemented:
acquiring an obstacle area marking instruction;
sending a control signal to the unmanned aerial vehicle according to the obstacle area marking instruction; the control signal is used for controlling the unmanned aerial vehicle to collect image information of an obstacle lane with obstacle signs in front of the vehicle;
acquiring the image information returned by the unmanned aerial vehicle;
determining the geographical range information of the obstacle area corresponding to the obstacle sign according to the image information;
and marking the obstacle area in a vehicle navigation map according to the geographic range information.
Optionally, when the computer-executable instructions are executed by the processor 702, the obstacle signs include a first obstacle sign, a second obstacle sign, and a third obstacle sign; and determining the geographical range information of the obstacle area corresponding to the obstacle signs according to the image information includes:
extracting first image information of a first obstacle sign, second image information of a second obstacle sign and third image information of a third obstacle sign from the image information, and acquiring first relative position information of the unmanned aerial vehicle relative to the vehicle;
wherein the first obstacle sign is an obstacle sign closest to a designated edge in the obstacle lane; the second obstacle sign is an obstacle sign farthest from the designated edge; the third obstacle sign is an obstacle sign closest to the vehicle;
determining size information of the obstacle area based on the first image information and the second image information, and determining position information of the obstacle area based on the third image information and the first relative position information;
and determining the geographical range information of the obstacle area according to the size information and the position information.
Optionally, when the computer-executable instructions are executed by the processor 702, determining the size information of the obstacle area based on the first image information and the second image information includes:
determining a first distance between the first obstacle sign and the designated edge based on the first image information, and determining a second distance between the second obstacle sign and the designated edge based on the second image information;
determining the difference between the second distance and the first distance as the width of the obstacle area, the width serving as the size information.
Optionally, when the computer-executable instructions are executed by the processor 702, determining the position information of the obstacle area based on the third image information and the first relative position information includes:
determining second relative position information of the third obstacle sign relative to the unmanned aerial vehicle based on the third image information;
and determining third relative position information of the third obstacle sign relative to the vehicle according to the first relative position information and the second relative position information, and taking the third relative position information as the position information of the obstacle area.
Optionally, when executed by the processor 702, the computer-executable instructions further implement the following before the step of marking the obstacle area in the vehicle navigation map according to the geographical range information:
reading lane information within a first preset distance in front of the vehicle from a pre-stored electronic map;
generating the vehicle navigation map based on the lane information.
Optionally, when the computer executable instructions are executed by the processor 702, the drone carries lane information within a second predetermined distance in front of the vehicle; the lane information is used for the unmanned aerial vehicle to determine a flight path.
Optionally, the image information is obtained by the unmanned aerial vehicle in the following manner:
the unmanned aerial vehicle flies forwards along the flight path, and the image information of the lane is shot in the flying process;
the unmanned aerial vehicle extracts object features in the image information of the shot lane and matches the object features with preset mark features;
when the unmanned aerial vehicle determines that the object characteristics are matched with the preset mark characteristics, the unmanned aerial vehicle stops flying forwards and collects the image information of the lane, and the collected image information is used as the image information of the obstacle lane.
In one embodiment of the application, an obstacle area marking instruction is first obtained; second, a control signal is sent to the unmanned aerial vehicle according to the obstacle area marking instruction, where the control signal is used to control the unmanned aerial vehicle to collect image information of an obstacle lane with obstacle signs in front of the vehicle; then, the image information returned by the unmanned aerial vehicle is acquired; next, the geographical range information of the obstacle area corresponding to the obstacle signs is determined according to the image information; and finally, the obstacle area is marked in the vehicle navigation map according to the geographical range information. According to the technical scheme of the embodiment of the invention, the geographical range information of the obstacle area can be obtained by using the image information of the obstacle lane collected by the unmanned aerial vehicle, and the obstacle area is marked in the vehicle navigation map, so that an obstacle area caused by an unconventional event occurring in front of the vehicle can be perceived and the accuracy of perceiving the road conditions in front of the vehicle is improved.
The storage medium includes, but is not limited to, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The storage medium provided in an embodiment of the present application can implement each process in the foregoing method for marking an obstacle area, and achieve the same function and effect, which are not repeated here.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer-readable medium does not include a transitory computer-readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. An obstacle area marking method, comprising:
acquiring an obstacle area marking instruction;
sending a control signal to the unmanned aerial vehicle according to the obstacle area marking instruction; the control signal is used for controlling the unmanned aerial vehicle to collect image information of an obstacle lane with obstacle signs in front of the vehicle;
acquiring the image information returned by the unmanned aerial vehicle;
determining the geographical range information of the obstacle area corresponding to the obstacle sign according to the image information;
and marking the obstacle area in a vehicle navigation map according to the geographic range information.
2. The method of claim 1, wherein the obstacle signs comprise a first obstacle sign, a second obstacle sign, and a third obstacle sign; and determining the geographical range information of the obstacle area corresponding to the obstacle signs according to the image information comprises:
extracting first image information of the first obstacle sign, second image information of the second obstacle sign and third image information of the third obstacle sign from the image information, and acquiring first relative position information of the unmanned aerial vehicle relative to a vehicle;
wherein the first obstacle sign is an obstacle sign closest to a designated edge in the obstacle lane; the second obstacle sign is an obstacle sign farthest from the designated edge; the third obstacle sign is an obstacle sign closest to the vehicle;
determining size information of the obstacle area based on the first image information and the second image information, and determining position information of the obstacle area based on the third image information and the first relative position information;
and determining the geographical range information of the obstacle area according to the size information and the position information.
3. The method of claim 2, wherein determining the size information of the obstacle area based on the first image information and the second image information comprises:
determining a first distance between the first obstacle sign and the designated edge based on the first image information, and determining a second distance between the second obstacle sign and the designated edge based on the second image information;
determining the difference between the second distance and the first distance as a width of the obstacle area, the width being the size information.
4. The method of claim 2, wherein determining the location information of the obstacle area based on the third image information and the first relative location information comprises:
determining second relative position information of the third obstacle sign relative to the unmanned aerial vehicle based on the third image information;
and determining third relative position information of the third obstacle sign relative to the vehicle according to the first relative position information and the second relative position information, and taking the third relative position information as the position information of the obstacle area.
5. The method of claim 1, wherein before the step of marking the obstacle area in a vehicle navigation map according to the geographical range information, the method further comprises:
reading lane information within a first preset distance in front of the vehicle from a pre-stored electronic map;
generating the vehicle navigation map based on the lane information.
6. The method of claim 1,
the unmanned aerial vehicle carries lane information within a second preset distance in front of the vehicle; the lane information is used for the unmanned aerial vehicle to determine a flight path.
7. The method of claim 6, wherein the image information is obtained by the drone in the following manner:
the unmanned aerial vehicle flies forwards along the flight path, and the image information of the lane is shot in the flying process;
the unmanned aerial vehicle extracts object features in the image information of the shot lane and matches the object features with preset mark features;
when the unmanned aerial vehicle determines that the object characteristics are matched with the preset mark characteristics, the unmanned aerial vehicle stops flying forwards and collects the image information of the lane, and the collected image information is used as the image information of the obstacle lane.
8. An obstacle area marking device, comprising:
the instruction acquisition module is used for acquiring an obstacle area marking instruction;
the signal sending module is used for sending a control signal to the unmanned aerial vehicle according to the obstacle area marking instruction; the control signal is used for controlling the unmanned aerial vehicle to collect image information of an obstacle lane with obstacle signs in front of the vehicle;
the image acquisition module is used for acquiring the image information returned by the unmanned aerial vehicle;
the information determining module is used for determining the geographical range information of the obstacle area corresponding to the obstacle sign according to the image information;
and the map marking module is used for marking the obstacle area in the vehicle navigation map according to the geographic range information.
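The modules of claim 8 form a simple pipeline, which can be sketched as one class whose collaborators (drone link, geolocator, navigation map) are injected. This is purely illustrative; every class, method, and parameter name here is an assumption, not taken from the patent.

```python
class ObstacleAreaMarker:
    """Sketch of the claimed device: each claimed module corresponds to
    one step of run(); external systems are injected collaborators."""

    def __init__(self, drone_link, geolocator, nav_map):
        self.drone_link = drone_link
        self.geolocator = geolocator
        self.nav_map = nav_map

    def run(self, marking_instruction):
        # instruction acquisition + signal sending modules:
        # forward the marking instruction to the drone as a control signal
        self.drone_link.send_control_signal(marking_instruction)
        # image acquisition module: receive the returned image information
        image_info = self.drone_link.receive_image()
        # information determining module: derive the geographic range
        # of the obstacle area from the image information
        geo_range = self.geolocator.locate(image_info)
        # map marking module: mark the area in the vehicle navigation map
        self.nav_map.mark_obstacle_area(geo_range)
        return geo_range
```

The point of the sketch is the data flow instruction → control signal → image → geographic range → map mark, matching the order of the claimed modules.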
9. An electronic device, comprising: a memory, a processor, and computer-executable instructions stored on the memory and executable on the processor, wherein the computer-executable instructions, when executed by the processor, implement the steps of the obstacle area marking method of any one of claims 1 to 7.
10. A storage medium having computer-executable instructions stored thereon, wherein the computer-executable instructions, when executed by a processor, implement the steps of the obstacle area marking method of any one of claims 1 to 7.
CN202110232459.4A 2021-03-02 2021-03-02 Obstacle area marking method and device, electronic equipment and storage medium Pending CN112964265A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110232459.4A CN112964265A (en) 2021-03-02 2021-03-02 Obstacle area marking method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112964265A true CN112964265A (en) 2021-06-15

Family

ID=76276189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110232459.4A Pending CN112964265A (en) 2021-03-02 2021-03-02 Obstacle area marking method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112964265A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105318888A (en) * 2015-12-07 2016-02-10 北京航空航天大学 Unmanned perception based unmanned aerial vehicle route planning method
JP2018165930A (en) * 2017-03-28 2018-10-25 株式会社ゼンリンデータコム Drone navigation device, drone navigation method and drone navigation program
CN109426255A (en) * 2017-09-04 2019-03-05 中兴通讯股份有限公司 Automatic driving vehicle control method, device and storage medium based on unmanned plane
CN109597077A (en) * 2019-01-02 2019-04-09 奇瑞汽车股份有限公司 A kind of detection system based on unmanned plane
CN110045736A (en) * 2019-04-12 2019-07-23 淮安信息职业技术学院 A kind of curve barrier preventing collision method and its system based on unmanned plane
CN110606071A (en) * 2019-09-06 2019-12-24 中国第一汽车股份有限公司 Parking method, parking device, vehicle and storage medium
CN111169479A (en) * 2020-01-14 2020-05-19 中国第一汽车股份有限公司 Cruise control method, device and system, vehicle and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117058922A (en) * 2023-10-12 2023-11-14 中交第一航务工程局有限公司 Unmanned aerial vehicle monitoring method and system for road and bridge construction
CN117058922B (en) * 2023-10-12 2024-01-09 中交第一航务工程局有限公司 Unmanned aerial vehicle monitoring method and system for road and bridge construction

Similar Documents

Publication Publication Date Title
CN110928284B (en) Method, apparatus, medium and system for assisting in controlling automatic driving of vehicle
CN104572065B (en) Remote vehicle monitoring system and method
CN108010360A (en) A kind of automatic Pilot context aware systems based on bus or train route collaboration
CN110928286B (en) Method, apparatus, medium and system for controlling automatic driving of vehicle
US20130197736A1 (en) Vehicle control based on perception uncertainty
US11530931B2 (en) System for creating a vehicle surroundings model
CN107533800A (en) Cartographic information storage device, automatic Pilot control device, control method, program and storage medium
KR20180009755A (en) Lane estimation method
CN111429739A (en) Driving assisting method and system
CN102222236A (en) Image processing system and position measurement system
CN109084794A (en) A kind of paths planning method
CN112325896B (en) Navigation method, navigation device, intelligent driving equipment and storage medium
CN113140050B (en) Flow dividing and controlling method and device, computer equipment and medium
US10699571B2 (en) High definition 3D mapping
WO2022083487A1 (en) Method and apparatus for generating high definition map and computer-readable storage medium
US20220319327A1 (en) Information processing device, information processing method, and program
US10836385B2 (en) Lane keeping assistance system
US11499833B2 (en) Inferring lane boundaries via high speed vehicle telemetry
CN113029187A (en) Lane-level navigation method and system fusing ADAS fine perception data
US20220221298A1 (en) Vehicle control system and vehicle control method
CN110869989A (en) Method for generating a passing probability set, method for operating a control device of a motor vehicle, passing probability collection device and control device
CN112964265A (en) Obstacle area marking method and device, electronic equipment and storage medium
WO2021261228A1 (en) Obstacle information management device, obstacle information management method, and device for vehicle
CN111204342B (en) Map information system
CN115917615A (en) Parking place management device, parking place management method, and vehicle device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination