CN116524457B - Parking space identification method, system, device, electronic equipment and readable storage medium - Google Patents
Info
- Publication number
- CN116524457B (application CN202310235372.1A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- line segment
- parking space
- current
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/586—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of parking space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/34—Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Traffic Control Systems (AREA)
Abstract
The application discloses a parking space identification method, system, device, electronic equipment and computer readable storage medium. In the technical scheme provided by the application, after parking space line segments are obtained by recognition from a vehicle environment image, position prediction is carried out on the basis of information such as the initial position, the current position and the change route of each parking space line segment to obtain a predicted position, and the accuracy of the parking space line segment recognition is then judged from the matching result between the current position and the predicted position, so that only the parking space line segments that pass the matching are retained. For each retained parking space line segment, a more accurate line segment is obtained by fusing the predicted position with the current position, and a parking space is generated accordingly. Since the parking space is generated automatically from line segments that have been screened and integrated, the accuracy of the parking space recognition result can be effectively guaranteed, and the vehicle can therefore be parked into the parking space in a more accurate posture.
Description
Technical Field
The application relates to the field of computer vision, in particular to a parking space identification method, and also relates to a parking space identification system, a device, electronic equipment and a computer readable storage medium.
Background
With the rapid development of technology, automatic parking technology for vehicles is becoming increasingly mature, and parking space identification is a key step in an automatic parking system. Current automatic parking technologies offer rich and varied parking functions and differ widely in their schemes, but most of them share the problem that, because the parking space is identified inaccurately, the posture of the finally parked vehicle in the parking space deviates in a certain direction.
Therefore, how to effectively improve the accuracy of the parking space recognition result, and thereby ensure that the vehicle is parked in the parking space in a more accurate posture, is a problem to be solved by those skilled in the art.
Disclosure of Invention
The application aims to provide a parking space recognition method which can effectively improve the accuracy of the parking space recognition result, thereby ensuring that the vehicle is parked in the parking space in a more accurate posture. Another object of the present application is to provide a parking space recognition system, device, electronic equipment and computer readable storage medium, which have the above advantages.
In a first aspect, the present application provides a parking space recognition method, including:
acquiring a vehicle environment image and identifying a first vehicle line segment in the vehicle environment image;
For each first vehicle position line segment, determining an initial position of the first vehicle position line segment at the identification time and a current position of the first vehicle position line segment at the current time to obtain a change route from the identification time to the current time;
position prediction is carried out according to the initial position and the change route, and a predicted position of the current moment is obtained;
reserving a first vehicle position line segment of which the current position is matched with the predicted position in all the first vehicle position line segments to obtain a second vehicle position line segment;
for each second vehicle position line segment, fusing the current position and the predicted position of the second vehicle position line segment to obtain a fused position;
and generating a parking space according to the fusion position of each second vehicle space line segment.
Optionally, for each of the second vehicle position line segments, fusing the current position and the predicted position of the second vehicle position line segment to obtain a fused position includes:
determining a first line segment parameter at the current position and a second line segment parameter at the predicted position, respectively, for the second vehicle position line segment; the line segment parameters comprise an angle between the second vehicle position line segment and the vehicle rear-axle center coordinate system, a distance between the second vehicle position line segment and the vehicle rear-axle center point, and a line segment length of the second vehicle position line segment;
Smoothing the first line segment parameter and the second line segment parameter according to preset weights to obtain a fused line segment parameter;
and determining the fusion position of the second vehicle line segment according to the fusion line segment parameters.
Optionally, before the first vehicle position line segment with the current position matched with the predicted position is reserved in all the first vehicle position line segments to obtain a second vehicle position line segment, the method further includes:
acquiring the matching passing times of the first vehicle position line segment;
judging whether the matching passing times reach preset times or not;
if yes, executing the step of reserving, among all the first vehicle position line segments, the first vehicle position line segments whose current position matches the predicted position, to obtain the second vehicle position line segments.
Optionally, before determining, for each first vehicle position line segment, the initial position of the first vehicle position line segment at the identification time and the current position at the current time, the method further includes:
screening all the first vehicle position line segments according to a preset screening index to obtain screened first vehicle position line segments; the preset screening index comprises one or more of line segment length, line segment clarity, distance between line segments and region of interest.
Optionally, after fusing, for each of the second vehicle position line segments, the current position and the predicted position of the second vehicle position line segment to obtain the fused position, the method further includes:
adjusting all the second vehicle line segments according to a preset adjustment rule to obtain adjusted second vehicle line segments; the preset adjustment rules comprise one or more of line segment deletion, line segment merging and line segment extension.
Optionally, after the parking space is generated according to the fusion position of each second vehicle line segment, the method further includes:
carrying out obstacle identification on each parking space to obtain parkable parking spaces without obstacles; wherein the obstacle identification comprises ultrasonic identification and/or visual identification of obstacles;
and outputting each parkable parking space.
Optionally, after outputting each parkable parking space, the method further includes:
determining a target parking space according to the selection instruction;
determining a parking route according to the current pose of the vehicle and the target parking space;
and controlling the vehicle to drive into the target parking space according to the parking route.
Optionally, the controlling the vehicle to drive into the target parking space according to the parking route includes:
Acquiring distance information between the target parking space and the vehicle in real time;
correcting the target parking space according to the distance information to obtain a real-time corrected parking space;
acquiring ultrasonic sensing signals about path obstacles in the running process of the vehicle in real time;
correcting the parking route according to the real-time corrected parking space and the ultrasonic sensing signal to obtain a real-time corrected route;
and controlling the vehicle to drive into the real-time correction parking space according to the real-time correction route.
In a second aspect, the present application also discloses a parking space recognition system, including:
an image pickup apparatus for acquiring a vehicle environment image;
and the controller is used for executing the steps of any parking space identification method according to the vehicle environment image.
Optionally, the camera device is a super-fisheye lens and is arranged at the vehicle head, the vehicle tail and the left and right rear-view mirror positions.
Optionally, the parking space recognition system further includes:
an ultrasonic probe for acquiring an ultrasonic detection signal about an obstacle;
the ultrasonic probes are arranged at the vehicle head, the vehicle tail and the left and right sides of the vehicle.
In a third aspect, the present application also discloses a parking space recognition device, including:
The vehicle environment recognition module is used for acquiring a vehicle environment image and recognizing a first vehicle line segment in the vehicle environment image;
the determining module is used for determining the initial position of the first vehicle position line segment at the identification time and the current position of the first vehicle position line segment at the current time for each first vehicle position line segment so as to obtain a change route from the identification time to the current time;
the prediction module is used for carrying out position prediction according to the initial position and the change route to obtain a predicted position of the current moment;
the reservation module is used for reserving the first vehicle position line segment of which the current position is matched with the predicted position in all the first vehicle position line segments to obtain a second vehicle position line segment;
the fusion module is used for fusing, for each second vehicle position line segment, the current position and the predicted position of the second vehicle position line segment to obtain a fused position;
and the generating module is used for generating a parking space according to the fusion position of each second vehicle line segment.
In a fourth aspect, the present application also discloses an electronic device, including:
a memory for storing a computer program;
and the processor is used for realizing any parking space identification method when executing the computer program.
In a fifth aspect, the present application also discloses a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of any of the parking space identification methods described above.
The application provides a parking space identification method, which comprises the following steps: acquiring a vehicle environment image and identifying first vehicle position line segments in the vehicle environment image; for each first vehicle position line segment, determining an initial position of the first vehicle position line segment at the identification time and a current position at the current time, so as to obtain a change route from the identification time to the current time; carrying out position prediction according to the initial position and the change route to obtain a predicted position at the current time; reserving, among all the first vehicle position line segments, the first vehicle position line segments whose current position matches the predicted position, to obtain second vehicle position line segments; for each second vehicle position line segment, fusing the current position and the predicted position of the second vehicle position line segment to obtain a fused position; and generating a parking space according to the fused position of each second vehicle position line segment.
By applying the technical scheme provided by the application, after the parking space line segments are obtained by recognition from the vehicle environment image, position prediction is carried out on the basis of information such as the initial position, the current position and the change route of each parking space line segment to obtain a predicted position, and the accuracy of the parking space line segment recognition is then determined from the matching result between the current position and the predicted position, so that only the parking space line segments that pass the matching are retained. For each retained parking space line segment, a more accurate line segment is obtained by fusing the predicted position with the current position, and a parking space is generated accordingly. Since the parking space is generated automatically from line segments that have been screened and integrated, the accuracy of the parking space recognition result can be effectively guaranteed, and the vehicle can therefore be parked into the parking space in a more accurate posture.
The parking space recognition system, device, electronic equipment and computer readable storage medium provided by the application have the same technical effects, which are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions in the prior art and in the embodiments of the present application, the drawings needed for describing them are briefly introduced below. Of course, the following drawings relate only to some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without inventive effort, and such drawings also fall within the scope of the present application.
Fig. 1 is a schematic flow chart of a parking space recognition method provided by the application;
fig. 2 is a schematic structural diagram of a parking space recognition device provided by the application;
fig. 3 is a schematic structural diagram of an electronic device according to the present application.
Detailed Description
The core of the application is to provide a parking space recognition method which can effectively improve the accuracy of the parking space recognition result, thereby ensuring that the vehicle is parked in the parking space in a more accurate posture. The application further provides a parking space recognition system, device, electronic equipment and computer readable storage medium, which have the same beneficial effects.
In order to more clearly and completely describe the technical solutions in the embodiments of the present application, the technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The embodiment of the application provides a parking space identification method.
Referring to fig. 1, fig. 1 is a flow chart of a parking space recognition method provided by the present application, where the parking space recognition method may include the following steps S101 to S106.
S101: a vehicle environment image is acquired and a first vehicle-location line segment in the vehicle environment image is identified.
This step aims at acquiring a vehicle environment image and carrying out recognition processing on the parking space line segments in it, so as to obtain the first vehicle position line segments in the vehicle environment image. The vehicle environment image can be acquired by the image pickup device arranged on the vehicle to be parked. Of course, the type of image pickup device used and its installation position on the vehicle do not affect the implementation of the technical scheme; for example, in order to acquire the most comprehensive, clear and accurate vehicle environment images, a plurality of 195-degree super-fisheye lenses can be used and arranged at the vehicle head, the vehicle tail and the left and right rear-view mirror positions respectively.
Further, after the vehicle environment images are acquired, line segment identification can be performed on each vehicle environment image (a parking space is formed by a combination of line segments), and each first vehicle position line segment determined. In one possible implementation, the first vehicle position line segment recognition can be implemented with a line segment recognition model based on a deep learning network. Specifically, a deep learning network model is created in advance, a large amount of data is collected and labelled (the parking space lines are labelled, with several points extracted from each line segment as the standard), and the model is trained to obtain a parking space line segment recognition model. When this model is used for line segment recognition, a group of points resembling the labelled line segments is first identified, and these points are then clustered into line segments, that is, the first vehicle position line segments; a minimal sketch of this clustering step is given below.
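The sketch below illustrates only the clustering-and-fitting idea described above; the greedy nearest-neighbour chaining, the PCA line fit and all thresholds are illustrative assumptions, not the patent's actual model output or post-processing.

```python
import numpy as np

def cluster_points_to_segments(points, link_dist=0.3, min_points=4):
    """Group detected marker points (N x 2, metres) into chains of mutually
    nearby points and fit one straight segment per chain.  Greedy chaining and
    a PCA line fit are stand-ins for the clustering actually used."""
    remaining = [tuple(p) for p in points]
    segments = []
    while remaining:
        chain = [remaining.pop(0)]
        grew = True
        while grew:
            grew = False
            for p in list(remaining):
                if min(np.hypot(p[0] - q[0], p[1] - q[1]) for q in chain) < link_dist:
                    chain.append(p)
                    remaining.remove(p)
                    grew = True
        if len(chain) < min_points:
            continue
        xy = np.array(chain)
        mean = xy.mean(axis=0)
        _, _, vt = np.linalg.svd(xy - mean)        # principal direction of the chain
        direction = vt[0]
        t = (xy - mean) @ direction
        segments.append((mean + t.min() * direction, mean + t.max() * direction))
    return segments

# Example: detector points along two well-separated parking-space lines.
pts = [(0.0, i * 0.1) for i in range(20)] + [(1.0 + i * 0.1, 3.0) for i in range(20)]
for p0, p1 in cluster_points_to_segments(pts):
    print(np.round(p0, 2), "->", np.round(p1, 2))
```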
S102: for each first vehicle position line segment, determining the initial position of the first vehicle position line segment at the identification time and the current position of the first vehicle position line segment at the current time to obtain a change route from the identification time to the current time.
This step aims at acquiring various items of information related to the first vehicle position line segment, mainly including its initial position at the initial identification time, its current position at the current time, and the change route from the identification time to the current time. Of course, the number of first vehicle position line segments identified in the vehicle environment image is not necessarily one.
It will be appreciated that the vehicle may be in motion throughout the parking process. On this basis, for each first vehicle position line segment, the initial position at the identification time refers to the position of the first vehicle position line segment relative to the vehicle itself at the identification time, the current position at the current time refers to its position relative to the vehicle itself at the current time, and the change route is likewise the route of change relative to the vehicle itself. The time length from the identification time to the current time is a preset calculation period.
It should be noted that the first vehicle position line segments obtained from the vehicle environment image are line segments in the image coordinate system. In order to carry out parking space identification and automatic parking in the world coordinate system, after the first vehicle position line segments are obtained they may first undergo a coordinate system conversion from the image coordinate system to the world coordinate system, and the above items of information are then acquired for the first vehicle position line segments in the world coordinate system. The coordinate system conversion can be realized on the basis of a matrix conversion relation between the image coordinate system and the world coordinate system. Specifically, the internal and external parameters of the image pickup device can be acquired: before the image pickup device is mounted on the vehicle it can be placed in a calibration box for intrinsic calibration, the intrinsic data including, but not limited to, the optical-center offset and the distortion parameters; after the image pickup device is mounted on the vehicle, it is calibrated on the factory production line to obtain the extrinsic parameters, namely the pose of the image pickup device relative to the vehicle. The matrix conversion relation is then established on the basis of these internal and external parameters; a simplified sketch of the resulting back-projection is given below.
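As an illustration of the coordinate-system conversion described above, the following sketch back-projects an image pixel onto the ground plane of the vehicle (rear-axle) frame using an intrinsic matrix and an extrinsic pose. The intrinsic and extrinsic values here are made-up placeholders for a simple pinhole camera; the real parameters come from the calibration-box and end-of-line calibration mentioned in the text, and a fisheye camera would additionally require its distortion model to be removed first.

```python
import numpy as np

def pixel_to_ground(uv, K, R, t):
    """Back-project pixel (u, v) onto the ground plane z = 0 of the vehicle
    frame.  Extrinsics follow the convention X_cam = R @ X_veh + t, i.e. they
    describe the camera pose relative to the vehicle (rear-axle) frame."""
    ray_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])  # viewing ray, camera coords
    ray_veh = R.T @ ray_cam                                     # same ray, vehicle coords
    cam_center = -R.T @ t                                       # camera centre, vehicle coords
    s = -cam_center[2] / ray_veh[2]                             # scale to reach z = 0
    return cam_center + s * ray_veh

# Made-up pinhole intrinsics and a camera 1 m above the ground, looking forward
# and pitched down by 30 degrees (vehicle frame: x forward, y left, z up).
K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
p = np.deg2rad(30.0)
R = np.array([[0.0, -1.0, 0.0],
              [-np.sin(p), 0.0, -np.cos(p)],
              [np.cos(p), 0.0, -np.sin(p)]])
t = -R @ np.array([0.0, 0.0, 1.0])          # camera centre at (0, 0, 1) in the vehicle frame
print(np.round(pixel_to_ground((320, 300), K, R, t), 3))   # ground point ahead of the camera
```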
S103: and carrying out position prediction according to the initial position and the change route to obtain a predicted position at the current moment.
This step aims at achieving position prediction, that is, the predicted position of the first vehicle position line segment at the current time is predicted on the basis of its initial position and the change route. The predicted position is used for matching with the current position in order to determine the accuracy of the first vehicle position line segment recognition result: when the matching passes, that is, when the matching degree between the predicted position and the current position reaches a preset threshold value, the first vehicle position line segment is determined to be recognized accurately; when the matching does not pass, that is, when the matching degree does not reach the preset threshold value, the recognition is determined to be inaccurate. The preset threshold value is set by the technician according to actual requirements, which is not limited by the application; a sketch of the prediction together with a simple matching criterion is given after this paragraph.
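The sketch below carries the segment observed at the identification time forward through the vehicle's own motion over the calculation period (the change route, reduced here to a planar translation plus yaw for illustration) and compares the result with the current observation. The midpoint-distance and angle thresholds are illustrative stand-ins for the preset matching threshold, not the patent's actual criterion.

```python
import numpy as np

def predict_segment(initial_seg, dx, dy, dyaw):
    """Carry a segment given in the vehicle frame at the identification time
    into the vehicle frame at the current time, assuming the vehicle moved by
    (dx, dy) and turned by dyaw over the calculation period."""
    c, s = np.cos(dyaw), np.sin(dyaw)
    rot = np.array([[c, s], [-s, c]])              # R(dyaw)^T
    return [rot @ (np.asarray(p) - np.array([dx, dy])) for p in initial_seg]

def segments_match(seg_a, seg_b, max_midpoint_dist=0.3, max_angle_deg=5.0):
    """Illustrative matching criterion: close midpoints and similar directions."""
    a0, a1, b0, b1 = map(np.asarray, (*seg_a, *seg_b))
    mid_ok = np.linalg.norm((a0 + a1) / 2 - (b0 + b1) / 2) < max_midpoint_dist
    da, db = a1 - a0, b1 - b0
    ang_a, ang_b = np.arctan2(da[1], da[0]), np.arctan2(db[1], db[0])
    diff = np.degrees(abs((ang_a - ang_b + np.pi) % (2 * np.pi) - np.pi))
    return mid_ok and diff < max_angle_deg

initial = [(5.0, 2.0), (10.0, 2.0)]            # observed at the identification time
predicted = predict_segment(initial, dx=1.0, dy=0.0, dyaw=0.0)
current = [(4.05, 2.02), (9.02, 2.01)]         # observed at the current time
print([p.round(2) for p in predicted], segments_match(predicted, current))
```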
S104: and reserving the first vehicle position line segments with the current positions matched with the predicted positions in all the first vehicle position line segments to obtain second vehicle position line segments.
This step aims at screening the first vehicle position line segments: the first vehicle position line segments of high accuracy are retained, the first vehicle position line segments of low accuracy are deleted, and the retained first vehicle position line segments are taken as the second vehicle position line segments. A first vehicle position line segment of high accuracy is one whose current position matches the predicted position, and a first vehicle position line segment of low accuracy is one whose current position does not match the predicted position.
S105: and fusing the current position and the predicted position of each second vehicle bit line segment to obtain a fused position.
This step aims at carrying out fusion processing on the position information so as to obtain second vehicle position line segments with more accurate position information. Specifically, for each retained second vehicle position line segment, the current position and the predicted position of the second vehicle position line segment may be fused to obtain a fused position, the fused position being position information of higher accuracy.
In one possible implementation, for each second vehicle position line segment, fusing the current position and the predicted position of the second vehicle position line segment to obtain the fused position may include:
determining a first line segment parameter of the second vehicle position line segment at the current position and a second line segment parameter at the predicted position respectively; the line segment parameters comprise an angle between the second vehicle position line segment and the vehicle rear-axle center coordinate system, a distance between the second vehicle position line segment and the vehicle rear-axle center point, and a line segment length of the second vehicle position line segment;
smoothing the first line segment parameter and the second line segment parameter according to preset weights to obtain a fused line segment parameter;
and determining the fusion position of the second vehicle line segment according to the fusion line segment parameters.
The embodiment of the application thus provides an implementation of the fusion processing of the position information, namely a smoothing of the line segment parameters. Specifically, for each second vehicle position line segment, its parameter information at the current position and at the predicted position can be obtained, namely the first line segment parameter corresponding to the current position and the second line segment parameter corresponding to the predicted position. The specific content of the line segment parameters is not fixed and may include, but is not limited to, the angle between the second vehicle position line segment and the vehicle rear-axle center coordinate system, the distance between the second vehicle position line segment and the vehicle rear-axle center point, the line segment length of the second vehicle position line segment, and so on. Further, the first line segment parameter and the second line segment parameter are smoothed according to preset weights to obtain a fused line segment parameter; the specific values of the preset weights can be set according to the actual situation, for example, since the current position is an actual measurement and is more reliable than the predicted position, the weight corresponding to the current position can be set larger than the weight corresponding to the predicted position, with the two weights summing to 1. Finally, the fused position of the second vehicle position line segment is determined from the fused line segment parameters, which is simply the inverse of the above operation of determining the first and second line segment parameters. Determining the fused position of the second vehicle position line segment in this way provides a further accuracy guarantee for the parking space recognition result; a minimal sketch of the smoothing step follows.
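A minimal sketch of the weighted smoothing described above, using the line-segment parameters named in the text (angle to the rear-axle coordinate system, distance to the rear-axle center point, segment length). The weights 0.7/0.3 and the circular blending of the angle are illustrative assumptions; the actual preset weights are a design choice left to the implementer.

```python
import math

# Illustrative preset weights: the measured current position is trusted more
# than the predicted position, and the two weights sum to 1.
W_CURRENT, W_PREDICTED = 0.7, 0.3

def fuse_segment_params(current, predicted):
    """current/predicted: dicts with 'angle' (rad, w.r.t. the rear-axle frame),
    'distance' (m, to the rear-axle center point) and 'length' (m)."""
    fused = {}
    for key in ("distance", "length"):
        fused[key] = W_CURRENT * current[key] + W_PREDICTED * predicted[key]
    # Blend the angle on the unit circle so that wrap-around is handled.
    x = W_CURRENT * math.cos(current["angle"]) + W_PREDICTED * math.cos(predicted["angle"])
    y = W_CURRENT * math.sin(current["angle"]) + W_PREDICTED * math.sin(predicted["angle"])
    fused["angle"] = math.atan2(y, x)
    return fused

print(fuse_segment_params(
    {"angle": math.radians(89.0), "distance": 3.05, "length": 5.10},
    {"angle": math.radians(91.0), "distance": 3.00, "length": 5.00}))
```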
S106: and generating a parking space according to the fusion position of each second vehicle line segment.
This step aims at achieving automatic generation of parking spaces. Specifically, after the comparatively accurate fused position of each second vehicle position line segment is obtained, parking spaces can be generated from the second vehicle position line segments on the basis of their fused positions. Of course, the number of generated parking spaces is not necessarily one, so a parking space can subsequently be selected and the automatic parking function realized.
Therefore, in the parking space identification method provided by the embodiment of the application, after the parking space line segments are obtained by recognition from the vehicle environment image, position prediction is carried out on the basis of information such as the initial position, the current position and the change route of each parking space line segment to obtain a predicted position, and the accuracy of the parking space line segment recognition is then determined from the matching result between the current position and the predicted position, so that only the parking space line segments that pass the matching are retained. For each retained parking space line segment, a more accurate line segment is obtained by fusing the predicted position with the current position, and a parking space is generated accordingly. Since the parking space is generated automatically from line segments that have been screened and integrated, the accuracy of the parking space recognition result can be effectively guaranteed, and the vehicle can therefore be parked into the parking space in a more accurate posture.
Based on the above embodiments:
in an embodiment of the present application, the above-mentioned steps of reserving the first vehicle line segment with the current position matching the predicted position in all the first vehicle line segments, before obtaining the second vehicle line segment, may further include:
acquiring the matching passing times of the first vehicle line segment;
judging whether the number of times of matching passing reaches a preset number of times;
if yes, executing the step of reserving, among all the first vehicle position line segments, the first vehicle position line segments whose current position matches the predicted position, to obtain the second vehicle position line segments.
In order to further improve the accuracy of parking space identification, the matching of the current position against the predicted position may be required to succeed multiple times, the first vehicle position line segment only being reserved when it has matched repeatedly. Therefore, after each position matching is completed, the number of times the first vehicle position line segment has passed matching can be counted, and whether this number reaches a preset number of times is judged; the first vehicle position line segment is reserved only when this condition is met. Of course, the value of the preset number of times does not affect the implementation of the technical scheme and is set by the technician according to actual requirements, which is not limited by the application; a small sketch of this gating follows.
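The sketch below keeps one hypothetical per-segment counter and confirms the segment only after a preset number of matching passes. The value of three and the choice of consecutive (rather than cumulative) matches are illustrative assumptions.

```python
class SegmentTrack:
    """Keeps one candidate parking-space line segment and only confirms it
    after it has matched its predicted position a preset number of times."""

    REQUIRED_MATCHES = 3   # illustrative preset number of matching passes

    def __init__(self, segment):
        self.segment = segment
        self.match_count = 0

    def update(self, matched: bool) -> bool:
        """Call once per calculation period; returns True once the segment may
        be kept as a second (confirmed) vehicle position line segment.  Here a
        failed match resets the counter; a cumulative count is equally plausible."""
        self.match_count = self.match_count + 1 if matched else 0
        return self.match_count >= self.REQUIRED_MATCHES

track = SegmentTrack(((5.0, 2.0), (10.0, 2.0)))
print([track.update(m) for m in (True, True, False, True, True, True)])
```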
In an embodiment of the present application, before determining, for each first vehicle position line segment, the initial position of the first vehicle position line segment at the identification time and the current position at the current time to obtain the change route from the identification time to the current time, the method may further include:
screening all the first vehicle position line segments according to a preset screening index to obtain screened first vehicle position line segments; the preset screening index comprises one or more of line segment length, line segment clarity, distance between line segments and region of interest.
It can be understood that the first vehicle position line segments obtained on the basis of deep learning may contain many false identifications. Therefore, in order to effectively improve the efficiency of parking space identification and the accuracy of the identification result, after each first vehicle position line segment in the vehicle environment image is obtained by recognition, all the first vehicle position line segments can be screened according to a preset screening index, and the first vehicle position line segments that do not meet the index standard values are removed. The preset screening index may include, but is not limited to, the line segment length, the line segment clarity, the distance between line segments, the region of interest, and so on. For example, first vehicle position line segments that lie outside the region of interest, that are too short, or whose line clarity is too low can be preferentially rejected; a sketch of such a screening step is given below.
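In the sketch below, every threshold, the rectangular region of interest and the per-segment 'clarity' score are illustrative placeholders; they merely show how the preset screening indices named above could be combined into one filter.

```python
import math

def screen_segments(segments, min_length=1.0, min_clarity=0.5,
                    max_distance=15.0, roi=((-10.0, 10.0), (-10.0, 10.0))):
    """segments: list of dicts with 'p0', 'p1' (metres, vehicle frame) and a
    detector 'clarity' score in [0, 1].  All thresholds are made-up examples."""
    kept = []
    for seg in segments:
        (x0, y0), (x1, y1) = seg["p0"], seg["p1"]
        length = math.hypot(x1 - x0, y1 - y0)
        mid = ((x0 + x1) / 2, (y0 + y1) / 2)
        dist = math.hypot(*mid)                    # distance to the rear-axle center
        in_roi = roi[0][0] <= mid[0] <= roi[0][1] and roi[1][0] <= mid[1] <= roi[1][1]
        if (length >= min_length and seg["clarity"] >= min_clarity
                and dist <= max_distance and in_roi):
            kept.append(seg)
    return kept

candidates = [
    {"p0": (2.0, 1.0), "p1": (7.0, 1.0), "clarity": 0.9},    # kept
    {"p0": (2.0, 1.0), "p1": (2.3, 1.0), "clarity": 0.9},    # too short
    {"p0": (30.0, 1.0), "p1": (35.0, 1.0), "clarity": 0.9},  # outside ROI / too far
]
print(len(screen_segments(candidates)))   # -> 1
```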
In an embodiment of the present application, after fusing, for each second vehicle position line segment, the current position and the predicted position of the second vehicle position line segment to obtain the fused position, the method may further include:
adjusting all the second vehicle line segments according to a preset adjustment rule to obtain adjusted second vehicle line segments; the preset adjustment rules comprise one or more of line segment deletion, line segment combination and line segment extension.
The parking space recognition method provided by the embodiment of the application can further carry out adjustment and correction on the second vehicle position line segments so as to further improve the accuracy of the parking space recognition result. Specifically, after the fused positions of the second vehicle position line segments are obtained, each second vehicle position line segment can be adjusted and corrected according to a preset adjustment rule, where the preset adjustment rule may include, but is not limited to, processing rules such as line segment deletion, line segment merging and line segment extension. For example, a second vehicle position line segment more than a certain distance (e.g., 15 meters) from the center of the vehicle rear axle, or at more than a certain angle (e.g., 175 degrees) to the current vehicle, may be deleted; second vehicle position line segments that are parallel to each other and less than a certain distance (e.g., 0.5 meters) apart may be merged; second vehicle position line segments that would intersect within a certain distance (e.g., 0.5 meters) may be extended to form an intersection; and so on. A sketch of two of these rules is given after this paragraph.
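The sketch below shows only the deletion and extension rules with the example thresholds quoted above (15 m for deletion, 0.5 m for extension); the merging rule and the 175-degree angle criterion are omitted, and the midpoint-distance deletion test is a simplified stand-in.

```python
import numpy as np

MAX_DIST_M = 15.0        # example: drop segments far from the rear-axle center
EXTEND_GAP_M = 0.5       # example: extend segments whose corner is this close

def line_intersection(a0, a1, b0, b1):
    """Intersection of the infinite lines through segments a and b, or None if parallel."""
    d1, d2 = np.asarray(a1) - np.asarray(a0), np.asarray(b1) - np.asarray(b0)
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None
    t = ((b0[0] - a0[0]) * d2[1] - (b0[1] - a0[1]) * d2[0]) / denom
    return np.asarray(a0) + t * d1

def adjust(segments):
    # Rule 1 (deletion): drop segments whose midpoint is too far from the vehicle.
    kept = [s for s in segments
            if np.linalg.norm((np.asarray(s[0]) + np.asarray(s[1])) / 2) <= MAX_DIST_M]
    # Rule 2 (extension): if two segments almost meet, extend both to the corner.
    adjusted = [list(map(np.asarray, s)) for s in kept]
    for i in range(len(adjusted)):
        for j in range(i + 1, len(adjusted)):
            p = line_intersection(*adjusted[i], *adjusted[j])
            if p is None:
                continue
            for seg in (adjusted[i], adjusted[j]):
                gaps = [np.linalg.norm(p - e) for e in seg]
                k = int(np.argmin(gaps))
                if 0 < gaps[k] <= EXTEND_GAP_M:
                    seg[k] = p           # snap the nearest endpoint to the corner
    return adjusted

segs = [((0.0, 0.0), (4.6, 0.0)), ((5.0, 0.3), (5.0, 2.5)), ((40.0, 0.0), (45.0, 0.0))]
print([[e.round(2).tolist() for e in s] for s in adjust(segs)])
```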
In an embodiment of the present application, after generating the parking space according to the fused position of each second vehicle line segment, the method may further include:
carrying out obstacle identification on each parking space to obtain parkable parking spaces without obstacles; wherein the obstacle identification comprises ultrasonic identification and/or visual identification of obstacles;
outputting each parkable parking space.
It will be appreciated that some of the identified parking spaces may be impossible to park in because an obstacle (e.g., a pedestrian, an animal or another vehicle) is present in the space. Therefore, after each parking space is identified, obstacle identification can be carried out on each parking space to obtain the parkable parking spaces in which no obstacle is present; of course, the number of identified parkable parking spaces is likewise not necessarily one. The obstacle identification can be realized using ultrasonic identification and/or visual identification of obstacles: visual identification relies on the image pickup device, and ultrasonic identification relies on ultrasonic probes. Likewise, in order to ensure the accuracy of the identification result, a plurality of ultrasonic probes can be arranged on the vehicle, distributed at the vehicle head, the vehicle tail and the left and right sides of the vehicle, for example two at each position, so that essentially the entire area around the vehicle is covered. A simple sketch of the obstacle check on a candidate space follows.
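The sketch below shows only the decision logic, not the sensor processing: a candidate space is kept as parkable if none of the fused obstacle detections (assumed here to be points already expressed in the vehicle frame) falls inside its quadrilateral.

```python
def point_in_polygon(pt, poly):
    """Ray-casting test for a point inside a simple polygon."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % n]
        if (y0 > y) != (y1 > y):
            x_cross = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
            if x < x_cross:
                inside = not inside
    return inside

def parkable_spaces(spaces, obstacle_points):
    """spaces: list of 4-corner polygons (vehicle frame, metres);
    obstacle_points: fused ultrasonic/visual detections in the same frame."""
    return [sp for sp in spaces
            if not any(point_in_polygon(ob, sp) for ob in obstacle_points)]

spaces = [[(2, 1), (2, 6), (4.5, 6), (4.5, 1)],    # free
          [(5, 1), (5, 6), (7.5, 6), (7.5, 1)]]    # occupied
obstacles = [(6.2, 3.0)]
print(len(parkable_spaces(spaces, obstacles)))     # -> 1
```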
In an embodiment of the present application, after outputting each parking space, the method may further include:
determining a target parking space according to the selection instruction;
determining a parking route according to the current pose of the vehicle and a target parking space;
and controlling the vehicle to drive into the target parking space according to the parking route.
As described above, the number of parkable parking spaces selected from all the identified parking spaces is not necessarily one, so in order to realize automatic parking the user is required to select a target parkable parking space from all the parkable parking spaces. Further, after the user has selected and confirmed the target parkable parking space, the parking route of the vehicle can be planned in combination with the current pose of the vehicle, and the vehicle is controlled to park into the target parkable parking space according to the planned parking route. Of course, several parking routes may be planned to provide references for the parking process and enable more accurate automatic parking. The user's selection of the target parkable parking space can be made through a visual large screen arranged in the vehicle.
In an embodiment of the present application, the controlling the vehicle to drive into the target parkable parking space according to the parking route may include:
Acquiring distance information between a target parking space and a vehicle in real time;
correcting the target parking space according to the distance information to obtain a real-time corrected parking space;
acquiring ultrasonic sensing signals about path obstacles in the running process of the vehicle in real time;
correcting the parking route according to the real-time corrected parking space and the ultrasonic sensing signal to obtain a real-time corrected route;
and controlling the vehicle to drive into the real-time correction parking space according to the real-time correction route.
It can be understood that the precision of a visually identified parking space line segment is related to its position relative to the vehicle: the closer the line segment is to the camera, the higher the precision. In addition, a certain motion error accumulates between the start of parking and the moment the vehicle stops, and an obstacle may suddenly appear in the field of view during parking. On this basis, in order to further ensure the accuracy of the parking result, the target parkable parking space and the parking route can be corrected in real time during the parking process.
Specifically, the distance information between the target parkable parking space and the vehicle, including but not limited to the distance and the relative pose angle, can be acquired in real time, and the target parkable parking space is then corrected according to this distance information to obtain a real-time corrected parking space. At the same time, the ultrasonic probes detect path obstacles in real time to obtain ultrasonic sensing signals, so that the parking route can be corrected in real time using the real-time corrected parking space and the ultrasonic sensing signals to obtain a real-time corrected route, thereby realizing automatic parking of the vehicle. It should be noted that the above process is carried out in real time throughout the parking process; a small sketch of this correction loop follows.
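The correction loop can be summarised as a small iterative blend: each cycle the target-space pose is nudged toward the latest close-range measurement and the remaining route is re-planned from the corrected goal. The blending gain and the straight-line "re-planner" below are purely illustrative stand-ins for the patent's route correction, which additionally reacts to the ultrasonic sensing signals.

```python
import numpy as np

GAIN = 0.4   # illustrative blending gain toward the latest measurement

def correct_goal(planned_goal, measured_goal):
    """Nudge the target-space pose (x, y, yaw) toward the latest measurement."""
    planned_goal, measured_goal = np.asarray(planned_goal), np.asarray(measured_goal)
    return planned_goal + GAIN * (measured_goal - planned_goal)

def replan(vehicle_pose, goal_pose, steps=5):
    """Placeholder re-planner: linear interpolation of poses, standing in for
    the real parking-route correction."""
    return [vehicle_pose + a * (goal_pose - vehicle_pose) for a in np.linspace(0, 1, steps)]

goal = np.array([8.0, 3.0, np.pi / 2])
for measured in ([8.1, 2.9, np.pi / 2 + 0.02], [8.12, 2.88, np.pi / 2 + 0.03]):
    goal = correct_goal(goal, measured)             # real-time corrected parking space
    route = replan(np.array([0.0, 0.0, 0.0]), goal) # real-time corrected route
print(np.round(goal, 3), len(route))
```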
The embodiment of the application provides a parking space recognition system.
The parking space recognition system provided by the embodiment of the application can comprise:
an image pickup apparatus for acquiring a vehicle environment image;
and the controller is used for executing the steps of any parking space identification method according to the vehicle environment image.
In one embodiment of the present application, the image capturing apparatus may be a super-fisheye lens mounted at the vehicle head, the vehicle tail and the left and right rear-view mirror positions.
In one embodiment of the present application, the parking space recognition system may further include:
an ultrasonic probe for acquiring an ultrasonic detection signal about an obstacle;
the ultrasonic probes are arranged at the vehicle head, the vehicle tail and the left and right sides of the vehicle.
For the description of the system provided by the embodiment of the present application, please refer to the above method embodiment, and the description of the present application is omitted here.
The embodiment of the application provides a parking space recognition device.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a parking space recognition device provided by the present application, where the parking space recognition device may include:
an identification module 1, configured to acquire a vehicle environment image and identify a first vehicle line segment in the vehicle environment image;
a determining module 2, configured to determine, for each first vehicle-position line segment, an initial position of the first vehicle-position line segment at the identification time and a current position of the first vehicle-position line segment at the current time, so as to obtain a change route from the identification time to the current time;
The prediction module 3 is used for carrying out position prediction according to the initial position and the change route to obtain a predicted position at the current moment;
a reserving module 4, configured to reserve, among all the first vehicle line segments, a first vehicle line segment whose current position matches with the predicted position, to obtain a second vehicle line segment;
the fusion module 5 is used for fusing the current position and the predicted position of each second vehicle line segment to obtain a fused position;
and the generating module 6 is used for generating a parking space according to the fusion position of each second vehicle line segment.
Therefore, in the parking space recognition device provided by the embodiment of the application, after the parking space line segments are obtained by recognition from the vehicle environment image, position prediction is carried out on the basis of information such as the initial position, the current position and the change route of each parking space line segment to obtain a predicted position, and the accuracy of the parking space line segment recognition is then determined from the matching result between the current position and the predicted position, so that only the parking space line segments that pass the matching are retained. For each retained parking space line segment, a more accurate line segment is obtained by fusing the predicted position with the current position, and a parking space is generated accordingly. Since the parking space is generated automatically from line segments that have been screened and integrated, the accuracy of the parking space recognition result can be effectively guaranteed, and the vehicle can therefore be parked into the parking space in a more accurate posture.
In one embodiment of the present application, the fusion module 5 may be specifically configured to determine, for the second vehicle position line segment, a first line segment parameter at the current position and a second line segment parameter at the predicted position respectively, the line segment parameters comprising an angle between the second vehicle position line segment and the vehicle rear-axle center coordinate system, a distance between the second vehicle position line segment and the vehicle rear-axle center point, and a line segment length of the second vehicle position line segment; smooth the first line segment parameter and the second line segment parameter according to preset weights to obtain a fused line segment parameter; and determine the fused position of the second vehicle position line segment according to the fused line segment parameter.
In an embodiment of the present application, the parking space recognition device may further include a matching statistics module, configured to, before the first vehicle position line segments whose current position matches the predicted position are reserved among all the first vehicle position line segments to obtain the second vehicle position line segments, acquire the number of times the first vehicle position line segment has passed matching; judge whether this number reaches a preset number of times; and if so, execute the step of reserving, among all the first vehicle position line segments, the first vehicle position line segments whose current position matches the predicted position to obtain the second vehicle position line segments.
In an embodiment of the present application, the parking space recognition device may further include a screening module, configured to, before the initial position of each first vehicle position line segment at the identification time and its current position at the current time are determined to obtain the change route from the identification time to the current time, screen all the first vehicle position line segments according to a preset screening index to obtain screened first vehicle position line segments; the preset screening index comprises one or more of line segment length, line segment clarity, distance between line segments and region of interest.
In an embodiment of the present application, the parking space recognition device may further include an adjustment module, configured to, after the current position and the predicted position of each second vehicle position line segment are fused to obtain the fused position, adjust all the second vehicle position line segments according to a preset adjustment rule to obtain adjusted second vehicle position line segments; the preset adjustment rule comprises one or more of line segment deletion, line segment merging and line segment extension.
In an embodiment of the present application, the parking space recognition device may further include an obstacle recognition module, configured to, after the parking spaces are generated according to the fused position of each second vehicle position line segment, carry out obstacle identification on each parking space to obtain the parkable parking spaces without obstacles, wherein the obstacle identification comprises ultrasonic identification and/or visual identification of obstacles, and output each parkable parking space.
In one embodiment of the present application, the parking space recognition device may further include a parking module, configured to determine a target parking space according to the selection instruction after outputting each parking space; determining a parking route according to the current pose of the vehicle and a target parking space; and controlling the vehicle to drive into the target parking space according to the parking route.
In one embodiment of the present application, the parking module may be specifically configured to obtain, in real time, distance information between a target parking space and a vehicle; correcting the target parking space according to the distance information to obtain a real-time corrected parking space; acquiring ultrasonic sensing signals about path obstacles in the running process of the vehicle in real time; correcting the parking route according to the real-time corrected parking space and the ultrasonic sensing signal to obtain a real-time corrected route; and controlling the vehicle to drive into the real-time correction parking space according to the real-time correction route.
For the description of the apparatus provided by the embodiment of the present application, refer to the above method embodiment, and the description of the present application is omitted here.
The embodiment of the application provides electronic equipment.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to the present application, where the electronic device may include:
a memory for storing a computer program;
and the processor is used for realizing the steps of any parking space identification method when executing the computer program.
As shown in fig. 3, which is a schematic diagram of a composition structure of an electronic device, the electronic device may include: a processor 10, a memory 11, a communication interface 12 and a communication bus 13. The processor 10, the memory 11 and the communication interface 12 all complete communication with each other through a communication bus 13.
In an embodiment of the present application, the processor 10 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA) or another programmable logic device, etc.
The processor 10 may call a program stored in the memory 11, and in particular, the processor 10 may perform operations in an embodiment of the parking space recognition method.
The memory 11 is used for storing one or more programs, and the programs may include program codes including computer operation instructions, and in the embodiment of the present application, at least the programs for implementing the following functions are stored in the memory 11:
acquiring a vehicle environment image and identifying a first vehicle line segment in the vehicle environment image;
for each first vehicle position line segment, determining the initial position of the first vehicle position line segment at the identification time and the current position of the first vehicle position line segment at the current time to obtain a change route from the identification time to the current time;
position prediction is carried out according to the initial position and the change route, and a predicted position at the current moment is obtained;
reserving a first vehicle line segment with the current position matched with the predicted position in all the first vehicle line segments to obtain a second vehicle line segment;
For each second vehicle line segment, fusing the current position and the predicted position of the second vehicle line segment to obtain a fused position;
and generating a parking space according to the fusion position of each second vehicle line segment.
In one possible implementation, the memory 11 may include a storage program area and a storage data area, where the storage program area may store an operating system, and at least one application program required for functions, etc.; the storage data area may store data created during use.
In addition, the memory 11 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device or other volatile solid-state storage device.
The communication interface 12 may be an interface of a communication module for interfacing with other devices or systems.
Of course, it should be noted that the structure shown in fig. 3 does not limit the electronic device in the embodiment of the present application; in practical applications the electronic device may include more or fewer components than those shown in fig. 3, or certain components may be combined.
Embodiments of the present application provide a computer-readable storage medium.
The computer readable storage medium provided by the embodiment of the application stores a computer program, and when the computer program is executed by a processor, the steps of any parking space identification method can be realized.
The computer readable storage medium may include: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
For the description of the computer-readable storage medium provided in the embodiment of the present application, refer to the above method embodiment, and the description of the present application is omitted here.
In the description, the embodiments are described in a progressive manner, each embodiment focusing on its differences from the others, so that for identical or similar parts the embodiments may be referred to one another. Since the device disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and the relevant points can be found in the description of the method.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The technical scheme provided by the application is described in detail. The principles and embodiments of the present application have been described herein with reference to specific examples, the description of which is intended only to facilitate an understanding of the method of the present application and its core ideas. It should be noted that it will be apparent to those skilled in the art that the present application may be modified and practiced without departing from the spirit of the present application.
Claims (14)
1. A parking space identification method, characterized by comprising the following steps:
acquiring a vehicle environment image and identifying first parking-space line segments in the vehicle environment image;
for each first parking-space line segment, determining an initial position of the segment at an identification time and a current position of the segment at a current time, to obtain a change route from the identification time to the current time;
performing position prediction according to the initial position and the change route, to obtain a predicted position at the current time;
retaining, among all the first parking-space line segments, those whose current position matches the predicted position, to obtain second parking-space line segments;
for each second parking-space line segment, fusing the current position and the predicted position of the segment to obtain a fused position;
and generating parking spaces according to the fused position of each second parking-space line segment.
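A minimal sketch of the prediction and matching steps in claim 1, under two assumptions that are not spelled out in the claim: the change route is represented as a planar rigid transform of the vehicle frame accumulated between the identification time and the current time, and "matched" means that both observed endpoints lie within a distance tolerance of the predicted endpoints.

```python
import numpy as np

def predict_position(initial: np.ndarray, route_transform: np.ndarray) -> np.ndarray:
    """Propagate 2x2 initial endpoints through a 3x3 homogeneous transform that
    encodes the change route from the identification time to the current time."""
    homogeneous = np.hstack([initial, np.ones((2, 1))])
    return (homogeneous @ route_transform.T)[:, :2]

def positions_match(current: np.ndarray, predicted: np.ndarray,
                    tolerance_m: float = 0.15) -> bool:
    """Retain the segment only if both observed endpoints agree with the prediction."""
    return bool(np.all(np.linalg.norm(current - predicted, axis=1) < tolerance_m))

# Example: the vehicle has yawed 5 degrees and advanced 0.5 m since identification.
theta = np.deg2rad(5.0)
route = np.array([[np.cos(theta), -np.sin(theta), -0.5],
                  [np.sin(theta),  np.cos(theta),  0.0],
                  [0.0,            0.0,            1.0]])
initial = np.array([[1.0, 2.0], [1.0, 4.5]])            # endpoints in metres
observed = predict_position(initial, route) + 0.05      # observation with small error
print(positions_match(observed, predict_position(initial, route)))  # True
```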
2. The parking space identification method according to claim 1, wherein fusing, for each second parking-space line segment, the current position and the predicted position of the segment to obtain a fused position comprises:
for the second parking-space line segment, determining first line segment parameters at the current position and second line segment parameters at the predicted position, respectively; the line segment parameters comprise an angle between the second parking-space line segment and the vehicle rear-axle-centre coordinate system, a distance between the second parking-space line segment and the vehicle rear-axle centre point, and a length of the second parking-space line segment;
smoothing the first line segment parameters and the second line segment parameters according to preset weights to obtain fused line segment parameters;
and determining the fused position of the second parking-space line segment according to the fused line segment parameters.
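Claim 2 parameterises a segment by its angle to the rear-axle coordinate system, its distance to the rear-axle centre point, and its length, then blends the values obtained at the current and predicted positions. A sketch under the assumption that "smoothing according to preset weights" is a fixed convex combination; the weight value is illustrative only.

```python
import numpy as np

def segment_parameters(endpoints: np.ndarray) -> np.ndarray:
    """(angle, distance, length) of a 2x2 endpoint array expressed in the
    rear-axle frame, whose origin is the rear-axle centre point."""
    direction = endpoints[1] - endpoints[0]
    angle = np.arctan2(direction[1], direction[0])      # angle to the rear-axle x-axis
    distance = np.linalg.norm(endpoints.mean(axis=0))   # midpoint distance to the origin
    length = np.linalg.norm(direction)
    return np.array([angle, distance, length])

def fuse_parameters(current: np.ndarray, predicted: np.ndarray,
                    weight_current: float = 0.6) -> np.ndarray:
    """Weighted smoothing of the observed and predicted segment parameters.
    Linear blending of the angles is safe here because matched segments are
    already close to each other, so no angle wrap-around occurs."""
    return (weight_current * segment_parameters(current)
            + (1.0 - weight_current) * segment_parameters(predicted))
```

Reconstructing fused endpoints from the fused parameters (the last step of the claim) additionally needs an anchor point, which the claim leaves open; that step is therefore omitted from the sketch.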
3. The parking space identification method according to claim 1, further comprising, before retaining, among all the first parking-space line segments, those whose current position matches the predicted position to obtain the second parking-space line segments:
acquiring the number of times the first parking-space line segment has passed matching;
judging whether that number reaches a preset number of times;
and if so, executing the step of retaining, among all the first parking-space line segments, those whose current position matches the predicted position to obtain the second parking-space line segments.
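Claim 3 admits a first segment to the retention step only after it has matched its prediction a preset number of times. A minimal sketch assuming a simple per-cycle counter (the reset-on-failure policy and the threshold are assumptions); `segment` can be any object with a `match_count` attribute, such as the `TrackedSegment` used in the earlier sketch.

```python
def update_match_count(segment, matched_now: bool) -> None:
    """Per-cycle bookkeeping: count consecutive successful prediction matches."""
    segment.match_count = segment.match_count + 1 if matched_now else 0

def passes_match_gate(segment, preset_times: int = 3) -> bool:
    """Only segments whose match count has reached the preset number of times
    proceed to the retention step of claim 1."""
    return segment.match_count >= preset_times
```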
4. The parking space identification method according to claim 1, further comprising, before determining, for each first parking-space line segment, the initial position of the segment at the identification time and the current position at the current time to obtain the change route from the identification time to the current time:
screening all the first parking-space line segments according to preset screening indices to obtain screened first parking-space line segments; the preset screening indices comprise one or more of line segment length, line segment clarity, line segment distance and region of interest.
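Claim 4 screens the detected first segments before they are tracked. The sketch below applies all four screening indices; the thresholds and the clarity score (assumed here to be returned by the detector) are illustrative.

```python
import numpy as np

def screen_segments(detections, min_length=1.5, min_clarity=0.5,
                    max_distance=8.0, roi=((-10.0, 10.0), (-10.0, 10.0))):
    """Keep detections that satisfy the preset screening indices: segment length,
    segment clarity, distance to the vehicle, and region of interest.
    Each detection is a (2x2 endpoint array, clarity score) pair."""
    (xmin, xmax), (ymin, ymax) = roi
    kept = []
    for endpoints, clarity in detections:
        length = np.linalg.norm(endpoints[1] - endpoints[0])
        distance = np.linalg.norm(endpoints.mean(axis=0))
        in_roi = bool(np.all((endpoints[:, 0] >= xmin) & (endpoints[:, 0] <= xmax)
                             & (endpoints[:, 1] >= ymin) & (endpoints[:, 1] <= ymax)))
        if (length >= min_length and clarity >= min_clarity
                and distance <= max_distance and in_roi):
            kept.append((endpoints, clarity))
    return kept
```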
5. The parking space identification method according to claim 1, wherein fusing, for each second parking-space line segment, the current position and the predicted position of the segment to obtain a fused position further comprises:
adjusting all the second parking-space line segments according to preset adjustment rules to obtain adjusted second parking-space line segments; the preset adjustment rules comprise one or more of line segment deletion, line segment merging and line segment extension.
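Claim 5 tidies the second segments with deletion, merging and extension rules. The sketch below implements only the deletion and extension rules, since the merge criterion is not specified in the text; all thresholds are illustrative.

```python
import numpy as np

def adjust_segments(segments, min_keep_m=0.5, nominal_length_m=5.0):
    """Apply two of the preset adjustment rules: delete segments that are too
    short to be a slot edge, and extend shorter-than-nominal segments to a
    nominal slot depth along their own direction."""
    adjusted = []
    for endpoints in segments:
        direction = endpoints[1] - endpoints[0]
        length = np.linalg.norm(direction)
        if length < min_keep_m:                       # deletion rule
            continue
        if length < nominal_length_m:                 # extension rule
            endpoints = np.vstack([endpoints[0],
                                   endpoints[0] + direction / length * nominal_length_m])
        adjusted.append(endpoints)
    return adjusted
```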
6. The parking space identification method according to claim 1, further comprising, after generating the parking spaces according to the fused position of each second parking-space line segment:
performing obstacle identification for each parking space to obtain parkable parking spaces free of obstacles, wherein the obstacle identification comprises ultrasonic identification and/or visual identification of obstacles;
and outputting each parkable parking space.
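Claim 6 keeps only the spaces for which the obstacle check is clear. A minimal sketch; the two predicates are hypothetical stand-ins for the ultrasonic and visual obstacle identification, and requiring both to be clear is one reading of the "and/or" in the claim.

```python
def parkable_spaces(spaces, ultrasonic_clear, visual_clear):
    """Return the parking spaces in which neither modality reports an obstacle.
    ultrasonic_clear(space) and visual_clear(space) are hypothetical predicates
    wrapping the ultrasonic probes and the surround-view obstacle detector."""
    return [space for space in spaces
            if ultrasonic_clear(space) and visual_clear(space)]
```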
7. The parking space identification method according to claim 6, further comprising, after outputting each parkable parking space:
determining a target parking space according to a selection instruction;
determining a parking route according to the current pose of the vehicle and the target parking space;
and controlling the vehicle to drive into the target parking space according to the parking route.
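Claim 7 derives a parking route from the current vehicle pose and the target space. A real planner must respect the vehicle's steering geometry; the sketch below is only a geometric placeholder showing the interface implied by the claim, with a pose given as (x, y, heading).

```python
import numpy as np

def plan_parking_route(current_pose, target_pose, n_waypoints=20):
    """Placeholder route: linear interpolation of (x, y, heading) between the
    current pose and a pose centred in the target space. A real implementation
    would generate a kinematically feasible curve (e.g. arcs or clothoids)."""
    current_pose = np.asarray(current_pose, dtype=float)
    target_pose = np.asarray(target_pose, dtype=float)
    steps = np.linspace(0.0, 1.0, n_waypoints)[:, None]
    return current_pose + steps * (target_pose - current_pose)

# Example: drive from the origin to a space 6 m ahead and 2.5 m to the right.
route = plan_parking_route((0.0, 0.0, 0.0), (6.0, -2.5, np.pi / 2))
```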
8. The parking space identification method according to claim 7, wherein controlling the vehicle to drive into the target parking space according to the parking route comprises:
acquiring distance information between the target parking space and the vehicle in real time;
correcting the target parking space according to the distance information to obtain a real-time corrected parking space;
acquiring, in real time, ultrasonic sensing signals of obstacles on the path while the vehicle is travelling;
correcting the parking route according to the real-time corrected parking space and the ultrasonic sensing signals to obtain a real-time corrected route;
and controlling the vehicle to drive into the real-time corrected parking space according to the real-time corrected route.
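Claim 8 re-estimates both the target space and the route while the vehicle is moving. A skeleton of that loop; every callable is a hypothetical stand-in for the ranging, planning and control components named in the claim.

```python
def drive_into_space(target_space, plan_route, read_distance, read_ultrasonic,
                     correct_space, correct_route, send_control, arrived):
    """Real-time correction loop: each cycle measures the distance to the target
    space, corrects the space, reads the ultrasonic obstacle signals, corrects
    the route, and commands the vehicle along the corrected route."""
    space = target_space
    while not arrived(space):
        distance = read_distance(space)          # real-time distance information
        space = correct_space(space, distance)   # real-time corrected parking space
        echoes = read_ultrasonic()               # ultrasonic sensing of path obstacles
        route = correct_route(plan_route(space), echoes)
        send_control(route)                      # follow the real-time corrected route
    return space
```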
9. A parking space identification system, comprising:
an image capture device for acquiring a vehicle environment image;
and a controller for executing the steps of the parking space identification method according to any one of claims 1 to 8 based on the vehicle environment image.
10. The parking space identification system according to claim 9, wherein the image capture device comprises ultra-wide-angle fisheye lenses mounted at the vehicle head, the vehicle tail, and the left and right mirror positions.
11. The parking space identification system according to claim 9, further comprising:
ultrasonic probes for acquiring ultrasonic detection signals of obstacles;
wherein the ultrasonic probes are arranged at the vehicle head, the vehicle tail, and the left and right sides of the vehicle.
12. A parking space identification device, comprising:
an identification module, configured to acquire a vehicle environment image and identify first parking-space line segments in the vehicle environment image;
a determining module, configured to determine, for each first parking-space line segment, the initial position of the segment at the identification time and the current position at the current time, to obtain a change route from the identification time to the current time;
a prediction module, configured to perform position prediction according to the initial position and the change route, to obtain a predicted position at the current time;
a retaining module, configured to retain, among all the first parking-space line segments, those whose current position matches the predicted position, to obtain second parking-space line segments;
a fusion module, configured to fuse, for each second parking-space line segment, the current position and the predicted position of the segment to obtain a fused position;
and a generating module, configured to generate parking spaces according to the fused position of each second parking-space line segment.
13. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the parking space identification method according to any one of claims 1 to 8 when executing the computer program.
14. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the parking space identification method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310235372.1A CN116524457B (en) | 2023-03-13 | 2023-03-13 | Parking space identification method, system, device, electronic equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116524457A CN116524457A (en) | 2023-08-01 |
CN116524457B true CN116524457B (en) | 2023-09-05 |
Family
ID=87405377
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310235372.1A Active CN116524457B (en) | 2023-03-13 | 2023-03-13 | Parking space identification method, system, device, electronic equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116524457B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118097999B (en) * | 2024-04-29 | 2024-08-02 | 知行汽车科技(苏州)股份有限公司 | Parking space identification method, device, equipment and medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7142157B2 (en) * | 2004-09-14 | 2006-11-28 | Sirf Technology, Inc. | Determining position without use of broadcast ephemeris information |
CN114511632A (en) * | 2022-01-10 | 2022-05-17 | 北京经纬恒润科技股份有限公司 | Construction method and device of parking space map |
Also Published As
Publication number | Publication date |
---|---|
CN116524457A (en) | 2023-08-01 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||