CN114566055A - Automatic parking space determining method and system - Google Patents

Automatic parking space determining method and system

Info

Publication number
CN114566055A
Authority
CN
China
Prior art keywords
pixel
coordinate
gray value
target
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011355567.2A
Other languages
Chinese (zh)
Other versions
CN114566055B (en)
Inventor
田甜
苏松恺
张慧豫
左雄一男
张玮
王晨
郑瑜
王孜怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SAIC Motor Corp Ltd
Shanghai Automotive Industry Corp Group
Original Assignee
SAIC Motor Corp Ltd
Shanghai Automotive Industry Corp Group
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SAIC Motor Corp Ltd and Shanghai Automotive Industry Corp Group
Priority to CN202011355567.2A
Publication of CN114566055A
Application granted
Publication of CN114566055B
Legal status: Active

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/14 - Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Abstract

The application provides a method and a system for determining an automatic parking space. When a parking space boundary is detected, a first coordinate and a second coordinate on the boundary are determined; a visual image within a preset range is acquired, and the first coordinate and the second coordinate are located on a coordinate system constructed based on the visual image; the gray values of a first pixel, a second pixel, a third pixel and a fourth pixel are acquired; a first translation distance is determined based on the gray values of the first pixel and the third pixel, and a second translation distance is determined based on the gray values of the second pixel and the fourth pixel; the first coordinate is corrected by the first translation distance to obtain a target first coordinate, the second coordinate is corrected by the second translation distance to obtain a target second coordinate, the boundary of the target vehicle is determined, and the parking space is determined according to the boundary of the target vehicle. The invention can accurately determine an idle parking space.

Description

Automatic parking space determining method and system
Technical Field
The invention belongs to the technical field of intelligent driving, and particularly relates to a method and a system for determining an automatic parking space.
Background
With rapid economic growth the automobile industry is developing quickly, and intelligent parking is one of its fastest-growing areas. An automatic parking system can help a driver find a parking space automatically.
The existing automatic parking system identifies a space parking space with an ultrasonic detector, visually identifies a marking parking space with a camera, and fuses the two to obtain the final parking space. However, identifying the parking space with the ultrasonic detector is limited by the detector's physical properties, so a certain error exists between the detected space parking space and the actual space parking space, and the final parking space obtained by fusing the space parking space with the identified marking parking space is inaccurate.
Disclosure of Invention
In view of this, the present invention provides a method and a system for determining an automatic parking space, so as to achieve the purpose of accurately determining an idle parking space.
The invention discloses a method for determining an automatic parking space in a first aspect, which comprises the following steps:
when a parking space boundary is detected, determining a first coordinate and a second coordinate on the parking space boundary, wherein the first coordinate and the second coordinate are end points at two ends of the parking space boundary;
acquiring a visual image in a preset range, and constructing a coordinate system based on the visual image;
determining a first pixel corresponding to the first coordinate and a second pixel corresponding to the second coordinate on the coordinate system;
acquiring gray values of a first pixel, a second pixel, a third pixel and a fourth pixel, wherein the third pixel is adjacent to the first pixel and is positioned at the same side with the second pixel, and the fourth pixel is adjacent to the second pixel and is positioned at the same side with the first pixel;
determining a first translation distance by which the first pixel is translated based on the grayscale value of the first pixel and the grayscale value of the third pixel;
determining a second translation distance by which the second pixel is translated based on the grayscale value of the second pixel and the grayscale value of the fourth pixel;
correcting the first coordinate according to the first translation distance to obtain a target first coordinate;
correcting the second coordinate according to the second translation distance to obtain a target second coordinate;
and determining a boundary of the target vehicle based on the first target coordinate and the second target coordinate, and determining a parking space according to the boundary of the target vehicle.
Optionally, the determining a first translation distance by which the first pixel is translated based on the gray value of the first pixel and the gray value of the third pixel includes:
judging whether the gray value of the first pixel is smaller than the gray value of the third pixel;
if the gray value of the first pixel is not smaller than the gray value of the third pixel, translating the first pixel by a preset number of pixels towards the direction of the second pixel until the gray value of the adjacent pixel is larger than the gray value of the first pixel, and determining a first translation distance for translating the first pixel;
and if the gray value of the first pixel is smaller than the gray value of the third pixel, determining that the first translation distance of the first pixel translation is 0.
Optionally, the determining a second translation distance by which the second pixel is translated based on the gray-scale value of the second pixel and the gray-scale value of the fourth pixel includes:
judging whether the gray value of the second pixel is smaller than the gray value of the fourth pixel;
if the gray value of the second pixel is not smaller than the gray value of the fourth pixel, translating the second pixel by the preset number of pixels towards the direction of the first pixel until the gray value of the adjacent pixel is larger than the gray value of the second pixel, and determining a second translation distance of the second pixel;
and if the gray value of the second pixel is smaller than the gray value of the fourth pixel, determining that the second translation distance of the second pixel translation is 0.
Optionally, the modifying the first coordinate according to the first translation distance to obtain a target first coordinate includes:
adding the first translation distance to the abscissa of the first coordinate to obtain a first target abscissa, and determining the ordinate of the first coordinate as a first target ordinate;
determining a target first coordinate based on the target first abscissa and the target first ordinate.
Optionally, the correcting the second coordinate according to the second translation distance to obtain a target second coordinate includes:
adding the second translation distance to the abscissa of the second coordinate to obtain a second target abscissa, and determining the ordinate of the second coordinate as a second target ordinate;
determining a target second coordinate based on the target second abscissa and the target second ordinate.
In a second aspect of the present invention, an automatic parking space determining system is disclosed, the system comprising:
the ultrasonic subsystem is used for determining a first coordinate and a second coordinate on the parking space boundary when the parking space boundary is detected; correcting the first coordinate based on the first translation distance to obtain a target first coordinate; correcting the second coordinate based on the second translation distance to obtain a target second coordinate; determining a boundary of a target vehicle based on the first target coordinate and the second target coordinate, and determining a parking space according to the boundary of the target vehicle, wherein the first coordinate and the second coordinate are end points at two ends of the boundary of the parking space;
the visual subsystem is used for acquiring a visual image in a preset range and constructing a coordinate system based on the visual image; determining a first pixel corresponding to the first coordinate and a second pixel corresponding to the second coordinate on the coordinate system; acquiring gray values of a first pixel, a second pixel, a third pixel and a fourth pixel; determining a first translation distance by which the first pixel is translated based on the grayscale value of the first pixel and the grayscale value of the third pixel; determining a second translation distance by which the second pixel is translated based on the gray value of the second pixel and the gray value of a fourth pixel, wherein the third pixel is adjacent to the first pixel and on the same side as the second pixel, and the fourth pixel is adjacent to the second pixel and on the same side as the first pixel.
Optionally, the vision subsystem that determines the first translation distance by which the first pixel is translated based on the gray value of the first pixel and the gray value of the third pixel is specifically configured to:
judging whether the gray value of the first pixel is smaller than the gray value of the third pixel; if the gray value of the first pixel is not smaller than the gray value of the third pixel, translating the first pixel by a preset number of pixels towards the direction of the second pixel until the gray value of the adjacent pixel is larger than the gray value of the first pixel, and determining a first translation distance for translating the first pixel; and if the gray value of the first pixel is smaller than the gray value of the third pixel, determining that the first translation distance of the first pixel translation is 0.
Optionally, the vision subsystem that determines the second translation distance by which the second pixel is translated based on the gray value of the second pixel and the gray value of the fourth pixel is specifically configured to:
judging whether the gray value of the second pixel is smaller than the gray value of the fourth pixel; if the gray value of the second pixel is not smaller than the gray value of the fourth pixel, translating the second pixel by the preset number of pixels towards the direction of the first pixel until the gray value of the adjacent pixel is larger than the gray value of the second pixel, and determining a second translation distance of the second pixel; and if the gray value of the second pixel is smaller than the gray value of the fourth pixel, determining that the second translation distance of the second pixel translation is 0.
Optionally, the vision subsystem that corrects the first coordinate according to the first translation distance to obtain a target first coordinate is specifically configured to:
adding the first translation distance to the abscissa of the first coordinate to obtain a first target abscissa, and determining the ordinate of the first coordinate as a first target ordinate; determining a target first coordinate based on the target first abscissa and the target first ordinate.
Optionally, the vision subsystem is configured to correct the second coordinate according to the second translation distance to obtain a target second coordinate, and is specifically configured to:
adding the second translation distance to the abscissa of the second coordinate to obtain a second target abscissa, and determining the ordinate of the second coordinate as a second target ordinate; determining a target second coordinate based on the target second abscissa and the target second ordinate.
The invention provides a method and a system for determining an automatic parking space. The ultrasonic subsystem detects the parking space boundary and, when the boundary is detected, determines the first coordinate and the second coordinate at its two ends. The vision subsystem acquires a visual image within a preset range, constructs a coordinate system from the acquired image, and determines, on that coordinate system, the first pixel corresponding to the first coordinate and the second pixel corresponding to the second coordinate. It then acquires the gray values of the first pixel, the second pixel, the third pixel (adjacent to the first pixel and on the same side as the second pixel) and the fourth pixel (adjacent to the second pixel and on the same side as the first pixel), determines the first translation distance of the first pixel from the gray values of the first pixel and the third pixel, and determines the second translation distance of the second pixel from the gray values of the second pixel and the fourth pixel. The ultrasonic subsystem then corrects the first coordinate according to the first translation distance determined by the vision subsystem to obtain a more accurate target first coordinate, corrects the second coordinate according to the second translation distance to obtain a more accurate target second coordinate, determines the target vehicle boundary from the corrected target first coordinate and target second coordinate, and determines a more accurate parking space according to the determined boundary of the target vehicle.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the prior art descriptions will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a schematic diagram of an ultrasonic detector receiving echoes during a moving process of a vehicle according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating a boundary error when determining a parking space boundary in a conventional automatic parking system according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an automatic parking space determining system according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a coordinate system constructed based on visual images according to an embodiment of the present invention;
fig. 5 is a flowchart illustrating an automatic parking space determining method according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In this application, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
As described in the background, when the existing automatic parking system determines a parking space, identifying the space parking space with the ultrasonic detector is limited by the detector's physical properties, so a certain error exists between the detected space parking space and the actual space parking space, and the final parking space obtained by fusing the space parking space with the identified marking parking space is inaccurate.
It should be noted that when the ultrasonic detector measures distance it emits a beam whose horizontal spread is about 60 degrees. During driving, even after the vehicle has moved into the idle area, the beam can still strike the body of the vehicle behind, and the ultrasonic detector may then receive an echo, as shown in fig. 1. The automatic parking system therefore assumes that an obstacle is still present, the lateral coordinate of the first boundary coordinate edge0 of the space parking space cannot be obtained accurately, and the resulting final parking space is inaccurate.
The applicant has found through research that when the echo distance value y received by the ultrasonic detector jumps sharply, three points are traced back and, as shown in fig. 2, a boundary coordinate is calculated by interpolation with a trigonometric algorithm; on this basis a technician superimposes a fixed error compensation value according to experience to obtain a more accurate boundary coordinate. However, when the bumper chamfers of different vehicles differ, the fixed error compensation value makes this algorithm very limited: the vehicle boundary cannot be identified accurately, the obtained boundary coordinate still has a certain error, and so does the resulting final parking space.
Therefore, the invention provides the method and the system for determining the automatic parking space, which can accurately determine the free parking space.
Referring to fig. 3, a schematic structural diagram of an automatic parking space determination system 300 according to an embodiment of the present invention is shown, where the automatic parking space determination system 300 includes an ultrasonic subsystem 301 and a vision subsystem 302.
And the ultrasonic subsystem 301 is used for determining a first coordinate and a second coordinate on the parking space boundary when the parking space boundary is detected.
In the embodiment of the present application, during the driving of the vehicle, the ultrasonic subsystem 301 of the automatic parking space determining system 300 detects the space boundary, when the ultrasonic subsystem 301 detects the space boundary, the coordinates of two ends of the detected space boundary are determined (for convenience of distinguishing, the coordinate of one end of the space boundary is referred to as a first coordinate, and the coordinate of the other end of the space boundary is referred to as a second coordinate), and the determined first coordinate and second coordinate are transmitted to the vision subsystem 302.
For example, if the coordinate of the right end of the parking space boundary is taken as the first coordinate, the coordinate of the left end is the second coordinate; conversely, if the coordinate of the left end is taken as the first coordinate, the coordinate of the right end is the second coordinate.
In an embodiment of the present application, the ultrasound subsystem may be an ultrasound probe.
The vision subsystem 302 is configured to acquire a visual image in a preset range, construct a coordinate system based on the acquired visual image, and determine a first pixel corresponding to the first coordinate and a second pixel corresponding to the second coordinate on the constructed coordinate system; acquire the gray values of the first pixel, the second pixel, a third pixel and a fourth pixel; determine a first translation distance by which the first pixel is translated based on the gray value of the first pixel and the gray value of the third pixel; determine a second translation distance by which the second pixel is translated based on the gray value of the second pixel and the gray value of the fourth pixel; and send the first translation distance and the second translation distance to the ultrasonic subsystem 301.
In the embodiment of the present application, the third pixel is adjacent to the first pixel and located at the same side as the second pixel, and the fourth pixel is adjacent to the second pixel and located at the same side as the first pixel.
In the embodiment of the present application, the preset range may be 5 meters, 10 meters, 12 meters, and the like; the specific value of the preset range may be set as needed, and the embodiment of the present application does not limit it.
The ultrasonic subsystem 301 is configured to correct the first coordinate according to the first translation distance sent by the vision subsystem 302 to obtain a first target coordinate, and correct the second coordinate according to the second translation distance to obtain a second target coordinate, so as to determine a boundary of the target vehicle according to the corrected first target coordinate and second target coordinate, and determine a parking space according to the boundary of the target vehicle.
In this embodiment, the vision subsystem 302 may be a camera, and acquires a vision image within a preset range through the camera, constructs a coordinate system with a vehicle rear axle center in the acquired vision image as an origin, and determines a pixel corresponding to a first coordinate (for convenience of distinguishing, the pixel of the first coordinate is referred to as a first pixel) and a pixel corresponding to a second coordinate (for convenience of distinguishing, the pixel of the second coordinate is referred to as a second pixel) in the constructed coordinate system.
The gray value of the first pixel, the gray value of the second pixel, the gray value of the pixel adjacent to the first pixel and on the same side as the second pixel (for convenience of distinction, referred to herein as the third pixel), and the gray value of the pixel adjacent to the second pixel and on the same side as the first pixel (referred to herein as the fourth pixel) are then acquired.
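As a rough illustration of this gray-value acquisition, the sketch below samples the four gray values from one row of a fabricated surround-view frame. Python and OpenCV are assumptions of this sketch (the patent names neither), and every position and gray level is made up for illustration: the dark side regions stand in for the parked vehicles and the bright middle for the road surface.

```python
import numpy as np
import cv2  # OpenCV is assumed only as a convenient way to obtain a gray image

# Fabricated frame: bright road (200) between two dark parked vehicles (60).
image = np.full((480, 640, 3), 200, dtype=np.uint8)
image[:, :150] = 60    # dark region standing in for one parked vehicle
image[:, 490:] = 60    # dark region standing in for the other parked vehicle
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

def gray_at(gray_img, col, row):
    """Gray value of the pixel at the given column and row (arrays are row-major)."""
    return int(gray_img[row, col])

# Illustrative ultrasonic endpoints mapped into the image: edge0 left of edge1.
col0, col1, row0 = 146, 494, 240
g_first, g_third = gray_at(gray, col0, row0), gray_at(gray, col0 + 1, row0)    # third: right neighbour of the first
g_second, g_fourth = gray_at(gray, col1, row0), gray_at(gray, col1 - 1, row0)  # fourth: left neighbour of the second
```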
Optionally, whether the gray value of the first pixel is smaller than the gray value of the third pixel is judged. When the gray value of the first pixel is smaller than the gray value of the third pixel, the area where the third pixel is located can be determined to be a road surface area and the first pixel to be a boundary pixel between the vehicle and the road surface; the first coordinate corresponding to the first pixel can then be regarded as the accurate endpoint coordinate of the parking space boundary, and the first translation distance by which the first pixel is translated is determined to be 0.
If the gray value of the first pixel is not less than the gray value of the third pixel, the area where the third pixel is located can be determined to be a vehicle area; the first pixel is then translated by a preset number of pixels in the direction pointing to the second pixel until the gray value of the adjacent pixel is greater than the gray value of the first pixel, and the first translation distance by which the first pixel is translated is determined according to the following formula:
L=K*l
where L is the first translation distance in mm, l is the translation distance in the visual image in pixels (pix), and K is the relationship coefficient between the two in mm/pix.
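A minimal sketch of this conversion; the numeric value of K below is purely illustrative and would in practice come from the calibration of the vision subsystem's camera:

```python
K_MM_PER_PIX = 16.7  # assumed stand-in for the relationship coefficient K (mm/pix)

def pixel_shift_to_distance(shift_pix: int, k: float = K_MM_PER_PIX) -> float:
    """L = K * l: convert a shift measured in image pixels into millimetres."""
    return k * shift_pix

# e.g. a 4-pixel shift corresponds to 4 * K millimetres
first_translation_mm_example = pixel_shift_to_distance(4)
```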
In the embodiment of the present application, the predetermined number of pixels may be 1 pixel, 2 pixels, 3 pixels, and so on, and the inventor may set the predetermined number of pixels according to his own requirement, which is not limited in the embodiment of the present application.
For example, suppose the preset number of pixels is 1 and the first pixel lies to the left of the second pixel (so the second pixel lies to the right of the first pixel). If the gray value of the first pixel is not less than the gray value of the third pixel, the area where the third pixel is located is determined to be a vehicle area, and the first pixel is translated rightward by 1 pixel at a time until the gray value of the right-adjacent pixel is greater than the gray value of the first pixel. If, after the first pixel has been shifted rightward by 4 pixels, the gray value of the right-adjacent pixel is greater than the gray value of the first pixel, the first translation distance by which the first pixel is translated can be determined to be 4 × K.
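A sketch of the scan just described, under the reading that the shift stops as soon as the neighbour in the scan direction is brighter than the pixel currently reached; the step direction argument, the max_shift safety bound and all names are additions for illustration, not the patent's wording:

```python
def refine_endpoint(gray_row, start_col, step=1, max_shift=50):
    """Shift an endpoint pixel along one image row, one pixel at a time, until
    the next pixel in the scan direction is brighter than the pixel currently
    reached (taken here as the vehicle-to-road transition).
    Returns the shift in pixels; 0 if the immediate neighbour is already brighter.

    step: +1 scans toward the second pixel (first endpoint),
          -1 scans toward the first pixel (second endpoint).
    """
    col, shift = start_col, 0
    while shift < max_shift and 0 <= col + step < len(gray_row):
        if gray_row[col + step] > gray_row[col]:
            break  # neighbour looks like road surface: stop here
        col += step
        shift += 1
    return shift
```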
Optionally, whether the gray value of the second pixel is smaller than the gray value of the fourth pixel is judged. When the gray value of the second pixel is smaller than the gray value of the fourth pixel, the area where the fourth pixel is located can be determined to be a road surface area and the second pixel to be a boundary pixel between the vehicle and the road surface; the second coordinate corresponding to the second pixel can then be regarded as the accurate endpoint coordinate of the parking space boundary, and the second translation distance by which the second pixel is translated is determined to be 0.
If the gray value of the second pixel is not less than the gray value of the fourth pixel, it may be determined that the region where the fourth pixel is located is the vehicle region, the second pixel is translated by a preset number of pixels in the direction pointing to the first pixel until the gray value of the adjacent pixel is greater than the gray value of the second pixel, and a second translation distance of the second pixel is determined, where a manner of determining the second translation distance of the second pixel may refer to the above formula, and details are not repeated here.
For example, again with a preset number of 1 pixel and the first pixel to the left of the second pixel: if the gray value of the second pixel is not less than the gray value of the fourth pixel, the area where the fourth pixel is located is determined to be a vehicle area, and the second pixel is translated leftward by 1 pixel at a time until the gray value of the left-adjacent pixel is greater than the gray value of the second pixel. If, after the second pixel has been shifted leftward by 2 pixels, the gray value of the left-adjacent pixel is greater than the gray value of the second pixel, the second translation distance by which the second pixel is translated can be determined to be 2 × K.
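Continuing the sketches above (the fabricated frame, refine_endpoint and pixel_shift_to_distance), the leftward scan for the second pixel is the same routine with the opposite step; with the fabricated gray values the two shifts happen to come out as 3 and 4 pixels:

```python
gray_row = gray[row0]                                     # image row containing both endpoints
shift_first = refine_endpoint(gray_row, col0, step=+1)    # first pixel scans toward the second -> 3
shift_second = refine_endpoint(gray_row, col1, step=-1)   # second pixel scans toward the first -> 4
first_translation_mm = pixel_shift_to_distance(shift_first)    # L = K * l
second_translation_mm = pixel_shift_to_distance(shift_second)
```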
In the embodiment of the present application, after the vision subsystem 302 determines the first translation distance of the first pixel and the second translation distance of the second pixel, the determined first translation distance and second translation distance are sent to the ultrasonic subsystem 301, so that the ultrasonic subsystem 301 adds the first translation distance to the abscissa of the first coordinate to obtain the target first abscissa and determines the ordinate of the first coordinate as the target first ordinate; determines the target first coordinate based on the target first abscissa and the target first ordinate; adds the second translation distance to the abscissa of the second coordinate to obtain the target second abscissa and determines the ordinate of the second coordinate as the target second ordinate; determines the target second coordinate based on the target second abscissa and the target second ordinate; determines the parking space boundary based on the target first coordinate and the target second coordinate; and determines the parking space according to the determined parking space boundary.
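The correction itself is then a one-line update per endpoint. The sketch below keeps the sign convention exactly as stated above (the translation distance is added to the abscissa in both cases); the endpoint coordinates are illustrative stand-ins for the ultrasonic output, continuing the earlier sketches.

```python
def correct_endpoint(coord, translation_mm):
    """Target coordinate: translation distance added to the abscissa, ordinate kept."""
    x, y = coord
    return (x + translation_mm, y)

# Illustrative ultrasonic endpoints in millimetres, corrected with the distances above.
target_first = correct_endpoint((1000.0, 500.0), first_translation_mm)
target_second = correct_endpoint((3400.0, 500.0), second_translation_mm)
```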
The invention provides an automatic parking space determining system. The ultrasonic subsystem detects the parking space boundary and, when the boundary is detected, determines the first coordinate and the second coordinate at its two ends. The vision subsystem acquires a visual image within a preset range, constructs a coordinate system from the acquired image, and determines, on that coordinate system, the first pixel corresponding to the first coordinate and the second pixel corresponding to the second coordinate. It then acquires the gray values of the first pixel, the second pixel, the third pixel (adjacent to the first pixel and on the same side as the second pixel) and the fourth pixel (adjacent to the second pixel and on the same side as the first pixel), determines the first translation distance of the first pixel from the gray values of the first pixel and the third pixel, and determines the second translation distance of the second pixel from the gray values of the second pixel and the fourth pixel. The ultrasonic subsystem corrects the first coordinate with the first translation distance determined by the vision subsystem to obtain a more accurate target first coordinate, corrects the second coordinate with the second translation distance to obtain a more accurate target second coordinate, determines the target vehicle boundary from the corrected target first coordinate and target second coordinate, and determines a more accurate parking space according to the determined boundary of the target vehicle.
For a better understanding of the above, the following examples are given.
For example, during the driving of the vehicle, the parking space boundary is detected by the ultrasonic subsystem 301, when the parking space boundary is detected by the ultrasonic subsystem 301, the coordinates of the left end of the parking space boundary are referred to as first coordinates P0(X0, Y0), the coordinates of the right end of the parking space boundary are referred to as second coordinates P1(X1, Y1), and the first coordinates and the second coordinates are transmitted to the vision subsystem 302.
A visual image within a range of 5 meters is acquired through the vision subsystem 302, a coordinate system is constructed with the vehicle rear axle center in the acquired visual image as an origin, and a first pixel edge0 corresponding to a first coordinate and a second pixel edge1 corresponding to a second coordinate are determined on the constructed coordinate system, as shown in fig. 4.
Acquiring a gray value of a first pixel edge0, a gray value of a second pixel edge1, a gray value of a third pixel adjacent to the right of the first pixel, and a gray value of a fourth pixel adjacent to the left of the second pixel; judging whether the gray value of the first pixel edge0 is smaller than the gray value of the third pixel, if the gray value of the first pixel is not smaller than the gray value of the third pixel, determining that the area where the third pixel is located is a vehicle area, and translating the first pixel rightward by 1 pixel until the gray value of the pixel adjacent to the right is larger than the gray value of the first pixel; if the first pixel is shifted rightward by 3 pixels, the gray value of the right adjacent pixel is greater than the gray value of the first pixel, and it can be determined that the first shift distance of the first pixel shift is 3 × K.
Whether the gray value of the second pixel edge1 is smaller than the gray value of the fourth pixel is judged. If the gray value of the second pixel is not smaller than the gray value of the fourth pixel, the area where the fourth pixel is located is determined to be a vehicle area, and the second pixel is translated leftward by 1 pixel at a time until the gray value of the left-adjacent pixel is greater than the gray value of the second pixel. If, after the second pixel has been shifted leftward by 4 pixels, the gray value of the left-adjacent pixel is greater than the gray value of the second pixel, the second translation distance by which the second pixel is translated can be determined to be 4 × K, and the first translation distance and the second translation distance are sent to the ultrasonic subsystem 301.
The ultrasonic subsystem 301 adds the first translation distance 3 × K to the abscissa X0 of the first coordinate P0(X0, Y0) to obtain the target first abscissa (X0 + 3 × K), and determines the ordinate Y0 of the first coordinate as the target first ordinate; the target first coordinate (X0 + 3 × K, Y0) is determined based on the target first abscissa (X0 + 3 × K) and the target first ordinate Y0. The second translation distance 4 × K is added to the abscissa X1 of the second coordinate P1(X1, Y1) to obtain the target second abscissa (X1 + 4 × K), and the ordinate Y1 of the second coordinate is determined as the target second ordinate; the target second coordinate (X1 + 4 × K, Y1) is determined based on the target second abscissa (X1 + 4 × K) and the target second ordinate Y1. The parking space boundary is then determined based on the target first coordinate and the target second coordinate, and the parking space is determined according to the determined parking space boundary.
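Plugging the worked example into the same correction, with K and the P0/P1 values treated as stand-ins (the numbers below are assumed, not taken from the patent):

```python
K = 16.7                   # assumed stand-in for the mm/pix coefficient K
X0, Y0 = 1000.0, 500.0     # assumed stand-in for P0(X0, Y0)
X1, Y1 = 3400.0, 500.0     # assumed stand-in for P1(X1, Y1)

target_first = (X0 + 3 * K, Y0)    # first translation distance 3 × K
target_second = (X1 + 4 * K, Y1)   # second translation distance 4 × K
```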
Based on the automatic parking space determining system shown in fig. 3, the invention also discloses an automatic parking space determining method applied to the automatic parking space determining system. As shown in fig. 5, which is a flow diagram of the automatic parking space determining method provided by the embodiment of the invention, the method comprises the following steps:
s501: when the parking space boundary is detected, a first coordinate and a second coordinate on the parking space boundary are determined, wherein the first coordinate and the second coordinate are end points at two ends of the parking space boundary.
In the specific implementation of step S501, during the driving of the vehicle, the parking space boundary is detected by the ultrasonic subsystem in the automatic parking space determination system, when the parking space boundary is detected by the ultrasonic subsystem, the coordinates of two ends of the detected parking space boundary are determined (for the convenience of distinguishing, the coordinate of one end of the parking space boundary is referred to as a first coordinate, and the coordinate of the other end of the parking space boundary is referred to as a second coordinate), and the determined first coordinate and the determined second coordinate are sent to the vision subsystem.
S502: and acquiring a visual image in a preset range, and constructing a coordinate system based on the visual image.
In the process of specifically executing step S502, the vision subsystem may be a camera, and the camera collects a vision image within a preset range, and a coordinate system is constructed with a vehicle rear axle center in the collected vision image as an origin.
S503: And determining a first pixel corresponding to the first coordinate and a second pixel corresponding to the second coordinate on the coordinate system.
In the specific implementation of S503, after the vision subsystem constructs a coordinate system based on the acquired visual image, a pixel corresponding to the first coordinate (for convenience of distinction, the pixel of the first coordinate is referred to as a first pixel) and a pixel corresponding to the second coordinate (for convenience of distinction, the pixel of the second coordinate is referred to as a second pixel) are determined in the constructed coordinate system.
S504: the method comprises the steps of obtaining gray values of a first pixel, a second pixel, a third pixel and a fourth pixel, wherein the third pixel is adjacent to the first pixel and is positioned at the same side with the second pixel, and the fourth pixel is adjacent to the second pixel and is positioned at the same side with the first pixel.
S505: a first translation distance by which the first pixel is translated is determined based on the gray value of the first pixel and the gray value of the third pixel.
In the process of specifically executing step S505, the vision subsystem judges whether the gray value of the first pixel is smaller than the gray value of the third pixel. When the gray value of the first pixel is smaller than the gray value of the third pixel, the area where the third pixel is located can be determined to be a road surface area and the first pixel to be a boundary pixel between the vehicle and the road surface; the first coordinate corresponding to the first pixel can then be regarded as the accurate endpoint coordinate of the parking space boundary, and the first translation distance by which the first pixel is translated is determined to be 0.
If the gray value of the first pixel is not less than the gray value of the third pixel, it may be determined that the region where the third pixel is located is the vehicle region, the first pixel is translated by a preset number of pixels in the direction pointing to the second pixel until the gray value of the adjacent pixel is greater than the gray value of the first pixel, a first translation distance for translating the first pixel is determined, and the determined first translation distance is sent to the ultrasound subsystem, where a manner of determining the first translation distance for translating the first pixel may refer to the above formula, and details are not repeated here.
S506: a second translation distance by which the second pixel is translated is determined based on the gray value of the second pixel and the gray value of the fourth pixel.
In the process of specifically executing step S506, the vision subsystem judges whether the gray value of the second pixel is smaller than the gray value of the fourth pixel. When the gray value of the second pixel is smaller than the gray value of the fourth pixel, the area where the fourth pixel is located can be determined to be a road surface area and the second pixel to be a boundary pixel between the vehicle and the road surface; the second coordinate corresponding to the second pixel can then be regarded as the accurate endpoint coordinate of the parking space boundary, and the second translation distance by which the second pixel is translated is determined to be 0.
If the gray value of the second pixel is not less than the gray value of the fourth pixel, it may be determined that the region where the fourth pixel is located is the vehicle region, the second pixel is translated by a preset number of pixels in the direction pointing to the first pixel until the gray value of the adjacent pixel is greater than the gray value of the second pixel, a second translation distance of the second pixel translation is determined, and the determined second translation distance is sent to the ultrasound subsystem, where a manner of determining the second translation distance of the second pixel may refer to the above formula, and details are not repeated here.
S507: and correcting the first coordinate according to the first translation distance to obtain a first target coordinate.
In the specific process of executing step S507, the ultrasonic subsystem adds the first translation distance to the abscissa of the first coordinate to obtain a first target abscissa, and determines the ordinate of the first coordinate as a first target ordinate; a target first coordinate is determined based on the target first abscissa and the target first ordinate.
S508: and correcting the second coordinate according to the second translation distance to obtain a target second coordinate.
In the process of specifically executing step S508, the ultrasonic subsystem adds the second translation distance to the abscissa of the second coordinate to obtain a second target abscissa, determines the ordinate of the second coordinate as a second target ordinate, and determines the second target coordinate based on the second target abscissa and the second target ordinate.
S509: and determining the boundary of the target vehicle based on the first target coordinate and the second target coordinate, and determining the parking space according to the boundary of the target vehicle.
In the specific process of executing step S509, the ultrasonic subsystem corrects the first coordinate according to the first translation distance to obtain a target first coordinate, corrects the second coordinate according to the second translation distance to obtain a target second coordinate, determines a target vehicle boundary based on the target first coordinate and the target second coordinate, and determines a parking space according to the target vehicle boundary.
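Folding steps S501 to S509 together, a compact, self-contained sketch of the whole flow might look as follows; everything here (scale factor, gray-value row, endpoint columns, stopping test, safety bound) is an assumption made for illustration, not the patent's reference implementation.

```python
K = 16.7          # assumed mm-per-pixel coefficient from camera calibration
MAX_SHIFT = 50    # safety bound added for the sketch; not part of the method

def refine(gray_row, col, direction):
    """S505/S506: shift an endpoint pixel until the next pixel in the scan
    direction is brighter than the pixel currently reached; return the shift."""
    shift = 0
    while shift < MAX_SHIFT and 0 <= col + direction < len(gray_row):
        if gray_row[col + direction] > gray_row[col]:
            break
        col += direction
        shift += 1
    return shift

def determine_space(p0, p1, gray_row, col0, col1):
    """S507-S509: convert the pixel shifts with L = K * l, correct both
    ultrasonic coordinates, and return the target vehicle boundary endpoints."""
    l1 = K * refine(gray_row, col0, +1)   # first pixel scans toward the second
    l2 = K * refine(gray_row, col1, -1)   # second pixel scans toward the first
    return (p0[0] + l1, p0[1]), (p1[0] + l2, p1[1])

# Fabricated gray-value row: dark vehicle regions (60) flanking bright road (200).
row = [60] * 150 + [200] * 200 + [60] * 150
print(determine_space((1000.0, 500.0), (3400.0, 500.0), row, 146, 354))
# -> ((1000.0 + 3*K, 500.0), (3400.0 + 4*K, 500.0))
```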
The invention provides an automatic parking space determining method. The ultrasonic subsystem detects the parking space boundary and, when the boundary is detected, determines the first coordinate and the second coordinate at its two ends. The vision subsystem acquires a visual image within a preset range, constructs a coordinate system from the acquired image, and determines, on that coordinate system, the first pixel corresponding to the first coordinate and the second pixel corresponding to the second coordinate. It then acquires the gray values of the first pixel, the second pixel, the third pixel (adjacent to the first pixel and on the same side as the second pixel) and the fourth pixel (adjacent to the second pixel and on the same side as the first pixel), determines the first translation distance of the first pixel from the gray values of the first pixel and the third pixel, and determines the second translation distance of the second pixel from the gray values of the second pixel and the fourth pixel. The ultrasonic subsystem corrects the first coordinate with the first translation distance determined by the vision subsystem to obtain a more accurate target first coordinate, corrects the second coordinate with the second translation distance to obtain a more accurate target second coordinate, determines the target vehicle boundary from the corrected target first coordinate and target second coordinate, and determines a more accurate parking space according to the determined boundary of the target vehicle.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the system or system embodiments are substantially similar to the method embodiments and therefore are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for related points. The above-described system and system embodiments are only illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and these modifications and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. An automatic parking space determination method is applied to an automatic parking space determination system, and comprises the following steps:
when a parking space boundary is detected, determining a first coordinate and a second coordinate on the parking space boundary, wherein the first coordinate and the second coordinate are end points at two ends of the parking space boundary;
acquiring a visual image in a preset range, and constructing a coordinate system based on the visual image;
determining a first pixel corresponding to the first coordinate and a second pixel corresponding to the second coordinate on the coordinate system;
acquiring gray values of a first pixel, a second pixel, a third pixel and a fourth pixel, wherein the third pixel is adjacent to the first pixel and is positioned at the same side with the second pixel, and the fourth pixel is adjacent to the second pixel and is positioned at the same side with the first pixel;
determining a first translation distance by which the first pixel is translated based on the gray value of the first pixel and the gray value of the third pixel;
determining a second translation distance by which the second pixel is translated based on the grayscale value of the second pixel and the grayscale value of the fourth pixel;
correcting the first coordinate according to the first translation distance to obtain a target first coordinate;
correcting the second coordinate according to the second translation distance to obtain a target second coordinate;
and determining a boundary of the target vehicle based on the first target coordinate and the second target coordinate, and determining a parking space according to the boundary of the target vehicle.
2. The method of claim 1, wherein determining the first translation distance for the first pixel translation based on the grayscale value of the first pixel and the grayscale value of the third pixel comprises:
judging whether the gray value of the first pixel is smaller than the gray value of the third pixel;
if the gray value of the first pixel is not smaller than the gray value of the third pixel, translating the first pixel by a preset number of pixels towards the direction of the second pixel until the gray value of the adjacent pixel is larger than the gray value of the first pixel, and determining a first translation distance for translating the first pixel;
and if the gray value of the first pixel is smaller than the gray value of the third pixel, determining that the first translation distance of the first pixel translation is 0.
3. The method of claim 1, wherein determining a second translation distance by which the second pixel is translated based on the grayscale value of the second pixel and the grayscale value of the fourth pixel comprises:
judging whether the gray value of the second pixel is smaller than the gray value of the fourth pixel;
if the gray value of the second pixel is not smaller than the gray value of the fourth pixel, translating the second pixel by the preset number of pixels towards the direction of the first pixel until the gray value of the adjacent pixel is larger than the gray value of the second pixel, and determining a second translation distance of the second pixel;
and if the gray value of the second pixel is smaller than the gray value of the fourth pixel, determining that the second translation distance of the second pixel translation is 0.
4. The method of claim 1, wherein said modifying the first coordinate according to the first translation distance to obtain a target first coordinate comprises:
adding the first translation distance to the abscissa of the first coordinate to obtain a first target abscissa, and determining the ordinate of the first coordinate as a first target ordinate;
determining a target first coordinate based on the target first abscissa and the target first ordinate.
5. The method of claim 1, wherein the modifying the second coordinate according to the second translation distance to obtain a target second coordinate comprises:
adding the second translation distance to the abscissa of the second coordinate to obtain a second target abscissa, and determining the ordinate of the second coordinate as a second target ordinate;
determining a target second coordinate based on the target second abscissa and the target second ordinate.
6. An automated parking space determination system, comprising:
the ultrasonic subsystem is used for determining a first coordinate and a second coordinate on the parking space boundary when the parking space boundary is detected; correcting the first coordinate based on the first translation distance to obtain a target first coordinate; correcting the second coordinate based on the second translation distance to obtain a target second coordinate; determining a boundary of a target vehicle based on the first target coordinate and the second target coordinate, and determining a parking space according to the boundary of the target vehicle, wherein the first coordinate and the second coordinate are end points at two ends of the boundary of the parking space;
the visual subsystem is used for acquiring a visual image in a preset range and constructing a coordinate system based on the visual image; determining a first pixel corresponding to the first coordinate and a second pixel corresponding to the second coordinate on the coordinate system; acquiring gray values of a first pixel, a second pixel, a third pixel and a fourth pixel; determining a first translation distance by which the first pixel is translated based on the gray value of the first pixel and the gray value of the third pixel; determining a second translation distance by which the second pixel is translated based on the gray value of the second pixel and the gray value of a fourth pixel, wherein the third pixel is adjacent to the first pixel and on the same side as the second pixel, and the fourth pixel is adjacent to the second pixel and on the same side as the first pixel.
7. The system of claim 6, wherein the vision subsystem that determines the first translation distance of the first pixel translation based on the gray value of the first pixel and the gray value of the third pixel is specifically configured to:
judging whether the gray value of the first pixel is smaller than the gray value of the third pixel; if the gray value of the first pixel is not smaller than the gray value of the third pixel, translating the first pixel by a preset number of pixels towards the direction of the second pixel until the gray value of the adjacent pixel is larger than the gray value of the first pixel, and determining a first translation distance for translating the first pixel; and if the gray value of the first pixel is smaller than the gray value of the third pixel, determining that the first translation distance of the first pixel translation is 0.
8. The system of claim 6, wherein the vision subsystem that determines the second translation distance of the second pixel translation based on the grayscale value of the second pixel and the grayscale value of the fourth pixel is specifically configured to:
judging whether the gray value of the second pixel is smaller than the gray value of the fourth pixel; if the gray value of the second pixel is not smaller than the gray value of the fourth pixel, translating the second pixel by the preset number of pixels towards the direction of the first pixel until the gray value of the adjacent pixel is larger than the gray value of the second pixel, and determining a second translation distance of the second pixel; and if the gray value of the second pixel is smaller than the gray value of the fourth pixel, determining that the second translation distance of the second pixel translation is 0.
9. The system of claim 6, wherein the vision subsystem that corrects the first coordinate based on the first translation distance to obtain a target first coordinate is specifically configured to:
adding the first translation distance to the abscissa of the first coordinate to obtain a first target abscissa, and determining the ordinate of the first coordinate as a first target ordinate; determining a target first coordinate based on the target first abscissa and the target first ordinate.
10. The system of claim 6, wherein the vision subsystem that corrects the second coordinate based on the second translation distance to obtain a target second coordinate is specifically configured to:
adding the second translation distance to the abscissa of the second coordinate to obtain a second target abscissa, and determining the ordinate of the second coordinate as a second target ordinate; determining a target second coordinate based on the target second abscissa and the target second ordinate.
CN202011355567.2A 2020-11-27 2020-11-27 Automatic parking space determining method and system Active CN114566055B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011355567.2A CN114566055B (en) 2020-11-27 2020-11-27 Automatic parking space determining method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011355567.2A CN114566055B (en) 2020-11-27 2020-11-27 Automatic parking space determining method and system

Publications (2)

Publication Number Publication Date
CN114566055A (en) 2022-05-31
CN114566055B (en) 2023-06-16

Family

ID=81711691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011355567.2A Active CN114566055B (en) 2020-11-27 2020-11-27 Automatic parking space determining method and system

Country Status (1)

Country Link
CN (1) CN114566055B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101807352A (en) * 2010-03-12 2010-08-18 北京工业大学 Method for detecting parking stalls on basis of fuzzy pattern recognition
CN109067925A (en) * 2018-10-09 2018-12-21 北京智行者科技有限公司 A kind of method and system of parking of remote control
CN110689761A (en) * 2019-12-11 2020-01-14 上海赫千电子科技有限公司 Automatic parking method
CN110861639A (en) * 2019-11-28 2020-03-06 安徽江淮汽车集团股份有限公司 Parking information fusion method and device, electronic equipment and storage medium
CN111462522A (en) * 2020-04-04 2020-07-28 东风汽车集团有限公司 Visual parking space detection method capable of eliminating influence of strong ground reflected light


Also Published As

Publication number Publication date
CN114566055B (en) 2023-06-16

Similar Documents

Publication Publication Date Title
KR102249769B1 (en) Estimation method of 3D coordinate value for each pixel of 2D image and autonomous driving information estimation method using the same
US8126210B2 (en) Vehicle periphery monitoring device, vehicle periphery monitoring program, and vehicle periphery monitoring method
CN109446886B (en) Obstacle detection method, device, equipment and storage medium based on unmanned vehicle
US20010002936A1 (en) Image recognition system
US20080106462A1 (en) Object detection system and object detection method
CN112489136B (en) Calibration method, position determination device, electronic equipment and storage medium
CN107437264B (en) Automatic detection and correction method for external parameters of vehicle-mounted camera
DE102015115743B4 (en) Detection system for detecting an object and method for performing a detection of an object
CN110764110B (en) Path navigation method, device and computer readable storage medium
US9802539B2 (en) Distance and direction estimation of a target point from a vehicle using monocular video camera
CN110068826B (en) Distance measurement method and device
CN114758005B (en) Laser radar and camera external parameter calibration method and device
JP7137464B2 (en) Camera calibration device, camera calibration method, and program
CN114919584A (en) Motor vehicle fixed point target distance measuring method and device and computer readable storage medium
CN111538008A (en) Transformation matrix determining method, system and device
CN110796604A (en) Image correction method and device
CN114566055A (en) Automatic parking space determining method and system
KR101734726B1 (en) Method of tracking parking space and apparatus performing the same
CN112215214A (en) Method and system for adjusting camera offset of intelligent vehicle-mounted terminal
CN108398682B (en) Radar image display method and device and reversing radar system
JP4462533B2 (en) Road lane detection device
CN110836656B (en) Anti-shake distance measuring method and device for monocular ADAS (adaptive Doppler analysis System) and electronic equipment
CN109840894B (en) Disparity map refining method and device and storage medium
CN111611964A (en) Distance detection method and device
CN113494927A (en) Vehicle multi-sensor calibration method and device and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant