CN114495572A - Vehicle and parking space determining device and method thereof - Google Patents

Vehicle and parking space determining device and method thereof

Info

Publication number
CN114495572A
CN114495572A (application number CN202011149473.XA)
Authority
CN
China
Prior art keywords
image
area
parking space
obstacle
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011149473.XA
Other languages
Chinese (zh)
Inventor
廖顽强
徐才玲
赵子元
许晨昱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Faurecia Clarion Electronics Xiamen Co Ltd
Original Assignee
Faurecia Clarion Electronics Xiamen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Faurecia Clarion Electronics Xiamen Co Ltd filed Critical Faurecia Clarion Electronics Xiamen Co Ltd
Priority to CN202011149473.XA
Publication of CN114495572A
Legal status: Pending

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G08G1/168 Driving aids for parking, e.g. acoustic or visual feedback on parking space
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G08G1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967 Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096708 Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
    • G08G1/096725 Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control where the received information generates an automatic action on the vehicle control
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/14 Traffic control systems for road vehicles indicating individual free spaces in parking areas

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Atmospheric Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the application provides a vehicle and a parking space determining device and method thereof, relating to the field of data processing and capable of accurately determining an available parking space according to the environment around the vehicle. The device includes an environment sensor, a camera module and a processing module. The environment sensor is arranged on the vehicle and used for detecting roads and obstacles within a preset range around the vehicle so as to acquire first road information and first obstacle information; the camera module is arranged on the vehicle and used for photographing the preset range and identifying the roads and obstacles in it so as to acquire second road information and second obstacle information; the processing module is arranged on the vehicle, connected with the environment sensor and the camera module, and used for generating an environment image corresponding to the preset range according to the first road information, the first obstacle information, the second road information and the second obstacle information; the processing module is also used for determining the available parking space of the vehicle according to the environment image.

Description

Vehicle and parking space determining device and method thereof
Technical Field
The invention relates to the field of data processing, in particular to a vehicle and a parking space determining device and method thereof.
Background
In existing automatic parking systems, sonar has the defect that obstacles in certain specific scenes (such as curbs and small obstacles at the rear end of a parking space) cannot be reliably recognized. Where a curb is present, if the curb cannot be effectively detected, its position cannot be taken into account in parking space planning and path planning, so the vehicle may stop too close to or too far from the curb. When an obstacle at the rear end of the parking space cannot be identified, a space other than a parking space, such as a low flower bed with a small tree, may be mistaken for a parking space, and the automatic parking assistance system cannot fulfill its intended role.
Disclosure of Invention
The embodiment of the invention provides a vehicle and a parking space determining device and method thereof, which can accurately determine available parking spaces for parking according to the surrounding environment of the vehicle.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
in a first aspect, a parking space determining apparatus is provided, including: an environment sensor, a camera module and a processing module. The environment sensor is arranged on the vehicle and used for detecting roads and obstacles within a preset range around the vehicle so as to acquire first road information and first obstacle information; the camera module is arranged on the vehicle and used for photographing the preset range and identifying the roads and obstacles in it so as to acquire second road information and second obstacle information; the processing module is arranged on the vehicle, connected with the environment sensor and the camera module, and used for generating an environment image corresponding to the preset range according to the first road information, the first obstacle information, the second road information and the second obstacle information; the processing module is also used for determining the available parking space of the vehicle according to the environment image.
In the technical scheme provided by this embodiment, the road information and the obstacle information in the environment around the vehicle are acquired by the environment sensor and the camera module respectively, and the two sets of information are then combined into an environment image of a certain range around the vehicle. The available parking space of the vehicle can then be accurately determined from the environment image. Compared with the prior art, the technical scheme provided by the embodiment of the application determines the parking space more accurately.
In a second aspect, a parking space determining method is provided, which is applied to the parking space determining apparatus provided in the first aspect, and includes: detecting roads and obstacles within a preset range around the vehicle to acquire first road information and first obstacle information; photographing the preset range and identifying the roads and obstacles in it to acquire second road information and second obstacle information; generating an environment image corresponding to the preset range according to the first road information, the first obstacle information, the second road information and the second obstacle information; and determining the available parking space of the vehicle according to the environment image.
In a third aspect, a parking space determining apparatus is provided, which includes an environment sensor, a camera module, a peripheral interface, a memory, a processor, a bus, and a communication interface; the environment sensor and the camera module are connected with a peripheral interface through a bus, the memory is used for storing computer execution instructions, and the peripheral interface, the processor and the memory are connected through the bus; when the parking space determining device is operated, the processor executes the computer execution instructions stored in the memory, so that the parking space determining device executes the parking space determining method provided by the second aspect.
In a fourth aspect, a computer-readable storage medium is provided, which includes computer-executable instructions, and when the computer-executable instructions are executed on a computer, the computer is caused to execute the parking space determination method provided in the second aspect.
In a fifth aspect, a vehicle is provided, which includes the parking space determination device provided in the first aspect.
It can be understood that the solutions provided in the second aspect to the fifth aspect include the same technical features as the technical solution provided in the first aspect and achieve the same technical effects; for these effects, reference may be made to the relevant description of the first aspect, and details are not repeated here.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a parking space determining device according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of another parking space determination device according to an embodiment of the present application;
fig. 3 is a usage scene diagram of a parking space determining device according to an embodiment of the present application;
fig. 4 is a first flowchart illustrating a process of generating an environment image according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of an image transformation provided in an embodiment of the present application;
fig. 6 is a schematic flowchart illustrating a second process of generating an environment image according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of another image transformation provided in the embodiments of the present application;
fig. 8 is a schematic view of a scenario of barrier region rectangularization according to an embodiment of the present disclosure;
fig. 9 is a schematic flow chart illustrating the barrier area rectangularization according to an embodiment of the present disclosure;
fig. 10 is a schematic flowchart of a third process for generating an environment image according to an embodiment of the present application;
fig. 11 is a first flowchart of a parking space determining method according to an embodiment of the present application;
fig. 12 is a flowchart illustrating a second method for determining a parking space according to an embodiment of the present application;
fig. 13 is a third schematic flowchart of a parking space determining method according to an embodiment of the present application;
fig. 14 is a schematic flowchart of a fourth method for determining a parking space according to the embodiment of the present application;
fig. 15 is a schematic flow chart illustrating a fifth method for determining a parking space according to an embodiment of the present application;
fig. 16 is a sixth schematic flowchart of a parking space determining method according to an embodiment of the present application;
fig. 17 is a seventh flowchart of a parking space determining method according to an embodiment of the present application;
fig. 18 is a schematic supplementary flowchart of a parking space determining method according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of another parking space determination device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
It should be noted that in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate examples, illustrations or explanations. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
It should be noted that in the embodiments of the present application, "of", "corresponding" and "corresponding" may be sometimes used in combination, and it should be noted that the intended meaning is consistent when the difference is not emphasized.
For the convenience of clearly describing the technical solutions of the embodiments of the present application, in the embodiments of the present invention, the words "first", "second", and the like are used for distinguishing the same items or similar items with basically the same functions and actions, and those skilled in the art can understand that the words "first", "second", and the like are not limited in number or execution order.
At present, in order to help users park more easily, a vehicle manufacturer may equip a vehicle with an automatic parking assist system that determines a parking space and may even plan a parking path based on it. Most conventional automatic parking assist systems use sonar as the sensor for detecting obstacles and roads around the vehicle. However, since sonar cannot reliably identify obstacles in some specific scenes, such as curbs and small obstacles at the rear end of a parking space (for example, a short flower bed), the automatic parking assistance system may mistake a space that is not a parking space for one, and ultimately fail to assist parking, which affects user experience.
In view of the above problem, referring to fig. 1, an embodiment of the present application provides a vehicle 01 that includes a parking space determining device 02. The device may be the automatic parking assistance system in the vehicle or a part thereof, or may be composed of a part of the main control computer in the vehicle together with related devices (an environment sensor, a camera module, etc.) arranged on the vehicle; this is not specifically limited here. The parking space determining device 02 specifically includes: an environment sensor 11, a camera module 12 and a processing module 13.
The environment sensor 11 is disposed on the vehicle and configured to detect roads and obstacles within a preset range around the vehicle, so as to obtain the first road information and the first obstacle information. In this embodiment a rectangular area containing the vehicle is taken as the preset range, but any other feasible choice may be used in practice. Illustratively, the environment sensor 11 is at least any one or more of: sonar, millimeter wave radar, laser radar.
Specifically, taking sonar as an example, sonar may obtain the approximate shape of an object according to the reflection of sound waves by the object, and then determine whether the object is an obstacle or a road according to the identification information of the obstacle and the identification information of the road stored in the sonar.
The camera module 12 is disposed on the vehicle, and configured to take a picture of the preset range and identify a road and an obstacle therein, so as to obtain second road information and second obstacle information.
The processing module 13 is arranged on the vehicle, connected with the environment sensor 11 and the camera module 12, and used for generating an environment image corresponding to a preset range according to the first road information, the first obstacle information, the second road information and the second obstacle information; the processing module 13 is further configured to determine available parking spaces of the vehicle according to the environment image.
Based on this scheme, the road information and the obstacle information in the environment around the vehicle are acquired by the environment sensor and the camera module respectively, and the two sets of information are then combined into an environment image of a certain range around the vehicle. The available parking space of the vehicle can then be accurately determined from the environment image.
Optionally, in order to more accurately acquire obstacle information and road information of the environment around the vehicle, referring to fig. 2, the camera module 12 at least includes a left camera 121, a right camera 122, a front camera 123, a rear camera 124 and a general control unit 125;
the left camera 121 is arranged on the left side of the vehicle and is used for photographing the region 21 on the left side of the vehicle within the preset range to obtain a first image; the right camera 122 is arranged on the right side of the vehicle and is used for photographing the area 22 on the right side of the vehicle within the preset range to obtain a second image; the front camera 123 is arranged at the front end of the vehicle and is used for photographing the area 23 in front of the vehicle within the preset range to obtain a third image; the rear camera 124 is arranged at the rear end of the vehicle and is used for photographing the area 24 behind the vehicle within the preset range to obtain a fourth image;
the general control unit 125 is connected to the left camera 121, the right camera 122, the front camera 123 and the rear camera 124, and is configured to identify a road and an obstacle in a first image obtained by the left camera 121, a second image obtained by the right camera 122, a third image obtained by the front camera 123 and a fourth image obtained by the rear camera 124, so as to obtain second road information and second obstacle information.
The general control unit may identify the road and the obstacles in the first image, the second image, the third image and the fourth image by any feasible image recognition method, which is not particularly limited in this application. In addition, in practice, the left camera 121, the right camera 122, the front camera 123 and the rear camera 124 may each directly perform image recognition on the pictures they capture, obtain the corresponding road information and obstacle information, and send them to the general control unit.
Optionally, referring to fig. 3 in combination with fig. 2, when the left camera 121, the right camera 122, the front camera 123, and the rear camera 124 are all fisheye cameras, the general control unit 125 is specifically configured to:
recognizing roads and obstacles in the image in the first target area 301 corresponding to the first image obtained by the left camera 121 to obtain first sub-road information and first sub-obstacle information; the first target area is an area other than a predetermined edge area in the area 21 on the left side of the vehicle corresponding to the left camera 121;
recognizing roads and obstacles in the image corresponding to the second target area 312 in the second image obtained by the right camera 122 to obtain second sub-road information and second sub-obstacle information; the second target area is an area other than the predetermined edge area in the area 22 on the right side of the vehicle corresponding to the right camera 122;
recognizing roads and obstacles in the image corresponding to the third target area 321 in the third image obtained by the front camera 123 to obtain third sub-road information and third sub-obstacle information; the third target area is an area other than the predetermined edge area in the area 23 in front of the vehicle corresponding to the front camera 123;
recognizing roads and obstacles in the image corresponding to the fourth target area 331 in the fourth image obtained by the rear camera 124 to obtain fourth sub-road information and fourth sub-obstacle information; the fourth target area is an area other than the predetermined edge area in the area 24 behind the vehicle corresponding to the rear camera 124;
fusing the first sub-road information, the second sub-road information, the third sub-road information and the fourth sub-road information to obtain second road information;
and fusing the first sub-obstacle information, the second sub-obstacle information, the third sub-obstacle information and the fourth sub-obstacle information to obtain second obstacle information.
For example, referring to fig. 3, the first target area, the second target area, the third target area and the fourth target area may be determined as follows: first draw the left body line, right body line, front body line and rear body line of the vehicle; then draw L0 at an included angle of 45 degrees to the front body line and the left body line, L1 at an included angle of 45 degrees to the front body line and the right body line, L2 at an included angle of 45 degrees to the rear body line and the right body line, and L3 at an included angle of 45 degrees to the rear body line and the left body line; finally, determine the region between L0 and L3 as the first target region, the region between L2 and L1 as the second target region, the region between L0 and L1 as the third target region, and the region between L2 and L3 as the fourth target region. Of course, any other feasible division may be used in practice; this is merely an example and is not a limitation. A sketch of this division appears below.
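As an illustration only, the following is a minimal sketch, assuming a simplified frame in which the four 45-degree lines L0-L3 all pass through a single origin at the vehicle center, with x pointing to the vehicle's right and y pointing forward; the function name and this frame are assumptions for illustration, not details from this application (in the figure the lines start from the body lines rather than a single point).

```python
# Sketch only: assign a ground point in vehicle-centered coordinates
# (x to the right, y forward) to one of the four fisheye-camera target
# areas, assuming the 45-degree boundary lines L0-L3 pass through the origin.
def target_region(x: float, y: float) -> str:
    if y > abs(x):   # between L0 and L1: in front of the vehicle
        return "third target area (front camera)"
    if y < -abs(x):  # between L2 and L3: behind the vehicle
        return "fourth target area (rear camera)"
    if x < 0:        # between L0 and L3: left of the vehicle
        return "first target area (left camera)"
    return "second target area (right camera)"  # between L1 and L2
```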
Optionally, the processing module 13 is specifically configured to: generate a fifth image which corresponds to the preset range and includes an obstacle area according to the union of a first area corresponding to the first obstacle information and a second area corresponding to the second obstacle information, the obstacle area being the union of the first area and the second area; generate a sixth image which corresponds to the preset range and includes a road area according to the intersection of a third area corresponding to the first road information and a fourth area corresponding to the second road information, the road area being the intersection of the third area and the fourth area; and fuse the fifth image and the sixth image to generate the environment image.
The obstacle area in the fifth image is the union of the first area corresponding to the first obstacle information and the second area corresponding to the second obstacle information because either the environment sensor or the camera module may miss an obstacle; to guarantee that absolutely no obstacle is present in the parking space, an obstacle identified by either of the two is treated as a real obstacle. The road area in the sixth image is the intersection of the third area corresponding to the first road information and the fourth area corresponding to the second road information because, to prevent road recognition errors, only an area in which both the environment sensor and the camera module recognize a road is determined to be a real road area.
In an implementation manner, as shown in fig. 4, the processing module 13 may perform the following steps for generating the environment image:
and S1, generating an initial sensor image and an initial camera image corresponding to the preset range.
Wherein all pixel values in the initial sensor image and the initial camera image are free area values.
S21, determining a first road pixel value according to the first road information, determining a first obstacle pixel value according to the first obstacle information, and replacing the pixel value of the corresponding pixel in the initial sensor image with the first road pixel value and the first obstacle pixel value to obtain a first sensor image.
Wherein the first road pixel value, the first obstacle pixel value and the free area value are different. The pixel corresponding to the first road pixel value is a road pixel, and the pixel corresponding to the first obstacle pixel value is an obstacle pixel.
In one implementation, taking pixel values in 0-255 as an example: first road pixel value > free area value > first obstacle pixel value > 0.
Illustratively, the process of obtaining the first sensor image from the initial sensor image is shown as a in fig. 5.
S22, determining a second road pixel value according to the second road information, determining a second obstacle pixel value according to the second obstacle information, and replacing the pixel value of the corresponding pixel in the initial camera image with the second road pixel value and the second obstacle pixel value to obtain the first camera image.
Wherein the second road pixel value, the second obstacle pixel value and the free area value are different. The pixel corresponding to the second road pixel value is a road pixel, and the pixel corresponding to the second obstacle pixel value is an obstacle pixel.
In one implementation, taking pixel values in 0-255 as an example: second road pixel value > free area value > second obstacle pixel value > 0.
Illustratively, the process of obtaining the first camera image from the initial camera image is shown as b in fig. 5.
S31, fusing the first sensor image and the first camera image to form a first obstacle image; the set of positions of the obstacle pixels in the first obstacle image is the union of the sets of positions of the obstacle pixels in the first sensor image and the first camera image.
When the pixel at a first position in the first sensor image is an obstacle pixel, the pixel at the first position in the first obstacle image is also an obstacle pixel; when the pixel at a second position in the first camera image is an obstacle pixel, the pixel at the second position in the first obstacle image is also an obstacle pixel; and the pixel value of an obstacle pixel in the first obstacle image is determined according to the pixel values of the pixels at the same position in the first sensor image and the first camera image.
Taking first road pixel value > free area value > first obstacle pixel value > 0 and second road pixel value > free area value > second obstacle pixel value > 0 as an example, the first obstacle image may be generated by taking the smallest of the pixel values at the same position in the first sensor image and the first camera image as the pixel value at that position in the first obstacle image. In this way, the area corresponding to the obstacle pixels in the first obstacle image is guaranteed to be the union of the obstacle pixel areas in the first sensor image and the first camera image, that is, the first obstacle image is the fifth image of the foregoing embodiment. Of course, other value orderings may be used in practice, with different processing in each case, as long as the pixels representing obstacles in the finally obtained fifth image include all the pixels representing obstacles in the first sensor image and the first camera image.
S32, fusing the first sensor image and the first camera image to form a first road image; the set of positions of road pixels in the first road image is the intersection of the sets of positions of road pixels in the first sensor image and the first camera image.
When the pixel at the third position in the first sensor image is a road pixel and the pixel at the third position in the first camera image is a road pixel, the pixel at the third position in the first road image is a road pixel, and the pixel value of the road pixel in the first road image is determined according to the pixel value of the road pixel at the same position in the first sensor image and the first camera image.
Taking the first road pixel value > free area value > first obstacle pixel value > 0, and the second road pixel value > free area value > second obstacle pixel value > 0 as an example, as shown in fig. 6, the step S32 may include steps S321 to S324:
S321, the maximum of the pixel values at the same position in the first sensor image and the first camera image is taken as the initial pixel value, generating a first sub-road image.
Thus, the set of positions of the pixels representing roads in the first sub-road image (those whose pixel values are larger than the free area value) is the union of the set of positions of all road pixels in the first sensor image and the set of positions of all road pixels in the first camera image.
And S322, carrying out binarization on the first sensor image and the first camera image to obtain a second sensor image and a second camera image.
Taking pixel values of 0 to 255 as an example, in the binarized second sensor image and second camera image, pixels representing roads have the value 255 and all other pixels have the value 0.
S323, a bitwise AND operation is performed on the pixels at the same position in the second sensor image and the second camera image to generate a second sub-road image.
A pixel in the second sub-road image has the value 255 only where the pixel at the same position is a road pixel in both the first sensor image and the first camera image. In this way, the area occupied by pixels with the value 255 in the second sub-road image is the intersection of the area occupied by road pixels in the first sensor image and the area occupied by road pixels in the first camera image.
S324, a bitwise AND operation is performed on the pixels at the same positions in the first sub-road image and the second sub-road image to obtain the first road image.
Illustratively, if the pixel value of a pixel representing a road in the first sub-road image is 222 (binary 11011110) and the pixel value of the pixel at the same position in the second sub-road image is 255 (binary 11111111), the bitwise AND yields the pixel value 222 (binary 11011110). The first road image may be regarded as the sixth image of the foregoing embodiment because the area occupied by its road pixels is the intersection of the third area corresponding to the first road information and the fourth area corresponding to the second road information.
It should be noted that the second sub-road image by itself already satisfies the area requirement of the sixth image. However, the road pixel values are obtained from the actually detected or recognized road information (the first road information or the second road information), which may encode interferent information (the first interferent information and/or the second interferent information) that is needed later; steps S321 to S324 are therefore performed so that these values are preserved.
S4, fusing the first obstacle image and the first road image to form the environment image.
The pixel value at a target position in the environment image is the larger of the pixel value at that position in the first obstacle image and the pixel value at that position in the first road image. In the first road image only the road pixels have values greater than 0, and in the first obstacle image only the obstacle pixels have values greater than 0; all other pixels are 0. After taking the per-position maximum, the environment image therefore contains both obstacle pixels and road pixels: the area occupied by the obstacle pixels is the union of the first area corresponding to the first obstacle information and the second area corresponding to the second obstacle information, and the area occupied by the road pixels is the intersection of the third area corresponding to the first road information and the fourth area corresponding to the second road information.
For example, a schematic diagram of a process of obtaining an environment image from a first sensor image and a first camera image may be shown in fig. 7.
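To make the S1-S4 flow concrete, the following is a minimal sketch in Python with NumPy, assuming 8-bit images and the value ordering road pixel value > free area value > obstacle pixel value > 0 described above. The concrete values ROAD, FREE and OBSTACLE, the array names and the function name are illustrative assumptions rather than values taken from this application.

```python
# A minimal sketch of the S1-S4 fusion, assuming 8-bit (uint8) grids and the
# ordering road pixel value > free area value > obstacle pixel value > 0.
import numpy as np

ROAD, FREE, OBSTACLE = 200, 128, 50  # assumed example values in 0-255

def fuse_environment(sensor_img: np.ndarray, camera_img: np.ndarray) -> np.ndarray:
    """Fuse the first sensor image and the first camera image (both uint8)."""
    # S31: per-pixel minimum keeps the (small) obstacle value wherever EITHER
    # source marks an obstacle -- the union of the two obstacle areas.
    obstacle_img = np.minimum(sensor_img, camera_img)

    # S321: per-pixel maximum keeps the (large) road value wherever either
    # source marks a road, preserving the original encoded pixel values.
    sub_road1 = np.maximum(sensor_img, camera_img)

    # S322: binarize each source; road pixels become 255, everything else 0.
    sensor_bin = np.where(sensor_img > FREE, 255, 0).astype(np.uint8)
    camera_bin = np.where(camera_img > FREE, 255, 0).astype(np.uint8)

    # S323: bitwise AND leaves 255 only where BOTH sources saw a road --
    # the intersection of the two road areas.
    sub_road2 = sensor_bin & camera_bin

    # S324: AND-ing with the 255/0 mask restricts the value-preserving road
    # image to the intersection area without altering the kept values.
    road_img = sub_road1 & sub_road2

    # S4: per-pixel maximum merges the two results into the environment image.
    return np.maximum(obstacle_img, road_img)
```

Masking the value-preserving union image with the binary intersection mask (S324) is what lets the road pixels keep their original, possibly interferent-encoding, values while being restricted to the intersection area; with this value ordering, free, obstacle and road pixels remain distinguishable in the result.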
In another implementable manner, the processing module 13 may generate four intermediate images: an image representing only the obstacle information detected by the environment sensor (only pixels representing obstacles detected by the environment sensor are set, with the first obstacle pixel value; all other pixels hold the free area value); an image representing only the road information detected by the environment sensor (road pixels set to the first road pixel value, all other pixels to the free area value); an image representing only the obstacle information recognized by the camera module (obstacle pixels set to the second obstacle pixel value, all other pixels to the free area value); and an image representing only the road information recognized by the camera module (road pixels set to the second road pixel value, all other pixels to the free area value). Then, following the principles of the foregoing S1-S4, the two obstacle-only images are combined into a second obstacle image representing all possible obstacles (the occupied area is the union of the obstacle areas in the two obstacle-only images); the two road-only images are combined into a second road image representing the real road, i.e., areas that both the environment sensor and the camera module determined to be road (the area occupied by road pixels is the intersection of the road areas in the two road-only images); and finally the second obstacle image and the second road image are combined to obtain the environment image.
Of course, the above-mentioned environment image may be obtained in any other practical manner, and the present application is not limited to this.
Optionally, the processing module 13 is specifically configured to: rectangularize the obstacle regions in the environment image to obtain a plurality of rectangular obstacle regions; when it is determined that a target area in which a parking space can exist is present among the plurality of rectangular obstacle areas, determine a parking space base point according to an end point, close to the target area, of the second rectangular obstacle area that is closest to the vehicle among the first rectangular obstacle areas corresponding to the target area; and generate the parking space in the target area according to the characteristic parameters of the vehicle and the parking space base point.
Further optionally, referring to fig. 9, taking the image shown in a in fig. 8 as an environment image as an example, the specific step of the processing module 13 performing rectangularization on the obstacle area in the environment image includes X1-X3:
and X1, establishing an arbitrary rectangular coordinate system in the environment image.
For example, a specific scenario may be illustrated with reference to b in fig. 8. Of course, any other feasible coordinate system may be established in practice.
X2, judging whether there are two first obstacle areas whose distance in the x-axis direction is not greater than a set value.
When two first obstacle regions whose distance in the x-axis direction is not greater than the set value are found, X4 is executed; when no such pair of obstacle regions exists, X3 is executed.
For example, referring to fig. 8, in practice, the obstacle area may be a small area (may be a square) in the environment image.
Illustratively, the set value may be 100 cm.
X3, judging whether there are two second obstacle areas whose distance in the y-axis direction is not greater than the set value.
When two second obstacle regions whose distance in the y-axis direction is not greater than the set value are found, X5 is executed; when no such pair of obstacle regions exists, X6 is executed.
X4, combining the two first obstacle areas to generate a new obstacle area.
For example, referring to a and b in fig. 8, if the distance between the obstacle region 1 and the obstacle region 2 in the x-axis direction is smaller than the set value, a new obstacle region 12 is obtained after integration, and appears as a rectangle on the environment image.
X5, combining the two second obstacle regions to generate a new obstacle region.
For example, referring to a and b in fig. 8, if the distance between the obstacle area 3 and the obstacle area 4 in the y-axis direction is smaller than the set value, a new obstacle area 34 is obtained after integration, and appears as a rectangle on the environment image. In practice, all the closely spaced obstacle regions are combined into one.
X6, end.
Illustratively, the X1-X6 steps result in a new environment image as shown in c of fig. 8, in which there are a plurality of rectangular obstacle areas.
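A minimal sketch of the X1-X6 merging loop follows, assuming each obstacle cell is an axis-aligned rectangle (x_min, y_min, x_max, y_max) in centimeters. It applies the pairwise rule of X2/X3 literally; the helper names and the threshold constant are illustrative assumptions, and a practical implementation would likely also require the merged cells to be close along the other axis.

```python
# Sketch only: merge nearby obstacle cells into larger rectangles (X1-X6).
from typing import List, Tuple

Rect = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max), cm
SET_VALUE_CM = 100.0  # the set value from the text (example)

def axis_gap(a: Rect, b: Rect, axis: int) -> float:
    """Gap between two rectangles along one axis (0 = x, 1 = y); <= 0 means overlap."""
    return max(a[axis], b[axis]) - min(a[axis + 2], b[axis + 2])

def bounding_box(a: Rect, b: Rect) -> Rect:
    """Smallest rectangle containing both inputs (the merged obstacle area)."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def rectangularize(cells: List[Rect]) -> List[Rect]:
    rects = list(cells)
    merged = True
    while merged:  # repeat until no pair is close enough (X6: end)
        merged = False
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                close_x = axis_gap(rects[i], rects[j], 0) <= SET_VALUE_CM  # X2
                close_y = axis_gap(rects[i], rects[j], 1) <= SET_VALUE_CM  # X3
                if close_x or close_y:
                    rects[i] = bounding_box(rects[i], rects.pop(j))  # X4/X5
                    merged = True
                    break
            if merged:
                break
    return rects
```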
Further optionally, when the processing module 13 determines whether a target area in which a parking space may exist is present among the plurality of rectangular obstacle areas 131 (131-1, 131-2, 131-3 and 131-4) shown in fig. 8 c, it needs to determine whether there is a rectangular area between the rectangular obstacle areas whose length is greater than or equal to the vehicle length plus 100 cm (a set value that may be chosen according to actual requirements) and whose width is greater than or equal to the vehicle width plus 100 cm (likewise a set value). When it is determined that such a target area exists, as shown in fig. 8 d, a parking space base point O is determined, and then the parking spaces 81 (81-1 and 81-2) can be set according to the length and width of the vehicle; for example, the coordinates of the other three end points of the parking space can be determined from the coordinates of the parking space base point, and the final parking space is thereby determined.
Further optionally, when the parking space base point is determined from an end point, close to the target area, of the second rectangular obstacle area closest to the vehicle among the first rectangular obstacle areas corresponding to the target area, generating the parking space directly from that end point may, owing to various errors in practice, cause the vehicle to park poorly or be scratched when parking into the space. The processing module 13 is therefore specifically configured to: determine the parking space base point according to an end point of the second rectangular obstacle area close to the target area and a preset experience vector. For example, the preset experience vector may be determined from the error of the environment sensor in detecting obstacles, the error of the camera module in recognizing obstacles, the error of the current position of the vehicle, the error of the path calculation (the path of the vehicle from its current position into the parking space), and the like.
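The following is a minimal sketch of generating a parking space from the base point, assuming a perpendicular slot aligned with the coordinate axes; the margin, the example experience vector and all names are illustrative assumptions rather than values from this application.

```python
# Sketch only: derive the four slot corners from the parking space base point.
import numpy as np

MARGIN_CM = 100.0                           # assumed set value added per dimension
EXPERIENCE_VECTOR = np.array([10.0, 10.0])  # assumed preset experience vector, cm

def make_slot(base_point: np.ndarray, vehicle_len_cm: float,
              vehicle_width_cm: float) -> list:
    """Return the four corner points of the planned slot (axis-aligned)."""
    # Shift the base point by the preset experience vector to absorb sensor,
    # recognition, positioning and path-calculation errors.
    o = base_point + EXPERIENCE_VECTOR
    w = vehicle_width_cm + MARGIN_CM
    l = vehicle_len_cm + MARGIN_CM
    # The other three end points follow from the base point's coordinates.
    return [o, o + np.array([w, 0.0]), o + np.array([w, l]), o + np.array([0.0, l])]
```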
Optionally, in practice the vehicle may roll back somewhat while braking after parking, which can scratch it against a hard-to-observe obstacle at the rear end of the parking space (obstacles at the front end are easy to observe, so scratches there basically do not occur). Therefore, when the parking space is determined, the vehicle should keep a suitable distance from the obstacle directly behind the rear end of the parking space, so that scratches are unlikely and parking or driving of other vehicles is not obstructed. The processing module 13 is thus further configured to: adjust the parking space according to the distance between the rear end of the parking space and the target rectangular obstacle region. The rear end of the parking space is the end where the tail of the vehicle is placed; the target rectangular obstacle faces the rear end of the parking space and is located on the side of the parking space away from its front end; the front end of the parking space is the end where the head of the vehicle is placed. The adjustment sets the distance from the rear end of the parking space to the obstacle facing it to a suitable value. This distance may be the minimum distance between the line at the rear end of the parking space and each point on the surface of the obstacle facing the rear end; other feasible distance measures (for example, the average distance between that line and those points) are also possible.
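A minimal sketch of this rear-end clearance computation follows, assuming the rear end of the parking space is given as a line segment and the facing obstacle surface is sampled as points; all names are illustrative assumptions.

```python
# Sketch only: minimum distance from the slot's rear-edge segment to the
# sampled points of the obstacle face directly opposite it.
import numpy as np

def rear_clearance(rear_p0: np.ndarray, rear_p1: np.ndarray,
                   obstacle_points: np.ndarray) -> float:
    d = rear_p1 - rear_p0
    # Project every obstacle point onto the segment and clamp to its ends.
    t = np.clip((obstacle_points - rear_p0) @ d / (d @ d), 0.0, 1.0)
    closest = rear_p0 + t[:, None] * d
    return float(np.min(np.linalg.norm(obstacle_points - closest, axis=1)))
```

If the returned clearance is below the desired value, the slot can be shifted toward its front end by the shortfall; replacing np.min with np.mean gives the average-distance variant mentioned in the text.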
Optionally, in an actual road environment, besides roads and the obstacles that the vehicle must not collide with while parking, there may also be interferents: objects the vehicle may drive over (for example, small road litter such as a mineral water bottle) or objects it may touch once parked (a wheel-stop bar, a wheel chock, etc.). To plan the parking space better and ensure that the vehicle later parks in it smoothly, these interferents need to be identified and the parking space adjusted accordingly. Therefore, in the parking space determining apparatus, the environment sensor 11 is further configured to detect parking interferents within the preset range to obtain first interferent information; the camera module 12 is further configured to photograph the preset range and identify the parking interferents in it to obtain second interferent information; and the processing module 13 is further configured to adjust the parking space according to the first interferent information acquired by the environment sensor 11 and the second interferent information acquired by the camera module 12. The specific adjustment is made so that the vehicle can park smoothly, and this application does not specifically limit the process. In addition, in practice the first interferent information may be included in the first road information and the first obstacle information, and the second interferent information may be included in the second road information and the second obstacle information.
In one implementation based on the steps of fig. 4 for obtaining the environment image, interferent information (the first interferent information and the second interferent information) may be present in the first road information, the first obstacle information, the second road information and the second obstacle information, and the generated pixel values may therefore also encode it. However, because the processing of fig. 4 discards some pixel values, only part of the interferent information may survive in the environment image. As shown in fig. 10 in conjunction with fig. 4, taking pixel values in 0-255 with first road pixel value > free area value > first obstacle pixel value > 0 as an example, steps S5-S8 are further performed after step S4:
and S5, carrying out bit OR operation on the pixel values of the same position in the first sensor image and the second sensor image to obtain a first interference image.
When the pixel values are set according to the detected information, the meaning of each bit of the binary representation is taken into account; for example, the seventh bit may indicate whether the pixel belongs to a wheel stopper, with "1" meaning it does and "0" meaning it does not. The pixel value can therefore encode interferent information, and a bitwise OR of the co-located pixels retains all possible interferent information.
S6, a second interferent image is generated according to the pixel generation rule; every pixel in the second interferent image has the same value, whose binary representation has only the bits representing interferent information set to "1" and all other bits set to "0".
In fig. 4, a rule for generating a pixel value based on one type of information (first road information/first obstacle information/second road information/second obstacle information) is a pixel generation rule.
S7, a bitwise AND operation is performed on the pixels at the same positions in the first interferent image and the second interferent image to obtain a third interferent image.
It can be understood that each pixel in the third interferent image represents only the interferent information at that position, with no other information mixed in.
S8, the pixel value at each target position in the third interferent image is combined with the pixel at the same position in the environment image by a bitwise OR operation, so as to update the environment image.
In the finally updated environment image, the area occupied by the obstacle pixels is still the union of the first area corresponding to the first obstacle information and the second area corresponding to the second obstacle information, and the area occupied by the road pixels is still the intersection of the third area corresponding to the first road information and the fourth area corresponding to the second road information; in addition, each pixel now embodies the interferent information acquired by the environment sensor and the camera module. The processing module may then adjust the determined parking space according to the interferent information (including the first interferent information and the second interferent information) in the updated environment image, that is, perform the operation described above as "adjusting the parking space according to the first interferent information acquired by the environment sensor 11 and the second interferent information acquired by the camera module 12".
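To make S5-S8 concrete, the following is a minimal sketch, assuming 8-bit pixel values in which a fixed bit (here bit 6, an assumption standing in for the text's wheel-stopper example) encodes interferent information; the mask value and names are illustrative.

```python
# Sketch only: carry the interferent bits of both source images into the
# fused environment image (steps S5-S8).
import numpy as np

INTERFERENT_MASK = np.uint8(0b0100_0000)  # assumed: bit(s) encoding interferents

def update_with_interferents(env_img: np.ndarray,
                             sensor_img: np.ndarray,
                             camera_img: np.ndarray) -> np.ndarray:
    # S5: bitwise OR keeps every interferent bit seen by either source.
    interferent1 = sensor_img | camera_img
    # S6 + S7: AND with the constant mask image strips all road/obstacle
    # bits, leaving only the interferent bits (the third interferent image).
    interferent3 = interferent1 & INTERFERENT_MASK
    # S8: OR the isolated interferent bits back into the environment image.
    return env_img | interferent3
```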
The parking space determining device provided by the embodiment of the application includes an environment sensor, a camera module and a processing module. The environment sensor is arranged on the vehicle and used for detecting roads and obstacles within a preset range around the vehicle so as to acquire first road information and first obstacle information; the camera module is arranged on the vehicle and used for photographing the preset range and identifying the roads and obstacles in it so as to acquire second road information and second obstacle information; the processing module is arranged on the vehicle, connected with the environment sensor and the camera module, used for generating an environment image corresponding to the preset range according to the first road information, the first obstacle information, the second road information and the second obstacle information, and further used for determining the available parking space of the vehicle according to the environment image. Thus, when a parking space needs to be determined, the road information and the obstacle information in the environment around the vehicle can be acquired by the environment sensor and the camera module respectively, and an environment image of a certain range around the vehicle can be obtained by combining them. The available parking space of the vehicle can then be accurately determined from the environment image. Compared with the prior art, the technical scheme provided by the embodiment of the application determines the parking space more accurately.
Referring to fig. 11, based on the parking space determining device 02 provided in the foregoing embodiments, an embodiment of the present application further provides a parking space determining method, including steps 101-104:
101. Detecting roads and obstacles within a preset range around the vehicle to acquire first road information and first obstacle information.
102. Photographing the preset range and identifying the roads and obstacles in it to acquire second road information and second obstacle information.
Optionally, when the camera module in the parking space determining device 02 includes the left camera, the right camera, the front camera, the rear camera and the general control unit mentioned in the foregoing embodiment, referring to fig. 12, the step 102 specifically includes 1021-1025:
1021. Photographing the area on the left side of the vehicle within the preset range to obtain a first image.
1022. Photographing the area on the right side of the vehicle within the preset range to obtain a second image.
1023. Photographing the area in front of the vehicle within the preset range to obtain a third image.
1024. Photographing the area behind the vehicle within the preset range to obtain a fourth image.
1025. Recognizing the roads and obstacles in the first image, the second image, the third image and the fourth image to acquire the second road information and the second obstacle information.
Further optionally, when the left camera, the right camera, the front camera and the rear camera are all fisheye cameras, referring to fig. 13, the step 1025 specifically includes 10251-10255:
10251. Identifying roads and obstacles in the image corresponding to the first target area in the first image to acquire first sub-road information and first sub-obstacle information; the first target area is an area other than a predetermined edge area in the area on the left side of the vehicle corresponding to the first image.
10252. Identifying roads and obstacles in the image corresponding to the second target area in the second image to acquire second sub-road information and second sub-obstacle information; the second target area is an area other than a predetermined edge area in an area on the right side of the vehicle corresponding to the second image.
10253. Identifying roads and obstacles in the image corresponding to the third target area in the third image to acquire third sub-road information and third sub-obstacle information; the third target area is an area other than a predetermined edge area in an area in front of the vehicle corresponding to the third image.
10254. Identifying roads and obstacles in the image corresponding to the fourth target area in the fourth image to acquire fourth sub-road information and fourth sub-obstacle information; the fourth target area is an area other than a predetermined edge area in an area behind the vehicle corresponding to the fourth image.
10255. Fusing the first sub-road information, the second sub-road information, the third sub-road information and the fourth sub-road information to obtain the second road information; and fusing the first sub-obstacle information, the second sub-obstacle information, the third sub-obstacle information and the fourth sub-obstacle information to obtain the second obstacle information.
103. Generating an environment image corresponding to the preset range according to the first road information, the first obstacle information, the second road information and the second obstacle information.
Optionally, as shown in fig. 14, the step 103 specifically includes 1031-1033:
1031. Generating a fifth image corresponding to the preset range and including the obstacle area according to the union of a first area corresponding to the first obstacle information and a second area corresponding to the second obstacle information; the obstacle region is the union of the first region and the second region.
1032. Generating a sixth image corresponding to the preset range and including a road area according to the intersection of a third area corresponding to the first road information and a fourth area corresponding to the second road information; the road region is the intersection of the third region and the fourth region.
1033. Fusing the fifth image and the sixth image to generate the environment image.
104. Determining the available parking space of the vehicle according to the environment image.
Optionally, referring to fig. 15, the step 104 specifically includes 1041-1043:
1041. Rectangularizing the obstacle regions in the environment image to obtain a plurality of rectangular obstacle regions.
1042. When it is determined that a target area in which a parking space can exist is present among the plurality of rectangular obstacle areas, determining a parking space base point according to an end point, close to the target area, of the second rectangular obstacle area closest to the vehicle among the first rectangular obstacle areas corresponding to the target area.
Further optionally, referring to fig. 16, step 1042 specifically includes: determining the parking space base point according to an end point of the second rectangular obstacle area close to the target area and a preset experience vector.
1043. Generating a parking space in the target area according to the characteristic parameters of the vehicle and the parking space base point.
Further optionally, in order to enable the vehicle to better enter the determined parking space without scraping against a target obstacle at the rear end of the parking space, as shown in fig. 17, the method further includes step 1044 after step 1043:
1044. Adjusting the parking space according to the distance between the rear end of the parking space and a target rectangular obstacle area.
The rear end of the parking space is the end of the parking space for placing the tail of the vehicle; the target rectangular obstacle area is opposite to the rear end of the parking space and is located on the side of the parking space away from the front end of the parking space; the front end of the parking space is the end of the parking space for placing the head of the vehicle.
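Step 1044 reduces to a one-dimensional check along the slot's long axis. The sketch below assumes the rear end faces +x, the target rectangular obstacle area lies beyond it, and the 0.4 m minimum clearance is an invented value.

```python
def adjust_slot_for_rear_obstacle(slot_rear_x: float, obstacle_x: float,
                                  min_clearance: float = 0.4) -> float:
    """Step 1044: keep a minimum clearance (hypothetical 0.4 m) between the
    slot's rear end and the target rectangular obstacle area behind it.

    Assumes the rear end faces +x and the obstacle lies beyond it at obstacle_x.
    """
    if obstacle_x - slot_rear_x < min_clearance:
        # Pull the rear end forward so the clearance is restored.
        return obstacle_x - min_clearance
    return slot_rear_x

# Usage: rear end at x = 7.4 m, obstacle at x = 7.6 m -> rear end moved to 7.2 m.
new_rear_x = adjust_slot_for_rear_obstacle(7.4, 7.6)
```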
As a further alternative, in order to enable the vehicle to better enter the determined parking space without scraping against a target obstacle at the rear end of the parking space, referring to fig. 18, the method further includes M1-M3:
M1. Detecting parking interferents within the preset range to acquire first interferent information.
M2. Photographing the preset range and identifying the parking interferents therein to acquire second interferent information.
M3. Adjusting the parking space according to the first interferent information acquired by the environment sensor and the second interferent information acquired by the camera module.
M1 and M2 can be performed at any time before step 1043, and step M3 follows step 1043.
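The interferent handling of M1-M3 might look roughly like the following, assuming interferents (ground locks, cones and the like) arrive as point detections from both sources; the union of the two point sets and the shrink-to-exclude adjustment are assumptions made for this sketch rather than the patented rule.

```python
def adjust_slot_for_interferents(slot, sensor_pts, camera_pts):
    """M3: shrink the slot so it excludes every interferent point reported
    by either the environment sensor (M1) or the camera module (M2).

    slot is (x_min, y_min, x_max, y_max); points are (x, y) tuples.
    """
    x_min, y_min, x_max, y_max = slot
    for px, py in list(sensor_pts) + list(camera_pts):
        if x_min < px < x_max and y_min < py < y_max:
            # Cut the slot at the interferent, keeping the larger remainder.
            if px - x_min > x_max - px:
                x_max = px
            else:
                x_min = px
    return (x_min, y_min, x_max, y_max)

# Usage: a ground lock at (6.0, 3.0) inside a slot spanning x = 2.3 to 7.4
# trims the slot to (2.3, 1.8, 6.0, 4.1).
slot = adjust_slot_for_interferents((2.3, 1.8, 7.4, 4.1), [(6.0, 3.0)], [])
```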
In the parking space determining method provided by the embodiment of the application, because the method is based on the parking space determining device provided by the above embodiment, the road information and the obstacle information in the environment around the vehicle can be acquired separately through the environment sensor and the camera module, and an environment image of a certain range around the vehicle can then be obtained by combining the road information and the obstacle information acquired through the two. The available parking spaces of the vehicle can thus be accurately determined from the environment image. Compared with the prior art, the technical scheme provided by the embodiment of the application determines the parking space more accurately.
Referring to fig. 19, an embodiment of the present application further provides another parking space determining device, which includes an environment sensor 44, a camera module 45, a peripheral interface 46, a memory 41, a processor 42, a bus 43 and a communication interface 44. The environment sensor 44 and the camera module 45 are connected with the peripheral interface 46 through the bus 43; the memory 41 is used for storing computer-executable instructions; and the peripheral interface 46, the processor 42 and the memory 41 are connected through the bus 43. When the parking space determining device runs, the processor 42 executes the computer-executable instructions stored in the memory 41, so that the parking space determining device performs the parking space determination method provided by the above embodiment.
In a specific implementation, as one embodiment, the processor 42 may include one or more CPUs, such as CPU0 and CPU1 shown in fig. 19. As another example, the parking space determining device may include a plurality of processors 42, such as the processor 42-1 and the processor 42-2 shown in fig. 19. Each of these processors 42 may be a single-core processor (Single-CPU) or a multi-core processor (Multi-CPU). The processor 42 herein may refer to one or more devices, circuits and/or processing cores for processing data (for example, computer program instructions).
The memory 41 may be, but is not limited to, a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including a compact disc, a laser disc, an optical disc, a digital versatile disc, a Blu-ray disc and the like), a magnetic disk storage medium or another magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 41 may exist independently and be connected to the processor 42 through the bus 43, or the memory 41 may be integrated with the processor 42.
In a specific implementation, the memory 41 is used for storing the data of the present application and the computer-executable instructions corresponding to the software programs for executing the present application. The processor 42 may perform various functions of the parking space determining device by running or executing the software programs stored in the memory 41 and invoking the data stored in the memory 41.
The communication interface 44 is any device, such as a transceiver, for communicating with other devices or communication networks, such as a control system, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), and the like. The communication interface 44 may include a receiving unit implementing a receiving function and a transmitting unit implementing a transmitting function.
The bus 43 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 43 may be divided into an address bus, a data bus, a control bus and the like. For ease of illustration, only one thick line is shown in fig. 19, but this does not mean that there is only one bus or one type of bus.
The environment sensor 44 may be a sonar or another device that detects objects in the vehicle environment. The camera module 45 may include a camera and an image analysis device. The peripheral interface 46 may be used to connect at least one I/O (input/output) peripheral device (such as the environment sensor 44 and the camera module 45) to the processor 42 and the memory 41. In some embodiments, the processor 42, the memory 41 and the peripheral interface 46 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 42, the memory 41 and the peripheral interface 46 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
An embodiment of the present application further provides a computer-readable storage medium comprising computer-executable instructions which, when run on a computer, cause the computer to perform the parking space determination method provided by the above embodiment.
An embodiment of the present application further provides a computer program which can be directly loaded into a memory and which contains software code; after being loaded and executed by a computer, the computer program implements the parking space determination method provided by the above embodiment.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in this invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer-readable storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical function division, and there may be other division ways in actual implementation. For example, various elements or components may be combined or may be integrated into another device, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. Units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed to a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit. The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk or an optical disc.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (20)

1. A parking space determining device, applied to a vehicle, characterized by comprising:
the environment sensor is arranged on the vehicle and used for detecting roads and obstacles in a preset range around the vehicle so as to acquire first road information and first obstacle information;
the camera module is arranged on the vehicle and used for taking pictures of the preset range and identifying roads and obstacles in the preset range so as to acquire second road information and second obstacle information;
the processing module is arranged on the vehicle, connected with the environment sensor and the camera module and used for generating an environment image corresponding to the preset range according to the first road information, the first obstacle information, the second road information and the second obstacle information;
the processing module is further used for determining available parking spaces of the vehicle according to the environment image.
2. The parking space determining device according to claim 1, wherein the environment sensor comprises at least one of the following: a sonar, a millimeter-wave radar and a lidar.
3. The parking space determination device according to claim 1, wherein the camera module comprises at least a left camera, a right camera, a front camera, a rear camera and a master control unit;
the left camera is arranged on the left side of the vehicle and used for photographing an area, belonging to the left side of the vehicle, in the preset range to obtain a first image;
the right camera is arranged on the right side of the vehicle and used for photographing an area, belonging to the right side of the vehicle, in the preset range to obtain a second image;
the front camera is arranged at the front end of the vehicle and used for photographing an area in the preset range, which belongs to the front of the vehicle, so as to obtain a third image;
the rear camera is arranged at the rear end of the vehicle and used for photographing a region in the preset range, which belongs to the rear of the vehicle, so as to obtain a fourth image;
the master control unit is connected with the left camera, the right camera, the front camera and the rear camera and used for identifying roads and obstacles in a first image obtained by the left camera, a second image obtained by the right camera, a third image obtained by the front camera and a fourth image obtained by the rear camera so as to obtain the second road information and the second obstacle information.
4. The parking space determination device according to claim 3, wherein when the left camera, the right camera, the front camera and the rear camera are all fisheye cameras, the master control unit is specifically configured to:
identifying roads and obstacles in an image corresponding to a first target area in a first image obtained by the left camera to obtain first sub-road information and first sub-obstacle information; the first target area is an area except a preset edge area in an area on the left side of the vehicle corresponding to the left camera;
identifying roads and obstacles in the image corresponding to the second target area in the second image obtained by the right camera to obtain second sub-road information and second sub-obstacle information; the second target area is an area except a preset edge area in an area on the right side of the vehicle corresponding to the right camera;
identifying roads and obstacles in an image corresponding to a third target area in a third image obtained by the front camera to obtain third sub-road information and third sub-obstacle information; the third target area is an area except a preset edge area in an area in front of the vehicle corresponding to the front camera;
identifying roads and obstacles in an image corresponding to a fourth target area in a fourth image obtained by the rear camera to obtain fourth sub-road information and fourth sub-obstacle information; the fourth target area is an area except a preset edge area in an area behind the vehicle corresponding to the rear camera;
the combination of the first target area, the second target area, the third target area and the fourth target area is the preset range;
fusing the first sub-road information, the second sub-road information, the third sub-road information and the fourth sub-road information to obtain the second road information;
and fusing the first sub-obstacle information, the second sub-obstacle information, the third sub-obstacle information and the fourth sub-obstacle information to obtain the second obstacle information.
5. The parking space determination device according to claim 1, wherein the processing module is specifically configured to:
generating a fifth image corresponding to the preset range and including an obstacle area according to a union of a first area corresponding to the first obstacle information and a second area corresponding to the second obstacle information; the obstacle area is the union of the first area and the second area;
generating a sixth image corresponding to the preset range and including a road area according to an intersection of a third area corresponding to the first road information and a fourth area corresponding to the second road information; the road area is the intersection of the third area and the fourth area;
and fusing the fifth image and the sixth image to generate the environment image.
6. The parking space determination device according to claim 5, wherein the processing module is specifically configured to:
rectangularizing the obstacle areas in the environment image to obtain a plurality of rectangular obstacle areas;
when it is determined that a target area in which a parking space can exist is present among the plurality of rectangular obstacle areas, determining a parking space base point according to any end point, close to the target area, of a second rectangular obstacle area, the second rectangular obstacle area being the rectangular obstacle area closest to the vehicle among the first rectangular obstacle areas corresponding to the target area;
and generating a parking space in the target area according to the characteristic parameters of the vehicle and the parking space base point.
7. The parking space determination device according to claim 6, wherein the processing module is specifically configured to:
and determining a parking space base point according to any end point, close to the target area, of the second rectangular obstacle area and a preset empirical vector.
8. The parking space determining device according to claim 6, wherein the processing module is further configured to:
adjust the parking space according to the distance between the rear end of the parking space and a target rectangular obstacle area; the rear end of the parking space is the end of the parking space for placing the tail of the vehicle; the target rectangular obstacle area is opposite to the rear end of the parking space and is located on the side of the parking space away from the front end of the parking space; the front end of the parking space is the end of the parking space for placing the head of the vehicle.
9. The parking space determination device according to claim 6,
the environment sensor is further used for detecting parking interferents within the preset range to acquire first interferent information;
the camera module is further used for photographing the preset range and identifying the parking interferents therein to acquire second interferent information;
the processing module is further configured to: adjust the parking space according to the first interferent information acquired by the environment sensor and the second interferent information acquired by the camera module.
10. A parking space determination method, applied to the parking space determining device according to any one of claims 1 to 9, characterized by comprising:
detecting roads and obstacles in a preset range around the vehicle to acquire first road information and first obstacle information;
photographing the preset range and identifying roads and obstacles in the preset range to acquire second road information and second obstacle information;
generating an environment image corresponding to the preset range according to the first road information, the first obstacle information, the second road information and the second obstacle information;
and determining the available parking spaces of the vehicle according to the environment image.
11. The parking space determination method according to claim 10, wherein the photographing the preset range and identifying roads and obstacles therein to obtain the second road information and the second obstacle information comprises:
photographing an area belonging to the left side of the vehicle in the preset range to obtain a first image;
photographing an area belonging to the right side of the vehicle in the preset range to obtain a second image;
photographing an area in the preset range, which belongs to the front of the vehicle, so as to obtain a third image;
photographing a region in the preset range, which belongs to the rear of the vehicle, so as to obtain a fourth image;
and recognizing roads and obstacles in the first image, the second image, the third image and the fourth image to acquire the second road information and the second obstacle information.
12. The parking space determination method according to claim 11, wherein the identifying roads and obstacles in the first image, the second image, the third image and the fourth image to obtain the second road information and the second obstacle information comprises:
identifying roads and obstacles in the image corresponding to the first target area in the first image to acquire first sub-road information and first sub-obstacle information; the first target area is an area except a preset edge area in an area on the left side of the vehicle corresponding to the first image;
identifying roads and obstacles in the image corresponding to the second target area in the second image to acquire second sub-road information and second sub-obstacle information; the second target area is an area except a preset edge area in an area on the right side of the vehicle corresponding to the second image;
identifying roads and obstacles in the image corresponding to the third target area in the third image to acquire third sub-road information and third sub-obstacle information; the third target area is an area except a preset edge area in an area in front of the vehicle corresponding to the third image;
identifying roads and obstacles in the image corresponding to the fourth target area in the fourth image to acquire fourth sub-road information and fourth sub-obstacle information; the fourth target area is an area except a preset edge area in an area behind the vehicle corresponding to the fourth image;
fusing the first sub-road information, the second sub-road information, the third sub-road information and the fourth sub-road information to obtain the second road information;
and fusing the first sub-obstacle information, the second sub-obstacle information, the third sub-obstacle information and the fourth sub-obstacle information to obtain the second obstacle information.
13. The parking space determination method according to claim 10, wherein the generating an environment image corresponding to the preset range according to the first road information, the first obstacle information, the second road information, and the second obstacle information includes:
generating a fifth image corresponding to the preset range and including an obstacle area according to a union of a first area corresponding to the first obstacle information and a second area corresponding to the second obstacle information; the obstacle area is the union of the first area and the second area;
generating a sixth image corresponding to the preset range and including a road area according to an intersection of a third area corresponding to the first road information and a fourth area corresponding to the second road information; the road area is the intersection of the third area and the fourth area;
and fusing the fifth image and the sixth image to generate the environment image.
14. The parking space determination method according to claim 13, wherein the determining the available parking space of the vehicle according to the environment image comprises:
rectangularizing the obstacle areas in the environment image to obtain a plurality of rectangular obstacle areas;
when it is determined that a target area in which a parking space can exist is present among the plurality of rectangular obstacle areas, determining a parking space base point according to any end point, close to the target area, of a second rectangular obstacle area, the second rectangular obstacle area being the rectangular obstacle area closest to the vehicle among the first rectangular obstacle areas corresponding to the target area;
and generating a parking space in the target area according to the characteristic parameters of the vehicle and the parking space base point.
15. The parking space determination method according to claim 14, wherein the determining a parking space base point according to any end point, close to the target area, of the second rectangular obstacle area closest to the vehicle among the first rectangular obstacle areas corresponding to the target area comprises:
and determining a parking space base point according to any end point, close to the target area, of the second rectangular obstacle area and a preset empirical vector.
16. The parking space determination method according to claim 14, wherein after the generating a parking space in the target area according to the characteristic parameters of the vehicle and the parking space base point, the method further comprises:
adjusting the parking space according to the distance between the rear end of the parking space and a target rectangular obstacle area; the rear end of the parking space is the end of the parking space for placing the tail of the vehicle; the target rectangular obstacle area is opposite to the rear end of the parking space and is located on the side of the parking space away from the front end of the parking space; the front end of the parking space is the end of the parking space for placing the head of the vehicle.
17. The parking space determination method according to claim 14, further comprising:
detecting parking interferents within the preset range to acquire first interferent information;
photographing the preset range and identifying the parking interferents therein to acquire second interferent information;
after the generating a parking space in the target area according to the characteristic parameters of the vehicle and the parking space base point, the method further comprises: adjusting the parking space according to the first interferent information acquired by the environment sensor and the second interferent information acquired by the camera module.
18. A parking space determining device, characterized by comprising an environment sensor, a camera module, a peripheral interface, a memory, a processor, a bus and a communication interface; the environment sensor and the camera module are connected with the peripheral interface through the bus; the memory is used for storing computer-executable instructions; the peripheral interface, the processor and the memory are connected through the bus; and when the parking space determining device runs, the processor executes the computer-executable instructions stored in the memory to cause the parking space determining device to perform the parking space determination method according to any one of claims 10 to 17.
19. A computer-readable storage medium comprising computer-executable instructions that, when executed on a computer, cause the computer to perform the parking space determination method according to any one of claims 10 to 17.
20. A vehicle, characterized by comprising the parking space determining device according to any one of claims 1 to 9.
CN202011149473.XA 2020-10-23 2020-10-23 Vehicle and parking space determining device and method thereof Pending CN114495572A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011149473.XA CN114495572A (en) 2020-10-23 2020-10-23 Vehicle and parking space determining device and method thereof


Publications (1)

Publication Number Publication Date
CN114495572A (en) 2022-05-13

Family

ID=81470697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011149473.XA Pending CN114495572A (en) 2020-10-23 2020-10-23 Vehicle and parking space determining device and method thereof

Country Status (1)

Country Link
CN (1) CN114495572A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination