CN115131762A - Vehicle parking method, system and computer readable storage medium
- Publication number: CN115131762A
- Application number: CN202110292105.9A
- Authority: CN (China)
- Prior art keywords: parking space, information, feature extraction, camera, central point
- Legal status: Granted
Classifications
- B: Performing operations; transporting
- B60: Vehicles in general
- B60W: Conjoint control of vehicle sub-units of different type or different function; control systems specially adapted for hybrid vehicles; road vehicle drive control systems for purposes not related to the control of a particular sub-unit
- B60W30/00: Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/06: Automatic manoeuvring for parking
- B60W2420/00: Indexing codes relating to the type of sensors based on the principle of their operation
- B60W2420/40: Photo, light or radio wave sensitive means, e.g. infrared sensors
- B60W2420/403: Image sensing, e.g. optical camera
Abstract
The invention provides a vehicle parking method, a system and a computer-readable storage medium. The method comprises: acquiring images collected in real time by cameras at the front, rear, left and right of a vehicle; inputting the images into a trained neural network model for processing to obtain parking space detection information and drivable area detection information respectively; analyzing and fusing the parking space detection information and the drivable area detection information to obtain fusion information of the parking space and the drivable area; and generating a control instruction according to the fusion information, sending it to the vehicle parking system, and controlling the vehicle to park according to the fusion information. The method greatly reduces the computational power that model inference demands of the computing platform, and improves the real-time performance, accuracy and stability of the algorithm.
Description
Technical Field
The present invention relates to the field of automatic parking technologies, and in particular, to a method and a system for parking a vehicle, and a computer-readable storage medium.
Background
Parking space detection is the foundation of automatic parking technology: sensors perceive the empty parking spaces around the vehicle body and determine their relative positions, while drivable area recognition perceives the region around the vehicle body in which the vehicle can travel. Together they provide the basis for driving path planning, vehicle control and obstacle avoidance.
In the prior art, most non-deep-learning methods perform the related detection tasks with traditional features such as edges and colors, and are easily disturbed by factors such as illumination and weather. Existing deep-learning network model structures, in turn, are designed for only one of parking space detection and drivable area detection, so a single model can detect only a single task and the combined deep-learning models become large. A deep-learning model that can perform parking space detection and drivable area detection simultaneously, with the model scale compressed as far as possible, is therefore urgently needed.
Disclosure of Invention
In order to solve the above technical problems, the present invention provides a vehicle parking method, system and computer-readable storage medium, addressing the prior-art defects that a single model can detect only a single task during parking and that the resulting deep-learning models are large in scale.
One aspect of the present invention provides a vehicle parking method, including the steps of:
acquiring images collected by a front camera, a rear camera, a left camera and a right camera of a vehicle in real time;
inputting the images into a trained neural network model for processing to obtain parking space detection information and drivable area detection information respectively, wherein the trained neural network model comprises a shared layer, a parking space detection branch layer and a drivable area detection branch layer; the shared layer is used for performing feature extraction on the images and outputting at least one piece of intermediate feature extraction information together with final feature extraction information, the parking space detection branch layer is used for processing the final feature extraction information to obtain the parking space detection information, and the drivable area detection branch layer is used for processing the final feature extraction information and the at least one piece of intermediate feature extraction information to obtain the drivable area detection information;
analyzing and fusing the parking space detection information and the drivable area detection information to obtain the fusion information of the parking space and the drivable area;
and generating a control instruction according to the fusion information, sending the control instruction to a vehicle parking system, and controlling the vehicle to park according to the fusion information.
In a specific embodiment, the shared layer comprises a first shared branch, a second shared branch and a splicing layer, the first shared branch and the second shared branch being connected in parallel. The first shared branch is used for processing the images and outputting at least one piece of intermediate feature extraction information together with the final feature extraction information of the first shared branch; the second shared branch is used for processing the images and outputting at least one piece of intermediate feature extraction information together with the final feature extraction information of the second shared branch; and the splicing layer is used for splicing the final feature extraction information of the two shared branches and outputting the final feature extraction information of the shared layer.
The parking space detection branch layer is used for processing the final feature extraction information output by the splicing layer to obtain the parking space detection information.
The drivable area detection branch layer is used for processing the final feature extraction information output by the splicing layer, at least one piece of intermediate feature extraction information output by the first shared branch and at least one piece of intermediate feature extraction information output by the second shared branch to obtain the drivable area detection information.
In a specific embodiment, the first shared branch comprises a plurality of convolutional layers and, in the order of the convolution operations, outputs first, second and third intermediate feature extraction information followed by the final feature extraction information of the first shared branch; the second shared branch likewise comprises a plurality of convolutional layers and, in the order of the convolution operations, outputs first, second and third intermediate feature extraction information followed by the final feature extraction information of the second shared branch.
The drivable area detection branch layer processes the final feature extraction information output by the splicing layer together with the first to third intermediate feature extraction information output by the first shared branch and by the second shared branch to obtain the drivable area detection information.
In a specific embodiment, this processing specifically comprises:
the drivable area detection branch layer processes the final feature extraction information output by the splicing layer to obtain its first intermediate feature extraction information, and splices this with the third intermediate feature extraction information of the first shared branch and of the second shared branch to obtain a first splicing result; it processes the first splicing result to obtain its second intermediate feature extraction information, and splices this with the second intermediate feature extraction information of the first shared branch and of the second shared branch to obtain a second splicing result; it processes the second splicing result to obtain its third intermediate feature extraction information, and splices this with the first intermediate feature extraction information of the first shared branch and of the second shared branch to obtain a third splicing result; and it processes the third splicing result to obtain the drivable area detection information.
In a specific embodiment, analyzing and fusing the parking space detection information and the drivable area detection information to obtain fused information of the parking space and the drivable area specifically comprises:
determining a parking space corner center point information set and an empty parking space center point information set corresponding to each camera according to the parking space detection information output by each detection frame of each image at the same moment, and obtaining a pixel point set corresponding to each camera according to the pixel point information output by each detection frame of each image at the same moment, wherein the parking space detection information output by a detection frame comprises the probability value that the frame contains a parking space corner center point and the coordinates of that point, and the probability value that the frame contains an empty parking space center point and the coordinates of that point; the parking space corner center point information comprises the corner center point coordinates and the probability that the point exists, the empty parking space center point information comprises the empty parking space center point coordinates and the probability that the point exists, and the pixel point information comprises the pixel point coordinates and the probability value of the pixel point;
processing the parking space corner center point information set, the empty parking space center point information set and the drivable area pixel point set corresponding to each camera to obtain, correspondingly, a final parking space corner center point information set, a final empty parking space center point information set and the final drivable area information;
and jointly mapping the parking space corner center points in the final parking space corner center point information set and the empty parking space center points in the final empty parking space center point information set onto the drivable area, generating a visual detection result of the parking space and the drivable area.
In a specific embodiment, determining the parking space corner center point information set corresponding to each camera according to the parking space detection information output by each detection frame of each image at the same moment specifically comprises:
comparing the probability value of the parking space corner center point output by each detection frame of each image with a set parking space corner center point probability value; if the probability value output by the detection frame is greater than the set value, converting the corner center point coordinates output by the detection frame into coordinates in the image coordinate system, and storing the probability value and the converted coordinates into the parking space corner center point information set corresponding to the camera that captured the image, thereby obtaining the left camera, right camera, front camera and rear camera parking space corner center point information sets.
In a specific embodiment, determining the empty parking space center point information set corresponding to each camera according to the parking space detection information output by each detection frame of each image at the same moment specifically comprises:
comparing the probability value of the empty parking space center point output by each detection frame of each image with a set empty parking space center point probability value; if the probability value output by the detection frame is greater than the set value, converting the empty parking space center point coordinates output by the detection frame into coordinates in the image coordinate system, and storing the probability value and the converted coordinates into the empty parking space center point information set corresponding to the camera that captured the image, thereby obtaining the left camera, right camera, front camera and rear camera empty parking space center point information sets.
In a specific embodiment, obtaining the pixel point set corresponding to each camera according to the pixel point information output by each detection frame of each image at the same moment specifically comprises:
comparing the probability value of the pixel point output by each detection frame of each image with a set pixel point probability value; if the probability value output by the detection frame is greater than the set value, converting the corresponding pixel point coordinates into the image coordinate system, and storing the probability value and the converted coordinates into the pixel point set corresponding to the camera that captured the image, thereby obtaining the left camera, right camera, front camera and rear camera pixel point sets.
In a specific embodiment, processing the parking space corner center point information set corresponding to each camera to obtain the final parking space corner center point information set specifically comprises:
projecting the parking space corner center points in the information set corresponding to each camera into the vehicle coordinate system, obtaining the corresponding information set of each camera in the vehicle coordinate system;
for each pair of cameras with adjacent installation positions, judging whether their two parking space corner center point information sets contain the same corner center point; if so, obtaining the existence probability values of that point from the two sets, comparing them, keeping the position coordinates and probability value of the point in the set with the larger probability value, and deleting them from the set with the smaller probability value; repeating until the same-point judgment has been completed for the information sets of all adjacent cameras, thereby obtaining the final parking space corner center point information sets of the left camera, right camera, front camera and rear camera.
In a specific embodiment, processing the empty parking space center point information set corresponding to each camera to obtain the final empty parking space center point information set specifically comprises:
projecting the empty parking space center points in the information set corresponding to each camera into the vehicle coordinate system, obtaining the corresponding information set of each camera in the vehicle coordinate system;
for each pair of cameras with adjacent installation positions, judging whether their two empty parking space center point information sets contain the same center point; if so, obtaining the existence probability values of that point from the two sets, comparing them, keeping the position coordinates and probability value of the point in the set with the larger probability value, and deleting them from the set with the smaller probability value; repeating until the same-point judgment has been completed for the information sets of all adjacent cameras, thereby obtaining the final empty parking space center point information sets of the left camera, right camera, front camera and rear camera.
In a specific embodiment, processing the pixel point set corresponding to each camera to obtain the final drivable area information specifically comprises:
converting the pixel points in the pixel point set corresponding to each camera into the camera coordinate system, obtaining the converted pixel point set of each camera;
generating the drivable area corresponding to each camera from its converted pixel point set;
and superposing the drivable areas corresponding to the cameras to obtain the final drivable area, wherein, during superposition, if an overlap region exists, the probability values of each pixel point in the overlap region are obtained from the pixel point sets of the adjacent cameras; if any probability value of the pixel point is greater than the set pixel point probability value, the pixel point is kept, otherwise it is deleted.
A second aspect of the present invention provides a vehicle parking system comprising:
the acquisition unit is used for acquiring images collected by a front camera, a rear camera, a left camera and a right camera of the vehicle in real time;
the neural network processing unit is used for receiving and processing the images to obtain parking space detection information and drivable area detection information respectively, and comprises a shared layer, a parking space detection branch layer and a drivable area detection branch layer, wherein the shared layer is used for performing feature extraction on the images and outputting at least one piece of intermediate feature extraction information together with final feature extraction information, the parking space detection branch layer is used for processing the final feature extraction information to obtain the parking space detection information, and the drivable area detection branch layer is used for processing the final feature extraction information and the at least one piece of intermediate feature extraction information to obtain the drivable area detection information;
the analysis and fusion unit is used for analyzing and fusing the parking space detection information and the drivable area detection information to obtain the fusion information of the parking space and the drivable area;
and the control unit is used for generating a control instruction according to the fusion information, sending the control instruction to the vehicle parking system, and controlling the vehicle to park according to the fusion information.
In a specific embodiment, the shared layer comprises a first shared branch, a second shared branch and a splicing layer, the two shared branches being connected in parallel. Each shared branch processes the images and outputs at least one piece of intermediate feature extraction information together with its final feature extraction information; the splicing layer splices the final feature extraction information of the two shared branches and outputs the final feature extraction information of the shared layer.
The parking space detection branch layer is used for processing the final feature extraction information output by the splicing layer to obtain the parking space detection information.
The drivable area detection branch layer is used for processing the final feature extraction information output by the splicing layer, at least one piece of intermediate feature extraction information output by the first shared branch and at least one piece of intermediate feature extraction information output by the second shared branch to obtain the drivable area detection information.
The invention also provides a computer-readable storage medium on which a computer program is stored which, when executed by a processor, carries out the steps of the aforementioned method.
The embodiments of the invention have the following beneficial effects. The vehicle parking method realizes parking space detection and drivable area recognition with a deep-learning convolutional network, which withstands the influence of illumination changes, weather changes and the like on the perception results far better than traditional features. The deep-learning neural network model structure is deeply optimized so that a single model inference pass yields both parking space detection and drivable area recognition, which greatly reduces the computational power that model inference demands of the computing platform and improves the real-time performance of the algorithm while preserving the accuracy of visual perception. In addition, by exploiting the characteristics of the vehicle-mounted panoramic cameras, namely the overlapping fields of view of different cameras and the visual differences between them, the inference results of the multiple cameras are fused, improving the accuracy and stability of the algorithm.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart illustrating a method for parking a vehicle according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the specific structure of the trained neural network model of a vehicle parking method according to an embodiment of the present invention;
fig. 3(a) is a visualization corresponding to the parking space detection information output by the parking space detection branch of the vehicle parking method according to an embodiment of the present invention, before analysis;
fig. 3(b) is a visualization corresponding to the drivable area information output by the drivable area detection branch of the vehicle parking method according to an embodiment of the present invention, before analysis;
fig. 3(c) is a visualization corresponding to the parking space detection information output by the parking space detection branch of the vehicle parking method according to an embodiment of the present invention, after analysis;
fig. 3(d) is a visualization corresponding to the drivable area information output by the drivable area detection branch of the vehicle parking method according to an embodiment of the present invention, after analysis;
FIG. 4(a) is a top view of a final drivable region of a method for parking a vehicle in accordance with an embodiment of the present invention;
fig. 4(b) is a final parking space detection visualization diagram of a vehicle parking method according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments refers to the accompanying drawings, which are included to illustrate specific embodiments in which the invention may be practiced.
Referring to fig. 1, a parking method for a vehicle according to an embodiment of the present invention includes:
and S1, acquiring images collected by the front camera, the rear camera, the left camera and the right camera of the vehicle in real time.
Specifically, 4 video streams are acquired through 180-degree wide-angle cameras installed at the front, rear, left and right of the vehicle, and each image is normalized in size by scaling it to 640 × 480.
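A minimal sketch of this acquisition and normalization step, assuming OpenCV and hypothetical camera device indices (the actual indices depend on the vehicle wiring):

```python
import cv2

# Hypothetical device indices for the front, rear, left and right cameras.
CAMERAS = {"front": 0, "back": 1, "left": 2, "right": 3}
TARGET_W, TARGET_H = 640, 480  # input size expected by the model

def grab_normalized_frames(captures):
    """Read one frame per camera and scale it to 640 x 480."""
    frames = {}
    for cam, cap in captures.items():
        ok, frame = cap.read()
        if not ok:
            continue  # skip cameras that failed this cycle
        frames[cam] = cv2.resize(frame, (TARGET_W, TARGET_H))
    return frames

captures = {cam: cv2.VideoCapture(idx) for cam, idx in CAMERAS.items()}
frames = grab_normalized_frames(captures)
```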
S2, inputting the images into a trained neural network model for processing to obtain parking space detection information and drivable area detection information respectively, wherein the trained neural network model comprises a shared layer, a parking space detection branch layer and a drivable area detection branch layer; the shared layer is used for performing feature extraction on the images and outputting at least one piece of intermediate feature extraction information together with final feature extraction information, the parking space detection branch layer is used for processing the final feature extraction information to obtain the parking space detection information, and the drivable area detection branch layer is used for processing the final feature extraction information and the at least one piece of intermediate feature extraction information to obtain the drivable area detection information.
Specifically, the 4 acquired video images are input into the trained neural network model in a set order, and the model processes them in sequence to obtain the parking space detection information and the drivable area detection information corresponding to each image.
Specifically, the trained neural network model comprises a shared layer, a parking space detection branch layer and a drivable area detection branch layer. The shared layer performs feature extraction on the image input into the neural network model and outputs final feature extraction information and at least one piece of intermediate feature extraction information; the parking space detection branch layer processes the final feature extraction information to obtain the parking space detection information; and the drivable area detection branch layer processes the final feature extraction information and the at least one piece of intermediate feature extraction information to obtain the drivable area detection information.
Specifically, the shared layer comprises a first shared branch, a second shared branch and a splicing layer, the two shared branches being connected in parallel. Each shared branch processes the image input into the neural network and outputs at least one piece of intermediate feature extraction information together with its final feature extraction information; the splicing layer splices the final feature extraction information of the two shared branches and outputs the final feature extraction information of the shared layer. The parking space detection branch layer processes the final feature extraction information output by the splicing layer to obtain the parking space detection information, and the drivable area detection branch layer processes the final feature extraction information output by the splicing layer, at least one piece of intermediate feature extraction information output by the first shared branch and at least one piece of intermediate feature extraction information output by the second shared branch to obtain the drivable area detection information.
Specifically, each shared branch comprises a plurality of convolutional layers and, in the order of the convolution operations, outputs first, second and third intermediate feature extraction information followed by its final feature extraction information. The drivable area detection branch layer processes the final feature extraction information output by the splicing layer to obtain its first intermediate feature extraction information, and splices this with the third intermediate feature extraction information of the first and second shared branches to obtain a first splicing result; it processes the first splicing result to obtain its second intermediate feature extraction information, and splices this with the second intermediate feature extraction information of the first and second shared branches to obtain a second splicing result; it processes the second splicing result to obtain its third intermediate feature extraction information, and splices this with the first intermediate feature extraction information of the first and second shared branches to obtain a third splicing result; and it processes the third splicing result to obtain the drivable area detection information.
It should be noted that the first shared branch and the second shared branch adopt convolution kernels with different dilation rates, which gives the subsequent parking space detection branch and drivable area detection branch different receptive fields.
As shown in fig. 2, the input image scale of the trained neural network model is 640 × 480 × 3. The model comprises the shared layer, the parking space detection branch and the drivable area detection branch; the shared layer comprises the first shared branch and the second shared branch, each with 10 convolutional layers, and every convolutional layer of both branches uses the ReLU activation function. The second, fourth and sixth convolutional layers of each shared branch are followed by a MaxPool downsampling layer, and each branch outputs its first, second and third intermediate feature extraction information after the respective MaxPool. The drivable area detection branch comprises 9 convolutional layers, 3 upsampling layers (UpSample) and 3 splicing layers (Cat); the 3 upsampling layers follow its 2nd, 4th and 6th convolutional layers, and the 3 splicing layers follow the 3 upsampling layers. In the figure, route1, route2 and route3 are labels; identical labels are directly connected. The convolution kernels of the first shared branch are ordinary 3 × 3 kernels, so the first shared branch extracts small-scale features; the convolution kernels of the second shared branch are 3 × 3 kernels with a dilation rate of 5, so the second shared branch extracts large-scale features. The two shared branches thus give the subsequent parking space detection branch and drivable area detection branch different receptive fields, allowing richer features to be extracted. The output scale of the parking space detection branch is 40 × 30 × 6, which can be written as 40 × 30 × (cre1, x1, y1, cre2, x2, y2): the output divides the original image into 30 rows and 40 columns of equally sized rectangular blocks, and the prediction for the rectangular block in row i and column j of the original image is the 6-element output (cre1, x1, y1, cre2, x2, y2) in row i and column j of the parking space recognition model's inference result, where cre1 and cre2 are the probabilities that the block contains a parking space corner center point and an empty parking space center point respectively, and x1, y1, x2, y2 are the horizontal and vertical coordinates of the parking space corner center point and of the empty parking space center point, normalized with respect to the current block.
The output scale of the drivable area detection branch is 320 × 240 × 1; that is, the image is divided into 320 × 240 detection frames, and each detection frame outputs the confidence of the corresponding pixel point.
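A plausible PyTorch sketch of the fig. 2 topology follows. Channel widths, the sigmoid output activations, and the placement of a fourth downsampling step (needed to reach the 40 × 30 grid from a 640 × 480 input) are not specified in the text and are assumed here:

```python
import torch
import torch.nn as nn

def conv(cin, cout, dilation=1):
    # 3x3 convolution + ReLU; padding keeps the spatial size unchanged
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=dilation, dilation=dilation),
        nn.ReLU(inplace=True),
    )

class SharedBranch(nn.Module):
    """10 conv layers with MaxPool after convs 2, 4 and 6 (and, as an
    assumption, after conv 8 so the final feature sits at 1/16 resolution,
    matching the 40 x 30 parking grid). Emits an intermediate feature after
    each of the first three pools."""
    def __init__(self, dilation=1, w=32):  # channel width w is an assumption
        super().__init__()
        self.s1 = nn.Sequential(conv(3, w, dilation), conv(w, w, dilation), nn.MaxPool2d(2))
        self.s2 = nn.Sequential(conv(w, w, dilation), conv(w, w, dilation), nn.MaxPool2d(2))
        self.s3 = nn.Sequential(conv(w, w, dilation), conv(w, w, dilation), nn.MaxPool2d(2))
        self.s4 = nn.Sequential(conv(w, w, dilation), conv(w, w, dilation),
                                nn.MaxPool2d(2), conv(w, w, dilation), conv(w, w, dilation))
    def forward(self, x):
        f1 = self.s1(x)    # 1/2  : first intermediate feature
        f2 = self.s2(f1)   # 1/4  : second intermediate feature
        f3 = self.s3(f2)   # 1/8  : third intermediate feature
        out = self.s4(f3)  # 1/16 : final feature of this branch
        return f1, f2, f3, out

class ParkingNet(nn.Module):
    def __init__(self, w=32):
        super().__init__()
        self.branch_a = SharedBranch(dilation=1, w=w)  # small receptive field
        self.branch_b = SharedBranch(dilation=5, w=w)  # dilated, large receptive field
        # Parking head: 6 channels = (cre1, x1, y1, cre2, x2, y2) per cell;
        # sigmoid is assumed since all six outputs lie in [0, 1].
        self.slot_head = nn.Sequential(conv(2 * w, w), nn.Conv2d(w, 6, 1), nn.Sigmoid())
        # Drivable head: conv blocks with an upsample + skip concat per scale.
        up = lambda: nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.d1, self.u1 = nn.Sequential(conv(2 * w, w), conv(w, w)), up()
        self.d2, self.u2 = nn.Sequential(conv(3 * w, w), conv(w, w)), up()
        self.d3, self.u3 = nn.Sequential(conv(3 * w, w), conv(w, w)), up()
        self.d4 = nn.Sequential(conv(3 * w, w), conv(w, w), nn.Conv2d(w, 1, 1), nn.Sigmoid())
    def forward(self, x):
        a1, a2, a3, af = self.branch_a(x)
        b1, b2, b3, bf = self.branch_b(x)
        shared = torch.cat([af, bf], dim=1)              # splicing layer, 1/16
        slots = self.slot_head(shared)                   # 40 x 30 x 6 grid
        d = self.u1(self.d1(shared))                     # -> 1/8
        d = self.u2(self.d2(torch.cat([d, a3, b3], 1)))  # -> 1/4
        d = self.u3(self.d3(torch.cat([d, a2, b2], 1)))  # -> 1/2
        drivable = self.d4(torch.cat([d, a1, b1], 1))    # 320 x 240 x 1
        return slots, drivable

net = ParkingNet()
slots, drivable = net(torch.randn(1, 3, 480, 640))  # NCHW for a 640 x 480 frame
print(slots.shape, drivable.shape)  # (1, 6, 30, 40) and (1, 1, 240, 320)
```

The dilation-5 branch widens the receptive field without extra downsampling, which is what lets the same shared features serve both the coarse parking grid and the finer drivable mask.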
S3, analyzing and fusing the parking space detection information and the drivable area detection information to obtain the fused information of the parking space and the drivable area.
In one embodiment, step S3 specifically includes:
S31, determining the parking space corner center point information set and the empty parking space center point information set corresponding to each camera according to the parking space detection information output by each detection frame of each image at the same moment, and obtaining the pixel point set corresponding to each camera according to the pixel point information output by each detection frame of each image at the same moment, wherein the parking space detection information output by a detection frame comprises the probability value that the frame contains a parking space corner center point and the coordinates of that point, and the probability value that the frame contains an empty parking space center point and the coordinates of that point; the parking space corner center point information comprises the corner center point coordinates and the probability that the point exists, the empty parking space center point information comprises the empty parking space center point coordinates and the probability that the point exists, and the pixel point information comprises the pixel point coordinates and the probability value of the pixel point.
Specifically, the probability value of the parking space corner center point output by each detection frame of each image is compared with the set parking space corner center point probability value; if the output probability value is greater than the set value, the corner center point coordinates output by the detection frame are converted into coordinates in the image coordinate system, and the probability value and the converted coordinates are stored into the parking space corner center point information set corresponding to the camera that captured the image, giving the left camera, right camera, front camera and rear camera parking space corner center point information sets.
Specifically, the probability value of the empty parking space center point output by each detection frame of each image is compared with the set empty parking space center point probability value; if the output probability value is greater than the set value, the empty parking space center point coordinates output by the detection frame are converted into coordinates in the image coordinate system, and the probability value and the converted coordinates are stored into the empty parking space center point information set corresponding to the camera that captured the image, giving the left camera, right camera, front camera and rear camera empty parking space center point information sets.
For example, assume the current decision block is the rectangular block in row i and column j. If cre1(i, j) > 0.9 holds, the current block is judged to contain a parking space corner center point; otherwise it does not. Similarly, if cre2(i, j) > 0.9 holds, the current block is judged to contain an empty parking space center point; otherwise it does not. Since the coordinate values x1, y1, x2, y2 output by the parking space detection branch are normalized with respect to the current block, they must be converted into coordinates in the corresponding image coordinate system in the analysis stage. With zero-based row and column indices, the conversion is of the form:
x' = (j + x) × w / 40, y' = (i + y) × h / 30
where x and y are the horizontal and vertical coordinates of the parking space corner center point or empty parking space center point within the rectangular block before conversion, x' and y' are its horizontal and vertical coordinates in the image coordinate system after conversion, w and h are the width and height of the input image, and i and j are the row and column numbers of the current block. The converted parking space corner center point coordinates and empty parking space center point coordinates are stored in the form (x, y, cre) in Set_corner,cam and Set_place,cam respectively, where cam is the camera index, taking the values front, back, left and right for the front, rear, left and right cameras; x and y are the coordinate values in the corresponding image coordinate system; and cre is the confidence of the current parking space corner center point or empty parking space center point, ranging from 0 to 1, with values closer to 1 indicating higher confidence. The parsed inference results of the parking space detection branch are visualized in fig. 3(a) and fig. 3(c).
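A sketch of this parsing step; the block-to-image conversion uses the formula reconstructed above, with zero-based row and column indices as an assumption:

```python
import numpy as np

GRID_COLS, GRID_ROWS = 40, 30   # 40 columns x 30 rows of rectangular blocks
CRE_THRESH = 0.9                # threshold used in the text

def decode_slot_grid(grid, img_w=640, img_h=480):
    """Parse a (6, 30, 40) parking branch output into image-coordinate
    (x, y, cre) lists for corner center points and empty-space center
    points, i.e. the per-camera Set_corner / Set_place contents."""
    cre1, x1, y1, cre2, x2, y2 = grid          # each channel is ROWS x COLS
    corners, places = [], []
    for i in range(GRID_ROWS):
        for j in range(GRID_COLS):
            if cre1[i, j] > CRE_THRESH:        # block holds a corner center point
                corners.append(((j + x1[i, j]) * img_w / GRID_COLS,
                                (i + y1[i, j]) * img_h / GRID_ROWS,
                                float(cre1[i, j])))
            if cre2[i, j] > CRE_THRESH:        # block holds an empty-space center
                places.append(((j + x2[i, j]) * img_w / GRID_COLS,
                               (i + y2[i, j]) * img_h / GRID_ROWS,
                               float(cre2[i, j])))
    return corners, places

corners, places = decode_slot_grid(np.random.rand(6, GRID_ROWS, GRID_COLS))
```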
Specifically, the probability value of the pixel point output by each detection frame of each image is compared with the set pixel point probability value; if the output probability value is greater than the set value, the pixel point coordinates corresponding to the detection frame are converted into the image coordinate system, and the probability value and the converted coordinates are stored into the pixel point set corresponding to the camera that captured the image, giving the left camera, right camera, front camera and rear camera pixel point sets.
For example, the output scale of the drivable area detection branch is 320 × 240 × 1, whose length and width are 1/2 of those of the model input picture. The output of the drivable area branch is first upsampled, i.e. its length and width are doubled, yielding an inference result of the same size as the input image. Each pixel point of this result carries the drivability cre_d of the corresponding pixel of the original image; cre_d lies between 0 and 1, and the larger cre_d is, the higher the probability that the current pixel point is drivable. Specifically, pixel points satisfying cre_d > 0.9 are judged drivable. The converted images are shown in fig. 3(b) and fig. 3(d).
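A minimal sketch of this drivable-area parsing step, assuming the confidence map arrives as a 240 × 320 NumPy array and OpenCV bilinear interpolation stands in for the upsampling:

```python
import numpy as np
import cv2

CRE_D_THRESH = 0.9  # cre_d > 0.9 counts as drivable, per the text

def decode_drivable(conf_map, img_w=640, img_h=480):
    """Double the 320 x 240 confidence map back to the input size and
    threshold it into a boolean drivable-area mask."""
    full = cv2.resize(conf_map.astype(np.float32), (img_w, img_h),
                      interpolation=cv2.INTER_LINEAR)
    return full > CRE_D_THRESH

mask = decode_drivable(np.random.rand(240, 320))  # e.g. one camera's output
```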
S32, processing the parking space corner center point information set, the empty parking space center point information set and the drivable area pixel point set corresponding to each camera to obtain, correspondingly, the final parking space corner center point information set, the final empty parking space center point information set and the final drivable area information.
Specifically, the parking space corner center points in the information set corresponding to each camera are projected into the vehicle coordinate system, giving the corresponding information set of each camera in the vehicle coordinate system. For each pair of cameras with adjacent installation positions, it is judged whether their two parking space corner center point information sets contain the same corner center point; if so, the existence probability values of that point are obtained from the two sets and compared, the position coordinates and probability value of the point are kept in the set with the larger probability value, and they are deleted from the set with the smaller probability value. This is repeated until the same-point judgment has been completed for the information sets of all adjacent cameras, giving the final parking space corner center point information sets of the left camera, right camera, front camera and rear camera.
Specifically, the empty parking space center points in the information set corresponding to each camera are projected into the vehicle coordinate system, giving the corresponding information set of each camera in the vehicle coordinate system. For each pair of cameras with adjacent installation positions, it is judged whether their two empty parking space center point information sets contain the same center point; if so, the existence probability values of that point are obtained from the two sets and compared, the position coordinates and probability value of the point are kept in the set with the larger probability value, and they are deleted from the set with the smaller probability value. This is repeated until the same-point judgment has been completed for the information sets of all adjacent cameras, giving the final empty parking space center point information sets of the left camera, right camera, front camera and rear camera.
As an example, the recognition results in Set_corner,cam and Set_place,cam are mapped into the vehicle coordinate system. The origin of the vehicle coordinate system is the projection of the center of the vehicle's rear axle onto the ground; the x-axis points from the rear axle center toward the front axle center, the y-axis is perpendicular to the x-axis and points horizontally to the right, and the coordinate unit is the centimeter. The mapped coordinate sets are Set'_corner,cam and Set'_place,cam respectively. Given the installation positions of the cameras, overlap regions exist between the front and right cameras, the right and rear cameras, the rear and left cameras, and the left and front cameras; in each corresponding pair of coordinate sets it is therefore judged whether the same parking space corner center point has been recognized twice. Two points are judged to be the same when
sqrt(pow(x1 - x2, 2) + pow(y1 - y2, 2)) < thresh
where x1 and x2 are the horizontal coordinates of the two corner center points, y1 and y2 are their vertical coordinates, and thresh is a distance threshold, preferably 5. When two points are judged to be the same parking space corner center point, the accuracy of the two detections is weighed and the coordinate value with the higher confidence is selected as the coordinate value of the current corner center point and added to the final corner center point detection result set. The final empty parking space center point detection result set is generated with the same strategy.
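A sketch of this duplicate-elimination rule, assuming the points are already projected into the vehicle coordinate system; the adjacency pairs follow the overlap relations named above:

```python
import math

THRESH = 5.0  # distance threshold in vehicle coordinates (centimeters)

# camera pairs whose fields of view overlap, per the installation layout
ADJACENT = [("front", "right"), ("right", "back"),
            ("back", "left"), ("left", "front")]

def merge_adjacent(points_by_cam, thresh=THRESH):
    """points_by_cam: camera name -> list of (x, y, cre) in the vehicle
    coordinate system. Detections from adjacent cameras closer than
    `thresh` are treated as the same point; only the higher-confidence
    one is kept."""
    for cam_a, cam_b in ADJACENT:
        kept_a, kept_b = points_by_cam[cam_a], points_by_cam[cam_b]
        for pa in list(kept_a):
            for pb in list(kept_b):
                if math.hypot(pa[0] - pb[0], pa[1] - pb[1]) < thresh:
                    if pa[2] >= pb[2]:
                        if pb in kept_b:
                            kept_b.remove(pb)   # drop the weaker duplicate
                    else:
                        kept_a.remove(pa)
                        break                   # pa is gone; move on
    return points_by_cam

final = merge_adjacent({"front": [(100.0, 50.0, 0.95)],
                        "right": [(101.0, 48.0, 0.92)],
                        "back": [], "left": []})
```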
Specifically, the pixel points in the pixel point set corresponding to each camera are converted into the camera coordinate system, giving the converted pixel point set of each camera; the drivable area corresponding to each camera is generated from its converted pixel point set; and the drivable areas corresponding to the cameras are superposed to obtain the final drivable area. During superposition, if an overlap region exists, the probability values of each pixel point in the overlap region are obtained from the pixel point sets of the adjacent cameras; if any probability value of the pixel point is greater than the set pixel point probability value, the pixel point is kept, otherwise it is deleted. The final drivable area is shown in fig. 4(a).
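A minimal sketch of the superposition rule, assuming each camera's confidence map has already been warped onto a common top-view grid (the warp itself requires camera calibration, which the text does not detail):

```python
import numpy as np

PIX_THRESH = 0.9  # the set pixel point probability value

def fuse_drivable(topview_conf_maps):
    """topview_conf_maps: per-camera confidence maps on a shared top-view
    grid (H x W arrays, 0 where a camera does not see the cell). A cell of
    the final drivable area is kept if any camera's probability there
    exceeds the threshold, which also settles the overlap regions."""
    stacked = np.stack(topview_conf_maps)      # cams x H x W
    return (stacked > PIX_THRESH).any(axis=0)  # boolean final drivable area

final_area = fuse_drivable([np.random.rand(400, 400) for _ in range(4)])
```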
S33, jointly mapping the parking space corner center points in the final parking space corner center point information set and the empty parking space center points in the final empty parking space center point information set onto the drivable area, generating the visual detection result of the parking space and the drivable area.
The final parking space corner center point information sets of the left, right, front and rear cameras and the final empty parking space center point information sets of the left, right, front and rear cameras are projected onto the drivable area to form the visual detection result, shown in fig. 4(b).
S4, generating a control instruction according to the fusion information, sending the control instruction to the vehicle parking system, and controlling the vehicle to park according to the fusion information.
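Putting S1 to S4 together, a schematic cycle composing the sketches above; analyse_and_fuse, plan_parking_command and the parking_system interface are hypothetical stand-ins, since the text does not specify the planner or the command interface:

```python
import torch

def parking_cycle(captures, net, parking_system):
    """One pass through steps S1-S4, composing the earlier sketches."""
    frames = grab_normalized_frames(captures)                        # S1
    per_camera = {}
    for cam, img in frames.items():                                  # S2
        x = torch.from_numpy(img).permute(2, 0, 1).float()[None] / 255.0
        with torch.no_grad():
            slots, drivable = net(x)
        per_camera[cam] = (
            decode_slot_grid(slots[0].numpy()),
            decode_drivable(drivable[0, 0].numpy()),
        )
    fused = analyse_and_fuse(per_camera)                             # S3
    command = plan_parking_command(fused)                            # S4
    parking_system.send(command)
```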
With this vehicle parking method, parking space detection and drivable area recognition are realized by a deep-learning convolutional network, which withstands the influence of illumination changes, weather changes and the like on the perception results far better than traditional features. The deep-learning model structure is deeply optimized so that a single model inference pass yields both parking space detection and drivable area recognition, greatly reducing the computational power that model inference demands of the computing platform and improving the real-time performance of the algorithm while preserving visual perception accuracy. In addition, by combining the installation characteristics of the vehicle-mounted panoramic cameras and exploiting the overlapping fields of view of different cameras and the visual differences between them, the inference results of the multiple cameras are fused, improving the accuracy and stability of the algorithm.
Based on the first embodiment of the present invention, a second embodiment of the present invention provides a vehicle parking system comprising an acquisition unit, a neural network processing unit, an analysis and fusion unit and a control unit. The acquisition unit is used for acquiring, in real time, images collected by the front camera, rear camera, left camera and right camera of a vehicle. The neural network processing unit is used for receiving and processing the images to obtain parking space detection information and travelable area detection information respectively; it comprises a shared layer, a parking space detection branch layer and a travelable area detection branch layer, wherein the shared layer is used for performing feature extraction on the images and outputting at least one piece of intermediate feature extraction information and final feature extraction information, the parking space detection branch layer is used for processing the final feature extraction information to obtain the parking space detection information, and the travelable area detection branch layer is used for processing the final feature extraction information and the at least one piece of intermediate feature extraction information to obtain the travelable area detection information. The analysis and fusion unit is used for analyzing and fusing the parking space detection information and the travelable area detection information to obtain fusion information of the parking space and the travelable area, and the control unit is used for generating a control instruction according to the fusion information, sending the control instruction to the vehicle parking system, and controlling the vehicle to park according to the fusion information.
In a specific embodiment, the shared layer comprises a first shared branch, a second shared branch and a splicing layer, the first shared branch and the second shared branch being connected in parallel. The first shared branch is used for processing the image and outputting at least one piece of intermediate feature extraction information and the final feature extraction information of the first shared branch, and the second shared branch is used for processing the image and outputting at least one piece of intermediate feature extraction information and the final feature extraction information of the second shared branch. The splicing layer is used for splicing the final feature extraction information of the first shared branch with that of the second shared branch and outputting the final feature extraction information of the shared layer. The parking space detection branch layer is used for processing the final feature extraction information output by the splicing layer to obtain the parking space detection information, and the travelable area detection branch layer is used for processing the final feature extraction information output by the splicing layer, at least one piece of intermediate feature extraction information output by the first shared branch and at least one piece of intermediate feature extraction information output by the second shared branch to obtain the travelable area detection information.
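As a rough picture of this topology, the following PyTorch sketch builds the two parallel shared branches and the splicing (concatenation) layer; every channel count, depth and module name is an assumption, since the embodiment does not fix concrete layer parameters:

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    """One downsampling stage: 3x3 stride-2 convolution + BN + ReLU."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class SharedLayer(nn.Module):
    """Two parallel branches whose final features are spliced (concatenated).

    Each branch also exposes its three intermediate feature maps, which a
    travelable area detection head would later splice back in.
    """

    def __init__(self):
        super().__init__()
        channels = [3, 16, 32, 64, 128]          # invented for illustration
        self.branch1 = nn.ModuleList(
            conv_block(cin, cout) for cin, cout in zip(channels, channels[1:]))
        self.branch2 = nn.ModuleList(
            conv_block(cin, cout) for cin, cout in zip(channels, channels[1:]))

    def forward(self, x):
        feats1, feats2 = [], []
        y1 = y2 = x
        for m1, m2 in zip(self.branch1, self.branch2):
            y1, y2 = m1(y1), m2(y2)
            feats1.append(y1)
            feats2.append(y2)
        # splicing layer: concatenate the two branches' final features
        fused = torch.cat([feats1[-1], feats2[-1]], dim=1)
        # first three stages serve as intermediate feature extraction information
        return fused, feats1[:-1], feats2[:-1]
```

A parking space detection head would then consume only `fused`, while a travelable area detection head would additionally upsample and splice in the entries of `feats1` and `feats2`, mirroring the decoder structure recited in claim 4.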
Based on the first embodiment of the present invention, the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the foregoing method.
Specifically, the computer-readable storage medium may include any entity or device capable of carrying the computer program code, such as a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, or a software distribution medium.
For the working principle and the advantageous effects of the present embodiment, please refer to the description of the first embodiment of the present invention, which is not repeated herein.
The above disclosure describes only preferred embodiments of the present invention and is not intended to limit the scope of the invention, which is defined by the appended claims.
Claims (14)
1. A method of parking a vehicle, comprising:
acquiring images collected by a front camera, a rear camera, a left camera and a right camera of a vehicle in real time;
inputting the images into a trained neural network model for processing to obtain parking space detection information and travelable area detection information respectively, wherein the trained neural network model comprises a shared layer, a parking space detection branch layer and a travelable area detection branch layer, the shared layer is used for performing feature extraction on the images and outputting at least one piece of intermediate feature extraction information and final feature extraction information, the parking space detection branch layer is used for processing the final feature extraction information to obtain the parking space detection information, and the travelable area detection branch layer is used for processing the final feature extraction information and the at least one piece of intermediate feature extraction information to obtain the travelable area detection information;
analyzing and fusing the parking space detection information and the travelable area detection information to obtain the fused information of the parking space and the travelable area;
and generating a control instruction according to the fusion information, sending the control instruction to a vehicle parking system, and controlling the vehicle to park according to the fusion information.
2. The method of claim 1, wherein:
the shared layer comprises a first shared branch, a second shared branch and a splicing layer, the first shared branch and the second shared branch are connected in parallel, the first shared branch is used for processing the image and outputting at least one piece of intermediate feature extraction information and final feature extraction information of the first shared branch, the second shared branch is used for processing the image and outputting at least one piece of intermediate feature extraction information and final feature extraction information of the second shared branch, and the splicing layer is used for splicing the final feature extraction information of the first shared branch and the final feature extraction information of the second shared branch and outputting the final feature extraction information of the shared layer;
the parking space detection branch layer is used for processing the final feature extraction information output by the splicing layer to obtain the parking space detection information;
the travelable area detection branch layer is used for processing the final feature extraction information output by the splicing layer, at least one piece of intermediate feature extraction information output by the first shared branch and at least one piece of intermediate feature extraction information output by the second shared branch to obtain the travelable area detection information.
3. The method of claim 2, wherein:
the first shared branch comprises a plurality of convolution layers and, in the order of the convolution layer operations, sequentially outputs first intermediate feature extraction information, second intermediate feature extraction information, third intermediate feature extraction information and the final feature extraction information of the first shared branch; the second shared branch comprises a plurality of convolution layers and, in the order of the convolution layer operations, sequentially outputs first intermediate feature extraction information, second intermediate feature extraction information, third intermediate feature extraction information and the final feature extraction information of the second shared branch;
and the travelable area detection branch layer processes the final feature extraction information output by the splicing layer, the first to third intermediate feature extraction information output by the first shared branch and the first to third intermediate feature extraction information output by the second shared branch to obtain the travelable area detection information.
4. The method of claim 3, wherein the step of the travelable area detection branch layer processing the final feature extraction information output by the splicing layer, the first to third intermediate feature extraction information output by the first shared branch and the first to third intermediate feature extraction information output by the second shared branch to obtain the travelable area detection information specifically comprises:
the travelable area detection branch layer processes the final feature extraction information output by the splicing layer to obtain first intermediate feature extraction information of the travelable area detection branch layer; splices the first intermediate feature extraction information of the travelable area detection branch layer, the third intermediate feature extraction information of the first shared branch and the third intermediate feature extraction information of the second shared branch to obtain a first splicing result; processes the first splicing result to obtain second intermediate feature extraction information of the travelable area detection branch layer; splices the second intermediate feature extraction information of the travelable area detection branch layer, the second intermediate feature extraction information of the first shared branch and the second intermediate feature extraction information of the second shared branch to obtain a second splicing result; processes the second splicing result to obtain third intermediate feature extraction information of the travelable area detection branch layer; splices the third intermediate feature extraction information of the travelable area detection branch layer, the first intermediate feature extraction information of the first shared branch and the first intermediate feature extraction information of the second shared branch to obtain a third splicing result; and processes the third splicing result to obtain the travelable area detection information.
5. The method according to claim 1, wherein the analyzing and fusing the parking space detection information and the travelable region detection information to obtain fused information of the parking space and the travelable region specifically comprises:
determining a parking space corner center point information set and an empty parking space center point information set corresponding to each camera according to the parking space detection information output by each detection frame of each image at the same moment, and obtaining a pixel point set corresponding to each camera according to the pixel point information output by each detection frame of each image at the same moment, wherein the parking space detection information output by a detection frame comprises the probability value that a parking space corner center point exists in the detection frame and the coordinates of the parking space corner center point, as well as the probability value that an empty parking space center point exists in the detection frame and the coordinates of the empty parking space center point; the parking space corner center point information comprises parking space corner center point coordinates and the probability value that the parking space corner center point exists; the empty parking space center point information comprises empty parking space center point coordinates and the probability value that the empty parking space center point exists; and the pixel point information comprises pixel point coordinates and pixel point probability values;
respectively processing the parking space corner center point information set, the empty parking space center point information set and the travelable area pixel point set corresponding to each camera to correspondingly obtain a final parking space corner center point information set, a final empty parking space center point information set and final travelable area information;
and jointly mapping the parking space corner center points in the final parking space corner center point information set and the empty parking space center points in the final empty parking space center point information set onto the travelable area, and generating a visual detection result of the parking spaces and the travelable area.
6. The method according to claim 5, wherein the determining the parking space corner center point information set corresponding to each camera according to the parking space detection information output by each detection frame of each image at the same moment specifically comprises:
comparing the probability value of the parking space corner center point output by each detection frame of each image with a set parking space corner center point probability value; if the probability value output by a detection frame is greater than the set probability value, converting the parking space corner center point coordinates output by the detection frame into parking space corner center point coordinates in the image coordinate system, and storing the probability value and the converted coordinates into the parking space corner center point information set corresponding to the camera that captured the image, thereby correspondingly obtaining a left camera parking space corner center point information set, a right camera parking space corner center point information set, a front camera parking space corner center point information set and a rear camera parking space corner center point information set.
7. The method according to claim 6, wherein the determining the empty parking space center point information set corresponding to each camera according to the parking space detection information output by each detection frame of each image at the same moment specifically comprises:
comparing the probability value of the empty parking space center point output by each detection frame of each image with a set empty parking space center point probability value; if the probability value output by a detection frame is greater than the set probability value, converting the empty parking space center point coordinates output by the detection frame into empty parking space center point coordinates in the image coordinate system, and storing the probability value and the converted coordinates into the empty parking space center point information set corresponding to the camera that captured the image, thereby correspondingly obtaining a left camera empty parking space center point information set, a right camera empty parking space center point information set, a front camera empty parking space center point information set and a rear camera empty parking space center point information set.
8. The method according to claim 7, wherein the obtaining of the pixel point set corresponding to each camera according to the pixel point information output by each detection frame corresponding to each image at the same time specifically comprises:
comparing the probability value of each pixel point output by each detection frame of each image with a set pixel point probability value; if the probability value of a pixel point output by a detection frame is greater than the set pixel point probability value, converting the pixel point coordinates corresponding to the detection frame into the image coordinate system, and storing the probability value and the converted pixel point coordinates into the pixel point set corresponding to the camera that captured the image, thereby correspondingly obtaining a left camera pixel point set, a right camera pixel point set, a front camera pixel point set and a rear camera pixel point set.
9. The method according to claim 5, wherein the processing of the parking space corner center point information set corresponding to each camera to obtain the final parking space corner center point information set specifically comprises:
projecting the parking space corner center points in the parking space corner center point information set corresponding to each camera into the vehicle coordinate system, and correspondingly obtaining the parking space corner center point information set corresponding to each camera in the vehicle coordinate system;
respectively judging whether the two parking space corner center point information sets corresponding to two cameras with adjacent installation positions contain the same parking space corner center point; if so, obtaining the existence probability values of that corner center point from the two information sets, comparing the two existence probability values, retaining the position coordinates and the corresponding existence probability value of the corner center point in the information set with the larger existence probability value, and deleting the position coordinates and the corresponding probability value of the corner center point from the information set with the smaller probability value, until the judgment of identical parking space corner center points in the information sets of all adjacent cameras is completed, thereby obtaining a left camera final parking space corner center point information set, a right camera final parking space corner center point information set, a front camera final parking space corner center point information set and a rear camera final parking space corner center point information set.
10. The method according to claim 5, wherein the processing of the empty parking space center point information set corresponding to each camera to obtain the final empty parking space center point information set specifically comprises:
projecting the empty parking space center points in the empty parking space center point information set corresponding to each camera into the vehicle coordinate system, and correspondingly obtaining the empty parking space center point information set corresponding to each camera in the vehicle coordinate system;
respectively judging whether the two empty parking space center point information sets corresponding to two cameras with adjacent installation positions contain the same empty parking space center point; if so, obtaining the existence probability values of that center point from the two information sets, comparing the two existence probability values, retaining the position coordinates and the corresponding existence probability value of the center point in the information set with the larger existence probability value, and deleting the position coordinates and the corresponding probability value of the center point from the information set with the smaller probability value, until the judgment of identical empty parking space center points in the information sets of all adjacent cameras is completed, thereby obtaining a left camera final empty parking space center point information set, a right camera final empty parking space center point information set, a front camera final empty parking space center point information set and a rear camera final empty parking space center point information set.
11. The method according to claim 5, wherein the processing the pixel point set corresponding to each camera to obtain the final travelable region information specifically comprises:
respectively converting the pixel points in the pixel point set corresponding to each camera into the camera coordinate system, and correspondingly obtaining the converted pixel point set of each camera;
generating a travelable area corresponding to each camera according to the converted pixel point set of each camera;
and superposing the travelable areas corresponding to the cameras to obtain the final travelable area, wherein during superposition, if an overlapping area exists, the probability values of each pixel point in the overlapping area are obtained from the pixel point sets corresponding to the adjacent cameras; if any probability value of the pixel point is greater than the set pixel point probability value, the pixel point is retained; otherwise, the pixel point is deleted.
12. A vehicle parking system, comprising:
the acquisition unit is used for acquiring images collected by a front camera, a rear camera, a left camera and a right camera of the vehicle in real time;
the neural network processing unit is used for receiving and processing the images to obtain parking space detection information and travelable area detection information respectively, the neural network processing unit comprising a shared layer, a parking space detection branch layer and a travelable area detection branch layer, wherein the shared layer is used for performing feature extraction on the images and outputting at least one piece of intermediate feature extraction information and final feature extraction information, the parking space detection branch layer is used for processing the final feature extraction information to obtain the parking space detection information, and the travelable area detection branch layer is used for processing the final feature extraction information and the at least one piece of intermediate feature extraction information to obtain the travelable area detection information;
the analysis and fusion unit is used for analyzing and fusing the parking space detection information and the travelable area detection information to obtain fusion information of the parking space and the travelable area;
and the control unit is used for generating a control instruction according to the fusion information, sending the control instruction to a vehicle parking system and controlling the vehicle to park according to the fusion information.
13. The system of claim 12, wherein:
the shared layer comprises a first shared branch, a second shared branch and a splicing layer, the first shared branch and the second shared branch are connected in parallel, the first shared branch is used for processing the image and outputting at least one piece of intermediate feature extraction information and final feature extraction information of the first shared branch, the second shared branch is used for processing the image and outputting at least one piece of intermediate feature extraction information and final feature extraction information of the second shared branch, and the splicing layer is used for splicing the final feature extraction information of the first shared branch and the final feature extraction information of the second shared branch and outputting the final feature extraction information of the shared layer;
the parking space detection branch layer is used for processing the final feature extraction information output by the splicing layer to obtain the parking space detection information;
the travelable area detection branch layer is used for processing the final feature extraction information output by the splicing layer, at least one piece of intermediate feature extraction information output by the first shared branch and at least one piece of intermediate feature extraction information output by the second shared branch to obtain the travelable area detection information.
14. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 11.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110292105.9A | 2021-03-18 | 2021-03-18 | Vehicle parking method, system and computer readable storage medium |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN115131762A | 2022-09-30 |
| CN115131762B | 2024-09-24 |
Family
ID=83374649

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110292105.9A (CN115131762B, Active) | Vehicle parking method, system and computer readable storage medium | 2021-03-18 | 2021-03-18 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN115131762B |
Cited By (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118506329A * | 2024-04-23 | 2024-08-16 | 江苏紫荆科技文化有限公司 | Probability reduction system based on visual big data analysis |
Citations (11)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20020175832A1 * | 2001-04-24 | 2002-11-28 | Matsushita Electric Industrial Co., Ltd. | Parking assistance apparatus |
| CN107886080A * | 2017-11-23 | 2018-04-06 | 同济大学 | Parking space detection method |
| CN109886209A * | 2019-02-25 | 2019-06-14 | 成都旷视金智科技有限公司 | Anomaly detection method and device, mobile unit |
| CN110969655A * | 2019-10-24 | 2020-04-07 | 百度在线网络技术(北京)有限公司 | Method, device, equipment, storage medium and vehicle for detecting parking space |
| CN111291650A * | 2020-01-21 | 2020-06-16 | 北京百度网讯科技有限公司 | Automatic parking assistance method and device |
| CN111369439A * | 2020-02-29 | 2020-07-03 | 华南理工大学 | Real-time surround-view image stitching method for automatic parking space identification |
| CN111942372A * | 2020-07-27 | 2020-11-17 | 广州汽车集团股份有限公司 | Automatic parking method and system |
| WO2020238284A1 * | 2019-05-29 | 2020-12-03 | 北京市商汤科技开发有限公司 | Parking space detection method and apparatus, and electronic device |
| CN112201078A * | 2020-09-30 | 2021-01-08 | 中国人民解放军军事科学院国防科技创新研究院 | Automatic parking space detection method based on graph neural network |
| CN112417926A * | 2019-08-22 | 2021-02-26 | 广州汽车集团股份有限公司 | Parking space identification method and device, computer equipment and readable storage medium |
| CN112418186A * | 2020-12-15 | 2021-02-26 | 苏州挚途科技有限公司 | Driving region detection method and device |
Also Published As

| Publication number | Publication date |
|---|---|
| CN115131762B | 2024-09-24 |
Similar Documents

| Publication | Title |
|---|---|
| CN112287860B | Training method and device of object recognition model, and object recognition method and system |
| CN115082674B | Multi-mode data fusion three-dimensional target detection method based on attention mechanism |
| CN113095152B | Regression-based lane line detection method and system |
| CN113657409A | Vehicle loss detection method, device, electronic device and storage medium |
| CN108428248B | Vehicle window positioning method, system, equipment and storage medium |
| CN111681259B | Vehicle tracking model building method based on Anchor mechanism-free detection network |
| CN111860072A | Parking control method and device, computer equipment and computer readable storage medium |
| CN116188999B | Small target detection method based on visible light and infrared image data fusion |
| CN111967396A | Processing method, device and equipment for obstacle detection and storage medium |
| CN117173399A | Traffic target detection method and system of cross-modal cross-attention mechanism |
| CN115953744A | Vehicle identification tracking method based on deep learning |
| CN115131762B | Vehicle parking method, system and computer readable storage medium |
| CN117893990B | Road sign detection method, device and computer equipment |
| CN114120260A | Method and system for identifying travelable area, computer device, and storage medium |
| CN118038409A | Vehicle drivable region detection method, device, electronic equipment and storage medium |
| CN117372991A | Automatic driving method and system based on multi-view multi-mode fusion |
| CN111460854A | Remote target detection method, device and system |
| CN115082897A | Monocular vision 3D vehicle target real-time detection method for improving SMOKE |
| CN115565155A | Training method of neural network model, generation method of vehicle view and vehicle |
| CN114972945A | Multi-machine-position information fusion vehicle identification method, system, equipment and storage medium |
| CN112232272A | Pedestrian identification method based on fusion of laser and visual image sensor |
| US20230230385A1 | Method for generating at least one bird's eye view representation of at least a part of the environment of a system |
| CN118397602B | Intelligent guideboard recognition vehicle-mounted camera system |
| CN118298184B | Hierarchical error correction-based high-resolution remote sensing semantic segmentation method |
| CN115661577B | Method, apparatus and computer readable storage medium for object detection |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |