CN111098850A - Automatic parking auxiliary system and automatic parking method - Google Patents

Automatic parking auxiliary system and automatic parking method

Info

Publication number
CN111098850A
Authority
CN
China
Prior art keywords
parking
road image
semantic information
image
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811245559.5A
Other languages
Chinese (zh)
Inventor
张家旺
汪路超
谢国富
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Chusudu Technology Co ltd
Original Assignee
Beijing Chusudu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Chusudu Technology Co ltd filed Critical Beijing Chusudu Technology Co ltd
Priority to CN201811245559.5A
Publication of CN111098850A
Current legal status: Pending

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W 30/06 - Automatic manoeuvring for parking

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of intelligent driving, and in particular to an automatic parking auxiliary system and an automatic parking method. Automatic parking systems in the prior art either have weak feature-expression capability or cannot directly provide the complete structural information of a parking space. The invention provides an automatic parking auxiliary system that obtains semantic information of the road image by inputting the image to be detected into a road image detection model, and performs real-time map construction and high-precision positioning from the semantic features in the road image. At the same time, empty parking spaces are detected using visual information and ultrasonic sensing, and the parking path of the vehicle is precisely controlled by relying on the high-precision positioning to complete the parking process.

Description

Automatic parking auxiliary system and automatic parking method
Technical Field
The invention relates to the field of intelligent driving, and in particular to an automatic parking auxiliary system and an automatic parking method.
Background
With the development of science and technology, new concepts such as automatic driving and unmanned vehicles have emerged. The automatic parking system is an indispensable part of automatic driving technology, and the automobile industry's interest in developing such systems is growing. In fact, intelligent parking assist systems were applied in some vehicles as early as 2003, and the related technology has developed continuously in recent years. Specifically, the vehicle's cameras or ultrasonic sensors sense the current environment; parking space detection processes the surrounding-environment information acquired by the sensors to obtain the position of a nearby empty parking space, and a parking route is then planned automatically. How to accurately and effectively detect and locate a parking space near the vehicle is the key problem for such a system and still requires further research.
One type of existing parking system is based on ultrasonic sensors. Such methods generally use the ultrasonic sensor to detect and locate an empty parking space only after the vehicle is already very close to it, and then plan a path to park. Owing to the limitations of ultrasonic positioning, these methods can only handle perpendicular or parallel parking spaces; moreover, the driver must first stop the vehicle beside the space, after which coarse positioning is carried out by ultrasound so that parking can be realized.
In addition, there are methods based on images captured by on-vehicle cameras that guide the parking process by analyzing images of the ground near the vehicle and extracting parking spaces from them; these rely on parking space detection algorithms. A typical parking space detection algorithm extracts the parking space frame from low-level edge and corner features using manually constructed rules; the feature-expression capability is weak, and rule-based methods are difficult to extend to the many forms a parking space can take. Another approach, based on object detection, detects and extracts the parking space with an axis-aligned box; it cannot handle slanted parking spaces, and its localization of the parking space bounding box is inaccurate. A third approach finds the position of a parking space by integrating detections of its separation points and separation lines; it cannot directly provide the complete structural information of a parking space and can hardly indicate whether the space is available for parking.
Disclosure of Invention
In view of the above, the present application provides an automatic parking auxiliary system based on multi-source sensor fusion. The invention uses the surround-view bird's-eye view, extracts semantic features with a deep learning method, and performs real-time map construction and high-precision positioning from those semantic features. At the same time, empty parking spaces are detected using visual information and ultrasonic sensing, and the parking path of the vehicle is precisely controlled by relying on the high-precision positioning to complete the parking process.
The invention provides an automatic parking auxiliary system, which is characterized in that: the system comprises a road image detection model, wherein the road image detection model is a neural network trained by a road sample image;
the system inputs an image to be detected into the road image model to obtain semantic information of the road image;
the system further comprises a map construction module, wherein the map construction module tracks the semantic information, estimates the vehicle pose by a graph optimization method, and constructs a map;
the system further comprises a positioning module, wherein the positioning module performs positioning according to matching between the currently observed semantic information and the map.
Preferably, the semantic information includes lane lines, parking space lines, and obstacles.
Preferably, the neural network is RefineNet.
Preferably, the positioning module performs matching and positioning by a method fusing vision and the wheel-speed odometer, making use of the characteristics of the different sensors.
Preferably, the tracking of the semantic information is specifically expressed as: the following relationship is satisfied at different times:

P_i · A_i^j = P_{i+1} · A_{i+1}^j

wherein P_i is the vehicle pose at time i, A_i^j is the position of the visual feature observed at time i, and X_j is the location in the map of the observation data A_i^j.
The invention also provides a method for automatic parking by using the automatic parking auxiliary system, which is characterized by comprising the following steps:
step S1: acquiring a current real-time road image;
step S2: inputting the real-time road image into the road image detection model to obtain semantic information of the road image;
step S3: tracking the semantic information at different times, estimating the vehicle pose by a graph optimization method and constructing a map;
step S4: positioning according to matching between the currently observed semantic features and the map;
step S5: judging the free state of the parking space and planning a path to park automatically.
Preferably, the semantic information includes lane lines, parking space lines, and obstacles.
Preferably, the neural network is RefineNet.
Preferably, in step S5, multiple sensors are used to determine the idle status of the parking spaces, candidate parking spaces are obtained, and the parking path is planned using the map, finally completing automatic parking.
Preferably, in step S4, matching and positioning are performed by a method fusing vision and the wheel-speed odometer, making use of the characteristics of the different sensors.
Preferably, the road sample image is input into an initial neural network model, and the initial neural network model is fine-tuned with the road sample image in a supervised-learning manner to obtain the road image detection model.
The invention is characterized by, but not limited to, the following aspects:
(1) Based on a deep convolutional neural network, a pre-trained road image semantic information detection model performs semantic segmentation and recognition on the road images acquired by the vehicle cameras in real time, extracting lane lines, parking space lines, obstacles and other information from the real-time road image. Semantic segmentation means segmenting the image at the pixel level and identifying its content: the visual input is classified into different semantically interpretable categories, where interpretability means that the classification categories are meaningful in the real world. For example, we may need to find all pixels in the image that belong to cars and paint these pixels blue. The road image semantic information detection model extracts and learns the semantic features of real-time road images end to end, and can exploit the effect of big data to the greatest extent;
(2) Vision and other multi-source sensors are fully utilized, so that real-time mapping and high-precision matching-based positioning are realized and parking spaces at any angle can be handled. Existing mapping schemes use multi-source sensors, but not combined with the visual image information as in this application, let alone matching the multi-source sensor information against the visual semantic information. The invention uses semantic information tracking: from X_j = P_i · A_i^j the observation data A_i^j is obtained and, in combination with multi-source sensors such as the wheel-speed odometer, matched and positioned against its location in the map.
(3) The RefineNet neural network is employed because it can be fine-tuned after part of its structure is modified. RefineNet brings great flexibility: the computation model can be adjusted in time to adapt to a variety of actual parking conditions.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
FIG. 1 is a flowchart illustrating a method for training a road image semantic information detection model according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of semantic segmentation labeling performed on a road image;
FIG. 3 is a schematic view of an aerial view captured by a vehicle and labeled;
FIG. 4 is a flowchart illustrating an automatic parking method according to an embodiment of the present application;
fig. 5 is a flowchart of an algorithm for constructing a map and positioning a vehicle according to an embodiment of the present application.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the following embodiments and accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided to explain the present invention, but not to limit the present invention.
The embodiments of the application provide a training method for a road image semantic information detection model, and an automatic parking method based on that model. The road image semantic information detection model and the automatic parking method can be applied on a terminal, a server, or a combination of the two. A terminal may be any user device, now known, in development or developed in the future, capable of interacting with a server via any form of wired and/or wireless connection (e.g., Wi-Fi, LAN, cellular, coaxial), including but not limited to: existing, developing or future smartphones, non-smartphones, tablets, laptop personal computers, desktop personal computers, minicomputers, midrange computers, mainframe computers, and the like. The server in the embodiments of the present application may be any existing device, device under development, or future device capable of providing an application service for information recommendation to a user. The embodiments of the present application are not limited in this respect.
The following describes a specific implementation of the embodiments of the present application with reference to the drawings.
Firstly, a specific implementation manner of the training method for the road image semantic information detection model provided in the embodiment of the present application is introduced.
Fig. 1 is a flowchart illustrating a training method for a road image semantic information detection model provided in an embodiment of the present application, which is applied to the field of automatic driving, and referring to fig. 1, the method includes:
step 101: acquiring a road sample image, wherein the road sample image is marked with semantic feature information.
The road sample image may be regarded as a sample image for training a road image semantic information detection model. In the embodiment of the application, the training model adopts a supervised training mode, so that semantic feature information is marked in the road sample image. By marking the semantic feature information, the model training speed can be increased, and the accuracy of model detection can be improved.
To explain the semantic features, we first introduce semantic segmentation. Semantic segmentation is a fundamental task in computer vision: the visual input is separated into different semantically interpretable categories, where 'interpretability' means that the classification categories are meaningful in the real world. The key to image understanding is decomposing the whole scene into several separate entities, which also helps us infer the different behaviors of targets. Object detection can draw a bounding box around certain entities, but human understanding of a scene detects every entity at pixel-level granularity and marks exact boundaries. Autonomous cars and intelligent robots are now being developed, and both require a deep understanding of their surroundings, so accurate segmentation of entities becomes increasingly important. For example, we may need to find all pixels in the image that belong to cars and paint these pixels blue, as the sketch below illustrates. An example of semantic feature labeling of a road image is shown in FIG. 2.
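As a concrete illustration of this pixel-level labeling (not part of the patent itself), the following Python sketch paints every car pixel blue given a per-pixel class mask such as the one shown in FIG. 2; the car class id is an assumed example.

```python
import numpy as np

# Hedged sketch: paint all "car" pixels blue. The class id is an assumed
# example (e.g. 13 in Cityscapes); a real model defines its own label map.
CAR_CLASS = 13

def paint_class_blue(image_bgr: np.ndarray, class_mask: np.ndarray) -> np.ndarray:
    """image_bgr: (H, W, 3) uint8 image; class_mask: (H, W) integer class ids."""
    out = image_bgr.copy()
    out[class_mask == CAR_CLASS] = (255, 0, 0)  # BGR order: pure blue
    return out
```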
In some possible implementations of the embodiments of the present application, the processed images may be surround-view overhead views obtained by stitching images acquired by cameras at the front, left, rear and right of the vehicle body. The cameras may specifically be fisheye cameras, and the camera system is calibrated in advance so that the images acquired by the four fisheye cameras can be stitched into a surround-view overhead view whose center is the position of the vehicle and whose remaining area is the potential parking region, as shown in FIG. 3 and as sketched below. In some possible implementations of the present application, the parking space lines, lane lines, obstacles, etc. may be labeled by way of semantic masks; other labeling methods may also be used.
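The undistort-and-stitch step can be sketched in Python with OpenCV as follows. The per-camera intrinsics K, distortion coefficients D and ground-plane homographies H are assumed to come from the offline calibration mentioned above, whose procedure the patent does not detail; the overwrite blending is a simplification.

```python
import numpy as np
import cv2

def to_birdseye(frames, calibs, canvas_size=(800, 800)):
    """Stitch four fisheye views (front/left/rear/right) into an overhead view.

    frames: list of BGR images; calibs: list of (K, D, H) per camera, where
    H maps the undistorted view onto the ground-plane canvas.
    """
    canvas = np.zeros((canvas_size[1], canvas_size[0], 3), np.uint8)
    for img, (K, D, H) in zip(frames, calibs):
        und = cv2.fisheye.undistortImage(img, K, D, Knew=K)   # remove fisheye distortion
        warped = cv2.warpPerspective(und, H, canvas_size)     # project onto ground plane
        mask = warped.any(axis=2)                             # pixels this view covers
        canvas[mask] = warped[mask]                           # naive overwrite blend
    return canvas
```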
In the embodiment of the present application, a sample library may be established in advance, and a sample image may be obtained from the sample library. The sample library can adopt public images in a data set, and can also acquire images collected by a camera of the vehicle from storage equipment of the vehicle, and mark parking space areas in the images, so that the sample library is established. In some cases, the sample image may also be directly obtained, for example, an image collected by a camera of the vehicle in real time is directly obtained, the parking space area of the image is labeled, and the labeled image is used as the sample image.
Step 102: inputting the road sample image into a pre-established initial neural network model.
After the road sample image is acquired, the road sample image may be input to a pre-established initial neural network model, so that the initial neural network model is trained by using the road sample image.
In some possible implementations of the embodiments of the present application, the road sample image may be further scaled to a preset size before being input into the pre-established initial neural network model. Therefore, the initial neural network model can learn the road sample images with the same size, so that the road samples can be processed more quickly and accurately, and the training efficiency of the model is improved.
Step 103: training the neural network model with the sample images to obtain the road image semantic information detection model.
For ease of understanding, the concept of a neural network model is first briefly introduced. A neural network is a network system formed by a large number of simple processing units widely interconnected, which is a highly complex nonlinear dynamical learning system with massive parallelism, distributed storage and processing, self-organization, self-adaptation and self-learning capabilities. The neural network model is a mathematical model established based on the neural network, and is widely applied in many fields based on the strong learning capacity of the neural network model.
In the field of image processing and pattern recognition, a convolutional neural network model is often used for pattern recognition. Due to the characteristics of partial connection of convolution layers and weight sharing in the convolutional neural network model, parameters needing to be trained are greatly reduced, the network model is simplified, and the training efficiency is improved. Through the rapid development in recent years, the convolutional neural network also has a series of breakthrough progresses in the field of semantic segmentation at present, and the segmentation at the pixel level can be realized. For multiple similar objects in an image, semantic segmentation predicts all pixels of the multiple objects as a whole into the same class.
In one road image, there may be various categories such as lane lines, parking space lines, obstacles, and the like as described above. Through semantic segmentation, different semantic features can be extracted, so that different semantic features have different labeling information.
Specifically, in this embodiment, a network with good published results in the semantic segmentation field, such as RefineNet or PSPNet, may be used as the initial neural network model; the number of output classes (and any other parts that need it) is modified, and the model is trained on the road sample images by fine-tuning, as sketched below. The convolutional layers of the initial neural network model learn the semantic features in the road sample images; the fully connected layer maps the learned features to segmentation results of the different semantic classes. By comparing the recognition results of the semantic segmentation with the semantic features pre-annotated in the road sample images, the parameters of the initial neural network model can be optimized, and after iterative training over more training samples the road image semantic information detection model is obtained.
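A sketch of this fine-tuning recipe is given below. RefineNet itself is not bundled with torchvision, so a stock segmentation network stands in for it here; the class count, learning rate and other hyperparameters are illustrative assumptions, not values taken from the patent.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 4  # assumed: background, lane line, parking space line, obstacle

# Load a pretrained segmentation network and replace its output head so the
# number of output classes matches the road semantics, as the text describes.
model = deeplabv3_resnet50(weights="DEFAULT")
model.classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, masks):
    """One fine-tuning step. images: (B,3,H,W) floats; masks: (B,H,W) class ids."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]    # (B, NUM_CLASSES, H, W) per-pixel scores
    loss = criterion(logits, masks)  # compare against the annotated masks
    loss.backward()
    optimizer.step()
    return loss.item()
```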
From the above, the present application provides a training method for a road image semantic information detection model: a road sample image is acquired and annotated with semantic features, the road sample image is input into an initial neural network model, and the initial model is fine-tuned with the road sample images in a supervised-learning manner to obtain the road image semantic information detection model. Because the initial neural network model is trained on a large number of road sample images annotated with parking space areas, the resulting road image semantic information detection model predicts parking space areas with higher accuracy and efficiency.
Based on the training method for the road image semantic information detection model provided in the above embodiment, the embodiment of the present application further provides a road image semantic information detection method based on the road image semantic information detection model.
Next, an automatic parking method provided in an embodiment of the present application will be described in detail with reference to the accompanying drawings.
Fig. 4 is a flowchart of an automatic parking method provided in an embodiment of the present application, where the method is applied to the field of automatic driving, and referring to fig. 4, the method includes:
step 401: and acquiring a current road image.
The current road image is an image of the surroundings of the vehicle's current position: in practice, the automatic parking method is used when the vehicle is ready to park, at which time parking spaces should be present around the vehicle.
It is to be understood that the current road image may be a road image acquired in real time. In some possible implementations of the embodiments of the application, distortion removal and stitching operations can be performed on images shot by the front-view, left-view, rear-view and right-view cameras of the vehicle, and the resulting surround-view overhead view used as the current road image. In some possible implementations there may be more or fewer cameras, or the road image near the vehicle may be captured by a surround-view camera to obtain the current road image.
The above are only some specific examples of obtaining the current road image; the present application does not limit how the current road image is obtained, and different implementations may be adopted as required.
Step 402: inputting the current road image into the road image semantic information detection model to obtain semantic features such as lane lines, parking space lines and obstacles.
The road image semantic information detection model is generated according to the training method of the road image semantic information detection model provided in the embodiment.
After the current road image is input into the road image semantic information detection model, the model extracts features of the current road image and maps them to obtain a category mask image representing each semantic feature region. The category mask is the output of the road image semantic information detection model; it is a pixel-level segmentation of the current road image in which each region represents one semantic category. Mapping the extracted features to category mask images and outputting those masks as the basis for the subsequent semantic classification is one of the innovations of the invention.
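A minimal sketch of turning the model's per-pixel scores into that category mask follows, continuing the stand-in model from the fine-tuning sketch; the per-pixel argmax is the standard way to realize such a mask.

```python
import torch

@torch.no_grad()
def predict_mask(model, image):
    """image: (3, H, W) float tensor -> (H, W) long tensor of per-pixel class ids."""
    model.eval()
    logits = model(image.unsqueeze(0))["out"]  # (1, NUM_CLASSES, H, W)
    return logits.argmax(dim=1).squeeze(0)     # each region = one semantic category
```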
Step 403: tracking semantic information at different times, estimating the vehicle pose by a graph optimization method and constructing a map.
In step 402, the semantic feature information was obtained by inputting the current road image into the road image semantic information detection model. Further, the current pose of the vehicle can be obtained by converting pose estimation into an optimization problem and solving it.
Let P_i be the vehicle pose at time i, A_i^j the position of the j-th visual feature observed at time i, and X_j the location in the map of the observation A_i^j. It will be appreciated that these data satisfy the following relationship:

X_j = P_i · A_i^j

The observation data at different times satisfy:

P_i · A_i^j = P_{i+1} · A_{i+1}^j

That is, the position of a feature point in the map should be the same at different times. We therefore set up the following optimization problem:

P_{i+1} = argmin || P_i · A_i - P_{i+1} · A_{i+1} ||²

Let φ = || P_i · A_i - P_{i+1} · A_{i+1} ||²; that is, the squared norm of the actual error is taken as the objective function, with the pose at the current time as the optimization variable. As the optimization variable changes, the sum of squared errors grows or shrinks accordingly; its gradient and second-order gradient can be computed numerically, and the optimum found by gradient descent:

ΔP = -H⁻¹ · J

where J and H are respectively the Jacobian matrix and the Hessian matrix of φ. Since a given visual feature does not appear throughout the entire motion but usually only in a small portion of the images, both matrices are sparse and can be solved by sparse linear-algebra methods. Other methods may also be used to solve this optimization problem; the solution method is not limited here. The result obtained above is the current vehicle pose estimated from the vehicle pose at the previous time.
According to the continuity of vehicle motion, co-visible observations at nearby times make local positioning possible, while non-co-visible information extends the local map. As time passes, the area the vehicle has traveled expands, and the different local maps are fused to form a global map.
Step 404: positioning according to matching between the currently observed semantic features and the map.
After the map is constructed, the vehicle is matched against and positioned in the constructed map during parking according to the currently observed semantic information; this is an iterative process. As the vehicle moves, its pose changes constantly, which appears at the input as changing observation data; the semantic features extracted from the observations can be matched to the semantic features in the map to complete the positioning. The semantic features here are the features of specific marking patterns in the map that are relevant to automatic driving, i.e. semantic information. Fig. 5 is a flow chart of the positioning implemented by the present algorithm.
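The iterative match-and-locate loop can be sketched as follows. The per-class nearest-neighbor association is an assumed ICP-style scheme, since the patent does not prescribe the exact matching rule; it reuses estimate_pose from the previous sketch and assumes every observed class also exists in the map.

```python
import numpy as np
from scipy.spatial import cKDTree

def localize(obs_pts, obs_cls, map_pts, map_cls, pose, iters=10):
    """Refine the pose by repeatedly matching observed semantic points to the map.

    obs_pts/map_pts: (N,2) point arrays; obs_cls/map_cls: (N,) class ids.
    """
    pose = np.asarray(pose, dtype=float)
    for _ in range(iters):
        x, y, th = pose
        c, s = np.cos(th), np.sin(th)
        world = obs_pts @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
        matched_obs, matched_map = [], []
        for cls in np.unique(obs_cls):            # associate within each class
            cand = map_pts[map_cls == cls]
            _, idx = cKDTree(cand).query(world[obs_cls == cls])
            matched_obs.append(obs_pts[obs_cls == cls])
            matched_map.append(cand[idx])
        pose = estimate_pose(np.vstack(matched_obs), np.vstack(matched_map), pose)
    return pose
```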
During vehicle positioning, visual semantic information may at times be insufficient. To guarantee positioning accuracy and the smoothness of the vehicle trajectory, a scheme fusing vision with the wheel-speed odometer is adopted, making full use of the characteristics of the different sensors.
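As an illustration of such a fusion (deliberately simpler than a full probabilistic filter; the unicycle motion model and the gain k are assumptions of the sketch), wheel-speed odometry propagates the pose between frames and an available visual fix pulls the estimate back:

```python
import numpy as np

def fuse_step(pose, v, omega, dt, visual_pose=None, k=0.3):
    """pose: (x, y, theta); v, omega: wheel-speed linear/angular velocity."""
    x, y, th = pose
    # Predict from wheel odometry (unicycle model).
    x += v * dt * np.cos(th)
    y += v * dt * np.sin(th)
    th += omega * dt
    # Correct toward the visual fix when semantic information suffices.
    if visual_pose is not None:
        vx, vy, vth = visual_pose
        x += k * (vx - x)
        y += k * (vy - y)
        th += k * np.arctan2(np.sin(vth - th), np.cos(vth - th))  # wrap-safe
    return np.array([x, y, th])
```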
Step 405: judging the free state of the parking spaces and planning a path for automatic parking.
Through the above steps, a global map of the area the vehicle has driven through is built and the vehicle position is accurately located. With the global map constructed from visual information and accurate positioning available, the idle state of a parking space area is judged. There are various methods for determining whether a parking space is vacant. For example, by joint analysis of the road image semantic features, a space is judged free if no obstacle lies inside the parking space area; obstacle detection may also be performed by an ultrasonic sensor. The method of detecting whether a parking space is free is not limited here.
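The visual branch of this vacancy check can be sketched as follows; the obstacle class id and the tolerance are illustrative assumptions, and an ultrasonic range check can gate the same decision as a second source.

```python
import numpy as np

OBSTACLE_CLASS = 3  # assumed id, matching the earlier fine-tuning sketch

def space_is_free(category_mask, space_region, tol=0.01):
    """category_mask: (H,W) class ids; space_region: (H,W) bool mask of the space.

    The space is judged free when under tol (e.g. 1%) of its pixels are obstacle.
    """
    inside = category_mask[space_region]
    return np.mean(inside == OBSTACLE_CLASS) < tol
```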
When a parking space is free and its position meets the parking requirements, it is identified as a candidate parking space. After the candidate parking space is found, path planning is carried out in the established map and automatic parking is performed using automatic driving technology. During parking, the path is dynamically adjusted in real time using the high-precision positioning, and finally the parking is completed.
In view of the above, the embodiment of the present application provides an automatic parking method, which may determine semantic features in a current road image by inputting the current road image into a pre-trained road image semantic information detection model and based on an output result of the road image semantic information detection model. According to the semantic features of the road image, the current pose of the vehicle can be obtained by establishing an optimization problem, and a global map is constructed. And judging the idle state of the parking space by using various sensors, acquiring candidate parking spaces, planning a parking path by using a global map, and finally finishing automatic parking.
In the embodiment, the convolutional neural network model is mainly used as the neural network model to be trained to obtain the road image semantic information detection model, and the semantic features in the current road image are detected based on the road image semantic information detection model. With the continuous development of machine learning, a convolutional neural network model is also continuously developed. In particular, different types of convolutional neural networks may be employed as the initial neural network based on the function of the model to be trained and the data to be processed by the model. Common convolutional neural networks for the field of semantic segmentation include FCN, SegNet, RefineNet, PSPNet, DFN, and the like. In some possible implementation manners, preferably, the RefineNet is adopted as the initial neural network model, because the road image semantic information detection model can be obtained by fine tuning after modifying a partial structure of the model. Other neural networks may be employed or suitable neural networks may be designed themselves.
Therefore, the automatic parking method based on the multi-source sensor fusion is provided. The method comprises the steps of obtaining a road sample image, marking a parking space area in the road sample image, inputting the road sample image into an initial neural network model, and finely adjusting the initial neural network model by utilizing the road sample image in a supervised learning mode to obtain a road image semantic information detection model. The initial neural network model is trained by adopting the road sample images marked with the semantic features, and the road image semantic information detection model obtained by training has higher accuracy and efficiency when segmenting the semantic features by adopting a large number of road sample images. The semantic features in the current road image can be determined based on the output result of the road image semantic information detection model by inputting the current road image into the pre-trained road image semantic information detection model. According to the semantic features of the road image, the current pose of the vehicle can be obtained by establishing an optimization problem, and a global map is constructed. And judging the idle state of the parking space by using various sensors, acquiring candidate parking spaces, planning a parking path by using a global map, and finally finishing automatic parking.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.

Claims (10)

1. An automatic parking assist system characterized in that: the system comprises a road image detection model, wherein the road image detection model is a neural network trained by a road sample image;
the system inputs an image to be detected into the road image model to obtain semantic information of the road image;
the system further comprises a map construction module, wherein the map construction module tracks the semantic information, estimates the vehicle pose by a graph optimization method, and constructs a map;
the system further comprises a positioning module, wherein the positioning module performs positioning according to matching between the currently observed semantic information and the map.
2. The system of claim 1, wherein: the semantic information comprises lane lines, parking space lines and obstacles.
3. The system according to any one of claims 1-2, wherein: the neural network is RefineNet.
4. The system according to any one of claims 1-3, wherein: the positioning module performs matching and positioning by a method fusing vision and the wheel-speed odometer, making use of the characteristics of the different sensors.
5. The system according to any one of claims 1-4, wherein: the tracking of the semantic information is specifically expressed as: the following relationship is satisfied at different times:

P_i · A_i^j = P_{i+1} · A_{i+1}^j

wherein P_i is the vehicle pose at time i, A_i^j is the position of the visual feature observed at time i, and X_j is the location in the map of the observation data A_i^j.
6. Method for automatic parking with an automatic parking assistance system according to any one of claims 1 to 5, characterised in that it comprises the following steps:
step S1: acquiring a current real-time road image;
step S2: inputting the current real-time road image into the road image detection model to obtain semantic information of the road image;
step S3: tracking the semantic information at different times, estimating the vehicle pose by a graph optimization method and constructing a map;
step S4: positioning according to matching between the currently observed semantic information and the map;
step S5: judging the free state of the parking space and planning a path to park automatically.
7. The method of claim 6, wherein: the semantic information comprises lane lines, parking space lines and obstacles.
8. The method of claim 6, wherein: the neural network is RefineNet.
9. The method of claim 6, wherein: in step S5, multiple sensors are used to determine the vacant status of parking spaces, candidate parking spaces are obtained, and parking routes are planned using the map, thereby completing automatic parking.
10. The method of claim 6, wherein: in step S4, matching and positioning are performed by a method fusing vision and the wheel-speed odometer, making use of the characteristics of the different sensors.
CN201811245559.5A 2018-10-25 2018-10-25 Automatic parking auxiliary system and automatic parking method Pending CN111098850A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811245559.5A CN111098850A (en) 2018-10-25 2018-10-25 Automatic parking auxiliary system and automatic parking method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811245559.5A CN111098850A (en) 2018-10-25 2018-10-25 Automatic parking auxiliary system and automatic parking method

Publications (1)

Publication Number Publication Date
CN111098850A 2020-05-05

Family

ID=70417484

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811245559.5A Pending CN111098850A (en) 2018-10-25 2018-10-25 Automatic parking auxiliary system and automatic parking method

Country Status (1)

Country Link
CN (1) CN111098850A (en)

Patent Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2436577A2 (en) * 2010-09-30 2012-04-04 Valeo Schalter und Sensoren GmbH Device and method for detecting free parking spots
CN105015419A (en) * 2015-07-17 2015-11-04 中山大学 Automatic parking system and method based on stereoscopic vision localization and mapping
CN106406338A (en) * 2016-04-14 2017-02-15 中山大学 Omnidirectional mobile robot autonomous navigation apparatus and method based on laser range finder
CN105946853A (en) * 2016-04-28 2016-09-21 中山大学 Long-distance automatic parking system and method based on multi-sensor fusion
CN105856230A (en) * 2016-05-06 2016-08-17 简燕梅 ORB key frame closed-loop detection SLAM method capable of improving consistency of position and pose of robot
US20180068564A1 (en) * 2016-09-05 2018-03-08 Panasonic Intellectual Property Corporation Of America Parking position identification method, parking position learning method, parking position identification system, parking position learning device, and non-transitory recording medium for recording program
CN108263376A (en) * 2016-12-30 2018-07-10 现代自动车株式会社 Automated parking system and automatic parking method
CN108280866A (en) * 2016-12-30 2018-07-13 乐视汽车(北京)有限公司 Road Processing Method of Point-clouds and system
CN106802954A (en) * 2017-01-18 2017-06-06 中国科学院合肥物质科学研究院 Unmanned vehicle semanteme cartographic model construction method and its application process on unmanned vehicle
CN108460983A (en) * 2017-02-19 2018-08-28 泓图睿语(北京)科技有限公司 Parking stall condition detection method based on convolutional neural networks
CN107330357A (en) * 2017-05-18 2017-11-07 东北大学 Vision SLAM closed loop detection methods based on deep neural network
CN107292949A (en) * 2017-05-25 2017-10-24 深圳先进技术研究院 Three-dimensional rebuilding method, device and the terminal device of scene
CN107341814A (en) * 2017-06-14 2017-11-10 宁波大学 The four rotor wing unmanned aerial vehicle monocular vision ranging methods based on sparse direct method
CN107424116A (en) * 2017-07-03 2017-12-01 浙江零跑科技有限公司 Position detecting method of parking based on side ring depending on camera
CN107610235A (en) * 2017-08-21 2018-01-19 北京精密机电控制设备研究所 A kind of mobile platform navigation method and apparatus based on deep learning
CN107600067A (en) * 2017-09-08 2018-01-19 中山大学 A kind of autonomous parking system and method based on more vision inertial navigation fusions
CN107808407A (en) * 2017-10-16 2018-03-16 亿航智能设备(广州)有限公司 Unmanned plane vision SLAM methods, unmanned plane and storage medium based on binocular camera
CN107792062A (en) * 2017-10-16 2018-03-13 北方工业大学 Automatic parking control system
CN107844769A (en) * 2017-11-01 2018-03-27 济南浪潮高新科技投资发展有限公司 Vehicle checking method and system under a kind of complex scene
CN107871119A (en) * 2017-11-01 2018-04-03 西安电子科技大学 A kind of object detection method learnt based on object space knowledge and two-stage forecasting
CN107862720A (en) * 2017-11-24 2018-03-30 北京华捷艾米科技有限公司 Pose optimization method and pose optimization system based on the fusion of more maps
CN108426581A (en) * 2018-01-08 2018-08-21 深圳市易成自动驾驶技术有限公司 Vehicle pose determines method, apparatus and computer readable storage medium
CN108390706A (en) * 2018-01-30 2018-08-10 东南大学 A kind of extensive mimo channel state information feedback method based on deep learning
CN108416385A (en) * 2018-03-07 2018-08-17 北京工业大学 It is a kind of to be positioned based on the synchronization for improving Image Matching Strategy and build drawing method
CN108665496A (en) * 2018-03-21 2018-10-16 浙江大学 A kind of semanteme end to end based on deep learning is instant to be positioned and builds drawing method
CN108407805A (en) * 2018-03-30 2018-08-17 中南大学 A kind of vehicle automatic parking method based on DQN

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhou Yan et al., "A survey of visual simultaneous localization and mapping", CAAI Transactions on Intelligent Systems (智能系统学报) *
Liang Mingjie et al., "A survey of graph-based simultaneous localization and mapping", Robot (机器人) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111942372A (en) * 2020-07-27 2020-11-17 广州汽车集团股份有限公司 Automatic parking method and system
CN114141055A (en) * 2020-08-13 2022-03-04 纵目科技(上海)股份有限公司 Parking space detection device and detection method of intelligent parking system
CN114141055B (en) * 2020-08-13 2024-04-16 纵目科技(上海)股份有限公司 Parking space detection device and method of intelligent parking system
CN112284402A (en) * 2020-10-15 2021-01-29 广州小鹏自动驾驶科技有限公司 Vehicle positioning method and device
CN112284402B (en) * 2020-10-15 2021-12-07 广州小鹏自动驾驶科技有限公司 Vehicle positioning method and device
CN114724398A (en) * 2022-03-30 2022-07-08 重庆长安汽车股份有限公司 Parking appointment method and system based on automatic driving and readable storage medium
CN115214629A (en) * 2022-07-13 2022-10-21 小米汽车科技有限公司 Automatic parking method, device, storage medium, vehicle and chip

Similar Documents

Publication Publication Date Title
CN111169468B (en) Automatic parking system and method
CN108171112B (en) Vehicle identification and tracking method based on convolutional neural network
CN111098850A (en) Automatic parking auxiliary system and automatic parking method
CN107808123B (en) Image feasible region detection method, electronic device, storage medium and detection system
Cheng et al. Curb detection for road and sidewalk detection
CN111259706B (en) Lane line pressing judgment method and system for vehicle
Paz et al. Probabilistic semantic mapping for urban autonomous driving applications
CN111738032B (en) Vehicle driving information determination method and device and vehicle-mounted terminal
CN111738033B (en) Vehicle driving information determination method and device based on plane segmentation and vehicle-mounted terminal
CN111259710B (en) Parking space structure detection model training method adopting parking space frame lines and end points
CN111274926B (en) Image data screening method, device, computer equipment and storage medium
Balaska et al. Enhancing satellite semantic maps with ground-level imagery
Han et al. Recognition and location of steel structure surface corrosion based on unmanned aerial vehicle images
CN113903011A (en) Semantic map construction and positioning method suitable for indoor parking lot
Ma et al. Boundarynet: extraction and completion of road boundaries with deep learning using mobile laser scanning point clouds and satellite imagery
Choi et al. Methods to detect road features for video-based in-vehicle navigation systems
CN113743163A (en) Traffic target recognition model training method, traffic target positioning method and device
CN110909656A (en) Pedestrian detection method and system with integration of radar and camera
CN111696147B (en) Depth estimation method based on improved YOLOv3 model
Golovnin et al. Video processing method for high-definition maps generation
Huang et al. Deep learning–based autonomous road condition assessment leveraging inexpensive rgb and depth sensors and heterogeneous data fusion: Pothole detection and quantification
CN111260955B (en) Parking space detection system and method adopting parking space frame lines and end points
CN111259709B (en) Elastic polygon-based parking space structure detection model training method
Imad et al. Navigation system for autonomous vehicle: A survey
Zheng et al. Exploring OpenStreetMap availability for driving environment understanding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200505)