CN111814657A - Unmanned vehicle parking method and system based on image recognition and storage medium - Google Patents

Unmanned vehicle parking method and system based on image recognition and storage medium

Info

Publication number
CN111814657A
CN111814657A (application CN202010640859.4A)
Authority
CN
China
Prior art keywords
image
identification
recognition
traffic
unmanned vehicle
Prior art date
Legal status
Pending
Application number
CN202010640859.4A
Other languages
Chinese (zh)
Inventor
章哲祥
尹胜成
张庆昕
梅雪
Current Assignee
Nanjing Tech University
Original Assignee
Nanjing Tech University
Priority date
Filing date
Publication date
Application filed by Nanjing Tech University filed Critical Nanjing Tech University
Priority to CN202010640859.4A priority Critical patent/CN111814657A/en
Publication of CN111814657A publication Critical patent/CN111814657A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/586 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of parking space
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/14 Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Abstract

The invention discloses an unmanned vehicle parking method, system and storage medium based on image recognition, and belongs to the field of vision guidance for unmanned vehicles. The parking method of the invention comprises the following steps: acquiring images of the surrounding environment to obtain an image to be recognized; detecting whether a traffic sign or a commander exists in the image to be recognized, and if so, generating a corresponding image, otherwise continuing to travel along a preset route; performing the corresponding image recognition on the generated image and judging whether the recognition result is parking information; changing the driving route according to the recognition result; and acquiring images of the surrounding environment, searching for the parking position closest to the traffic sign or the commander in the images, and parking. Aimed at the poor recognition performance of the prior art when a traffic sign and a gesture coexist, the method and the system can recognize traffic signs and gestures simultaneously and process them according to priority, which solves the recognition problem when traffic signs and gestures coexist and improves recognition accuracy.

Description

Unmanned vehicle parking method and system based on image recognition and storage medium
Technical Field
The invention relates to the field of vision guidance for unmanned vehicles, and in particular to an unmanned vehicle parking method and system based on image recognition, and a storage medium.
Background
With economic development and continued scientific and technological progress, intelligent transportation systems are becoming the direction in which transportation will evolve, and unmanned driving technology plays an important role in this trend. Industrial parks, campuses, tourist attractions, ports and docks are the places where unmanned vehicles are most widely used at present.
At present, unmanned vehicles that cruise and park in such parks mostly recognize and handle parking instructions through preset program logic, for example parking according to gesture instructions or according to traffic sign information, but they are easily confused when encountering a person and a traffic sign together, or other complex environments.
The Chinese patent application No. CN201810979341.6, published on March 3, 2020, discloses a traffic police gesture recognition method and device, a vehicle control unit and a storage medium. The method comprises: acquiring a road condition image in front of the vehicle in real time; when a traffic police target is detected in the road condition image, obtaining the skeleton key points of the traffic police target and their coordinates in a preset coordinate system from a pre-established skeleton key point detection model; obtaining the body orientation of the traffic police target from the coordinates of the skeleton key points; and, when the body of the traffic police target faces the lane of the vehicle, recognizing the command gesture type of the traffic police target from the skeleton key points and a pre-established traffic police command action recognition model. That invention automatically recognizes the meaning of road traffic police command actions and improves the level of driving intelligence, but its drawback is that it can only recognize traffic police gestures and cannot recognize traffic signs.
Disclosure of Invention
1. Technical problem to be solved
Aiming at the lack of a parking scheme for unmanned vehicles in parks and the poor recognition performance of the prior art when a traffic sign and a gesture coexist, the invention provides an unmanned vehicle parking method, system and storage medium based on image recognition that can recognize traffic signs and gestures simultaneously and process them according to priority, solving the recognition problem when traffic signs and gestures coexist and improving recognition accuracy.
2. Technical scheme
The purpose of the invention is realized by the following technical scheme.
An unmanned vehicle parking method based on image recognition comprises the following steps:
step 1, carrying out image acquisition on the surrounding environment to obtain an image to be identified;
step 2, detecting whether a traffic sign or a director exists in the image to be recognized, if so, generating a corresponding image, and entering step 3, otherwise, continuing to travel according to a preset route;
step 3, carrying out corresponding image recognition on the generated image, judging whether the recognition result is parking information, if not, entering step 4, otherwise, entering step 5;
step 4, changing a driving route according to the recognition result, and returning to the step 1;
and 5, acquiring images of the surrounding environment, and searching a parking position closest to the traffic sign or the commander in the images for parking.
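For illustration only, a minimal sketch of this five-step loop is given below; the vehicle interface and the helper functions (capture_image, detect_sign_or_commander, recognize, is_parking_command, update_route, find_nearest_parking_space, park, follow_preset_route) are hypothetical names, not part of the disclosure.

```python
# Hypothetical sketch of the five-step parking loop described above; all helper
# functions are placeholders, not part of the disclosed system.
def parking_loop(vehicle):
    while True:
        image = vehicle.capture_image()                      # step 1: acquire the surroundings
        target = detect_sign_or_commander(image)             # step 2: traffic sign / commander present?
        if target is None:
            vehicle.follow_preset_route()                    # no target: keep to the preset route
            continue
        result = recognize(target)                           # step 3: sign- or gesture-specific model
        if not is_parking_command(result):
            vehicle.update_route(result)                     # step 4: change the route, back to step 1
            continue
        space = vehicle.find_nearest_parking_space(target)   # step 5: space closest to the sign/commander
        vehicle.park(space)
        break
```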
Further, detecting whether a traffic sign or a director exists in the image to be recognized in the step 2 specifically includes the following steps:
step 2.1, establishing an identification detection model and training the identification detection model;
and 2.2, performing target detection on the image to be recognized by using the trained identification detection model, judging whether a traffic identification or a director exists in the image to be recognized, if so, acquiring the area where the traffic identification or the director is located, and generating a corresponding traffic identification pattern or a portrait pattern.
Furthermore, in step 2.2, the trained identification detection model is used to perform target detection on the image to be recognized and to judge whether a traffic sign or a commander exists in the image to be recognized, which specifically includes the following steps:
extracting feature maps of the image to be recognized with the backbone feature extraction network (Conv Layers);
feeding the feature maps into the RPN (Region Proposal Network), generating anchor boxes, classifying the anchors through a softmax layer to obtain the positive anchors that contain target images, and correcting the anchors with bounding-box regression to obtain more accurate proposals;
sending the feature maps and the proposals obtained in the previous two steps into an RoI Pooling layer to compute the proposal feature maps, which are then fed into the subsequent network;
computing the class of each proposal (target detection box) through the fully connected layers and softmax, and correcting the position of the target detection box with bounding-box regression again;
when a target is recognized, sending the position vector [x, y, h, w] of the detection box to the corresponding recognition model according to the type of the target, where x and y are the horizontal and vertical coordinates of the center of the detection box, and h and w are its height and width, respectively.
Further, the step 3 of performing corresponding image recognition on the generated image specifically includes the following steps:
3.1, selecting a corresponding image recognition model according to the type of the image, and entering step 3.2 if the recognition result has a portrait pattern, or entering step 3.3;
step 3.2, performing gesture recognition on the image pattern to obtain a corresponding recognition result;
and 3.3, performing traffic sign recognition on the traffic sign pattern to obtain a corresponding recognition result.
Furthermore, the model for performing traffic sign recognition on the traffic sign in the step 3 is a super-resolution reconstruction convolutional neural network model, and the model for performing gesture recognition on the image pattern is a 3D convolutional neural network model.
An image recognition based unmanned vehicle parking system comprising:
the image acquisition unit is used for acquiring images of the surrounding environment to obtain an image to be identified;
the identification detection unit is used for detecting whether a traffic identification or a director exists in the image to be identified, if so, generating a corresponding image, entering the image classification unit, and if not, continuing to advance according to a preset route;
the image classification and identification unit is used for performing image identification on the image to be identified and judging whether the identification result is parking information, if not, entering the route changing unit, otherwise, entering the parking unit;
the route changing unit is used for changing the driving route according to the recognition result and returning the driving route to the image acquisition unit;
and the parking unit is used for acquiring images of the surrounding environment and searching for a parking position closest to the traffic sign or the commander in the images for parking.
Further, the identification detection unit includes:
the model training module is used for establishing an identification detection model and training the identification detection model;
and the image detection module is used for carrying out target detection on the image to be recognized by using the trained identification detection model, judging whether a traffic identification or a director exists in the image to be recognized, if so, acquiring the area where the traffic identification or the director is located, and generating a corresponding traffic identification pattern or a portrait pattern.
Further, the image classification and identification unit includes:
the model selection module is used for selecting a corresponding image recognition model according to the type of the image, if a portrait pattern exists in a recognition result, entering the gesture recognition module, and if not, entering the traffic identification recognition module;
the gesture recognition module is used for performing gesture recognition on the image pattern to obtain a corresponding recognition result;
and the traffic identification module is used for identifying the traffic identification to obtain a corresponding identification result.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the parking method described above.
3. Advantageous effects
Compared with the prior art, the invention has the advantages that:
according to the invention, the first identification camera and the second identification camera are arranged on the unmanned vehicle, the parking mark is detected by the first identification camera, and the ground parking space line is identified and positioned by matching with the second identification camera, so that the unmanned vehicle can accurately park at the preset position; firstly, detecting an image acquired by a first recognition camera, and dividing a recognition frame into two stages based on target detection and fine classification in the process of detecting and recognizing a traffic sign or a gesture of a commander so as to ensure higher recall rate and accuracy; the target detection is used for detecting whether a traffic identification or a commander exists in an image, if so, the image is processed, the image is further subjected to fine classification and identification, a corresponding identification model is further selected according to whether the traffic identification or the commander exists in the image, the classification module is respectively provided with an independent fine classification network 3D CNN and an independent fine classification network SRCNN for the identification of gestures and traffic identifications, the relevant characteristic information of the gesture instruction image in the time dimension can be reserved, the traffic identification with lower resolution of a shooting result caused by various factors is converted into a high-resolution image, the identification accuracy is improved, the independent design mode is convenient for simultaneous research and development of multiple persons, and the research and development period is shortened. The gesture of the commander and the traffic identification instruction information are comprehensively recognized, so that the parking place of the unmanned vehicle can be changed at any time, and the parking place of the unmanned vehicle is more flexible and diversified in place; the invention can more efficiently finish the parking action of the unmanned vehicle under the complex environments of the coexistence of people and marks and the like by setting different recognition objects and the priority of command information in advance. Compared with the gesture recognition parking based on wearable equipment and the recognition technology based on manual extracted features, the gesture recognition parking system has the characteristics of maneuverability, universality, high efficiency and the like.
Drawings
FIG. 1 is a flow chart of a method for parking an unmanned vehicle according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a recognized gesture in an embodiment of the invention;
FIG. 3 is a schematic diagram of an identification mark in an embodiment of the present invention;
fig. 4 is a block diagram of an unmanned vehicle parking system in an embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the drawings and specific examples.
As shown in fig. 1, an embodiment of the present invention provides an image recognition-based unmanned vehicle parking method, including the steps of:
Step 1, the vehicle travels along a preset route, and image acquisition is performed on the surrounding environment to obtain the images to be recognized.
Specifically, in this embodiment the preset route is the optimal path set in advance according to the departure point and the destination, and the vehicle is an unmanned vehicle operating in a park. Two recognition cameras are pre-installed on the unmanned vehicle: the first recognition camera is installed at the top of the front of the vehicle and tilted slightly upward, and is used to acquire images of the surrounding environment; the second recognition camera is a wide-angle camera installed at the front of the vehicle and tilted downward, and is used to find the position of the parking lines.
After the vehicle enters the park, the first recognition camera captures the surroundings of the unmanned vehicle in real time as a video stream, and image frames, that is, the images to be recognized, are extracted frame by frame for subsequent image recognition, as in the sketch below.
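A minimal frame-grabbing sketch with OpenCV is shown below; the camera index, the 32-frame batch size and the process() placeholder are assumptions for illustration, not part of the embodiment.

```python
import cv2

# Hypothetical sketch: reading frames from the first recognition camera and handing
# them to the later detection/recognition steps. process() is a placeholder.
cap = cv2.VideoCapture(0)          # camera index 0 is an assumption
frames = []
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)           # each frame is an "image to be recognized"
    if len(frames) == 32:          # batch size chosen to match the 32-frame gesture clips
        process(frames)            # placeholder for steps 2 to 5
        frames.clear()
cap.release()
```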
And 2, detecting whether a traffic sign or a commander exists in the image to be recognized; if so, generating the corresponding image and entering step 3, otherwise returning to step 1 and continuing to travel along the preset route.
Specifically, detecting whether a traffic sign or a commander exists in the image to be recognized comprises the following steps:
step 2.1, establishing an identification detection model and training the identification detection model;
specifically, in this embodiment, the fast RCNN network model is used as the identification detection model, and the fast RCNN network model has the advantages of high identification precision and high speed, and can efficiently and quickly identify the commander and the traffic identification, and the identification detection model is trained through a large number of portrait patterns and traffic identification patterns, so as to obtain the detection model capable of quickly identifying the traffic identification patterns and the portrait patterns, wherein the types of the portrait patterns and the traffic identification patterns are shown in fig. 2 and 3.
And 2.2, performing target detection on the image to be recognized by using the trained identification detection model, judging whether a traffic identification or a director exists in the image to be recognized, if so, acquiring the area where the traffic identification or the director is located, and generating a corresponding traffic identification pattern or a portrait pattern.
Specifically, in this embodiment, the identification detection model is used to detect whether a traffic sign or a commander exists in the image to be recognized, so that the corresponding recognition model can be selected according to the detection result; when a traffic sign or a commander is detected, the area in which it is located is obtained and the corresponding traffic sign pattern or portrait pattern to be recognized is generated.
In step 2.2, the trained identification detection model is used to perform target detection on the image to be recognized and to judge whether a traffic sign or a commander exists in it, which specifically includes the following steps (a code sketch of this detection-and-dispatch flow follows the list):
(1) extracting feature maps of the video image with the backbone feature extraction network (Conv Layers);
(2) feeding the feature maps into the RPN (Region Proposal Network), generating anchor boxes, classifying the anchors through a softmax layer to obtain the positive anchors that contain target images, and correcting the anchors with bounding-box regression to obtain more accurate proposals;
(3) sending the feature maps and the proposals obtained in the previous two steps into an RoI Pooling layer to compute the proposal feature maps, which are then fed into the subsequent network, namely the fully connected layers;
(4) computing the class of each proposal (target detection box) through the fully connected layers and softmax, and correcting the position of the target detection box with bounding-box regression again;
(5) when a target is recognized, sending the position vector [x, y, h, w] of the detection box to the corresponding recognition model according to the type of the target, where x and y are the horizontal and vertical coordinates of the center of the detection box, h and w are its height and width, and the corresponding recognition model is the super-resolution reconstruction convolutional neural network model or the 3D convolutional neural network model of step 3.
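A minimal inference-and-dispatch sketch, again based on torchvision's Faster R-CNN, is given below. The class ids, the 0.7 confidence threshold and the gesture_recognizer/sign_recognizer callables are assumptions for illustration.

```python
import torch

# Hypothetical sketch of steps (1) to (5): run the trained detector, convert each box to
# the [x, y, h, w] centre/size vector, then route the crop to the matching recognizer.
TRAFFIC_SIGN, COMMANDER = 1, 2           # class ids are assumptions

model.eval()
with torch.no_grad():
    outputs = model([image_tensor])[0]   # boxes are returned in (x1, y1, x2, y2) format

for box, label, score in zip(outputs["boxes"], outputs["labels"], outputs["scores"]):
    if score < 0.7:                      # confidence threshold is an assumption
        continue
    x1, y1, x2, y2 = box.tolist()
    x, y = (x1 + x2) / 2, (y1 + y2) / 2  # centre coordinates of the detection box
    h, w = y2 - y1, x2 - x1              # height and width of the detection box
    crop = image_tensor[:, int(y1):int(y2), int(x1):int(x2)]
    if label.item() == COMMANDER:
        result = gesture_recognizer(crop)   # 3D CNN pipeline of step 3.2
    elif label.item() == TRAFFIC_SIGN:
        result = sign_recognizer(crop)      # SRCNN + LeNet-5 pipeline of step 3.3
```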
And 3, carrying out corresponding image recognition on the generated image, judging whether the recognition result is parking information, if not, entering the step 4, otherwise, entering the step 5.
Specifically, the corresponding image recognition of the generated image specifically includes the following steps:
and 3.1, selecting a corresponding image recognition model according to the type of the image, entering step 3.2 if the recognition result has the portrait pattern, and otherwise entering step 3.3.
Specifically, in step 2 the identification detection model detects the image to be recognized and generates a traffic sign pattern or a portrait pattern, and the corresponding image recognition model must be determined before the image is recognized. Because a traffic sign and a commander may be present at the same time in practice, recognition would become confused if this case were not handled; the method therefore presets the priority of the instructions. In this embodiment the commander's gesture image is given higher priority than the traffic sign, so when a commander is present in the image, the gesture recognition model corresponding to the commander image is selected directly and the traffic sign is not processed. On the one hand, this solves the problem of confused recognition when a traffic sign and a commander coexist; on the other hand, it reduces the computation required and increases the recognition speed.
For traffic sign recognition, this embodiment adopts a super-resolution reconstruction convolutional neural network (SRCNN) model. An SRCNN-based method can restore sign images whose resolution is low, for example because of high vehicle speed or poor lighting, into high-resolution images, effectively reducing the impact of low-resolution images on recognition accuracy. For recognition of the commander's gesture images, this embodiment adopts a 3D convolutional neural network (3D CNN) model, because a 2D convolutional neural network easily loses the feature information of the target along the time dimension in video recognition, which lowers the recognition accuracy; a gesture recognition method based on a 3D convolutional neural network model is therefore used. This independent design also allows several developers to work in parallel and shortens the development cycle.
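A minimal sketch of this priority rule is shown below; the label strings and the returned tuple format are assumptions for illustration.

```python
# Hypothetical sketch of the priority rule: when a commander and a traffic sign appear
# in the same frame, only the commander's gesture is passed on for recognition.
def choose_recognition_target(detections):
    """detections: list of (label, crop) pairs from the identification detection model."""
    commanders = [crop for label, crop in detections if label == "commander"]
    if commanders:
        return "gesture", commanders[0]          # gestures take priority over traffic signs
    signs = [crop for label, crop in detections if label == "traffic_sign"]
    if signs:
        return "sign", signs[0]
    return None, None                            # nothing detected: keep the preset route
```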
And 3.2, performing gesture recognition on the image pattern to obtain a corresponding recognition result.
Specifically, in this embodiment, gesture recognition of the image pattern includes the following steps (a sketch follows the list):
(1) normalizing the received video images, unifying the number of frames and the width and height of each frame to obtain a 32-frame RGB input video;
(2) extracting optical flow features from the RGB video with the iDT algorithm to generate a 32-frame optical flow video;
(3) extracting features from the 32-frame RGB video and the optical flow video, respectively, through a C3D model;
(4) splicing and fusing the obtained RGB features and optical flow features;
(5) feeding the fused high-dimensional features into an SVM classifier for classification to obtain the classification result.
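A minimal sketch of this pipeline is given below; torchvision's r3d_18 stands in for the C3D extractor, and the 32-frame RGB clip (rgb_clip), the precomputed optical-flow clip (flow_clip, converted to 3-channel frames) and the SVM training data are assumed to be available. These substitutions and placeholders are for illustration only, not the exact models of the embodiment.

```python
import numpy as np
import torch
from torchvision.models.video import r3d_18
from sklearn.svm import SVC

# Hypothetical sketch of steps (1) to (5); r3d_18 replaces C3D purely for illustration.
def clip_to_tensor(clip):
    # clip: list of 32 HxWx3 float frames -> (1, 3, 32, H, W) tensor expected by 3D CNNs
    return torch.tensor(np.stack(clip), dtype=torch.float32).permute(3, 0, 1, 2).unsqueeze(0)

extractor = r3d_18(weights="DEFAULT")
extractor.fc = torch.nn.Identity()               # keep the penultimate 512-d clip features
extractor.eval()

with torch.no_grad():
    rgb_feat = extractor(clip_to_tensor(rgb_clip)).flatten().numpy()     # step (3), RGB stream
    flow_feat = extractor(clip_to_tensor(flow_clip)).flatten().numpy()   # step (3), flow stream

fused = np.concatenate([rgb_feat, flow_feat])    # step (4): splice RGB and flow features

svm = SVC(kernel="linear")
svm.fit(train_features, train_labels)            # placeholders: labelled gesture clips, fitted offline
gesture = svm.predict(fused.reshape(1, -1))[0]   # step (5): classify the fused feature
```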
And 3.3, performing traffic sign recognition on the traffic sign pattern to obtain a corresponding recognition result.
Specifically, in this embodiment, traffic sign recognition includes the following steps (a sketch follows the list):
(1) performing super-resolution reconstruction on the traffic sign image with the SRCNN algorithm to obtain a reconstructed image;
(2) training a LeNet-5 convolutional neural network to obtain a LeNet-5 convolutional neural network model;
(3) recognizing the reconstructed image with the trained convolutional neural network model to obtain the recognition result.
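A minimal PyTorch sketch of this two-stage recognizer is given below. The layer sizes follow the original SRCNN and LeNet-5 designs, the four sign classes and the 32x32 classifier input are assumptions, and both networks would have to be trained before use; the embodiment does not give exact hyper-parameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch: SRCNN-style reconstruction followed by LeNet-5-style classification.
class SRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=5, padding=2),
        )

    def forward(self, x):                        # x: bicubically upscaled low-resolution sign crop
        return self.body(x)

class LeNet5(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(), nn.Linear(84, num_classes),
        )

    def forward(self, x):                        # x: 3x32x32 reconstructed sign image
        return self.classifier(self.features(x))

sr, clf = SRCNN(), LeNet5(num_classes=4)         # e.g. straight / left / right / stop (assumed)
reconstructed = sr(low_res_sign)                 # step (1); low_res_sign is a placeholder (1, 3, H, W) tensor
logits = clf(F.interpolate(reconstructed, size=(32, 32)))   # steps (2)-(3) with a trained classifier
label = logits.argmax(dim=1)
```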
Step 4, changing the driving route according to the recognition result, and returning to step 1.
specifically, in this embodiment, the form route is changed according to the recognition result obtained in step 3, and the recognition result includes a traffic identification instruction signal and a gesture instruction signal.
The traffic sign comprises: (1) the unmanned vehicle responds to the straight signal, and the result is that the unmanned vehicle continues to move straight; (2) the unmanned vehicle responds to the left turn signal and turns left; (3) and (4) turning right, wherein the response result of the unmanned vehicle is a signal for turning right (4) to stop, and the unmanned vehicle stops.
Gesture command signal: (1) following the signal, the unmanned vehicle responds to the result that the unmanned vehicle continues to drive along with the position of the person; (2) stopping following the signal, and waiting for the next step of instruction when the unmanned vehicle responds that the following is stopped; (3) the unmanned vehicle responds to a steering signal, performs steering according to the instruction, and moves straight after the steering is finished; (4) and (5) parking signals, and parking the unmanned vehicle.
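As referenced above, a minimal sketch of mapping these instruction signals to vehicle actions might look like the following; the label strings and action names are hypothetical placeholders for the vehicle control interface.

```python
# Hypothetical mapping from recognition results to vehicle actions, following the two
# instruction sets listed above; all names are placeholders.
SIGN_ACTIONS = {
    "straight": "continue_straight",
    "left": "turn_left",
    "right": "turn_right",
    "stop": "park",
}
GESTURE_ACTIONS = {
    "follow": "follow_person",
    "stop_follow": "hold_and_wait",
    "turn": "turn_then_straight",
    "park": "park",
}

def action_for(result_type, label):
    # gesture results take priority and use their own table
    table = GESTURE_ACTIONS if result_type == "gesture" else SIGN_ACTIONS
    return table.get(label, "follow_preset_route")
```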
And 5, acquiring images of the surrounding environment, and searching a parking position closest to the traffic sign or the commander in the images for parking.
Specifically, when the recognition result of step 3 is parking information, the second recognition camera on the unmanned vehicle acquires images of the surrounding environment, and in this embodiment the control algorithm of an existing automatic parking system is used to complete the parking maneuver, for example as sketched below.
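A minimal sketch of selecting the parking space closest to the detected traffic sign or commander is given below, under the assumption that the space centres and the target position are expressed in a common ground-plane coordinate frame; detected_spaces, sign_or_commander_position and auto_parking_controller are hypothetical names.

```python
import numpy as np

# Hypothetical sketch: pick the parking space whose centre is closest to the detected
# sign/commander, then hand over to the existing automatic-parking control algorithm.
def nearest_space(spaces, target_xy):
    """spaces: list of (x, y) centres of parking spaces found by the second camera."""
    distances = [np.hypot(x - target_xy[0], y - target_xy[1]) for x, y in spaces]
    return spaces[int(np.argmin(distances))]

space = nearest_space(detected_spaces, sign_or_commander_position)
auto_parking_controller.park_into(space)   # placeholder for the existing parking controller
```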
As shown in fig. 4, another embodiment of the present invention provides an image recognition-based unmanned vehicle parking system for implementing the above-mentioned unmanned vehicle parking method, the parking system including:
the image acquisition unit is used for acquiring images of the surrounding environment to obtain an image to be identified;
the identification detection unit is used for detecting whether a traffic identification or a director exists in the image to be identified, if so, generating a corresponding image, entering the image classification unit, and if not, continuing to advance according to a preset route;
the image classification and identification unit is used for performing image identification on the image to be identified and judging whether the identification result is parking information, if not, entering the route changing unit, otherwise, entering the parking unit;
the route changing unit is used for changing the driving route according to the recognition result and returning the driving route to the image acquisition unit;
and the parking unit is used for acquiring images of the surrounding environment and searching for a parking position closest to the traffic sign or the commander in the images for parking.
Specifically, the identifier detection unit includes:
the model training module is used for establishing an identification detection model and training the identification detection model;
and the image detection module is used for carrying out target detection on the image to be recognized by using the trained identification detection model, judging whether a traffic identification or a director exists in the image to be recognized, if so, acquiring the area where the traffic identification or the director is located, and generating a corresponding traffic identification pattern or a portrait pattern.
Specifically, the image classification and identification unit includes:
the model selection module is used for selecting a corresponding image recognition model according to the type of the image, if a portrait pattern exists in a recognition result, entering the gesture recognition module, and if not, entering the traffic identification recognition module;
the gesture recognition module is used for performing gesture recognition on the image pattern to obtain a corresponding recognition result;
and the traffic identification module is used for identifying the traffic identification to obtain a corresponding identification result.
The system of this embodiment uses the above unmanned vehicle parking method to realize traffic sign recognition and gesture recognition for the unmanned vehicle, thereby controlling the automatic steering and parking of the unmanned vehicle as it drives through the park.
Embodiments of the present application may also be implemented as a computer-readable storage medium having computer-readable instructions stored thereon which, when executed by a processor, perform a method according to the embodiments of the present application described with reference to the above figures. The computer-readable storage medium includes, but is not limited to, volatile memory such as random access memory (RAM) or cache memory, and/or non-volatile memory such as read-only memory (ROM), a hard disk or flash memory.
The invention and its embodiments have been described above schematically, and the description is not limiting; the invention can be embodied in other specific forms without departing from its spirit or essential characteristics. What is shown in the drawings is only one embodiment of the invention, the actual construction is not limited to it, and any reference signs in the claims shall not limit the claims concerned. Therefore, structures and embodiments similar to the above technical solution that a person skilled in the art arrives at, without inventive effort, in light of the teachings of the invention shall fall within the protection scope of this patent. Furthermore, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. Several of the elements recited in the product claims may also be implemented by a single element in software or hardware. The terms first, second, etc. are used to denote names and do not denote any particular order.

Claims (9)

1. An unmanned vehicle parking method based on image recognition is characterized by comprising the following steps:
step 1, carrying out image acquisition on the surrounding environment to obtain an image to be identified;
step 2, detecting whether a traffic sign or a director exists in the image to be recognized, if so, generating a corresponding image, and entering step 3, otherwise, continuing to travel according to a preset route;
step 3, carrying out corresponding image recognition on the generated image, judging whether the recognition result is parking information, if not, entering step 4, otherwise, entering step 5;
step 4, changing a driving route according to the recognition result, and returning to the step 1;
and 5, acquiring images of the surrounding environment, and searching a parking position closest to the traffic sign or the commander in the images for parking.
2. The image recognition-based unmanned vehicle parking method according to claim 1, wherein detecting whether a traffic sign or a director exists in the image to be recognized in step 2 specifically comprises the following steps:
step 2.1, establishing an identification detection model and training the identification detection model;
and 2.2, performing target detection on the image to be recognized by using the trained identification detection model, judging whether a traffic identification or a director exists in the image to be recognized, if so, acquiring the area where the traffic identification or the director is located, and generating a corresponding traffic identification pattern or a portrait pattern.
3. The image recognition-based unmanned vehicle parking method according to claim 2, wherein in step 2.2 the trained identification detection model is used to perform target detection on the image to be recognized and to judge whether a traffic identification or a director exists in the image to be recognized, which specifically comprises the following steps:
extracting feature maps of the image to be recognized with a backbone feature extraction network;
feeding the feature maps into an RPN (Region Proposal Network), generating anchor boxes, classifying the anchors through a softmax layer to obtain positive anchors containing target images, and correcting the anchors with bounding-box regression to obtain more accurate proposals;
sending the feature maps and the proposals obtained in the previous two steps into an RoI Pooling layer to compute the proposal feature maps, which are then fed into the subsequent network;
computing the class of each target detection box (proposal) through the fully connected layers and softmax, and correcting the position of the target detection box with bounding-box regression again;
and when a target is recognized, sending the position vector of the detection box to the corresponding recognition model for recognition according to the type of the target.
4. The image recognition-based unmanned vehicle parking method according to claim 1 or 3, wherein the image recognition corresponding to the generated image in step 3 specifically comprises the following steps:
3.1, selecting a corresponding image recognition model according to the type of the image, and entering step 3.2 if the recognition result has a portrait pattern, or entering step 3.3;
step 3.2, performing gesture recognition on the image pattern to obtain a corresponding recognition result;
and 3.3, performing recognition on the traffic identification to obtain a corresponding recognition result.
5. The unmanned vehicle parking method based on image recognition as claimed in claim 4, wherein: the model for performing recognition on the traffic identification in step 3 is a super-resolution reconstruction convolutional neural network model, and the model for performing gesture recognition on the image pattern is a 3D convolutional neural network model.
6. An unmanned vehicle parking system based on image recognition, which is used for realizing the unmanned vehicle parking method according to any one of claims 1-5, and is characterized by comprising the following steps:
the image acquisition unit is used for acquiring images of the surrounding environment to obtain an image to be identified;
the identification detection unit is used for detecting whether a traffic identification or a director exists in the image to be identified, if so, generating a corresponding image, entering the image classification unit, and if not, continuing to advance according to a preset route;
the image classification and identification unit is used for performing image identification on the image to be identified and judging whether the identification result is parking information, if not, entering the route changing unit, otherwise, entering the parking unit;
the route changing unit is used for changing the driving route according to the recognition result and returning the driving route to the image acquisition unit;
and the parking unit is used for acquiring images of the surrounding environment and searching for a parking position closest to the traffic sign or the commander in the images for parking.
7. The image recognition-based unmanned vehicle parking system of claim 6, wherein the identification detection unit comprises:
the model training module is used for establishing an identification detection model and training the identification detection model;
and the image detection module is used for carrying out target detection on the image to be recognized by using the trained identification detection model, judging whether a traffic identification or a director exists in the image to be recognized, if so, acquiring the area where the traffic identification or the director is located, and generating a corresponding traffic identification pattern or a portrait pattern.
8. The image recognition-based unmanned vehicle parking system according to claim 7, wherein the image classification recognition unit comprises:
the model selection module is used for selecting a corresponding image recognition model according to the type of the image, if a portrait pattern exists in a recognition result, entering the gesture recognition module, and if not, entering the traffic identification recognition module;
the gesture recognition module is used for performing gesture recognition on the image pattern to obtain a corresponding recognition result;
and the traffic identification module is used for identifying the traffic identification to obtain a corresponding identification result.
9. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program, when executed by a processor, causes the processor to perform the unmanned vehicle parking method of any of claims 1-5.
CN202010640859.4A 2020-07-06 2020-07-06 Unmanned vehicle parking method and system based on image recognition and storage medium Pending CN111814657A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010640859.4A CN111814657A (en) 2020-07-06 2020-07-06 Unmanned vehicle parking method and system based on image recognition and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010640859.4A CN111814657A (en) 2020-07-06 2020-07-06 Unmanned vehicle parking method and system based on image recognition and storage medium

Publications (1)

Publication Number Publication Date
CN111814657A true CN111814657A (en) 2020-10-23

Family

ID=72841599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010640859.4A Pending CN111814657A (en) 2020-07-06 2020-07-06 Unmanned vehicle parking method and system based on image recognition and storage medium

Country Status (1)

Country Link
CN (1) CN111814657A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109389838A (en) * 2018-11-26 2019-02-26 爱驰汽车有限公司 Unmanned crossing paths planning method, system, equipment and storage medium
CN109711455A (en) * 2018-12-21 2019-05-03 贵州翰凯斯智能技术有限公司 A kind of traffic police's gesture identification method based on pilotless automobile
CN110598693A (en) * 2019-08-12 2019-12-20 浙江工业大学 Ship plate identification method based on fast-RCNN

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784857A (en) * 2021-01-29 2021-05-11 北京三快在线科技有限公司 Model training and image processing method and device
CN112784857B (en) * 2021-01-29 2022-11-04 北京三快在线科技有限公司 Model training and image processing method and device
CN113264038A (en) * 2021-07-19 2021-08-17 新石器慧通(北京)科技有限公司 Unmanned vehicle parking method and device based on temporary event and electronic equipment
CN114297534A (en) * 2022-02-28 2022-04-08 京东方科技集团股份有限公司 Method, system and storage medium for interactively searching target object
CN114297534B (en) * 2022-02-28 2022-07-22 京东方科技集团股份有限公司 Method, system and storage medium for interactively searching target object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination