CN115408487A - Real-time panoramic autonomous recognition system for unmanned vehicle based on FPGA - Google Patents


Info

Publication number
CN115408487A
CN115408487A (application CN202211362560.2A)
Authority
CN
China
Prior art keywords
identification
unmanned vehicle
initial
area
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211362560.2A
Other languages
Chinese (zh)
Other versions
CN115408487B (en)
Inventor
焦斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Junhan Information Technology Co ltd
Original Assignee
Hunan Junhan Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Junhan Information Technology Co ltd filed Critical Hunan Junhan Information Technology Co ltd
Priority to CN202211362560.2A priority Critical patent/CN115408487B/en
Publication of CN115408487A publication Critical patent/CN115408487A/en
Application granted granted Critical
Publication of CN115408487B publication Critical patent/CN115408487B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3807 Creation or updating of map data characterised by the type of data
    • G01C21/3815 Road data
    • G01C21/3819 Road shape data, e.g. outline of a route
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23 Updating
    • G06F16/2379 Updates performed during online database operations; commit processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/903 Querying
    • G06F16/90335 Query processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The invention discloses an FPGA (field-programmable gate array) based real-time panoramic autonomous recognition system for an unmanned vehicle, belonging to the technical field of unmanned vehicles. The system comprises an identification area module, an identification module and a server. The identification area module intelligently delimits a corresponding identification area: it acquires the navigation route of the unmanned vehicle, obtains the corresponding traffic map from that route, extracts the road features in the traffic map, and establishes a feature information map from the extracted road features and the traffic map; it then acquires the position and speed of the unmanned vehicle in real time, updates the acquired position and speed information in the feature information map, and analyzes the current feature information map to obtain the corresponding identification area. The identification module identifies the environment within the identification area: it obtains the corresponding identification area, collects data within it, establishes a corresponding data identification library, and identifies the collected data in real time against that library to obtain the corresponding identification information.

Description

Real-time panoramic autonomous recognition system for unmanned vehicle based on FPGA
Technical Field
The invention belongs to the technical field of unmanned vehicles, and particularly relates to an FPGA-based real-time panoramic autonomous identification system for an unmanned vehicle.
Background
At present, the unmanned vehicle industry is developing rapidly. Almost every vehicle is fitted with multiple monitoring cameras, the "eyes" of the unmanned vehicle, which monitor the surrounding environment and the running condition of the vehicle. Only by identifying the surrounding environment accurately and rapidly can the safe operation of the unmanned vehicle be guaranteed. The invention therefore provides an FPGA-based real-time panoramic autonomous identification system for unmanned vehicles, to realize rapid autonomous identification of the vehicle's panoramic surroundings.
Disclosure of Invention
In order to solve the problems existing in the prior art, the invention provides an FPGA-based real-time panoramic autonomous recognition system for an unmanned vehicle.
The purpose of the invention can be realized by the following technical scheme:
an FPGA-based unmanned vehicle real-time panoramic autonomous recognition system comprises a recognition area module, a recognition module and a server;
the identification area module is used for intelligently delimiting a corresponding identification area, acquiring a navigation route of the unmanned vehicle, acquiring a corresponding traffic map based on the acquired navigation route, extracting road features in the traffic map, and establishing a feature information map according to the extracted road features and the traffic map; acquiring the position and speed of the unmanned vehicle in real time, updating the acquired position information and speed information in a characteristic information graph in real time, and analyzing the current characteristic information graph to acquire a corresponding identification area;
the identification module is used for identifying the environment in the identification area: acquiring the corresponding identification area, collecting data in the identification area, establishing a corresponding data identification library, and identifying the collected data in real time against the data identification library to obtain the corresponding identification information.
Further, the method for establishing the characteristic information map by combining the extracted road characteristics with the traffic map comprises the following steps:
and processing the traffic map to obtain an initial map, supplementing the extracted road characteristics into the initial map according to the corresponding road positions, and marking the supplemented initial map as a characteristic information map.
Furthermore, before the road features in the traffic map are extracted, the feature information map is checked: the corresponding navigation route is identified and input into the feature information map for road comparison, and the corresponding target road is determined.
Further, the method for analyzing the current characteristic information graph comprises the following steps:
the method comprises the steps of identifying speed and position information of a current unmanned vehicle, generating a corresponding initial area according to the identified speed information of the unmanned vehicle, obtaining a corresponding road characteristic set in a characteristic information graph according to the position of the unmanned vehicle, adjusting an initial contour of the initial area according to the obtained road characteristic set, obtaining a corresponding identification area contour, and generating a corresponding identification area according to the obtained identification area contour.
Further, the method for generating the corresponding initial area according to the identified speed information of the unmanned vehicle comprises the following steps:
establishing an area profile library, identifying the speed of the corresponding unmanned vehicle, inputting the identified speed into the area profile library to obtain a corresponding initial contour, and generating a corresponding initial area according to the obtained initial contour.
Further, the working method of the area profile library comprises the following steps: the method comprises the steps of identifying the speed of an input unmanned vehicle, matching a corresponding standard contour and a proportional adjustment coefficient algorithm according to the identified speed, calculating a corresponding proportional adjustment coefficient according to the proportional adjustment coefficient algorithm and the speed, and adjusting the standard contour according to the calculated proportional adjustment coefficient to obtain a corresponding initial contour.
Further, the method for adjusting the initial contour of the initial area according to the obtained road feature set comprises the following steps:
distributing the obtained road characteristic set according to the direction axes corresponding to the initial region to obtain a corresponding number of analysis sets, analyzing the analysis sets corresponding to the direction axes to obtain corresponding direction adjustment values, adjusting the initial contour on the corresponding direction axes according to the obtained direction adjustment values to obtain a transit contour, and adjusting the transit contour to obtain a corresponding identification region contour.
Compared with the prior art, the invention has the following beneficial effects: through the cooperation between the identification area module and the identification module, the unmanned vehicle can rapidly identify its surrounding environment; by intelligently generating the corresponding identification area, identification and analysis time is reduced and the volume of data to be processed is decreased; and by accurately identifying the environment within the identification area, the driving safety of the unmanned vehicle is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic block diagram of the present invention;
FIG. 2 is a schematic view of the initial profile of the present invention.
Detailed Description
The technical solution of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments; all other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
As shown in fig. 1 to 2, an autonomous real-time panoramic identification system for unmanned vehicles based on FPGA includes an identification area module, an identification module, and a server;
the identification area module and the identification module are in communication connection with the server;
the identification area module is used for intelligently defining corresponding identification areas, for the identification of the peripheral environment of the unmanned vehicle, the effective areas of the unmanned vehicle, which need to be subjected to peripheral environment identification, are different under the conditions of different positions, different speeds and the like, the identification areas are too small and have potential safety hazards, the identification areas are too large and cause invalid analysis of a large amount of data, the identification analysis efficiency is reduced, and the identification areas also have certain potential safety hazards, for example, under the condition of high-speed driving, the identification areas are too small and cause that corresponding danger sources are not found timely and safety accidents are generated, the identification areas are too large and cause that the identification analysis time is increased, under the high-speed driving environment, the safety accidents are easily generated due to lack of sufficient coping time, so that the proper identification areas need to be intelligently generated according to actual operating conditions, the environment in the identification areas is accurately identified, and the driving safety of the unmanned vehicle is improved; specifically, the method for intelligently dividing the identification area comprises the following steps:
acquiring a navigation route of an unmanned vehicle, acquiring a corresponding traffic map based on the acquired navigation route, extracting road characteristics in the traffic map, and establishing a characteristic information map according to the extracted road characteristics and the traffic map;
and acquiring the position and the speed of the unmanned vehicle in real time, updating the acquired position information and speed information in the characteristic information graph in real time, and analyzing the current characteristic information graph to acquire a corresponding identification area.
The method for extracting the road features in the traffic map comprises the following steps:
and establishing a characteristic statistical table, and performing characteristic matching on various traffic road information in the traffic map according to the road characteristics in the characteristic statistical table to obtain corresponding road characteristics.
The feature statistical table records road features set according to the influencing factors that may be encountered while a vehicle is driving, such as pedestrians, non-motor vehicles, zebra crossings and speed limits on a provincial road. It can be compiled and summarized manually, or the corresponding road features can be obtained through big-data analysis and then summarized accordingly.
Feature matching of the various traffic road information in the traffic map against the road features in the feature statistical table means matching according to the meaning of each road feature. For example, if a road section is closed to pedestrians, it does not carry the pedestrian feature; otherwise it does: even where there is no zebra crossing, pedestrians may still appear if there are no protective measures such as barriers. In short, the corresponding feature matching can be performed with existing matching methods.
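The feature-matching step above can be sketched as a table of predicates evaluated against each road segment's attributes. This is an illustrative assumption only: the table contents, attribute names such as `fenced` and `road_class`, and the predicate form are invented here, not specified by the patent.

```python
# Hypothetical feature statistical table: feature name -> predicate over a
# road segment's attributes. Every feature whose condition holds is attached
# to the segment. All names and rules are illustrative.
FEATURE_TABLE = {
    "pedestrians":    lambda seg: not seg.get("fenced", False),
    "non_motor":      lambda seg: seg.get("road_class") in ("urban", "provincial"),
    "zebra_crossing": lambda seg: seg.get("has_crossing", False),
    "speed_limit":    lambda seg: "limit_kmh" in seg,
}

def match_road_features(segment: dict) -> set:
    """Return the set of feature labels that apply to one road segment."""
    return {name for name, pred in FEATURE_TABLE.items() if pred(segment)}

segment = {"road_class": "provincial", "fenced": False, "limit_kmh": 60}
print(sorted(match_road_features(segment)))
# -> ['non_motor', 'pedestrians', 'speed_limit']
```

An unfenced provincial road thus receives the pedestrian feature even without a zebra crossing, matching the reasoning in the paragraph above.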
The method for establishing the characteristic information graph by combining the extracted road characteristics with the traffic map comprises the following steps:
and processing the traffic map to obtain an initial map, supplementing the extracted road features into the initial map according to the corresponding road positions, and marking the supplemented initial map as a feature information map.
Processing the traffic map means keeping only the relevant content and screening out or replacing the other data: for example, the roads are retained while other unnecessary image blocks are screened out or replaced with codes, simplifying the map.
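The two steps above (simplify the map, then supplement it with the matched features at their road positions) can be sketched as follows. The dictionary layout of the maps is an assumption made for illustration, not the patent's data format.

```python
# Illustrative sketch of building the feature information map: the traffic map
# is reduced to road geometry only, then the extracted road features are
# attached at their corresponding road positions.

def simplify_map(traffic_map: dict) -> dict:
    """Keep only road entries; screen out unrelated layers (POIs, buildings...)."""
    return {"roads": {rid: r["geometry"] for rid, r in traffic_map["roads"].items()}}

def build_feature_info_map(traffic_map: dict, road_features: dict) -> dict:
    info_map = simplify_map(traffic_map)
    # Supplement each retained road with its extracted feature set.
    info_map["features"] = {rid: sorted(road_features.get(rid, ()))
                            for rid in info_map["roads"]}
    return info_map

traffic_map = {"roads": {"R1": {"geometry": [(0, 0), (1, 0)]}},
               "pois": ["fuel station"]}  # screened out by simplification
fmap = build_feature_info_map(traffic_map, {"R1": {"pedestrians", "speed_limit"}})
print(fmap["features"]["R1"])  # -> ['pedestrians', 'speed_limit']
```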
In one embodiment, because a vehicle generally travels within a relatively fixed area, some roads are already present in the feature information map and need not be processed again. The feature information map is therefore checked before road features are extracted from the traffic map: the corresponding navigation route is identified and input into the feature information map for road comparison, and the corresponding target roads, i.e. the roads from which road features still need to be extracted, are determined.
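The check described above amounts to a set difference between the route's roads and the roads already recorded in the feature information map. A minimal sketch, assuming the map layout used in the earlier examples:

```python
# Sketch of the feature-information-map check: roads on the navigation route
# already present in the map are skipped; the remainder become the "target
# roads" for feature extraction. Road identifiers are illustrative.

def target_roads(route_road_ids, feature_info_map):
    known = set(feature_info_map.get("features", {}))
    return [rid for rid in route_road_ids if rid not in known]

fmap = {"features": {"R1": ["pedestrians"]}}
print(target_roads(["R1", "R2", "R3"], fmap))  # -> ['R2', 'R3']
```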
The method for analyzing the current characteristic information graph comprises the following steps:
identifying the speed and position information of the current unmanned vehicle, wherein the position information comprises the position of a road and the position of a lane; generating a corresponding initial area according to the identified speed information of the unmanned vehicle, acquiring a corresponding road characteristic set in a characteristic information map according to the position of the unmanned vehicle, adjusting an initial contour of the initial area according to the acquired road characteristic set to acquire a corresponding identification area contour, and generating a corresponding identification area according to the acquired identification area contour.
The method for generating the corresponding initial area according to the identified speed information of the unmanned vehicle comprises the following steps:
establishing an area profile library, identifying the speed of the corresponding unmanned vehicle, inputting the identified speed into the area profile library to obtain a corresponding initial contour, and generating a corresponding initial area according to the obtained initial contour.
The area profile library is used for storing a number of initial contours set according to vehicle speed. Specifically, corresponding vehicle-speed intervals are set, and each interval is provided with a corresponding standard contour and a scaling-coefficient algorithm. The standard contour is set with reference to the vehicle speed alone, i.e. the size of the area that needs to be identified at that speed. Several direction axes are arranged on the standard contour, their number set according to the number of feature types used in subsequent adjustment; the direction axes are used to adjust the contour based on the corresponding feature data, as shown in fig. 2. The scaling-coefficient algorithm calculates how much the current standard contour should be enlarged or reduced according to the current vehicle speed. Because it depends only on speed, it can be fitted within each speed interval through existing human experience and mathematical methods; the specific standard contours and scaling-coefficient algorithms are set through discussion by an expert group.
The working method of the area contour library comprises the following steps: the method comprises the steps of identifying the speed of an input unmanned vehicle, matching a corresponding standard contour and a proportional adjustment coefficient algorithm according to the identified speed, calculating a corresponding proportional adjustment coefficient according to the proportional adjustment coefficient algorithm and the speed, and adjusting the standard contour according to the calculated proportional adjustment coefficient to obtain a corresponding initial contour.
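The library lookup described in the two paragraphs above can be sketched as a table keyed by speed interval. The intervals, radii and linear scaling formulas below are invented placeholders for the expert-set values the patent refers to.

```python
# Sketch of the area-profile-library lookup: the vehicle speed selects a speed
# interval carrying a standard contour (one radius per direction axis) and a
# scaling-coefficient formula; the contour is scaled by that coefficient.

PROFILE_LIBRARY = [
    # (v_min, v_max) in km/h, standard radii per direction axis in metres,
    # and an illustrative linear scaling-coefficient formula.
    ((0, 40),   [10, 8, 6, 8],    lambda v: 1.0 + 0.01 * v),
    ((40, 80),  [25, 15, 10, 15], lambda v: 1.0 + 0.015 * (v - 40)),
    ((80, 130), [60, 25, 15, 25], lambda v: 1.0 + 0.02 * (v - 80)),
]

def initial_contour(speed_kmh: float) -> list:
    for (lo, hi), radii, scale_fn in PROFILE_LIBRARY:
        if lo <= speed_kmh < hi:
            k = scale_fn(speed_kmh)  # proportional adjustment coefficient
            return [round(r * k, 2) for r in radii]
    raise ValueError("speed outside library range")

print(initial_contour(60.0))  # 60 km/h: k = 1.3 -> [32.5, 19.5, 13.0, 19.5]
```

The forward axis grows fastest with speed, reflecting the longer reaction distance required at high speed that the description emphasizes.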
Obtaining the corresponding road feature set in the feature information map according to the position of the unmanned vehicle means locating the corresponding road section in the feature information map according to the vehicle's position and obtaining the road features of that section. A corresponding lane feature is also generated according to the lane in which the unmanned vehicle is located, because different lanes may face different situations; this lane feature is likewise listed as a road feature and integrated into the road feature set.
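A minimal sketch of assembling that road feature set, assuming the feature-information-map layout from the earlier examples; the lane-derived features (`adjacent_shoulder`, `adjacent_oncoming`) are invented illustrations of the lane-dependent situations the paragraph mentions.

```python
# Sketch: look up the road section's features at the vehicle's position, then
# add a lane-specific feature, since edge lanes face different hazards than
# inner lanes. Lane indices count from the outermost lane (index 0).

def road_feature_set(feature_info_map, road_id, lane_index, lane_count):
    features = set(feature_info_map["features"].get(road_id, ()))
    if lane_index == 0:
        features.add("adjacent_shoulder")        # outermost lane
    elif lane_index == lane_count - 1:
        features.add("adjacent_oncoming")        # innermost lane
    return features

fmap = {"features": {"R1": ["pedestrians", "speed_limit"]}}
print(sorted(road_feature_set(fmap, "R1", 0, 2)))
# -> ['adjacent_shoulder', 'pedestrians', 'speed_limit']
```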
The method for adjusting the initial contour of the initial area according to the obtained road feature set comprises the following steps:
the obtained road feature set is distributed among the direction axes corresponding to the initial area to obtain a corresponding number of analysis sets, i.e. the road feature subsets corresponding to each direction axis, with the distribution matched according to the corresponding type attributes; the analysis set of each direction axis is then analyzed to obtain the corresponding direction adjustment value, the initial contour is adjusted on each direction axis according to its adjustment value to obtain a transit contour, and the transit contour is further adjusted to obtain the corresponding identification area contour.
Adjusting the initial contour on the corresponding direction axis according to the obtained direction adjustment value means adjusting the value of the initial contour on the current direction axis by that adjustment value, then connecting the adjusted points to obtain the transit contour. Because the shape of the transit contour may not be reasonable, it is adjusted further based on its current shape so that it meets the requirements of the identification area; for example, a corresponding artificial-intelligence model can be established for intelligent adjustment, which can be built with technology conventional in the field.
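The distribute-then-adjust procedure above can be sketched as follows. The per-feature weights and the feature-to-axis assignment are invented stand-ins for the analysis the patent delegates to a trained model; axis 0 is taken to point ahead of the vehicle.

```python
# Sketch of the direction-axis adjustment: the road feature set is split into
# one analysis set per direction axis, each set yields a direction adjustment
# value (here a simple weighted sum), and the initial-contour radius on that
# axis is shifted accordingly to form the transit contour.

FEATURE_WEIGHTS = {"pedestrians": 5.0, "non_motor": 3.0, "speed_limit": -2.0}
# Which direction axes each feature type is distributed to (axis 0 = ahead).
FEATURE_AXES = {"pedestrians": (0, 1), "non_motor": (1,), "speed_limit": (0,)}

def transit_contour(initial_radii, feature_set):
    # Distribute features into per-axis analysis sets by type attribute.
    analysis = {i: [] for i in range(len(initial_radii))}
    for f in feature_set:
        for axis in FEATURE_AXES.get(f, ()):
            analysis[axis].append(f)
    # One direction adjustment value per axis, applied to that axis's radius.
    adjust = {i: sum(FEATURE_WEIGHTS[f] for f in feats)
              for i, feats in analysis.items()}
    return [round(r + adjust[i], 2) for i, r in enumerate(initial_radii)]

print(transit_contour([32.5, 19.5, 13.0, 19.5], {"pedestrians", "speed_limit"}))
# -> [35.5, 24.5, 13.0, 19.5]
```

The pedestrian feature enlarges the forward and lateral radii while a speed limit shrinks the forward radius, mirroring the intent that the contour grow where danger sources are likely.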
The method for analyzing the analysis set corresponding to each direction axis to obtain the corresponding direction adjustment value comprises the following steps:
a corresponding adjustment analysis model is established based on a CNN or DNN network, and a corresponding training set is compiled manually for training; after training succeeds, the analysis sets are analyzed by the adjustment analysis model to obtain the corresponding adjustment values. The specific establishment and training process is common knowledge in the field and is therefore not described in detail.
The identification module is used for identifying the environment in the identification area, and the specific method comprises the following steps:
acquiring a corresponding identification area, and acquiring data in the identification area, wherein the acquired data comprises data acquired by an acquisition device loaded on the unmanned vehicle, such as image data, acquired data of a corresponding sensor and the like; and establishing a corresponding data identification library, and identifying the acquired data in real time according to the data identification library to acquire corresponding identification information.
The data identification library is used for storing various kinds of environment comparison information, such as the comparison information for vehicles, pedestrians, bicycles and other objects that may be encountered; the corresponding data identification library can be established with existing unmanned vehicle technology.
Identifying the collected data in real time against the data identification library can be realized through existing information comparison technology and existing unmanned driving technology.
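As a toy illustration of that comparison step, a detection can be labeled with the nearest-matching library entry. The library contents and the squared-distance similarity measure are placeholders for whatever comparison the deployed system actually uses.

```python
# Sketch of the data-identification-library step: incoming detections from the
# identification area are compared against stored reference entries and given
# the label of the closest match. Reference dimensions are illustrative.

ID_LIBRARY = {
    "vehicle": {"width_m": 1.8, "height_m": 1.5},
    "person":  {"width_m": 0.5, "height_m": 1.7},
    "bicycle": {"width_m": 0.6, "height_m": 1.1},
}

def identify(detection: dict) -> str:
    """Return the library label whose reference dimensions are nearest."""
    def distance(ref):
        return sum((detection[k] - ref[k]) ** 2 for k in ref)
    return min(ID_LIBRARY, key=lambda label: distance(ID_LIBRARY[label]))

print(identify({"width_m": 0.55, "height_m": 1.65}))  # -> person
```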
Although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the spirit and scope of the present invention.

Claims (7)

1. An FPGA-based real-time panoramic autonomous identification system for an unmanned vehicle is characterized by comprising an identification area module, an identification module and a server; the identification area module and the identification module are in communication connection with the server;
the identification area module is used for intelligently delimiting a corresponding identification area, acquiring a navigation route of the unmanned vehicle, acquiring a corresponding traffic map based on the acquired navigation route, extracting road features in the traffic map, and establishing a feature information map according to the extracted road features and the traffic map; acquiring the position and the speed of the unmanned vehicle in real time, updating the acquired position information and speed information in a characteristic information graph in real time, analyzing the current characteristic information graph and acquiring a corresponding identification area;
the identification module is used for identifying the environment in the identification area, acquiring the corresponding identification area, acquiring the acquired data in the identification area, establishing a corresponding data identification base, and identifying the acquired data in real time according to the data identification base to acquire corresponding identification information.
2. The FPGA-based real-time panoramic autonomous unmanned vehicle identification system of claim 1, wherein the method for establishing the characteristic information map by combining the extracted road characteristics with the traffic map comprises the following steps:
and processing the traffic map to obtain an initial map, supplementing the extracted road characteristics into the initial map according to the corresponding road positions, and marking the supplemented initial map as a characteristic information map.
3. The FPGA-based real-time panoramic autonomous identification system for unmanned vehicles, as recited in claim 1, is characterized in that feature information map checking is performed before extracting road features in a traffic map, corresponding navigation routes are identified, the identified navigation routes are input into the feature information map for road comparison, and corresponding target roads are determined.
4. The FPGA-based real-time panoramic autonomous identification system for unmanned vehicles according to claim 1, wherein the method for analyzing the current characteristic information graph comprises:
the method comprises the steps of identifying speed and position information of a current unmanned vehicle, generating a corresponding initial area according to the identified speed information of the unmanned vehicle, obtaining a corresponding road characteristic set in a characteristic information graph according to the position of the unmanned vehicle, adjusting an initial contour of the initial area according to the obtained road characteristic set, obtaining a corresponding identification area contour, and generating a corresponding identification area according to the obtained identification area contour.
5. The FPGA-based real-time panoramic autonomous unmanned vehicle identification system of claim 4, wherein the method for generating the corresponding initial area according to the identified speed information of the unmanned vehicle comprises:
establishing an area profile library, identifying the speed of the corresponding unmanned vehicle, inputting the identified speed of the unmanned vehicle into the area profile library to obtain a corresponding initial contour, and generating a corresponding initial area according to the obtained initial contour.
6. The FPGA-based real-time panoramic autonomous unmanned vehicle identification system of claim 5, wherein the working method of the area profile library comprises: identifying the vehicle speed of the input unmanned vehicle, matching a corresponding standard contour and a proportional adjustment coefficient algorithm according to the identified vehicle speed, calculating a corresponding proportional adjustment coefficient according to the proportional adjustment coefficient algorithm and the vehicle speed, and adjusting the standard contour according to the calculated proportional adjustment coefficient to obtain the corresponding initial contour.
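The area profile library of claims 5 and 6 can be sketched as a speed-banded lookup: each band supplies a standard contour and a coefficient formula, and the contour is scaled by the computed proportional adjustment coefficient. The speed bands, base radii, octagonal standard contour, and linear coefficient formula below are all illustrative assumptions.

```python
import math

# (max speed m/s, base contour radius m, coefficient per m/s) -- assumed bands
PROFILE_LIBRARY = [
    (5.0, 10.0, 0.10),
    (15.0, 20.0, 0.15),
    (float("inf"), 35.0, 0.20),
]

def initial_contour(speed_mps: float) -> list[tuple[float, float]]:
    """Return an initial contour (polygon vertices) scaled by vehicle speed."""
    for max_speed, base_radius, k in PROFILE_LIBRARY:
        if speed_mps <= max_speed:
            # proportional adjustment coefficient from the band's algorithm
            scale = 1.0 + k * speed_mps
            radius = base_radius * scale
            # standard contour: a regular octagon centred on the vehicle
            return [(radius * math.cos(a), radius * math.sin(a))
                    for a in (i * math.pi / 4 for i in range(8))]
    raise ValueError("speed bands must cover all speeds")
```

A faster vehicle thus receives a larger initial area, which matches the claim's intent of widening the identification region with speed.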
7. The FPGA-based real-time panoramic autonomous identification system for unmanned vehicles according to claim 4, wherein the method for adjusting the initial contour of the initial area according to the obtained road feature set comprises:
distributing the obtained road feature set among the direction axes of the initial area to obtain a corresponding number of analysis sets, analyzing the analysis set of each direction axis to obtain a corresponding direction adjustment value, adjusting the initial contour along the corresponding direction axis according to the obtained direction adjustment value to obtain a transit contour, and adjusting the transit contour to obtain the corresponding identification area contour.
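The per-axis adjustment of claim 7 can be sketched as follows: the road feature set is partitioned into one analysis set per direction axis, each set is reduced to a direction adjustment value, and that value stretches the contour's extent along its axis. The four-axis layout, the `(axis, weight)` feature encoding, and the mean-based adjustment rule are assumptions for illustration.

```python
AXES = ("front", "rear", "left", "right")

def adjust_contour(initial_extents: dict,
                   road_features: list[tuple[str, float]]) -> dict:
    """initial_extents: axis -> extent (m); road_features: (axis, weight) pairs."""
    # distribute the road feature set into one analysis set per direction axis
    analysis_sets = {axis: [] for axis in AXES}
    for axis, weight in road_features:
        analysis_sets[axis].append(weight)
    # analyse each set: direction adjustment value = mean feature weight (assumed)
    extents = dict(initial_extents)
    for axis, weights in analysis_sets.items():
        if weights:
            adjustment = sum(weights) / len(weights)
            extents[axis] += adjustment   # transit contour extent on this axis
    return extents
```

Axes with no features keep their initial extent, so the transit contour only deviates from the initial contour where the road feature set demands it.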
CN202211362560.2A 2022-11-02 2022-11-02 Real-time panorama autonomous recognition system of unmanned vehicle based on FPGA Active CN115408487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211362560.2A CN115408487B (en) 2022-11-02 2022-11-02 Real-time panorama autonomous recognition system of unmanned vehicle based on FPGA

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211362560.2A CN115408487B (en) 2022-11-02 2022-11-02 Real-time panorama autonomous recognition system of unmanned vehicle based on FPGA

Publications (2)

Publication Number Publication Date
CN115408487A true CN115408487A (en) 2022-11-29
CN115408487B CN115408487B (en) 2023-03-24

Family

ID=84169276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211362560.2A Active CN115408487B (en) 2022-11-02 2022-11-02 Real-time panorama autonomous recognition system of unmanned vehicle based on FPGA

Country Status (1)

Country Link
CN (1) CN115408487B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011048718A1 (en) * 2009-10-20 2011-04-28 Panasonic Corporation Sign recognition device and sign recognition method
CN105069410A (en) * 2015-07-24 2015-11-18 深圳市佳信捷技术股份有限公司 Unstructured road recognition method and device
US20160231746A1 (en) * 2015-02-06 2016-08-11 Delphi Technologies, Inc. System And Method To Operate An Automated Vehicle


Also Published As

Publication number Publication date
CN115408487B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
CN108345822B (en) Point cloud data processing method and device
WO2022141910A1 (en) Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
CN111833598B (en) Automatic traffic incident monitoring method and system for unmanned aerial vehicle on highway
CN103324930A (en) License plate character segmentation method based on grey level histogram binaryzation
CN113705636B (en) Method and device for predicting track of automatic driving vehicle and electronic equipment
CN113033840B (en) Method and device for judging highway maintenance
CN111292439A (en) Unmanned aerial vehicle inspection method and inspection system for urban pipe network
CN112257772B (en) Road increase and decrease interval segmentation method and device, electronic equipment and storage medium
CN111723697A (en) Improved driver background segmentation method based on Mask-RCNN
CN111209880A (en) Vehicle behavior identification method and device
CN111931683A (en) Image recognition method, image recognition device and computer-readable storage medium
CN115112143A (en) Method for planning running track of accident rescue vehicle
CN115236694A (en) Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium
CN115408487B (en) Real-time panorama autonomous recognition system of unmanned vehicle based on FPGA
CN113792106A (en) Road state updating method and device, electronic equipment and storage medium
CN116229396B (en) High-speed pavement disease identification and warning method
CN105243849A (en) Traffic data collecting method of crossing where signal lamp is located
CN116631176A (en) Control method and system for station passenger flow distribution state
CN116091777A (en) Point Yun Quanjing segmentation and model training method thereof and electronic equipment
CN113763326B (en) Pantograph detection method based on Mask scanning R-CNN network
CN115294757A (en) Recognition and release system for lane-level traffic flow and traffic incident
CN115457215A (en) Camera sensor modeling method applied to automatic driving
CN112419713A (en) Urban traffic monitoring system based on cloud computing
CN112861701A (en) Illegal parking identification method and device, electronic equipment and computer readable medium
Van Cuong et al. AI-Driven Vehicle Recognition for Enhanced Traffic Management: Implications and Strategies

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant