CN115951692B - Unmanned trajectory control system based on model predictive control
- Publication number: CN115951692B
- Application number: CN202310245289.2A
- Authority: CN (China)
- Prior art keywords: module, vehicle, video, information, running
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Traffic Control Systems (AREA)
- Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
Abstract
The invention discloses an unmanned trajectory control system based on model predictive control, in the field of unmanned-driving safety, and addresses the problem that conventional driving-assistance navigation cannot be observed and referenced by the driver quickly, intuitively and in time. The system comprises a processor, an image acquisition module, an information processing module, a vehicle speed measurement module, a vehicle early warning module and a video display module; the vehicle early warning module issues early warnings for various driving accidents during vehicle travel according to the information sent by the information processing module and by the vehicle speed measurement module. The invention collects real-time video of the entire vehicle body, processes and analyses it, and displays it on the video display module in real time, so that the driver can observe and reference it quickly and intuitively and has enough time to act on early-warning information; the beyond-line-of-sight observation information ensures driving safety, and the running state of the vehicle is tracked in real time with the aid of AIS information, improving driving safety.
Description
Technical Field
The invention relates to the technical field of unmanned driving, and in particular to an unmanned trajectory control system based on model predictive control.
Background
Unmanned vehicles are one of the most common applications of unmanned trajectory control. An unmanned vehicle must track its predetermined route and travel along it while obeying traffic regulations and avoiding collisions with other vehicles or pedestrians.
Most existing unmanned-driving schemes do not yet provide fully automatic driving over every road section; when passing through a complex road section, the system issues an appropriate early warning so that the driver is reminded to take over control in time and traffic accidents are avoided.
At present, early warnings for complex road sections are mostly issued only at the moment of passage. They lack foresight, and no warning can be given in advance of driving into a dangerous road section.
The present invention proposes a solution to the above-mentioned problems.
Disclosure of Invention
In order to overcome the defects of the prior art, embodiments of the invention provide an unmanned trajectory control system based on model predictive control. The system collects real-time video of the entire vehicle body, processes and analyses it through the information processing module, the vehicle speed measurement module and the vehicle early warning module, and displays it on the video display module in real time, so that the driver can observe and reference it quickly and intuitively and has enough time to act on early-warning information; the beyond-line-of-sight observation information ensures driving safety, and the various running states of the vehicle are tracked in real time with the aid of AIS information, improving driving safety and solving the problems identified in the background art.
In order to achieve the above purpose, the present invention provides the following technical solutions:
the unmanned trajectory control system based on model predictive control comprises a processor, an image acquisition module, an information processing module, a vehicle speed measurement module, a vehicle early warning module and a video display module;
the processor is in signal connection with the image acquisition module, the information processing module, the vehicle speed measurement module and the vehicle early warning module and is used for issuing control instructions and receiving related data results;
the image acquisition module is used for acquiring and storing panoramic images around the vehicle, and sending the acquired video data to the information processing module for analysis and processing through the processor;
the information processing module is used for analyzing the panoramic image acquired by the image acquisition module and sending an analysis result to the vehicle early warning module through the processor;
the vehicle speed measuring module is used for detecting the traveling direction and the vehicle speed of the vehicle in real time and sending them to the vehicle early warning module through the processor;
the vehicle early warning module carries out early warning on various driving accidents in the driving process of the vehicle according to the information sent by the information processing module and the information sent by the vehicle speed measuring module;
the video display module is composed of a plurality of displays and is used for displaying the video with the various driving information superimposed on it.
In a preferred embodiment, the information processing module includes a video processing module and an AR data processing module;
the video processing module is used for optimizing the video information acquired by the image acquisition module and comprises a video optimizing module and a video anti-shake stabilizing module;
the video optimization module is used for improving the imaging quality of the fuzzy video in real time;
the video anti-shake stabilization module is used for performing real-time anti-shake processing on strongly shaking video;
the AR data processing module comprises a driving information superposition module and a vehicle surrounding information superposition module;
the driving information superposition module superimposes path information from the navigation map (road width, longitude and latitude, the route and its non-drivable areas, road signs, boundary lines, lane lines and landmark buildings) onto the video content; it converts the information on the navigation map onto the video according to the position of the vehicle and superimposes and displays it on the video in real time;
the vehicle surrounding information superposition module converts AIS information around the vehicle and the distance and bearing of surrounding vehicles relative to the own vehicle into video data and superimposes them on the video content in real time.
In a preferred embodiment, the video processing module further comprises a video quality assessment module;
the video quality evaluation module is used for analyzing and judging shooting conditions of different lane sections according to the optimization conditions of the video optimization module and the video anti-shake stabilization module, and the specific analysis process is as follows:
the processor divides the predetermined lane of the vehicle into n running areas according to road sections, wherein n is a positive integer and n is greater than or equal to 1;
the vehicle speed measuring module calculates the running time t passing through each running area according to the vehicle speed and sends the running time t to the vehicle early warning module and the video quality evaluation module through the processor;
the video quality evaluation module acquires, for each of the n running areas, the number of optimization operations performed by the video optimization module and the proportion of the area's travel time spent on optimization, as well as the number of anti-shake operations performed by the video anti-shake stabilization module and the proportion of time spent on anti-shake processing;
the video quality evaluation module calculates a running stability coefficient S from the optimization count, the optimization duration ratio, the anti-shake count and the anti-shake duration ratio by means of a formula, divides S by a standard stability threshold to obtain a stability ratio S0, and sends the calculated stability ratio S0 to the vehicle early warning module through the processor for early-warning analysis.
In a preferred embodiment, after the vehicle early warning module receives the stable ratio S0 of each running area and the running time t of each running area sent by the video quality evaluation module, the running risk coefficient K of each running area is calculated through a formula;
the vehicle early warning module compares the running risk coefficient K with a preset risk threshold;
if the running risk coefficient K is greater than or equal to a preset risk threshold, the vehicle early warning module marks the running area as a risk running area, otherwise, the running area is marked as a normal running area, and the normal running area is stored.
In a preferred embodiment, when the vehicle starts to run, the vehicle early warning module retrieves its own stored history, obtains the running risk coefficient K of each running area, and sends the information of the running areas marked as risk running areas to the video display module, which highlights those areas.
In a preferred embodiment, when the vehicle early warning module retrieves its stored history, it retrieves only the most recently recorded running risk coefficient K of each running area.
The unmanned trajectory control system based on model predictive control has the following technical effects and advantages:
The invention collects real-time video of the entire vehicle body, processes and analyses it through the information processing module, the vehicle speed measurement module and the vehicle early warning module, and displays it on the video display module in real time, so that the driver can observe and reference it quickly and intuitively and has enough time to act on early-warning information; the beyond-line-of-sight observation information ensures driving safety, and the various running states of the vehicle are tracked in real time with the aid of AIS information, improving driving safety;
The invention divides the lane into a plurality of running areas, determines the running stability of each area from the processing state of its video information, analyses the risk of each running area in combination with the running duration, and issues an early warning for high-risk areas before subsequent journeys, so that the driver is reminded in advance when driving through a risk area and the driver's attention is raised.
Drawings
FIG. 1 is a schematic diagram of the unmanned trajectory control system based on model predictive control of the present invention;
FIG. 2 is a schematic diagram showing an internal structure of an information processing module according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the unmanned trajectory control system based on model predictive control of the invention, real-time video of the entire vehicle body is collected, processed and analysed by the information processing module, the vehicle speed measurement module and the vehicle early warning module, and displayed on the video display module in real time, so that the driver can observe and reference it quickly and intuitively and has enough time to act on early-warning information; the beyond-line-of-sight observation information ensures driving safety, and the various running states of the vehicle are tracked in real time with the aid of AIS information, improving driving safety.
Embodiment 1
Fig. 1 shows a schematic structural diagram of an unmanned trajectory control system based on model predictive control, which specifically comprises a processor, an image acquisition module, an information processing module, a vehicle speed measurement module, a vehicle early warning module and a video display module.
The processor is in signal connection with the image acquisition module, the information processing module, the vehicle speed measurement module and the vehicle early warning module and is used for issuing control instructions and receiving related data results.
The image acquisition module is used for acquiring and storing panoramic images around the vehicle, an AI chip is arranged in the image acquisition module and used for realizing the splicing of acquired videos, and the spliced videos are sent to the information processing module through the processor to be analyzed and processed.
As shown in fig. 2, the information processing module is composed of a video processing module and an AR data processing module, and is configured to analyze the panoramic image collected by the image collecting module, and send the analysis result to the vehicle early warning module through the processor.
The vehicle speed measuring module is arranged on the vehicle body, consists of a radar and GPS positioning equipment, and is used for detecting the traveling direction and the vehicle speed of the vehicle in real time and sending the traveling direction and the vehicle speed to the vehicle early warning module through the processor.
And the vehicle early warning module carries out safety early warning on the running of the vehicle according to the data information obtained by the information processing module and the vehicle speed measuring module.
The image acquisition module may be an eagle-eye panoramic camera capable of capturing a full panorama, mounted on a pan-tilt head of the vehicle and rotating continuously to capture video of the vehicle's surroundings; alternatively, a plurality of high-definition cameras mounted around the vehicle body, with their lenses facing outward, capture video that is combined into a single video stream by video stitching and displayed on the video display module in real time. The stitching is performed by the AI chip; since AI-based video stitching is a conventional technique in the field, it is not described further here.
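By way of illustration only, the following is a minimal sketch of how frames from several outward-facing cameras could be combined into one panoramic frame using OpenCV's stitching API; it is not the patented implementation, and the camera indices and the output file name are assumptions.

```python
import cv2

# Hypothetical camera indices for cameras mounted around the vehicle body.
CAMERA_INDICES = [0, 1, 2, 3]

def grab_frames(captures):
    """Read one frame from every camera; keep only frames that were read successfully."""
    frames = []
    for cap in captures:
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    return frames

def stitch_panorama(frames):
    """Stitch the per-camera frames into a single panoramic image."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)
    return panorama if status == cv2.Stitcher_OK else None

if __name__ == "__main__":
    captures = [cv2.VideoCapture(i) for i in CAMERA_INDICES]
    frames = grab_frames(captures)
    pano = stitch_panorama(frames)
    if pano is not None:
        cv2.imwrite("panorama.jpg", pano)  # in the system this frame would go to the display module
```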
The video processing module is used for optimizing the video information acquired by the image acquisition module. The video collected by the image acquisition module may shake or blur because of vehicle bumps, heavy rain, fog or snow while the vehicle is running. The video processing module therefore comprises a video quality evaluation module, a video optimization module and a video anti-shake stabilization module.
The video optimization module fuses visible-light and infrared data onto blurred video to improve video quality at the sensing and imaging level, and uses AI algorithms to process the video in real time, including denoising, rain removal, defogging and low-illumination enhancement, so as to improve imaging quality and obtain high-quality, high-definition video data.
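As an illustration of the kind of clarity processing mentioned above, the sketch below applies standard OpenCV operations (non-local-means denoising plus CLAHE-based low-illumination enhancement); it is only one possible realisation rather than the system's specific AI algorithm, and the parameter values are assumptions.

```python
import cv2

def enhance_frame(frame_bgr):
    """Denoise a frame and boost its luminance contrast for low-light scenes."""
    # Non-local-means denoising on the colour frame (parameters are illustrative).
    denoised = cv2.fastNlMeansDenoisingColored(frame_bgr, None, 10, 10, 7, 21)
    # Apply CLAHE to the L channel only, so colours are preserved.
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
```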
The video anti-shake stabilization module performs real-time anti-shake processing on strongly shaking video, ensuring stable image acquisition while the vehicle is running and reducing the registration error when AR information is superimposed on the video data.
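The sketch below illustrates one conventional frame-to-frame stabilisation approach of the kind referred to above (feature tracking followed by an inverse affine warp); a production stabiliser would additionally smooth the motion trajectory over time, and all parameter values here are assumptions.

```python
import cv2

def stabilize_frame(prev_gray, curr_gray, curr_frame):
    """Cancel the apparent motion between two consecutive frames."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=30)
    if pts_prev is None:
        return curr_frame
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
    good_prev = pts_prev[status.flatten() == 1]
    good_curr = pts_curr[status.flatten() == 1]
    if len(good_prev) < 4:
        return curr_frame
    # Estimate the rigid (rotation + translation + scale) motion between the frames...
    m, _ = cv2.estimateAffinePartial2D(good_prev, good_curr)
    if m is None:
        return curr_frame
    h, w = curr_frame.shape[:2]
    # ...and warp the current frame by the inverse motion to remove the shake.
    return cv2.warpAffine(curr_frame, cv2.invertAffineTransform(m), (w, h))
```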
It should be noted that AI-based clarity processing such as noise reduction and defogging, and the real-time anti-shake performed by the video anti-shake stabilization module, are conventional techniques in the art; suitable techniques may be selected according to actual needs and are not described in detail here.
The AR data processing module superimposes various kinds of navigation information on the video data, achieving a beyond-line-of-sight viewing effect through augmented reality and thereby assisting driving. It comprises a driving information superposition module and a vehicle surrounding information superposition module. The driving information superposition module superimposes path information from the navigation map (road width, longitude and latitude, the route and its non-drivable areas, road signs, boundary lines, lane lines and landmark buildings) onto the video content, converting the information on the navigation map onto the video according to the position of the vehicle and superimposing and displaying it in real time. The vehicle surrounding information superposition module converts AIS information around the vehicle and the distance and bearing of surrounding vehicles relative to the own vehicle into video data and superimposes them on the video content in real time.
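As a sketch of the overlay step, the snippet below projects a point given in the vehicle frame (for example a lane-line vertex, or an AIS target position already transformed into the vehicle frame) onto the camera image with a pinhole model, so that an AR label can be drawn at the resulting pixel; the intrinsic matrix and camera pose shown are placeholders, not values from the invention.

```python
import numpy as np
import cv2

# Placeholder intrinsics and camera pose (rotation vector + translation, vehicle frame -> camera frame).
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
rvec = np.zeros(3)                   # assumed: camera axes aligned with the vehicle frame
tvec = np.array([0.0, 1.5, 0.0])     # assumed: camera offset 1.5 m from the reference point

def project_to_image(point_vehicle_xyz):
    """Project a 3-D point given in the vehicle frame to pixel coordinates."""
    pts, _ = cv2.projectPoints(np.float32([point_vehicle_xyz]), rvec, tvec, K, None)
    return tuple(pts[0, 0])  # (u, v) pixel position for drawing the AR label

def draw_label(frame, point_vehicle_xyz, text):
    """Draw an AR text label at the projected position of a world point."""
    u, v = project_to_image(point_vehicle_xyz)
    cv2.putText(frame, text, (int(u), int(v)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
```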
The vehicle early warning module issues early warnings for various driving accidents during vehicle travel according to the information from the AR data processing module and the vehicle speed measurement module, including lane departure warning, speed warning and collision warning. The lane departure warning threshold, the speed range, the minimum distance to surrounding vehicles and the maximum number of surrounding vehicles can all be set by the user, and a warning with voice broadcast is issued whenever a limit is exceeded.
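A minimal sketch of the threshold checks described above is given below; the threshold names and default values are assumptions chosen for illustration, not values prescribed by the invention.

```python
from dataclasses import dataclass

@dataclass
class WarningThresholds:
    lane_offset_m: float = 0.5      # allowed lateral offset from the lane centre
    speed_max_kmh: float = 100.0    # upper bound of the allowed speed range
    min_gap_m: float = 10.0         # minimum distance to a surrounding vehicle
    max_nearby: int = 8             # maximum number of surrounding vehicles

def check_warnings(lane_offset_m, speed_kmh, gaps_m, thresholds=WarningThresholds()):
    """Return the list of warnings to announce (e.g. via voice broadcast)."""
    warnings = []
    if abs(lane_offset_m) > thresholds.lane_offset_m:
        warnings.append("lane departure warning")
    if speed_kmh > thresholds.speed_max_kmh:
        warnings.append("speed warning")
    if gaps_m and min(gaps_m) < thresholds.min_gap_m:
        warnings.append("collision warning")
    if len(gaps_m) > thresholds.max_nearby:
        warnings.append("too many surrounding vehicles")
    return warnings
```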
The video display module is composed of a plurality of displays and displays the video with the various driving information superimposed, so that the driver can intuitively observe the driving conditions around the vehicle, with the vehicle driving early-warning information highlighted.
The unmanned trajectory control system based on model predictive control of this embodiment comprises a 360-degree photoelectric acquisition module, a visible-light camera, an infrared camera, a thermal imaging device, a real-time video processing module, an AR data processing module, an intelligent vehicle speed measurement module, a vehicle early warning module and a video display module; the video display module is a display device composed of a plurality of displays.
Using augmented reality, the invention superimposes onto the video image the navigation-map information (road width, longitude and latitude, the route and its non-drivable areas, road signs, boundary lines, lane lines and landmark buildings), the AIS information (vehicle type, vehicle information, speed and heading) and the own vehicle's travel information (traveling direction and speed), and finally projects the result onto the displays. The navigation information needed for safe travel is thus displayed comprehensively, the field of view extends beyond what the human eye can see, and the driver can observe more driving information in an intuitive way. In bad weather and low-visibility scenes, the running information of the own vehicle and of other vehicles can still be monitored intuitively, enhancing the driver's lookout.
Embodiment 2
Embodiment 2 of the present invention differs from Embodiment 1 as follows. Embodiment 1 mainly describes providing the driver with a visual projected image through augmented reality, so that the running information of the own vehicle and of other vehicles can be monitored intuitively and the driver's lookout is enhanced. However, driving conditions differ between road sections, and so does the attention the driver must pay. In this embodiment, the predetermined lane is divided into a plurality of running areas that are analysed individually, so that the driver can additionally be reminded of the risk level of the area being driven through.
Specifically, the optimization processes of the video optimization module and the video anti-shake stabilization module can reflect the running state of the vehicle. The video quality evaluation module is used for analyzing and judging shooting conditions of different lane sections according to the optimization conditions of the video optimization module and the video anti-shake stabilization module, and the specific analysis process is as follows:
the processor divides a predetermined lane of the vehicle into n traveling areas according to road segments, n is a positive integer, and n is greater than or equal to 1.
The video quality evaluation module acquires, for each of the n running areas, the number of optimization operations performed by the video optimization module and the proportion of the area's travel time spent on optimization, as well as the number of anti-shake operations performed by the video anti-shake stabilization module and the proportion of time spent on anti-shake processing.
The optimization count is the total number of times the video optimization module processed the collected video in each running area; the larger the count, the worse the vehicle's running state in that area. The optimization duration ratio is the proportion of each running area's travel time during which the video optimization module was processing the collected video; the larger the ratio, the worse the initial video quality in that area, i.e. the worse the running state.
Similarly, the anti-shake count is the total number of times the video anti-shake stabilization module processed the collected video in each running area; the larger the count, the worse the vehicle's running state in that area. The anti-shake duration ratio is the proportion of each running area's travel time during which the video anti-shake stabilization module was processing the collected video; the larger the ratio, the worse the running state of that area.
The video quality evaluation module records the optimization count, the optimization duration ratio, the anti-shake count and the anti-shake duration ratio as Ot, Or, St and Sr respectively, and calculates the running stability coefficient S from them; the specific calculation expression is as follows:
S = μ / (a1·Ot + a2·Or + a3·St + a4·Sr)
wherein a1, a2, a3 and a4 are the preset proportional coefficients of the optimization count, the optimization duration ratio, the anti-shake count and the anti-shake duration ratio respectively, and μ is a correction coefficient that keeps the running stability coefficient S within a suitable value range.
It can be seen from the above expression that the larger the optimization count, the optimization duration ratio, the anti-shake count and the anti-shake duration ratio, the smaller the running stability coefficient S, i.e. the worse the stability of the running area.
The video quality evaluation module divides the running stability coefficient S by the standard stability threshold and records the quotient as the stability ratio S0. The larger the stability ratio S0, the better the stability of the running area.
The video quality evaluation module sends the calculated stability ratio S0 to the vehicle early warning module through the processor for early-warning analysis.
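The following sketch shows how the stability coefficient S and the stability ratio S0 could be computed for one running area from the four statistics defined above, using the reconstructed expression; the coefficients a1 to a4, the correction coefficient and the standard stability threshold are illustrative assumptions.

```python
def stability_coefficient(opt_count, opt_duration_ratio, shake_count, shake_duration_ratio,
                          a=(0.3, 0.2, 0.3, 0.2), mu=100.0):
    """Running stability coefficient S: larger processing statistics -> smaller S."""
    weighted = (a[0] * opt_count + a[1] * opt_duration_ratio
                + a[2] * shake_count + a[3] * shake_duration_ratio)
    return mu / weighted if weighted > 0 else mu

def stability_ratio(s, standard_threshold=50.0):
    """Stability ratio S0 = S divided by the standard stability threshold."""
    return s / standard_threshold

# Example for one running area: 12 optimisation passes over 40% of the travel time,
# 8 anti-shake passes over 25% of the travel time.
s = stability_coefficient(12, 0.40, 8, 0.25)
s0 = stability_ratio(s)
```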
The vehicle speed measuring module calculates the running time t passing through each running area according to the vehicle speed and sends the running time t to the vehicle early warning module through the processor.
After the vehicle early warning module receives the stability ratio S0 and the running duration t of each running area from the video quality evaluation module, it calculates the running risk coefficient K of each running area and thereby analyses the risk level of each area; the specific calculation expression is as follows:
K = (b2·t) / (b1·S0)
wherein b1 and b2 are the preset proportional coefficients of the stability ratio S0 and the running duration t of each running area respectively.
As can be seen from the above expression, the shorter the running duration t, the faster the vehicle passes through that running area; even if the running environment of an area is relatively unstable, the risk remains small when the passage time is short.
And the vehicle early warning module compares the running risk coefficient K with a preset risk threshold value to determine the risk degree of each running area.
If the running risk coefficient K is greater than or equal to the preset risk threshold, the risk level of the running area exceeds the preset requirement and the vehicle early warning module marks it as a risk running area; otherwise the area is marked as a normal running area. The marking result is stored.
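Continuing the sketch above, the running risk coefficient K and the area classification could be computed as follows; the coefficients b1 and b2 and the risk threshold are again illustrative assumptions, chosen to be consistent with the stated behaviour (a lower stability ratio or a longer dwell time gives a higher risk).

```python
def risk_coefficient(s0, travel_time_s, b1=1.0, b2=0.05):
    """Running risk coefficient K: grows with travel time, shrinks with stability ratio."""
    return (b2 * travel_time_s) / (b1 * s0)

def classify_area(s0, travel_time_s, risk_threshold=2.0):
    """Mark an area as a risk running area or a normal running area."""
    k = risk_coefficient(s0, travel_time_s)
    return ("risk running area" if k >= risk_threshold else "normal running area", k)

# Example: an area with stability ratio 0.8 that takes 90 seconds to cross.
label, k = classify_area(s0=0.8, travel_time_s=90)
```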
When the vehicle starts to run, the vehicle early warning module retrieves its stored history, obtains the running risk coefficient K of each running area, and sends the information of the areas marked as risk running areas to the video display module, which highlights those areas to prompt the driver to pay attention.
Further, because the state of a running area differs at different points in time, when the vehicle early warning module retrieves its stored history, it retrieves only the most recently recorded running risk coefficient K of each running area, so that the running data remain up to date.
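A small sketch of the history lookup described above: records are stored per running area with a timestamp, and only the most recent risk coefficient of each area is retrieved at start-up. The record layout is an assumption made for illustration.

```python
from collections import defaultdict

history = defaultdict(list)  # area_id -> list of (timestamp, risk_coefficient_K)

def store_record(area_id, timestamp, k):
    """Append a new (timestamp, K) record for a running area."""
    history[area_id].append((timestamp, k))

def latest_risk_coefficients():
    """Return, for every running area, only the most recently stored K."""
    return {area: max(records)[1] for area, records in history.items() if records}

def risk_areas_to_highlight(risk_threshold=2.0):
    """Areas whose latest K still exceeds the threshold are sent to the display module."""
    return [a for a, k in latest_risk_coefficients().items() if k >= risk_threshold]
```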
The above formulas are dimensionless and operate on numerical values; they were obtained by software simulation over a large amount of collected data so as to approximate the real situation, and the preset parameters in the formulas are set by those skilled in the art according to the actual situation.
It will be apparent to those skilled in the art that, for convenience and brevity of description, the above method embodiments may refer to the specific working procedures of the system, apparatus and unit described above, and will not be described in detail herein.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. When the computer instructions or computer program are loaded or executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that contains one or more sets of available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium. The semiconductor medium may be a solid state disk.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Finally: the foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and principles of the invention are intended to be included within the scope of the invention.
Claims (3)
1. Unmanned trajectory control system based on model predictive control, its characterized in that: the system comprises a processor, an image acquisition module, an information processing module, a vehicle speed measurement module, a vehicle early warning module and a video display module;
the processor is in signal connection with the image acquisition module, the information processing module, the vehicle speed measurement module and the vehicle early warning module and is used for issuing control instructions and receiving related data results;
the image acquisition module is used for acquiring and storing panoramic images around the vehicle, and sending the acquired video data to the information processing module for analysis and processing through the processor;
the information processing module is used for analyzing the panoramic image acquired by the image acquisition module and sending an analysis result to the vehicle early warning module through the processor;
the vehicle speed measuring module is used for detecting the traveling direction and the vehicle speed of the vehicle in real time and sending the traveling direction and the vehicle speed to the vehicle early warning module through the processor;
the vehicle early warning module carries out early warning on various driving accidents in the driving process of the vehicle according to the information sent by the information processing module and the information sent by the vehicle speed measuring module;
the video display module is composed of a plurality of displays and is used for displaying the video with the various driving information superimposed;
the information processing module comprises a video processing module and an AR data processing module;
the video processing module is used for optimizing the video information acquired by the image acquisition module and comprises a video optimizing module and a video anti-shake stabilizing module;
the video optimization module is used for improving the imaging quality of the fuzzy video in real time;
the video anti-shake stabilization module is used for performing real-time anti-shake processing on strongly shaking video;
the AR data processing module comprises a driving information superposition module and a vehicle surrounding information superposition module;
the driving information superposition module superimposes path information from the navigation map (road width, longitude and latitude, non-drivable areas, road signs, lane lines and landmark buildings) onto the video content, converts the information on the navigation map onto the video according to the position of the vehicle, and superimposes and displays it on the video in real time;
the vehicle surrounding information superposition module converts AIS information around the vehicle and the distance and bearing of surrounding vehicles relative to the own vehicle into video data and superimposes them on the video content in real time;
the video processing module further comprises a video quality evaluation module;
the video quality evaluation module is used for analyzing and judging shooting conditions of different lane sections according to the optimization conditions of the video optimization module and the video anti-shake stabilization module, and the specific analysis process is as follows:
the processor divides a predetermined lane of the vehicle into n running areas according to road sections, wherein n is a positive integer and n is greater than or equal to 1;
the vehicle speed measuring module calculates the running time t passing through each running area according to the vehicle speed and sends the running time t to the vehicle early warning module and the video quality evaluation module through the processor;
the video quality evaluation module acquires, for each of the n running areas, the number of optimization operations performed by the video optimization module and the proportion of the area's travel time spent on optimization, as well as the number of anti-shake operations performed by the video anti-shake stabilization module and the proportion of time spent on anti-shake processing;
the video quality evaluation module calculates a running stability coefficient S from the optimization count, the optimization duration ratio, the anti-shake count and the anti-shake duration ratio by means of a formula, divides S by a standard stability threshold to obtain a stability ratio S0, and sends the calculated stability ratio S0 to the vehicle early warning module through the processor for early-warning analysis;
after the vehicle early warning module receives the stability ratio S0 of each running area and the running duration t of each running area sent by the video quality evaluation module, the running risk coefficient K of each running area is calculated through a formula;
the vehicle early warning module compares the running risk coefficient K with a preset risk threshold;
if the running risk coefficient K is greater than or equal to a preset risk threshold, the vehicle early warning module marks the running area as a risk running area, otherwise, the running area is marked as a normal running area, and the normal running area is stored.
2. The unmanned trajectory control system based on model predictive control of claim 1, wherein: the vehicle early warning module is used for calling historical storage information of the vehicle early warning module when the vehicle starts to run, acquiring a running risk coefficient K of each running area, and sending running area information marked as a risk running area to the video display module, wherein the video display module is used for highlighting the risk running area.
3. The unmanned trajectory control system based on model predictive control according to claim 2, wherein: when the vehicle early warning module invokes the historical storage information of the vehicle early warning module, only the running risk coefficient K of each running area closest to the running time is invoked.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310245289.2A CN115951692B (en) | 2023-03-15 | 2023-03-15 | Unmanned trajectory control system based on model predictive control |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310245289.2A CN115951692B (en) | 2023-03-15 | 2023-03-15 | Unmanned trajectory control system based on model predictive control |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115951692A (en) | 2023-04-11 |
CN115951692B (en) | 2023-05-12 |
Family
ID=85907042
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310245289.2A Active CN115951692B (en) | 2023-03-15 | 2023-03-15 | Unmanned trajectory control system based on model predictive control |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115951692B (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103105174B (en) * | 2013-01-29 | 2016-06-15 | 四川长虹佳华信息产品有限责任公司 | A kind of vehicle-mounted outdoor scene safety navigation method based on AR augmented reality |
CN106740470A (en) * | 2016-11-21 | 2017-05-31 | 奇瑞汽车股份有限公司 | A kind of blind area monitoring method and system based on full-view image system |
CN112991684A (en) * | 2019-12-15 | 2021-06-18 | 北京地平线机器人技术研发有限公司 | Driving early warning method and device |
CN111361557B (en) * | 2020-02-13 | 2022-12-16 | 江苏大学 | Early warning method for collision accident during turning of heavy truck |
- 2023-03-15 CN CN202310245289.2A patent/CN115951692B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN115951692A (en) | 2023-04-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |