CN116125996B - Safety monitoring method and system for unmanned vehicle - Google Patents

Safety monitoring method and system for unmanned vehicle

Info

Publication number
CN116125996B
CN116125996B
Authority
CN
China
Prior art keywords
data
fusion
vehicle
unmanned vehicle
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310350094.4A
Other languages
Chinese (zh)
Other versions
CN116125996A (en)
Inventor
杨宝华
李迪
Current Assignee
Beijing Qianzhong Huanying Technology Co ltd
Original Assignee
Beijing Qianzhong Huanying Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Qianzhong Huanying Technology Co ltd filed Critical Beijing Qianzhong Huanying Technology Co ltd
Priority to CN202310350094.4A
Publication of CN116125996A
Application granted
Publication of CN116125996B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Alarm Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a safety monitoring method and system for an unmanned vehicle. The safety monitoring method comprises the following steps: establishing communication relations between a plurality of unmanned vehicles in a preset area and a background server; when the plurality of unmanned vehicles run in the preset area, performing environment sensing on the interior of each unmanned vehicle to obtain first sensing data, and performing environment sensing on the exterior of each unmanned vehicle to obtain second sensing data; and the background server determining the types of the plurality of unmanned vehicles according to the first sensing data and the second sensing data and performing safety monitoring of the corresponding level according to the types. The method and the system realize safety monitoring of a plurality of unmanned vehicles in a preset area while performing safety monitoring at the level corresponding to each vehicle's type, thereby reducing the load on the background server and improving the utilization rate of monitoring resources.

Description

Safety monitoring method and system for unmanned vehicle
Technical Field
The invention relates to the technical field of automobile electronics, in particular to a safety monitoring method and system for an unmanned vehicle.
Background
An unmanned vehicle is a type of intelligent automobile that relies mainly on a vehicle-mounted sensing system to sense the road environment, automatically plan a driving route, and control the vehicle to reach a preset destination. Vehicle-mounted sensors sense the surroundings of the vehicle, and the steering and speed of the vehicle are controlled according to the sensed road, vehicle-position and obstacle information, so that the vehicle runs safely and reliably on the road. In the prior art, safety monitoring is performed only on a single unmanned vehicle and cannot be performed on a plurality of unmanned vehicles in a preset area; moreover, if the same level of safety monitoring were applied to every unmanned vehicle in the preset area, a great load would be placed on the background server and resources would be wasted.
Disclosure of Invention
The present invention aims to solve, at least to some extent, one of the technical problems in the above-described technology. Therefore, a first object of the present invention is to provide a safety monitoring method for an unmanned vehicle that realizes safety monitoring of a plurality of unmanned vehicles in a preset area and, at the same time, performs safety monitoring at the level corresponding to the type of each unmanned vehicle, thereby reducing the load on the background server and improving the utilization rate of monitoring resources.
A second object of the present invention is to propose a safety monitoring system for an unmanned vehicle.
To achieve the above object, an embodiment of a first aspect of the present invention provides a method for monitoring safety of an unmanned vehicle, including:
establishing communication relations between a plurality of unmanned vehicles and a background server in a preset area;
when a plurality of unmanned vehicles run in a preset area, performing environment sensing on the interior of the unmanned vehicles to obtain first sensing data; performing environment sensing on the outside of the unmanned vehicle to obtain second sensing data;
the background server determines the types of the plurality of unmanned vehicles according to the first perception data and the second perception data, and performs safety monitoring of the corresponding level according to the types.
According to some embodiments of the present invention, performing environmental awareness of an interior of an unmanned vehicle to obtain first awareness data includes:
acquiring a control panel image and an internal scene image of a vehicle-mounted terminal in an unmanned vehicle;
checking the image quality of the control panel image and the scene image, and performing parameter adjustment processing when the image quality is determined to be unqualified, so as to obtain a target control panel image and a target scene image;
Analyzing the target control panel image, and determining control node operation information, vehicle state parameter information and operation information of vehicle equipment of the vehicle;
analyzing the target scene image to determine the behavior characteristics of the passengers;
and determining first perception data according to the control node operation information, the vehicle state parameter information, the operation information of the vehicle-mounted equipment and the behavior characteristics of passengers.
According to some embodiments of the present invention, the image quality of the control panel image is checked, and when it is determined that the image quality does not reach the standard, a parameter adjustment process is performed to obtain a target control panel image, including:
calculating a first gradient value of the control panel image in the horizontal direction and a second gradient value of the control panel image in the vertical direction by utilizing a Sobel operator based on a gradient algorithm;
inquiring a preset first gradient value-second gradient value-definition data table according to the first gradient value and the second gradient value to determine a definition value;
and when the definition value is determined to be smaller than a preset definition threshold, determining that the image quality does not reach the standard; determining the difference between the preset definition threshold and the definition value, querying a difference value-focal length correction value data table according to the difference to determine a focal length correction value, and performing parameter adjustment processing according to the focal length correction value to obtain the target control panel image.
According to some embodiments of the invention, the environmental awareness of the exterior of the unmanned vehicle, resulting in second awareness data, includes:
acquiring an external environment image and radar information of an unmanned vehicle;
and determining obstacle information around the unmanned vehicle and collision risks with all the obstacles according to the external environment image and the radar information, and determining second perception data according to the obstacle information around the unmanned vehicle and the collision risks with all the obstacles.
According to some embodiments of the present invention, the background server determines the types of the plurality of unmanned vehicles according to the first perception data and the second perception data and performs safety monitoring of the corresponding level according to the types, including:
the background server inputs the first perception data into a pre-trained data classification model for classification, and outputs a plurality of first classification data;
inputting the second perception data into a pre-trained data classification model for classification, and outputting a plurality of second classification data;
performing data fusion on the first classified data and the second classified data of the same data type to obtain a group of fusion data, and further obtaining a plurality of groups of fusion data;
Respectively inputting a plurality of groups of fusion data into corresponding single recognition models, and outputting a first analysis result; and if at least one of the first analysis results is determined to indicate that the risk data exists, determining that the unmanned vehicle corresponding to the fusion data is a first type vehicle, and executing primary safety monitoring.
According to some embodiments of the invention, further comprising:
selecting target fusion data from a plurality of groups of fusion data when all the first analysis results are determined to indicate that no risk data exists;
extracting features of the fusion data other than the target fusion data to obtain feature vectors, and converting each feature vector to the type corresponding to the target fusion data to obtain converted feature vectors;
matching the target fusion data with each conversion feature vector, and determining the association relationship between the target fusion data and each conversion feature vector;
determining a target set according to the association relation;
generating a data system according to the target set;
inputting the data system into the composite recognition model, and outputting a second analysis result;
when the risk data are determined to exist according to the second analysis result, determining that the unmanned vehicle corresponding to the fusion data is a second type vehicle, and executing secondary safety monitoring; and when the risk data are not determined to exist according to the second analysis result, determining that the unmanned vehicle corresponding to the fusion data is a third type vehicle, and executing three-level safety monitoring.
According to some embodiments of the invention, a single recognition model is trained, the method comprising:
acquiring sample fusion data;
preprocessing sample fusion data based on a data layer of a single recognition model, and determining risk factors;
carrying out data analysis on the risk factors based on an index layer of the single recognition model, determining a plurality of indexes, and screening to obtain risk indexes;
combining various risk indexes based on a model parameter layer of a single recognition model to obtain various combination results, and screening out a combination result with highest risk probability prediction accuracy as a target combination result;
and outputting a predicted result of the sample fusion data based on the output layer of the single recognition model, and indicating that training is completed when the predicted result is consistent with a real result corresponding to the sample fusion data.
According to some embodiments of the invention, determining the target set according to the association relation includes:
taking the target fusion data as a key node, and converting the feature vector as an associated node of the key node;
constructing a screening system according to the key nodes, the associated nodes and the associated relation; the screening system comprises a distance value from each associated node to a key node;
Screening out the associated nodes with the distance value smaller than the preset distance value as target associated nodes;
and determining a target set according to the target associated node and the key node.
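The distance-threshold screening of associated nodes described above can be sketched as follows (a minimal illustration; the node labels, the form of the distance values and the threshold are assumptions, not specified by the patent):

```python
def build_target_set(key_node, assoc_nodes, distances, max_dist):
    """Keep only associated nodes whose distance value to the key node
    is smaller than the preset distance value, then form the target set.

    key_node:    label of the target fusion data
    assoc_nodes: labels of the converted feature vectors (associated nodes)
    distances:   distance value from each associated node to the key node
    max_dist:    preset distance value (threshold)
    """
    target_assoc = [n for n, d in zip(assoc_nodes, distances) if d < max_dist]
    # The target set is determined from the target associated nodes
    # together with the key node.
    return {key_node, *target_assoc}
```

For example, `build_target_set("fusion_A", ["v1", "v2", "v3"], [0.2, 0.9, 0.4], 0.5)` keeps `v1` and `v3` and discards `v2`.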
According to some embodiments of the invention, a method for training a composite recognition model includes:
determining big data of the unmanned vehicle, and constructing a map text of a driving scene according to the big data;
extracting each node in the map text, determining the association relation among each node, and constructing a knowledge map of a data chain corresponding to the driving scene;
determining risk index information of each node;
extracting associated feature vectors in the knowledge graph, and carrying out feature fusion according to the associated feature vectors and risk index information to obtain fusion vectors;
and training the composite recognition model based on the fusion vector.
To achieve the above object, a second aspect of the present invention provides a safety monitoring system for an unmanned vehicle, comprising:
the building module is used for building communication relations between a plurality of unmanned vehicles in a preset area and a background server;
the sensing module is used for sensing the environment in the unmanned vehicles when a plurality of unmanned vehicles run in a preset area to obtain first sensing data; performing environment sensing on the outside of the unmanned vehicle to obtain second sensing data;
The background server is used for determining the types of the unmanned vehicles according to the first perception data and the second perception data, and executing corresponding-level safety monitoring according to the types.
The invention provides a safety monitoring method and a system for an unmanned vehicle, which have the beneficial effects that:
1. the method and the system realize safety monitoring of a plurality of unmanned vehicles in the preset area, simultaneously execute corresponding-level safety monitoring according to the types of the plurality of unmanned vehicles, reduce the load of a background server and improve the utilization rate of monitoring resources.
2. Based on the environment sensing of the interior and the exterior of the unmanned vehicle, first sensing data and second sensing data are obtained, accurate sensing of the environment where the unmanned vehicle is located is achieved, and the comprehensiveness of the acquired data is improved.
3. When the types of the unmanned vehicles are distinguished, the single data is identified based on the single identification model, and the composite data is identified based on the composite identification model, so that the first type of vehicles, the second type of vehicles and the third type of vehicles can be accurately and comprehensively judged, and the safety monitoring of corresponding grades is carried out.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a flow chart of a method of safety monitoring of an unmanned vehicle according to one embodiment of the invention;
FIG. 2 is a flow chart of a method of deriving first perceptual data according to one embodiment of the invention;
fig. 3 is a block diagram of a safety monitoring system for an unmanned vehicle according to one embodiment of the invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
As shown in fig. 1, an embodiment of a first aspect of the present invention proposes a method for monitoring safety of an unmanned vehicle, including steps S1 to S3:
s1, establishing a communication relationship between a plurality of unmanned vehicles and a background server in a preset area;
s2, when a plurality of unmanned vehicles run in a preset area, performing environment sensing on the interior of the unmanned vehicles to obtain first sensing data; performing environment sensing on the outside of the unmanned vehicle to obtain second sensing data;
And S3, the background server determines types of a plurality of unmanned vehicles according to the first sensing data and the second sensing data, and performs corresponding-level safety monitoring according to the types.
The working principle of the technical scheme is as follows: the communication relation between the unmanned vehicles and the background server in the preset area is established, so that the background server can conveniently acquire initial monitoring information of the unmanned vehicles in the preset area, and the initial monitoring information is acquired based on a perception module arranged on the unmanned vehicles. The preset area is a planned operation area of the unmanned vehicle.
In this embodiment, the first perception data is data obtained by performing environmental perception on the interior of the unmanned vehicle. The first sensing data is control node operation information, vehicle state parameter information, operation information of the vehicle equipment and behavior characteristics of passengers.
In this embodiment, the second perception data is data obtained by performing environmental perception on the outside of the unmanned vehicle. For example, the obstacle information around the unmanned vehicle and the collision risk with each obstacle are determined based on the external environment image of the unmanned vehicle and the radar information.
Based on the first perception data and the second perception data, the interior and the exterior of the unmanned vehicle are comprehensively perceived, comprehensive perception data are conveniently obtained, and the background server can accurately determine the type of each unmanned vehicle.
In this embodiment, the types of the unmanned vehicle include a first type vehicle, a second type vehicle, and a third type vehicle, and primary safety monitoring, secondary safety monitoring, and tertiary safety monitoring are performed, respectively.
In this embodiment, the monitoring requirements of the primary, secondary and tertiary safety monitoring decrease in turn: primary safety monitoring places the greatest load on the background server, and tertiary safety monitoring the least. For example, when primary safety monitoring is performed on a first-type vehicle, the monitoring interval for acquiring its environment-sensing data is 1 s; when secondary safety monitoring is performed on a second-type vehicle, the monitoring interval is 2 s; and when tertiary safety monitoring is performed on a third-type vehicle, the monitoring interval is 3 s.
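The tiered monitoring intervals in the example above can be sketched as follows (the interval values come from the example; the type labels and the function itself are illustrative):

```python
# Monitoring interval (seconds) per vehicle type, per the example above.
MONITOR_INTERVALS = {
    "first": 1.0,   # primary safety monitoring: highest load, shortest interval
    "second": 2.0,  # secondary safety monitoring
    "third": 3.0,   # tertiary safety monitoring: lowest load, longest interval
}

def monitoring_interval(vehicle_type: str) -> float:
    """Return how often the background server polls a vehicle's sensing data."""
    try:
        return MONITOR_INTERVALS[vehicle_type]
    except KeyError:
        raise ValueError(f"unknown vehicle type: {vehicle_type}")
```

A first-type vehicle is thus polled three times as often as a third-type vehicle, which is how the scheme trades monitoring intensity against server load.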
The beneficial effects of the technical scheme are that: the method and the system realize safety monitoring of a plurality of unmanned vehicles in the preset area, simultaneously execute corresponding-level safety monitoring according to the types of the plurality of unmanned vehicles, reduce the load of a background server and improve the utilization rate of monitoring resources.
As shown in fig. 2, according to some embodiments of the present invention, the environment sensing is performed on the interior of the unmanned vehicle to obtain first sensing data, including steps S21-S25:
s21, acquiring a control panel image of a vehicle-mounted terminal in the unmanned vehicle and an internal scene image;
s22, checking the image quality of the control panel image and the scene image, and performing parameter adjustment processing when the image quality is determined to be unqualified, so as to obtain a target control panel image and a target scene image;
s23, analyzing the target control panel image, and determining control node operation information, vehicle state parameter information and operation information of the vehicle equipment of the vehicle;
s24, analyzing the target scene image to determine the behavior characteristics of the passengers;
s25, determining first perception data according to the control node operation information, the vehicle state parameter information, the operation information of the vehicle machine equipment and the behavior characteristics of passengers.
The working principle of the technical scheme is as follows: in this embodiment, the control panel image of the in-vehicle terminal is exemplified by an acquired image of a center control screen inside the unmanned vehicle.
In this embodiment, the scene images of the interior are images of the front and rear rows of the interior of the unmanned vehicle, including the ride image of the passenger.
In this embodiment, the target control panel image is a control panel image obtained by performing parameter adjustment processing on the control panel image and having an image quality that meets the standard.
In this embodiment, the target scene image is a scene image with up-to-standard image quality obtained by performing parameter adjustment processing on the scene image.
In this embodiment, the parameter adjustment processing includes sharpness adjustment processing.
In this embodiment, the control node operation information is operation state information of a node in control software carried by the unmanned vehicle, and the node includes a hardware driving node and a man-machine interaction node. The vehicle state parameter information includes a brake pedal parameter, an accelerator pedal parameter, a steering wheel torque parameter, and the like of the vehicle. The operation information of the vehicle equipment comprises whether each equipment operates normally or not and instruction response.
In this embodiment, the target scene image is analyzed to determine the behavior characteristics of the passengers, which facilitates determining and collecting those characteristics. Behavior characteristics include whether the seat belt is fastened, whether the facial expression is normal, and so on.
The beneficial effects of the technical scheme are that: acquiring a control panel image and an internal scene image of a vehicle-mounted terminal in an unmanned vehicle; checking the image quality of the control panel image and the scene image, and performing parameter adjustment processing when the image quality is determined to be unqualified, so as to obtain a target control panel image and a target scene image; the target control panel image and the target scene image which reach the image quality are conveniently determined. And determining first perception data according to the control node operation information, the vehicle state parameter information, the operation information of the vehicle-mounted equipment and the behavior characteristics of passengers. The information in the unmanned vehicle is collected comprehensively, the comprehensiveness of the obtained first perception data is improved, meanwhile, the behavior characteristics of passengers are also collected, the factors of users are added conveniently, and the safety monitoring level is improved.
According to some embodiments of the present invention, the image quality of the control panel image is checked, and when it is determined that the image quality does not reach the standard, a parameter adjustment process is performed to obtain a target control panel image, including:
calculating a first gradient value of the control panel image in the horizontal direction and a second gradient value of the control panel image in the vertical direction by utilizing a Sobel operator based on a gradient algorithm;
Inquiring a preset first gradient value-second gradient value-definition data table according to the first gradient value and the second gradient value to determine a definition value;
and when the definition value is determined to be smaller than a preset definition threshold, determining that the image quality does not reach the standard; determining the difference between the preset definition threshold and the definition value, querying a difference value-focal length correction value data table according to the difference to determine a focal length correction value, and performing parameter adjustment processing according to the focal length correction value to obtain the target control panel image.
The working principle of the technical scheme is as follows: in this embodiment, the gradient algorithm comprises the Tenengrad gradient algorithm or the Laplacian gradient algorithm. The larger the determined first gradient value and second gradient value, the larger the definition value of the control panel image.
In this embodiment, the preset first gradient value-second gradient value-definition data table is obtained through multiple experiments, and the definition is determined by performing comparison query based on the first gradient value and the second gradient value.
The beneficial effects of the technical scheme are that: and based on the control panel image with the definition reaching the standard, namely the target control panel image, the accuracy of analyzing the target control panel image is improved.
In one embodiment, the method of obtaining the target scene image is consistent with the method of obtaining the target control panel image.
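The Sobel-based sharpness check described above can be sketched as follows. This is a minimal Tenengrad-style illustration: the patent looks the definition value up in a pre-built gradient-to-definition data table, whereas here the mean gradient energy itself stands in for that value, which is an assumption for illustration only.

```python
import numpy as np

# Sobel kernels for the horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def _filter2(img, k):
    """Valid 2-D cross-correlation with a 3x3 kernel (sign is irrelevant
    here because the gradients are squared afterwards)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out

def sharpness_check(img, threshold):
    """Return (definition value, whether image quality reaches the standard)."""
    gx = _filter2(img, SOBEL_X)  # first gradient value: horizontal direction
    gy = _filter2(img, SOBEL_Y)  # second gradient value: vertical direction
    definition = float(np.mean(gx ** 2 + gy ** 2))
    return definition, definition >= threshold
```

A perfectly flat image yields a definition value of zero and fails the check; an image with strong edges yields a large value, matching the rule that larger gradient values mean a larger definition value.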
According to some embodiments of the invention, the environmental awareness of the exterior of the unmanned vehicle, resulting in second awareness data, includes:
acquiring an external environment image and radar information of an unmanned vehicle;
and determining obstacle information around the unmanned vehicle and collision risks with all the obstacles according to the external environment image and the radar information, and determining second perception data according to the obstacle information around the unmanned vehicle and the collision risks with all the obstacles.
The working principle of the technical scheme is as follows: based on the external environment image and radar information, it is possible to determine which obstacles exist around, the position and motion state of the obstacle, and the distance value between the obstacle and the unmanned vehicle, and analyze and obtain the obstacle information around the unmanned vehicle and the collision risk with each obstacle as second perception data.
The beneficial effects of the technical scheme are that: the environment sensing information, i.e. the second sensing data, outside the unmanned vehicle is conveniently and accurately determined.
According to some embodiments of the present invention, the background server determines the types of the plurality of unmanned vehicles according to the first perception data and the second perception data and performs safety monitoring of the corresponding level according to the types, including:
The background server inputs the first perception data into a pre-trained data classification model for classification, and outputs a plurality of first classification data;
inputting the second perception data into a pre-trained data classification model for classification, and outputting a plurality of second classification data;
performing data fusion on the first classified data and the second classified data of the same data type to obtain a group of fusion data, and further obtaining a plurality of groups of fusion data;
respectively inputting a plurality of groups of fusion data into corresponding single recognition models, and outputting a first analysis result; and if at least one of the first analysis results is determined to indicate that the risk data exists, determining that the unmanned vehicle corresponding to the fusion data is a first type vehicle, and executing primary safety monitoring.
The working principle of the technical scheme is as follows: in this embodiment, the first and second sensing data each comprise different types of data. Types include images, text, etc.
In this embodiment, the plurality of first classification data includes image data, text data, and the like; the plurality of second classification data includes image data, text data, and the like.
In the embodiment, data fusion is performed on first classified data and second classified data of the same data type, and the first classified data and the second classified data are used as one group of fusion data, so that a plurality of groups of fusion data are obtained; for example, the image data corresponding to the first classification data is fused with the image data corresponding to the second classification data.
In this embodiment, the single recognition model is a model that recognizes a single data type; for example, an image recognition model that recognizes only image-type data.
In this embodiment, a plurality of groups of fusion data are respectively input into the corresponding single recognition models, and the first analysis results are output; for example, fusion data A is input into single recognition model A for recognition, and fusion data B into single recognition model B. On one hand, recognizing fused data improves recognition efficiency; on the other hand, a dedicated single recognition model improves recognition accuracy for the corresponding data type.
In this embodiment, for each group of fusion data, whether risk data exists in the group is determined based on the recognition result of the corresponding single recognition model. Risk data indicates, for example, whether the control information of the vehicle is incorrect, whether a passenger is physically uncomfortable, or whether the current control instruction carries a high collision risk.
The beneficial effects of the technical scheme are that: each group of fusion data fuses the internal and external perception data of the same type for the unmanned vehicle, which improves the processing efficiency for data of the same type. The plurality of groups of fusion data are respectively input into the corresponding single recognition models, and the first analysis results are output; recognizing fused data improves recognition efficiency, while the dedicated single recognition models improve recognition accuracy for the corresponding data types. If at least one of the first analysis results indicates that risk data exists, the unmanned vehicle corresponding to the fusion data is determined to be a first type vehicle and primary safety monitoring is executed, which facilitates accurately identifying first type vehicles and monitoring them at the primary level.
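The per-type fusion-and-dispatch step above can be sketched as follows; the dict-based grouping, the toy recognition models, and all names are assumptions made for illustration:

```python
# Illustrative sketch: first and second classified data are grouped by data
# type, fused pairwise, and each fused group is sent to the recognizer for
# that type. A vehicle is first-type if any recognizer flags risk data.
def fuse_by_type(first_classified: dict, second_classified: dict) -> dict:
    """Fuse first/second classified data that share a data type."""
    shared = first_classified.keys() & second_classified.keys()
    return {t: first_classified[t] + second_classified[t] for t in shared}

def analyze(fused_groups: dict, single_models: dict) -> dict:
    """Run each fused group through the single recognition model of its type."""
    return {t: single_models[t](data) for t, data in fused_groups.items()}

def vehicle_is_first_type(results: dict) -> bool:
    """First-type vehicle: at least one analysis result indicates risk data."""
    return any(results.values())
```

Dispatching each fused group to a model dedicated to its data type is what the text credits with both the efficiency and the accuracy gains.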
According to some embodiments of the invention, further comprising:
selecting target fusion data from a plurality of groups of fusion data when all the first analysis results are determined to indicate that no risk data exists;
extracting features of the other fusion data except the target fusion data to obtain feature vectors, and converting each feature vector into the type corresponding to the target fusion data to obtain conversion feature vectors;
matching the target fusion data with each conversion feature vector, and determining the association relationship between the target fusion data and each conversion feature vector;
determining a target set according to the association relation;
generating a data system according to the target set;
inputting the data system into the composite recognition model, and outputting a second analysis result;
when the risk data are determined to exist according to the second analysis result, determining that the unmanned vehicle corresponding to the fusion data is a second type vehicle, and executing secondary safety monitoring; and when the risk data are not determined to exist according to the second analysis result, determining that the unmanned vehicle corresponding to the fusion data is a third type vehicle, and executing three-level safety monitoring.
The working principle of the technical scheme is as follows: in this embodiment, the target fusion data is data of a preset type; assuming the type of the target fusion data is A, it serves as the standardized data.
In this embodiment, the feature vector represents the key data of the fusion data. Each feature vector is converted into the type corresponding to the target fusion data to obtain a conversion feature vector; for example, fusion data B and C are each converted into data of type A.
In this embodiment, the conversion feature vectors are of the same type as the target fusion data, which facilitates data analysis; the target fusion data is matched with each conversion feature vector, and the association relationship between them is determined. Unifying all data types based on the type of the target fusion data establishes a comprehensive and accurate association relationship.
In this embodiment, a target set is determined according to the association relationship; the target set comprises a conversion feature vector with higher association degree with target fusion data and the target fusion data. The conversion feature vector with low association degree with the target fusion data is eliminated, the data processing amount is reduced, and the subsequent data processing rate improvement is facilitated.
In this embodiment, the data system is a complete data model constructed by integrating the data resources with the target set as a whole. Efficient processing of the logical relationships of the data is achieved based on the data system.
In this embodiment, the second analysis result is an identification result obtained by inputting the data system into the composite identification model, and determines whether risk data exists in the data system.
In this embodiment, the composite recognition model includes recognition rules for complex relationships, not single type data recognition.
The beneficial effects of the technical scheme are that: based on the composite recognition model, the risk data arising when multiple kinds of fusion data are combined can be determined comprehensively and accurately, and the second type and third type vehicles can be recognized accurately. Recognition from a single aspect is based on the single recognition models, while recognition from a composite aspect is based on the composite recognition model, which facilitates accurately and comprehensively distinguishing the first, second and third type vehicles and executing safety monitoring at the corresponding level.
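A simplified sketch of this second-stage check, run only when no single-model result flags risk: the patent does not fix how the association relationship is measured, so cosine similarity and the threshold stand in for it, and all names are assumptions:

```python
# Hedged sketch of the composite stage: keep conversion feature vectors that
# are sufficiently associated with the target fusion data, then let a
# composite model decide between second-type and third-type vehicle.
import math

def cosine(u, v):
    """Stand-in association measure between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def build_target_set(target_vec, converted_vecs, min_assoc=0.5):
    """Keep only conversion vectors sufficiently associated with the target."""
    kept = {name: v for name, v in converted_vecs.items()
            if cosine(target_vec, v) >= min_assoc}
    return {"target": target_vec, **kept}

def classify_vehicle(target_set, composite_model):
    """Second-type vehicle if the composite model flags risk, else third-type."""
    return "second" if composite_model(target_set) else "third"
```

Dropping weakly associated vectors before the composite model is what the text describes as reducing the data-processing load.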
According to some embodiments of the invention, a single recognition model is trained, the method comprising:
acquiring sample fusion data;
preprocessing sample fusion data based on a data layer of a single recognition model, and determining risk factors;
carrying out data analysis on the risk factors based on an index layer of the single recognition model, determining a plurality of indexes, and screening to obtain risk indexes;
Combining various risk indexes based on a model parameter layer of a single recognition model to obtain various combination results, and screening out a combination result with highest risk probability prediction accuracy as a target combination result;
and outputting a predicted result of the sample fusion data based on the output layer of the single recognition model, and indicating that training is completed when the predicted result is consistent with a real result corresponding to the sample fusion data.
The working principle of the technical scheme is as follows: in this embodiment, the risk factors include factors that affect the control safety of the vehicle and factors that affect the seating safety of the passenger.
In this embodiment, the risk indicator includes a standardized indicator formed by data analysis of the risk factor based on an indicator layer.
The beneficial effects of the technical scheme are that: the corresponding single recognition models are trained separately on different types of sample fusion data; the training covers the data layer, index layer, model parameter layer and output layer of the single recognition model, which facilitates obtaining accurate model parameters and thus an accurately trained single recognition model.
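The model-parameter-layer step above, combining risk indicators and keeping the combination with the highest prediction accuracy, can be sketched as below; the majority-vote predictor and all names are illustrative stand-ins for the unspecified scoring method:

```python
# Hypothetical sketch: enumerate indicator combinations and screen out the
# one with the highest risk-prediction accuracy on the sample fusion data.
from itertools import combinations

def accuracy(indicators, samples):
    """Fraction of samples whose risk label a simple vote over the chosen
    indicators reproduces (a stand-in for the unspecified predictor)."""
    correct = 0
    for features, label in samples:
        vote = sum(features.get(i, 0) for i in indicators) >= len(indicators) / 2
        correct += (vote == label)
    return correct / len(samples)

def best_combination(all_indicators, samples):
    """Screen every non-empty indicator combination; return the most accurate
    one as the target combination result."""
    best, best_acc = None, -1.0
    for r in range(1, len(all_indicators) + 1):
        for combo in combinations(all_indicators, r):
            acc = accuracy(combo, samples)
            if acc > best_acc:
                best, best_acc = combo, acc
    return best, best_acc
```

Exhaustive enumeration is only feasible for small indicator sets; it is used here purely to make the screening idea concrete.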
According to some embodiments of the invention, determining the target set according to the association relation includes:
Taking the target fusion data as a key node, and converting the feature vector as an associated node of the key node;
constructing a screening system according to the key nodes, the associated nodes and the associated relation; the screening system comprises a distance value from each associated node to a key node;
screening out the associated nodes with the distance value smaller than the preset distance value as target associated nodes;
and determining a target set according to the target associated node and the key node.
The working principle of the technical scheme is as follows: in this embodiment, the relevant nodes with the distance value smaller than the preset distance value are screened out and used as target relevant nodes, namely, the nodes with high relevance degrees are indicated.
In this embodiment, the key node is the central node and the master node.
The screening system is a relationship topological graph generated based on the key nodes, the associated nodes and the associated relationships, the associated relationships of the key nodes and the associated nodes are intuitively displayed, and the distance value from each associated node to the key nodes can be determined.
The beneficial effects of the technical scheme are that: constructing the screening system clearly displays the key node, the associated nodes and the association relationships, and makes it convenient to quantify the distance value from each associated node to the key node in the screening system, thereby facilitating accurate determination of the target associated nodes and, further, of the target set.
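The screening step can be sketched as follows; the patent does not fix a distance metric for the topology, so Euclidean distance between feature vectors is an illustrative choice, and the names are assumptions:

```python
# Minimal sketch of the screening system: associated nodes whose distance to
# the key node is below a preset value become the target associated nodes.
import math

def screen_nodes(key_node, associated, max_distance):
    """Return associated nodes closer to the key node than max_distance."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return {name: vec for name, vec in associated.items()
            if dist(key_node, vec) < max_distance}
```

The kept nodes together with the key node then form the target set.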
According to some embodiments of the invention, a method for training a composite recognition model includes:
determining big data of the unmanned vehicle, and constructing a map text of a driving scene according to the big data;
extracting each node in the map text, determining the association relation among each node, and constructing a knowledge map of a data chain corresponding to the driving scene;
determining risk index information of each node;
extracting associated feature vectors in the knowledge graph, and carrying out feature fusion according to the associated feature vectors and risk index information to obtain fusion vectors;
and training the composite recognition model based on the fusion vector.
The working principle of the technical scheme is as follows: in this embodiment, the atlas text includes various driving scenarios constructed from big data of the unmanned vehicle.
In this embodiment, the composite recognition model is a risk-conduction fusion model; recognizing the data system as a whole identifies the overall risk data more accurately.
In this embodiment, when the composite recognition model is trained, the model is obtained based on the whole driving scene training, and the overall driving risk is judged.
In this embodiment, the composite recognition model is a deep neural network model.
In this embodiment, each node corresponds to fusion data of the same type of data in the internal perception data and the external perception data of the unmanned vehicle, and the fusion data is converted into a corresponding conversion feature vector matched with the target fusion data.
In this embodiment, in the knowledge graph, each driving scenario corresponds to one data chain, which is a relationship chain of each node.
In this embodiment, the risk index information comprises the dynamic and static risk parameters of the subject corresponding to the node. The subject of each node is a passenger or the vehicle. The dynamic risk parameters of a passenger include various changes in body movement, and the static risk parameters include the passenger's relatively fixed position, such as in the rear or front row. The dynamic risk parameters of the vehicle include the control parameters of the vehicle and changes in execution parameters; the static risk parameters are stationary parameters of the vehicle, such as equipment being in a fixed position.
In this embodiment, the association feature vector in the knowledge graph is extracted to represent the association relationship of each node in the data chain.
In this embodiment, the fusion vector is obtained by fusing based on the association relationship between the nodes and the risk index information of each node, and the fusion vector represents the logical relationship between the nodes and the overall risk output result.
The beneficial effects of the technical scheme are that: the associated feature vectors are extracted from the knowledge graph and fused with the risk index information to obtain the fusion vectors, and the composite recognition model is trained based on the fusion vectors, which facilitates obtaining an accurate composite recognition model. During training, the graph text of the driving scenes is constructed from big data, each node in the graph text is extracted, the association relationships among the nodes are determined, and the knowledge graph of the data chain corresponding to each driving scene is constructed; this facilitates accurately determining the driving scene the unmanned vehicle actually corresponds to and accurately judging the logical relationships among the nodes.
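The feature-fusion step before composite-model training can be sketched as below; the graph representation, the summation of edge features, and all names are assumptions, since the patent does not specify the fusion operator:

```python
# Hedged sketch: each node's risk index information is fused with the
# association features of the edges touching that node, yielding one fusion
# vector per node for training the composite recognition model.
def fuse_features(edges, risk_info):
    """edges: {(u, v): association feature vector};
    risk_info: {node: risk index vector}.
    Returns per-node fusion vectors: risk features + summed edge features."""
    fused = {}
    for node, risk_vec in risk_info.items():
        edge_sum = None
        for (u, v), assoc in edges.items():
            if node in (u, v):  # edge touches this node
                edge_sum = assoc if edge_sum is None else [
                    a + b for a, b in zip(edge_sum, assoc)]
        fused[node] = list(risk_vec) + (edge_sum or [])
    return fused
```

Each fusion vector thus carries both the node's own risk parameters and its logical relationships in the data chain, which is what the composite model is trained on.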
According to some embodiments of the invention, generating a data system from the target set comprises:
extracting, from the target set, dimension fields for data analysis, description fields for describing the dimensions, and summary fields for statistics;
calculating and modifying the summary fields to obtain corrected statistical parameters;
establishing a description script according to the description fields, establishing an operation program in the description script, and modeling the data system according to the operation program and the corrected statistical parameters;
in the modeling process, the dimension fields are analyzed and processed through a cross-index technique, and the data system is generated according to the analysis results.
The working principle and beneficial effects of the technical scheme are as follows: to construct the data system, dimension fields for data analysis, description fields describing the dimensions, and summary fields for statistics are extracted from the target set, so the data system comprises dimension fields, description fields and summary fields. The fields defined as dimensions are cross-indexed, so that any dimensions can be quickly cross-referenced to obtain the information most needed. The description fields contain additional information related to the dimensions. This facilitates accurately determining the data system and better presenting the data of the target set and their relationships.
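The three field roles named above can be made concrete with a small sketch; the record layout and all field names are invented for the example, not taken from the patent:

```python
# Illustrative sketch: split target-set records into dimension fields (the
# cross-index key), description fields (dimension metadata), and summary
# fields (aggregated statistics, i.e. the corrected statistical parameters).
def build_data_system(records, dimension_keys, description_keys, summary_keys):
    system = {"dimensions": {}, "descriptions": {}, "summaries": {}}
    for rec in records:
        dim = tuple(rec[k] for k in dimension_keys)   # cross-index key
        system["dimensions"].setdefault(dim, []).append(rec)
        system["descriptions"][dim] = {k: rec[k] for k in description_keys}
        sums = system["summaries"].setdefault(dim, {k: 0 for k in summary_keys})
        for k in summary_keys:
            sums[k] += rec[k]                          # accumulate statistic
    return system
```

Keying every role off the same dimension tuple is what lets any dimension be cross-referenced quickly against the others.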
According to some embodiments of the invention, the background server updates the type of the unmanned vehicle based on the safety monitoring information when performing the safety monitoring of the corresponding level according to the type, and performs the safety monitoring of the corresponding level of the new type when determining that the type of the unmanned vehicle is changed.
The beneficial effects of the technical scheme are that: the background server is convenient to update the type of the unmanned vehicle according to the safety monitoring information, and when the type of the unmanned vehicle is determined to change, the safety monitoring of the corresponding level of the new type is executed, so that the reasonable configuration of monitoring resources is realized, and meanwhile, the safety and the accuracy of the monitoring are ensured.
According to some embodiments of the invention, further comprising:
the background server receives the control verification information of the control terminal and sends the control verification information to each unmanned vehicle, and the control verification information is verified based on the following formula:
(The two verification formulas appear in the source only as inline images and cannot be reproduced here; the surrounding text defines their terms as follows, with letter symbols assigned for readability.)
In the formulas: M denotes the plaintext information set, in the corresponding unmanned vehicle, for the control verification information; D denotes the decryption method in the corresponding unmanned vehicle; V denotes the control verification information; R denotes the verification result, which takes one preset value when the verification passes and another when it does not (the exact conditions are given only in the image-only formulas); Q denotes the permission information set in the corresponding unmanned vehicle; Count denotes an aggregate count function; and the preset limit value takes a value in the range of 0 to 1.
When the verification is determined to pass, determining that the unmanned vehicle passing the verification is a target unmanned vehicle, and controlling the target unmanned vehicle to execute a control instruction included in the control verification information;
and when the verification is determined to be failed, transmitting error information to the control terminal.
The working principle and beneficial effects of the technical scheme are as follows: the background server receives the control verification information from the control terminal and sends it to each unmanned vehicle for verification. When verification passes, the unmanned vehicle that passed is determined to be the target unmanned vehicle and is controlled to execute the control instruction included in the control verification information; when verification fails, error information is transmitted to the control terminal. Each unmanned vehicle executes the control instruction only after passing verification of its specific control verification information, which improves the safety of the control terminal's control over the unmanned vehicles and avoids the safety hazard of a non-corresponding unmanned vehicle executing a mistakenly transmitted control instruction. The preset limit value can be set according to the safety requirement.
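Since the formulas themselves survive only as image references, the following is a hedged sketch of the verification rule the surrounding text describes; the XOR "decryption", the permission-coverage measure, and all names are stand-ins for the unspecified method:

```python
# Hypothetical sketch: the vehicle decrypts the control verification
# information, checks the plaintext against its plaintext set, and requires
# its permission coverage to reach a preset limit value in (0, 1].
def decrypt(token: bytes, key: int) -> bytes:
    """Toy decryption method of the vehicle (placeholder for the real one)."""
    return bytes(b ^ key for b in token)

def verify(token: bytes, key: int, plaintext_set: set,
           permissions: set, required: set, limit: float = 0.5) -> int:
    """Return 1 (pass) or 0 (fail), mirroring the two-valued result."""
    plain = decrypt(token, key)
    coverage = len(permissions & required) / len(required) if required else 0.0
    return 1 if plain in plaintext_set and coverage >= limit else 0
```

Only a vehicle holding the matching decryption key and sufficient permissions returns a pass, so a mistakenly routed instruction fails verification on the wrong vehicle.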
As shown in fig. 3, an embodiment of a second aspect of the present invention proposes a safety monitoring system for an unmanned vehicle, including:
the building module is used for building communication relations between a plurality of unmanned vehicles in a preset area and a background server;
the sensing module is used for sensing the environment in the unmanned vehicles when a plurality of unmanned vehicles run in a preset area to obtain first sensing data; performing environment sensing on the outside of the unmanned vehicle to obtain second sensing data;
the background server is used for determining the types of the unmanned vehicles according to the first perception data and the second perception data, and executing corresponding-level safety monitoring according to the types.
The working principle of the technical scheme is as follows: the communication relation between the unmanned vehicles and the background server in the preset area is established, so that the background server can conveniently acquire initial monitoring information of the unmanned vehicles in the preset area, and the initial monitoring information is acquired based on a perception module arranged on the unmanned vehicles. The preset area is a planned operation area of the unmanned vehicle.
In this embodiment, the first perception data is data obtained by performing environment sensing on the interior of the unmanned vehicle, and includes control node operation information, vehicle state parameter information, operation information of the vehicle-mounted equipment, and behavior characteristics of the passengers.
In this embodiment, the second perception data is data obtained by performing environmental perception on the outside of the unmanned vehicle. For example, the obstacle information around the unmanned vehicle and the collision risk with each obstacle are determined based on the external environment image of the unmanned vehicle and the radar information.
Based on the first perception data and the second perception data, the interior and the exterior of the unmanned vehicle are conveniently and comprehensively perceived, the comprehensive perception data are conveniently obtained, and the background server is conveniently and accurately determined the type of each unmanned vehicle.
In this embodiment, the types of the unmanned vehicle include a first type vehicle, a second type vehicle, and a third type vehicle, and primary safety monitoring, secondary safety monitoring, and tertiary safety monitoring are performed, respectively.
In this embodiment, the monitoring requirements of the primary, secondary and tertiary safety monitoring decrease in turn: primary safety monitoring places the greatest load on the background server, and tertiary safety monitoring the lowest. For example, when primary safety monitoring is performed on a first type vehicle, the monitoring interval for acquiring its environment sensing data is 1 s; for secondary safety monitoring of a second type vehicle, the interval is 2 s; and for tertiary safety monitoring of a third type vehicle, the interval is 3 s.
The beneficial effects of the technical scheme are that: the method and the system realize safety monitoring of a plurality of unmanned vehicles in the preset area, simultaneously execute corresponding-level safety monitoring according to the types of the plurality of unmanned vehicles, reduce the load of a background server and improve the utilization rate of monitoring resources.
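The tiered monitoring can be sketched as a simple interval lookup, matching the 1 s / 2 s / 3 s example given above; the function names and the load measure are illustrative assumptions:

```python
# Minimal sketch: the backend polls each vehicle at an interval determined by
# its type, so stricter levels load the background server more.
MONITOR_INTERVAL_S = {"first": 1, "second": 2, "third": 3}

def monitoring_interval(vehicle_type: str) -> int:
    """Monitoring interval in seconds for a vehicle of the given type."""
    return MONITOR_INTERVAL_S[vehicle_type]

def server_load(vehicle_types, horizon_s=6):
    """Number of polls the backend performs over a horizon (illustrative)."""
    return sum(horizon_s // monitoring_interval(t) for t in vehicle_types)
```

Demoting a vehicle to a lower-risk type lengthens its polling interval, which is the resource saving the text attributes to the tiered scheme.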
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (9)

1. A method of safety monitoring of an unmanned vehicle, comprising:
establishing communication relations between a plurality of unmanned vehicles and a background server in a preset area;
when a plurality of unmanned vehicles run in a preset area, performing environment sensing on the interior of the unmanned vehicles to obtain first sensing data; performing environment sensing on the outside of the unmanned vehicle to obtain second sensing data;
the background server determines types of a plurality of unmanned vehicles according to the first perception data and the second perception data, and performs corresponding-level security monitoring according to the types;
The background server determines types of a plurality of unmanned vehicles according to the first sensing data and the second sensing data, and executes corresponding-level safety monitoring according to the types, and the method comprises the following steps:
the background server inputs the first perception data into a pre-trained data classification model for classification, and outputs a plurality of first classification data;
inputting the second perception data into a pre-trained data classification model for classification, and outputting a plurality of second classification data;
performing data fusion on the first classified data and the second classified data of the same data type to obtain a group of fusion data, and further obtaining a plurality of groups of fusion data;
respectively inputting a plurality of groups of fusion data into corresponding single recognition models, and outputting a first analysis result; and if at least one of the first analysis results is determined to indicate that the risk data exists, determining that the unmanned vehicle corresponding to the fusion data is a first type vehicle, and executing primary safety monitoring.
2. The method of claim 1, wherein the environmental awareness of the interior of the unmanned vehicle to obtain the first awareness data comprises:
Acquiring a control panel image and an internal scene image of a vehicle-mounted terminal in an unmanned vehicle;
checking the image quality of the control panel image and the scene image, and performing parameter adjustment processing when the image quality is determined to be unqualified, so as to obtain a target control panel image and a target scene image;
analyzing the target control panel image, and determining control node operation information, vehicle state parameter information and operation information of vehicle equipment of the vehicle;
analyzing the target scene image to determine the behavior characteristics of the passengers;
and determining first perception data according to the control node operation information, the vehicle state parameter information, the operation information of the vehicle-mounted equipment and the behavior characteristics of passengers.
3. The method for safety monitoring of an unmanned vehicle according to claim 2, wherein the step of checking the image quality of the control panel image, and performing the parameter adjustment process when it is determined that the image quality does not reach the standard, to obtain the target control panel image, comprises:
calculating a first gradient value of the control panel image in the horizontal direction and a second gradient value of the control panel image in the vertical direction by utilizing a Sobel operator based on a gradient algorithm;
inquiring a preset first gradient value-second gradient value-definition data table according to the first gradient value and the second gradient value to determine a definition value;
And when the definition value is determined to be smaller than the preset definition threshold, the image quality is not up to standard, the difference value between the preset definition threshold and the definition value is determined, the difference value-focal length correction value data table is inquired according to the difference value, the focal length correction value is determined, and parameter adjustment processing is carried out according to the focal length correction value, so that the target control panel image is obtained.
4. The method of claim 1, wherein the environmental awareness of the exterior of the unmanned vehicle to obtain the second awareness data comprises:
acquiring an external environment image and radar information of an unmanned vehicle;
and determining obstacle information around the unmanned vehicle and collision risks with all the obstacles according to the external environment image and the radar information, and determining second perception data according to the obstacle information around the unmanned vehicle and the collision risks with all the obstacles.
5. The method for safety monitoring of an unmanned vehicle according to claim 1, further comprising:
selecting target fusion data from a plurality of groups of fusion data when all the first analysis results are determined to indicate that no risk data exists;
extracting features of other fusion data except the target fusion data to obtain feature vectors, and converting the feature vectors to the types corresponding to the target fusion data according to the feature vectors to obtain converted feature vectors;
Matching the target fusion data with each conversion feature vector, and determining the association relationship between the target fusion data and each conversion feature vector;
determining a target set according to the association relation;
generating a data system according to the target set;
inputting the data system into the composite recognition model, and outputting a second analysis result;
when the risk data are determined to exist according to the second analysis result, determining that the unmanned vehicle corresponding to the fusion data is a second type vehicle, and executing secondary safety monitoring; and when the risk data are not determined to exist according to the second analysis result, determining that the unmanned vehicle corresponding to the fusion data is a third type vehicle, and executing three-level safety monitoring.
6. A method of safety monitoring of an unmanned vehicle as claimed in claim 1, wherein the training of the single recognition model comprises:
acquiring sample fusion data;
preprocessing sample fusion data based on a data layer of a single recognition model, and determining risk factors;
carrying out data analysis on the risk factors based on an index layer of the single recognition model, determining a plurality of indexes, and screening to obtain risk indexes;
combining various risk indexes based on a model parameter layer of a single recognition model to obtain various combination results, and screening out a combination result with highest risk probability prediction accuracy as a target combination result;
And outputting a predicted result of the sample fusion data based on the output layer of the single recognition model, and indicating that training is completed when the predicted result is consistent with a real result corresponding to the sample fusion data.
7. The method of safety monitoring of an unmanned vehicle according to claim 5, wherein determining a set of targets from the association relationship comprises:
taking the target fusion data as a key node, and converting the feature vector as an associated node of the key node;
constructing a screening system according to the key nodes, the associated nodes and the associated relation; the screening system comprises a distance value from each associated node to a key node;
screening out the associated nodes with the distance value smaller than the preset distance value as target associated nodes;
and determining a target set according to the target associated node and the key node.
8. The method of safety monitoring of an unmanned vehicle according to claim 5, wherein training the composite recognition model comprises:
determining big data of the unmanned vehicle, and constructing graph text of a driving scene according to the big data;
extracting each node in the graph text, determining the association relations among the nodes, and constructing a knowledge graph of the data chain corresponding to the driving scene;
determining risk index information of each node;
extracting the associated feature vectors in the knowledge graph, and performing feature fusion on the associated feature vectors and the risk index information to obtain fusion vectors;
and training the composite recognition model based on the fusion vectors.
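The feature fusion step of claim 8 is not specified further in the claim; a common choice, assumed here, is simple concatenation of each node's associated feature vector with its risk index information:

```python
def fuse_vectors(assoc_features, risk_info):
    """For each knowledge-graph node, fuse its associated feature vector with
    its risk index information by concatenation (assumed fusion operator);
    the result is the per-node fusion vector used to train the composite
    recognition model."""
    return {node: feat + risk_info[node]
            for node, feat in assoc_features.items()}
```

Other fusion operators (weighted sum, attention-based pooling) would fit the claim equally well; concatenation is only the simplest concrete instance.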
9. A safety monitoring system for an unmanned vehicle, comprising:
the construction module is used for establishing communication relations between a plurality of unmanned vehicles in a preset area and a background server;
the sensing module is used for sensing the environment inside each unmanned vehicle while the plurality of unmanned vehicles travel in the preset area to obtain first sensing data, and sensing the environment outside each unmanned vehicle to obtain second sensing data;
the background server is used for determining the types of the plurality of unmanned vehicles according to the first sensing data and the second sensing data, and executing safety monitoring of the corresponding level according to the types;
wherein the background server determining the types of the plurality of unmanned vehicles according to the first sensing data and the second sensing data and executing safety monitoring of the corresponding level comprises:
the background server inputting the first sensing data into a pre-trained data classification model for classification, and outputting a plurality of first classification data;
inputting the second sensing data into the pre-trained data classification model for classification, and outputting a plurality of second classification data;
performing data fusion on the first classification data and the second classification data of the same data type to obtain a group of fusion data, thereby obtaining a plurality of groups of fusion data;
and inputting the plurality of groups of fusion data into the corresponding single recognition models respectively, and outputting first analysis results; if at least one of the first analysis results indicates that risk data exist, determining that the unmanned vehicle corresponding to the fusion data is a first-type vehicle, and executing first-level safety monitoring.
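The classify-then-fuse flow of claim 9 — pairing first classification data and second classification data of the same data type into one group of fusion data — can be sketched as follows. Concatenation is an assumed fusion operator, since the claim does not specify one, and all names are illustrative:

```python
from collections import defaultdict

def fuse_by_type(first_classified, second_classified):
    """Group classified in-vehicle (first) and out-of-vehicle (second) data by
    data type, and fuse each matching pair into one group of fusion data.
    Each input is a list of (data_type, data) pairs."""
    groups = defaultdict(dict)
    for dtype, data in first_classified:
        groups[dtype]["first"] = data
    for dtype, data in second_classified:
        groups[dtype]["second"] = data
    # keep only types observed both inside and outside the vehicle
    return {dtype: g["first"] + g["second"]
            for dtype, g in groups.items()
            if "first" in g and "second" in g}
```

Each resulting group would then be routed to the single recognition model trained for that data type.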
CN202310350094.4A 2023-04-04 2023-04-04 Safety monitoring method and system for unmanned vehicle Active CN116125996B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310350094.4A CN116125996B (en) 2023-04-04 2023-04-04 Safety monitoring method and system for unmanned vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310350094.4A CN116125996B (en) 2023-04-04 2023-04-04 Safety monitoring method and system for unmanned vehicle

Publications (2)

Publication Number Publication Date
CN116125996A (en) 2023-05-16
CN116125996B (en) 2023-06-27

Family

ID=86299358

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310350094.4A Active CN116125996B (en) 2023-04-04 2023-04-04 Safety monitoring method and system for unmanned vehicle

Country Status (1)

Country Link
CN (1) CN116125996B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112437111A (en) * 2020-10-13 2021-03-02 上海京知信息科技有限公司 Vehicle-road cooperative system based on context awareness
CN114464216A (en) * 2022-02-08 2022-05-10 贵州翰凯斯智能技术有限公司 Acoustic detection method and device under unmanned driving environment
DE102020215333A1 (en) * 2020-12-04 2022-06-09 Zf Friedrichshafen Ag Computer-implemented method and computer program for the weakly supervised learning of 3D object classifications for environment perception, regulation and/or control of an automated driving system, classification module and classification system
CN115326131A (en) * 2022-07-06 2022-11-11 江苏大块头智驾科技有限公司 Intelligent analysis method and system for unmanned mine road conditions

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107976989A (en) * 2017-10-25 2018-05-01 China FAW Co Ltd Comprehensive vehicle intelligent safety monitoring system and monitoring method
CN109991971A (en) * 2017-12-29 2019-07-09 长城汽车股份有限公司 Automatic driving vehicle and automatic driving vehicle management system
US11392131B2 (en) * 2018-02-27 2022-07-19 Nauto, Inc. Method for determining driving policy
CN108922188B (en) * 2018-07-24 2020-12-29 河北德冠隆电子科技有限公司 Radar tracking and positioning four-dimensional live-action traffic road condition perception early warning monitoring management system
CN111240328B (en) * 2020-01-16 2020-12-25 中智行科技有限公司 Vehicle driving safety monitoring method and device and unmanned vehicle
JP7167958B2 (en) * 2020-03-26 2022-11-09 株式会社デンソー Driving support device, driving support method, and driving support program
WO2021202794A1 (en) * 2020-03-31 2021-10-07 Flir Detection, Inc. User-in-the-loop object detection and classification systems and methods
CN111862389B (en) * 2020-07-21 2022-10-21 武汉理工大学 Intelligent navigation perception and augmented reality visualization system
CN114137947A (en) * 2021-04-15 2022-03-04 上海丰豹商务咨询有限公司 Vehicle-mounted intelligent unit with cooperative control function and cooperative control method
CN113741485A (en) * 2021-06-23 2021-12-03 阿波罗智联(北京)科技有限公司 Control method and device for cooperative automatic driving of vehicle and road, electronic equipment and vehicle
CN114299473A (en) * 2021-12-24 2022-04-08 杭州电子科技大学 Driver behavior identification method based on multi-source information fusion
CN115618932A (en) * 2022-09-23 2023-01-17 清华大学 Traffic incident prediction method and device based on internet automatic driving and electronic equipment
CN115278103B (en) * 2022-09-26 2022-12-20 合肥岭雁科技有限公司 Security monitoring image compensation processing method and system based on environment perception


Also Published As

Publication number Publication date
CN116125996A (en) 2023-05-16

Similar Documents

Publication Publication Date Title
CN108921200B (en) Method, apparatus, device and medium for classifying driving scene data
CN112699859A (en) Target detection method, device, storage medium and terminal
GB2621048A (en) Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
US20210042627A1 (en) Method for recognizing an adversarial disturbance in input data of a neural network
US20220383736A1 (en) Method for estimating coverage of the area of traffic scenarios
CN111874007A (en) Knowledge and data drive-based unmanned vehicle hierarchical decision method, system and device
CN116205024A (en) Self-adaptive automatic driving dynamic scene general generation method for high-low dimension evaluation scene
CN114787010A (en) Driving safety system
CN112462759B (en) Evaluation method, system and computer storage medium of rule control algorithm
CN116125996B (en) Safety monitoring method and system for unmanned vehicle
Berger et al. ZEBRA: Z-order Curve-based Event Retrieval Approach to Efficiently Explore Automotive Data
Shu et al. Test scenarios construction based on combinatorial testing strategy for automated vehicles
CN112509321A (en) Unmanned aerial vehicle-based driving control method and system for urban complex traffic situation and readable storage medium
CN116662856A (en) Real-time driving style classification and early warning method based on fuzzy logic
Bäumler et al. Report on validation of the stochastic traffic simulation (Part B)
CN117882116A (en) Parameter adjustment and data processing method and device for vehicle identification model and vehicle
Hamzah et al. Parking Violation Detection on The Roadside of Toll Roads with Intelligent Transportation System Using Faster R-CNN Algorithm
Ma et al. Lane change analysis and prediction using mean impact value method and logistic regression model
Sánchez et al. Prediction Horizon Requirements for Automated Driving: Optimizing Safety, Comfort, and Efficiency
Shubenkova et al. Machine vision in autonomous vehicles: designing and testing the decision making algorithm based on entity attribute value model
US11651583B2 (en) Multi-channel object matching
US20230195977A1 (en) Method and system for classifying scenarios of a virtual test, and training method
CN114863685B (en) Traffic participant trajectory prediction method and system based on risk acceptance degree
EP4246460A2 (en) Vehicle identification system
Wang et al. A Study of Lane-Changing Behavior Evaluation Methods Based on Machine Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant