CN111880545A - Automatic driving device, system, automatic driving decision processing method and device - Google Patents

Automatic driving device, system, automatic driving decision processing method and device

Info

Publication number
CN111880545A
Authority
CN
China
Prior art keywords
floating
state
target
region
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010791018.3A
Other languages
Chinese (zh)
Inventor
李华兰
Original Assignee
李华兰
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 李华兰
Priority to CN202010791018.3A
Priority to CN202010094861.6A (CN111208821B)
Publication of CN111880545A

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles

Abstract

The embodiments of the application provide an automatic driving device, an automatic driving system, and an automatic driving decision processing method and device. State information in each monitoring area is divided according to preset state categories, and state summary information is generated for each state category, so that the differences of driving objects across state categories during automatic driving can be fully considered and the characteristic state differences of different drivers can be effectively distinguished. The floating changes of a driver's area feature points during automatic driving are taken into account, and automatic driving decisions are made by further combining the driver's historical driving conditions, which can improve the data accuracy of the automatic driving strategy in the decision process.

Description

Automatic driving device, system, automatic driving decision processing method and device
Technical Field
The application relates to the technical field of automatic driving, in particular to an automatic driving device, an automatic driving system, an automatic driving decision processing method and an automatic driving decision processing device.
Background
With the development of science and technology and the progress of society, automatic driving has become a development trend in the traffic field. In conventional automatic driving technology, a unified feature class is generally output, and the automatic driving strategy is adaptively adjusted based on the driver's overall feature state. However, this scheme does not consider how the driver differs across state categories during automatic driving. To improve driving safety, the driver is generally required to remain in a driving state during automatic driving, yet because driving habits differ from driver to driver and a driver's feature state fluctuates, the feature states of different drivers vary widely. As a result, the decision process of the automatic driving strategy is insufficiently informed.
Disclosure of Invention
In order to overcome the above defects in the prior art, the present application aims to provide an automatic driving device, an automatic driving system, an automatic driving decision processing method and an automatic driving decision processing device, which can improve the data accuracy of the decision process of an automatic driving strategy.
In a first aspect, the present application provides an automatic driving decision processing method applied to an automatic driving device, where the automatic driving device is in communication connection with a plurality of state monitoring devices in an automobile, and the method includes:
acquiring state information of a driving object in a monitoring area of each state monitoring device, dividing the state information in each monitoring area according to preset state categories, and respectively generating state summary information of each state category, wherein the preset state categories comprise a clutch operation category, a steering wheel operation category and an electrical equipment control category;
determining preset region feature points in each monitoring region according to the identity authentication information of the driving object, and respectively determining floating change information of a floating region of the preset region feature points in state summary information of corresponding state types aiming at the preset region feature points in each monitoring region to obtain a first state floating change result of the preset region feature points, wherein the preset region feature points are region feature points which are matched with the identity authentication information of the driving object in advance, the identity authentication information comprises biological feature information, and the biological feature information is fingerprint feature information, human face feature information, iris feature information or voice feature information;
determining frequent region feature points in each monitoring region according to historical driving information of the driving object, respectively obtaining floating tracks of the frequent region feature points aiming at the frequent region feature points in each monitoring region, determining floating change information of the floating tracks in state summarizing information of corresponding state types, and obtaining a second state floating change result of the frequent region feature points, wherein the frequent region feature points are region feature points of which the change frequency in the historical driving information of the driving object is greater than a set frequency threshold value, and the change frequency is used for expressing the change degree of the region feature points in unit time;
and generating an automatic driving control instruction for the automobile according to the matching relation between the first state floating change result and the second state floating change result.
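The first of the four steps above — dividing monitored state information by preset state category — can be sketched in Python. The category names and the shape of the readings are illustrative assumptions, not the patent's actual implementation:

```python
def summarize_by_category(readings, categories=("clutch_operation",
                                                "steering_wheel_operation",
                                                "electrical_control")):
    """Sketch of step 1: divide the state information gathered from every
    monitoring area by preset state category and collect a per-category
    summary. `readings` is a list of (area, category, value) tuples;
    the category names are illustrative stand-ins."""
    summary = {c: [] for c in categories}
    for area, category, value in readings:
        if category in summary:  # ignore readings outside the preset categories
            summary[category].append((area, value))
    return summary
```

Each per-category list here plays the role of the "state summary information" that the later floating-change steps consume.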
In a possible design of the first aspect, the step of dividing the status information in each monitoring area according to a predetermined status category and generating status summary information of each status category includes:
acquiring the state category feature points corresponding to each preset state category to form a feature point set of each preset state category, and acquiring coincident feature point information between the target feature points of each monitoring area and the feature points of the feature point set;
calculating the number of key feature points of each target state category according to the coincident feature point information, the number of target feature points and the number of feature points in the feature point set, and selecting state category feature points from the feature point set according to the number of key feature points of each target state category to obtain an initial feature point matrix;
if the total feature point distribution quantity of the initial feature point matrix is greater than the maximum total feature point distribution quantity meeting the total feature point distribution quantity requirement, reducing the coarse-range key feature points in the initial feature point matrix by a first set quantity, and increasing the fine-range key feature points in the initial feature point matrix by the first set quantity, wherein the fine-range key feature points refer to key feature points of which the unit intensity degree of the key feature points in the detection area is less than the set degree, and the coarse-range key feature points refer to key feature points of which the unit intensity degree of the key feature points in the detection area is not less than the set degree;
calculating the total characteristic point distribution quantity of the updated initial characteristic point matrix;
if the total feature point distribution quantity of the updated initial feature point matrix is still greater than the maximum total feature point distribution quantity, performing the above updating processing on the updated initial feature point matrix again;
if the total feature point distribution quantity of the initial feature point matrix after the updating is less than or equal to the maximum total feature point distribution quantity, taking the initial feature point matrix before the updating as a first updating matrix, and sequencing all the target state classes according to the sequence of the state classes from low priority to high priority to obtain a target state class sequence;
grouping the target state categories according to the target state category sequence, wherein each group comprises a first state category and a second state category located on either side of a target position of the target state category sequence at the same distance from the target position, and the priority of the first state category is lower than that of the second state category;
sequentially taking each group as a target group in order of increasing difference from the target position, and performing the following second updating processing on the target group: increasing the key feature points of the first state category of the target group in the first update matrix by one, and decreasing the key feature points of the second state category of the target group in the first update matrix by one;
judging whether the total characteristic point distribution quantity of the updated first updating matrix meets the total characteristic point distribution quantity requirement or not;
if the total characteristic point distribution quantity of the updated first updating matrix meets the total characteristic point distribution quantity requirement, taking the updated first updating matrix as a final characteristic point matrix;
if the total characteristic point distribution quantity of the updated first updating matrix does not meet the total characteristic point distribution quantity requirement, taking the next group as a new target group, and performing the second updating processing on the new target group;
if the total feature point distribution quantity of the initial feature point matrix is less than the minimum total feature point distribution quantity meeting the total feature point distribution quantity requirement, performing the following third updating processing on the initial feature point matrix: increasing the coarse range key feature points in the initial feature point matrix by a first set number, and decreasing the fine range key feature points in the initial feature point matrix by the first set number;
calculating the total characteristic point distribution quantity of the updated initial characteristic point matrix;
if the total feature point distribution quantity of the updated initial feature point matrix is still less than the minimum total feature point distribution quantity, performing the third updating processing on the updated initial feature point matrix again;
if the total feature point distribution quantity of the initial feature point matrix after the updating is greater than or equal to the minimum total feature point distribution quantity, taking the initial feature point matrix before the updating as a second updating matrix, and sequencing all the target state classes according to the sequence of the state classes from low priority to high priority to obtain a target state class sequence;
grouping the target state categories according to the target state category sequence, wherein each group comprises a first state category and a second state category located on either side of a target position of the target state category sequence at the same distance from the target position, and the priority of the first state category is lower than that of the second state category;
sequentially taking each group as a target group in order of increasing difference from the target position, and performing the following fourth updating processing on the target group: decreasing the key feature points of the first state category of the target group in the second update matrix by one, and increasing the key feature points of the second state category of the target group in the second update matrix by one;
judging whether the total characteristic point distribution quantity of the second updated matrix after the updating meets the total characteristic point distribution quantity requirement or not;
if the total feature point distribution quantity of the second updated matrix meets the total feature point distribution quantity requirement, taking the second updated matrix as the final feature point matrix;
if the total feature point distribution quantity of the second updated matrix after the updating does not meet the total feature point distribution quantity requirement, taking the next group as a new target group, and performing the fourth updating processing on the new target group;
and classifying the state information of each feature point in the final feature point matrix of each target state category into the state summary information of the state category.
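The rebalancing loop above — reducing coarse-range key feature points and adding fine-range ones until the total distribution quantity no longer exceeds the maximum — can be sketched as follows. The per-point weights are an assumption: the text implies coarse-range points contribute more to the total per point (otherwise an equal one-for-one swap could not change it).

```python
def rebalance_down(coarse, fine, max_total, step=1,
                   coarse_weight=2.0, fine_weight=1.0):
    """Sketch of the first updating processing: while the (assumed
    weighted) total feature point distribution quantity exceeds the
    maximum, swap `step` coarse-range key feature points for the same
    number of fine-range ones. All weights are illustrative."""
    def total():
        return coarse * coarse_weight + fine * fine_weight
    while total() > max_total and coarse >= step:
        coarse -= step  # drop coarse-range points (first set quantity)
        fine += step    # add the same number of fine-range points
    return coarse, fine, total()
```

The symmetric "third updating processing" for a too-small matrix would run the same swap in the opposite direction against the minimum total.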
In a possible design of the first aspect, the identity authentication information includes biometric information, and the step of determining preset area feature points in each monitoring area according to the identity authentication information of the driving object includes:
collecting biological characteristic information of the driving object;
and obtaining preset area characteristic points in each monitoring area according to the biological characteristic information and the corresponding relation between each preset area characteristic point in each detection area and each preset biological characteristic information configured in advance.
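The lookup above reduces to a pre-configured correspondence between biometric information and per-area feature points; the structure of the correspondence table below is an assumption for illustration.

```python
def preset_feature_points(biometric_id, correspondence):
    """Sketch: given collected biometric information (reduced here to an
    identifier) and a pre-configured correspondence table mapping
    biometric info to per-monitoring-area preset feature points, return
    the preset area feature points for each monitoring area."""
    return correspondence.get(biometric_id, {})
```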
In a possible design of the first aspect, the step of obtaining a first state floating change result of the preset area feature point by determining floating change information of the floating area of the preset area feature point in the state summary information of the corresponding state category for the preset area feature point in the monitoring area includes:
respectively acquiring three-dimensional fixed points matched with the preset region feature points aiming at the preset region feature points in each monitoring region, and acquiring a corresponding three-dimensional space region as a target three-dimensional space region when the three-dimensional fixed points continuously fall into a coordinate range corresponding to one three-dimensional space region in the monitoring region within a preset time period;
judging whether the area range of the target three-dimensional space area is the same as the area range input by a preset automatic driving control model;
if the area ranges are different, scaling the area range of the target three-dimensional space region to a three-dimensional space region consistent with the model input area range of the automatic driving control model, and inputting it into the automatic driving control model;
calculating an input three-dimensional space region by adopting the automatic driving control model, and acquiring floating change information corresponding to the input three-dimensional space region;
tracking each floating position of the preset region characteristic points in the target three-dimensional space region to obtain a floating characteristic vector of each floating position in the target three-dimensional space region;
determining a region with the floating position frequency greater than a preset threshold value in the floating change information corresponding to the input three-dimensional space region as a floating region;
converting the vector value of each floating position in the input three-dimensional space region to obtain a floating feature vector of each floating position in the input three-dimensional space region;
calculating a first floating vector mean value of the whole three-dimensional space region according to the floating feature vector of each floating position in the target three-dimensional space region;
calculating a second floating vector mean value of the floating region according to the floating feature vector of each floating position in the floating region;
calculating the first floating vector mean value, the second floating vector mean value and a preset coefficient to obtain a floating reference coefficient of the floating region, calculating a ratio of a floating feature vector of each floating position in the target three-dimensional space region to the floating reference coefficient, and obtaining a first floating strength of each floating position in the target three-dimensional space region according to the ratio;
calculating the first floating intensity and the floating change information of each floating position in the target three-dimensional space region to obtain the floating intensity of each floating position in the target three-dimensional space region;
or, calculating a ratio of a floating feature vector of each floating position in the target three-dimensional space region to the floating reference coefficient to obtain a first floating strength of each floating position in the target three-dimensional space region, calculating the first floating strength of each floating position in the target three-dimensional space region according to a preset floating range to obtain a second floating strength of each floating position in the target three-dimensional space region, wherein a difference value between the second floating strength and the first floating strength is smaller than the preset floating range, calculating the second floating strength of each floating position in the target three-dimensional space region and the floating change information to obtain the floating strength of each floating position in the target three-dimensional space region;
determining a target coefficient of each floating position in the target three-dimensional space region according to a target feature point, floating strength and the floating change information of a specified space position, and calculating a ratio of the floating strength of each floating position in the target three-dimensional space region to a preset constant, wherein the target coefficient is a value obtained by multiplying a feature vector value of the target feature point of the specified space position by the floating strength and dividing the feature vector value by the floating change information;
calculating the product of the ratio of the floating strength of each floating position to a preset constant and a corresponding target coefficient, and obtaining a first state floating result of each floating position in the target three-dimensional space region;
performing color editing processing on the target three-dimensional space region according to the first state floating result of each floating position, and outputting the target three-dimensional space region;
or calculating the ratio of the floating intensity of each floating position in the target three-dimensional space region to a preset constant;
calculating the product of the ratio of the floating intensity of each floating position to a preset constant and the corresponding target coloring value, and obtaining a first state floating result of each floating position in the target three-dimensional space region;
calculating a first state floating result of each floating position in the target three-dimensional space region, the target three-dimensional space region and the floating change information to obtain a second state floating result of each floating position in the target three-dimensional space region;
and arranging the second state floating results of each floating position to obtain a first state floating change result of the preset region characteristic point.
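The floating reference coefficient and first floating strength above can be sketched as follows. The exact combination of the two floating vector means and the preset coefficient is not specified in the text, so a simple scaled average is assumed, and scalars stand in for the floating feature vectors.

```python
def first_floating_strengths(vectors, floating_mask, preset_coeff=1.0):
    """Sketch: compute the first floating vector mean over the whole
    target three-dimensional space region, the second mean over the
    floating region only, combine them with a preset coefficient into a
    floating reference coefficient (assumed form), and return the ratio
    of each floating position's vector to that coefficient as the first
    floating strength."""
    first_mean = sum(vectors) / len(vectors)
    in_region = [v for v, m in zip(vectors, floating_mask) if m]
    second_mean = sum(in_region) / len(in_region)
    # Assumed combination of the two means and the preset coefficient.
    reference = preset_coeff * (first_mean + second_mean) / 2.0
    return [v / reference for v in vectors]
```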
In a possible design of the first aspect, the step of determining frequent region feature points in each monitored region according to the historical driving information of the driving object includes:
acquiring historical driving information of the driving object, wherein the historical driving information comprises a plurality of position change information corresponding to a plurality of area characteristic points respectively;
when it is determined that a plurality of position change information corresponding to any one area feature point all meet a preset position change condition, determining an initial position of a first position change interval matched with the preset position change condition according to the position change information of the area feature point and the amplitude of the position change interval, wherein the preset position change condition comprises: the position change amplitude is larger than a set amplitude threshold value;
determining a plurality of position change intervals matched with the preset position change condition corresponding to the initial positions of the area feature points according to the position change information of the area feature points, the amplitude of the position change intervals, the initial position of the first position change interval and the number of preset position change intervals;
if the position of an area feature point at a tracking node matches the initial position of a target position change interval, and the tracking node is the first tracking node of the target position change interval, acquiring the area feature points matched with the previous position change interval adjacent to the target position change interval as screening area feature points, and identifying an area feature point in the tracking node that is not among the screening area feature points as a target area feature point matched with the target position change interval;
if the tracking node is not the first tracking node of the target position change interval, acquiring a target area characteristic point matched with the target position change interval, identifying the target area characteristic point in the tracking node, and identifying at least one active position node of the target area characteristic point, wherein the area characteristic point corresponds to a plurality of position change intervals;
in the position change interval, according to the position information of at least one active position node of the target area feature point in the plurality of tracking nodes, calculating the moving space distance of any two adjacent tracking nodes of the at least one active position node of the target area feature point in the position change interval, and the position vector of the at least one active position node of the target area feature point in the position change interval;
counting the duration of the position change interval, determining the average change frequency and the change frequency variance of the target area feature point in the position change interval according to the movement space distance and the position vector, and calculating the frequent feature parameter of the target area feature point in the position change interval according to the average change frequency and the change frequency variance;
and calculating the frequent feature score of each region feature point according to the frequent feature parameter of each region feature point in the matched position change interval, and determining the region feature point with the frequent feature score larger than the set score as the frequent region feature point.
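The frequent feature parameter and score above can be sketched as follows. The patent gives neither formula, so the combinations used here (higher average change frequency and lower variance yield a larger parameter; the score is the mean of the per-interval parameters) are assumptions.

```python
from statistics import mean, pvariance

def frequent_parameter(change_freqs):
    """Sketch: combine the average change frequency and change-frequency
    variance within one position change interval into a frequent feature
    parameter. The combination is an assumption; the text only says both
    quantities are used."""
    return mean(change_freqs) / (1.0 + pvariance(change_freqs))

def is_frequent(interval_parameters, set_score):
    """A region feature point is frequent when its frequent feature score
    (here: the mean of its per-interval parameters, an assumed
    aggregation) exceeds the set score."""
    return mean(interval_parameters) > set_score
```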
In a possible design of the first aspect, the step of generating an autopilot control command for the vehicle according to a matching relationship between the first state floating change result and the second state floating change result includes:
matching the state floating result of each first floating position in the first state floating change results with the state floating result of each matched second floating position in the second state floating change results to obtain a plurality of matching degrees, wherein each matched second floating position in the second state floating change results is matched with the corresponding first floating position in the arrangement sequence of the state floating change results, and the matching degrees are determined according to the coincidence degree between the state floating results of the first floating positions and the state floating results of the matched second floating positions;
and generating an automatic driving control instruction for the automobile according to the matching degrees.
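The position-by-position matching above can be sketched as pairing the two result sequences in arrangement order. The coincidence measure used here (1 minus the normalized absolute difference) is an assumption; the patent only says the matching degree is determined by the degree of coincidence.

```python
def matching_degrees(first_results, second_results):
    """Sketch: pair each first-state floating result with the second-state
    result at the same position in the arrangement order, and score each
    pair by an assumed degree-of-coincidence measure in [0, 1]."""
    degrees = []
    for a, b in zip(first_results, second_results):
        scale = max(abs(a), abs(b), 1e-9)  # avoid division by zero
        degrees.append(1.0 - abs(a - b) / scale)
    return degrees
```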
In one possible design of the first aspect, the step of generating an automatic driving control instruction for the automobile according to the plurality of matching degrees includes:
determining a first number of the plurality of matching degrees that are lower than a first set matching degree, a second number that are greater than a second set matching degree, and a third number that fall in the interval between the first set matching degree and the second set matching degree;
if the first number is larger than the sum of the second number and the third number, generating a first automatic driving control instruction for the automobile, wherein the first automatic driving control instruction is used for controlling the automobile to enter a preset deceleration mode;
if the third quantity is larger than the sum of the first quantity and the second quantity, generating a second automatic driving control instruction for the automobile, wherein the second automatic driving control instruction is used for controlling the automobile to enter a preset acceleration mode;
and if the second quantity is greater than the sum of the first quantity and the third quantity, generating a third automatic driving control instruction for the automobile, wherein the third automatic driving control instruction is used for controlling the automobile to enter a preset constant speed mode.
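The three-way command selection above can be sketched directly: whichever count exceeds the sum of the other two picks the driving mode. The threshold values are illustrative, and the no-majority case is handled with a sentinel because the patent does not specify it.

```python
def driving_command(degrees, first_set=0.3, second_set=0.7):
    """Sketch of the command selection: count matching degrees below the
    first set matching degree, above the second, and in between, then
    pick the mode whose count exceeds the sum of the other two."""
    first = sum(1 for m in degrees if m < first_set)
    second = sum(1 for m in degrees if m > second_set)
    third = len(degrees) - first - second
    if first > second + third:
        return "deceleration"    # first automatic driving control instruction
    if third > first + second:
        return "acceleration"    # second instruction
    if second > first + third:
        return "constant_speed"  # third instruction
    return None  # no majority; this case is not specified in the patent
```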
In a second aspect, an embodiment of the present application further provides an automatic driving decision processing device, which is applied to an automatic driving device, where the automatic driving device is in communication connection with a plurality of state monitoring devices in an automobile, and the device includes:
the acquisition module is used for acquiring the state information of the driving object in the monitoring area of each state monitoring device, dividing the state information in each monitoring area according to preset state categories and respectively generating state summary information of each state category, wherein the preset state categories comprise a clutch operation category, a steering wheel operation category and an electrical equipment control category;
the first determining module is used for determining preset region feature points in each monitoring region according to the identity authentication information of the driving object, and respectively determining floating change information of a floating region of the preset region feature points in state summary information of corresponding state types aiming at the preset region feature points in each monitoring region to obtain a first state floating change result of the preset region feature points, wherein the preset region feature points are region feature points matched with the identity authentication information of the driving object in advance, the identity authentication information comprises biological feature information, and the biological feature information is fingerprint feature information, face feature information, iris feature information or voice feature information;
a second determining module, configured to determine frequent region feature points in each monitored region according to historical driving information of the driving object, obtain, for the frequent region feature points in each monitored region, floating tracks of the frequent region feature points, respectively, determine floating change information of the floating tracks in state summary information of corresponding state types, and obtain a second state floating change result of the frequent region feature points, where the frequent region feature points are region feature points whose change frequency in the historical driving information of the driving object is greater than a set frequency threshold, and the change frequency is used to indicate a change degree of the region feature points in unit time;
and the generating module is used for generating an automatic driving control instruction for the automobile according to the matching relation between the first state floating change result and the second state floating change result.
In a third aspect, an embodiment of the present application further provides an automatic driving system, where the automatic driving system includes an automatic driving device and a plurality of state monitoring devices in an automobile communicatively connected to the automatic driving device, wherein:
the state monitoring device is used for monitoring state information of the driving object in the monitored area;
the automatic driving device is used for acquiring the state information of a driving object in the monitoring area of each state monitoring device, dividing the state information in each monitoring area according to preset state categories and respectively generating state summary information of each state category, wherein the preset state categories comprise a clutch operation category, a steering wheel operation category and an electrical equipment control category;
the automatic driving device is used for determining preset region feature points in each monitoring region according to identity authentication information of the driving object, respectively determining floating change information of floating regions of the preset region feature points in state summary information of corresponding state types aiming at the preset region feature points in each monitoring region, and obtaining a first state floating change result of the preset region feature points, wherein the preset region feature points are region feature points matched with the identity authentication information of the driving object in advance, the identity authentication information comprises biological feature information, and the biological feature information is fingerprint feature information, human face feature information, iris feature information or voice feature information;
the automatic driving device is used for determining frequent region feature points in each monitoring region according to historical driving information of the driving object, respectively acquiring floating tracks of the frequent region feature points aiming at the frequent region feature points in each monitoring region, determining floating change information of the floating tracks in state summary information of corresponding state types, and obtaining a second state floating change result of the frequent region feature points, wherein the frequent region feature points are region feature points of which the change frequency in the historical driving information of the driving object is greater than a set frequency threshold value, and the change frequency is used for expressing the change degree of the region feature points in unit time;
and the automatic driving device is used for generating an automatic driving control instruction for the automobile according to the matching relation between the first state floating change result and the second state floating change result.
In a fourth aspect, the present invention further provides an automatic driving device, where the automatic driving device includes a processor, a machine-readable storage medium, and a network interface, where the machine-readable storage medium, the network interface, and the processor are connected through a bus system, the network interface is configured to be communicatively connected to at least one condition monitoring device, the machine-readable storage medium is configured to store a program, an instruction, or a code, and the processor is configured to execute the program, the instruction, or the code in the machine-readable storage medium to perform the automatic driving decision processing method in the first aspect or any possible design of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are executed on an automatic driving device, the instructions cause the automatic driving device to perform the automatic driving decision processing method in the first aspect or any one of the possible designs of the first aspect.
Based on any one of the above aspects, the status information in each monitoring area is divided according to the predetermined status categories, and status summary information is generated for each status category. In this way, the differences that the driving object exhibits across different status categories during automatic driving can be fully considered, and the characteristic status differences of different drivers can be effectively distinguished, so that the floating changes of the driver's area feature points during automatic driving are taken into account. The automatic driving decision is then made in further combination with the driver's historical driving behavior, which improves the data accuracy of the decision process of the automatic driving strategy.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a schematic view of an application scenario of an automatic driving system according to an embodiment of the present application;
fig. 2 is a schematic flow chart of an automatic driving decision processing method according to an embodiment of the present application;
fig. 3 is a functional block diagram of an automatic driving decision processing apparatus according to an embodiment of the present application;
fig. 4 is a block diagram schematically illustrating a structure of an automatic driving device for implementing the automatic driving decision processing method according to an embodiment of the present application.
Detailed Description
The present application will now be described in detail with reference to the drawings, and the specific operations in the method embodiments may also be applied to the apparatus embodiments or the system embodiments. In the description of the present application, "at least one" includes one or more unless otherwise specified. "Plurality" means two or more. For example, at least one of A, B, and C includes: A alone, B alone, A and B in combination, A and C in combination, B and C in combination, and A, B, and C in combination.
Fig. 1 is an interactive schematic diagram of an autopilot system 10 provided in an embodiment of the present application. The autopilot system 10 may include an autopilot device 100 and a condition monitoring device 200 communicatively coupled to the autopilot device 100, and the autopilot device 100 may include a processor for executing command operations. The autopilot system 10 shown in fig. 1 is merely one possible example, and in other possible embodiments, the autopilot system 10 may include only some of the components shown in fig. 1 or may include additional components.
In some embodiments, the autopilot device 100 may be a single autopilot device or a group of autopilot devices. The set of autopilot units may be centralized or distributed. In some embodiments, the autopilot device 100 may be local or remote with respect to the condition monitoring device 200. For example, the autopilot device 100 may access information stored in the condition monitoring device 200 and a database, or any combination thereof, via a network. As another example, the autopilot device 100 may be directly connected to at least one of the condition monitoring device 200 and a database to access information and/or data stored therein.
In some embodiments, the autopilot device 100 may include a processor. The processor may process information and/or data related to the service request to perform one or more of the functions described herein. A processor may include one or more processing cores (e.g., a single-core processor or a multi-core processor). Merely by way of example, a processor may include a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), an Application Specific Instruction Set Processor (ASIP), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a Microcontroller Unit (MCU), a Reduced Instruction Set Computer (RISC), a microprocessor, or the like, or any combination thereof.
The network may be used for the exchange of information and/or data. In some embodiments, one or more components in the autopilot system 10 (e.g., the autopilot device 100, the condition monitoring device 200, and a database) may send information and/or data to other components. In some embodiments, the network may be any type of wired or wireless network, or a combination thereof. Merely by way of example, the network may include a wired network, a wireless network, a fiber optic network, a telecommunications network, an intranet, the Internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a Bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, or the like, or any combination thereof. In some embodiments, the network may include one or more network access points. For example, the network may include wired or wireless network access points, such as base stations and/or network switching nodes, through which one or more components of the autopilot system 10 may connect to the network to exchange data and/or information.
The aforementioned database may store data and/or instructions. In some embodiments, the database may store data assigned to the condition monitoring device 200. In some embodiments, the database may store data and/or instructions for the exemplary methods described herein. In some embodiments, the database may include mass storage, removable storage, volatile read-write memory, or Read-Only Memory (ROM), among others, or any combination thereof. By way of example, mass storage may include magnetic disks, optical disks, solid state drives, and the like; removable memory may include flash drives, floppy disks, optical disks, memory cards, zip disks, tapes, and the like; volatile read-write memory may include Random Access Memory (RAM); the RAM may include Dynamic RAM (DRAM), Double Data Rate Synchronous Dynamic RAM (DDR SDRAM), Static RAM (SRAM), Thyristor-Based Random Access Memory (T-RAM), Zero-capacitor RAM (Z-RAM), and the like. By way of example, ROMs may include Mask ROMs (MROMs), Programmable ROMs (PROMs), Erasable Programmable ROMs (EPROMs), Electrically Erasable Programmable ROMs (EEPROMs), Compact Disc ROMs (CD-ROMs), Digital Versatile Disc ROMs (DVD-ROMs), and the like.
In some embodiments, a database may be connected to a network to communicate with one or more components in the autopilot system 10 (e.g., the autopilot device 100, the condition monitoring device 200, etc.). One or more components in the autopilot system 10 may access data or instructions stored in a database via a network. In some embodiments, the database may be directly connected to one or more components of the autopilot system 10 (e.g., the autopilot device 100, the condition monitoring device 200, etc.); or, in some embodiments, the database may be part of the autopilot device 100.
In this embodiment, the status monitoring device 200 may be various monitoring sensors (e.g., a gravity sensor, a biometric sensor, an iris sensor, a motion sensor, etc.), and this embodiment is not limited in this respect.
To solve the technical problems in the background art, fig. 2 is a schematic flow chart of an automatic driving decision processing method provided in an embodiment of the present application, which can be executed by the automatic driving device 100 shown in fig. 1, and the automatic driving decision processing method is described in detail below.
Step S110 is to acquire the status information of the driving object in the monitoring area of each status monitoring device, and divide the status information in each monitoring area according to the predetermined status category to generate status summary information of each status category.
Step S120, determining preset area characteristic points in each monitoring area according to the identity authentication information of the driving object, and respectively determining the floating change information of the floating area of the preset area characteristic points in the state summarizing information of the corresponding state types aiming at the preset area characteristic points in each monitoring area to obtain a first state floating change result of the preset area characteristic points.
Step S130, determining frequent region characteristic points in each monitoring region according to historical driving information of the driving object, respectively obtaining the floating tracks of the frequent region characteristic points aiming at the frequent region characteristic points in each monitoring region, determining the floating change information of the floating tracks in the state summarizing information of the corresponding state types, and obtaining a second state floating change result of the frequent region characteristic points.
And step S140, generating an automatic driving control instruction for the automobile according to the matching relation between the first state floating change result and the second state floating change result.
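Before walking through each step in detail, the four steps above can be sketched as a minimal orchestration skeleton. All function and field names below are illustrative assumptions, not the disclosed implementation, and the S140 matching rule is deliberately simplified:

```python
def autopilot_decision(monitor_states, preset_points, history, freq_threshold=2.0):
    """Illustrative skeleton of steps S110-S140 (all names are assumptions)."""
    # S110: bucket the state records of every monitoring region by state category
    summaries = {}
    for region, records in monitor_states.items():
        for category, value in records:
            summaries.setdefault(category, []).append((region, value))

    # S120: first floating change result, keyed by preset region feature point,
    # drawn from the summary information of the point's corresponding category
    first = {point: summaries.get(cat, []) for point, cat in preset_points.items()}

    # S130: second result, restricted to frequently changing feature points
    frequent = {p for p, f in history.items() if f > freq_threshold}
    second = {p: first[p] for p in first if p in frequent}

    # S140: derive a control instruction from how the two results match
    matched = all(first[p] == second[p] for p in second)
    return "maintain" if second and matched else "adjust"
```

A call such as `autopilot_decision({"hand": [("steering", 0.4)]}, {"wrist": "steering"}, {"wrist": 3.0})` yields `"maintain"`; lowering the wrist's change frequency below the threshold yields `"adjust"`.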
In this embodiment, for each condition monitoring device, the corresponding monitoring area may be allocated according to a preset design requirement and a specific function of the condition monitoring device, so that the condition monitoring device monitors only the condition information of the driving object in the corresponding monitoring area. For example, for a driving object, each feature region of the head, each joint feature region of the hand, a neck feature region, a leg region, and the like may be individually designed to correspond to a detection region, and a related state monitoring device may be provided to correspond to the detection region to monitor state information of the driving object.
In this embodiment, the preset area feature point may be an area feature point that is matched with the identity authentication information of the driving object in advance. In detail, different preset region feature points corresponding to different driving subjects (for example, the elderly, the middle aged, or the males, the females) may be preset according to different driving habits.
In this embodiment, the predetermined category may be determined according to the function of the vehicle, and may include, for example, a clutch operation category, a steering wheel operation category, an electrical device control category, and the like, which is not specifically limited herein.
In this embodiment, the frequent region feature point may be a region feature point in which the change frequency in the historical driving information of the driving target is greater than a set frequency threshold, and the change frequency may be used to indicate the change degree of the region feature point in unit time.
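The change frequency defined above (degree of change per unit time) can be sketched as follows. The representation of a point's track as timestamped position samples, and counting any move between consecutive samples as one change, are simplifying assumptions for illustration:

```python
def change_frequency(track):
    """Changes per unit time for one region feature point.

    `track` is a list of (timestamp, position) samples; any difference
    between consecutive positions counts as one change (an assumption).
    """
    if len(track) < 2:
        return 0.0
    changes = sum(1 for (_, a), (_, b) in zip(track, track[1:]) if a != b)
    duration = track[-1][0] - track[0][0]
    return changes / duration if duration > 0 else 0.0

def frequent_points(tracks, threshold):
    # Frequent region feature points: change frequency above the set threshold.
    return [p for p, t in tracks.items() if change_frequency(t) > threshold]
```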
Based on the above design, in the present embodiment the status information in each monitoring area is divided according to the predetermined status categories, and status summary information is generated for each status category. In this way, the differences that the driving object exhibits across different status categories during automatic driving can be fully considered, and the characteristic status differences of different drivers can be effectively distinguished, so that the floating changes of the driver's area feature points during automatic driving are taken into account. The automatic driving decision is then made in further combination with the driver's historical driving behavior, which improves the data accuracy of the decision process of the automatic driving strategy.
In a possible design, for step S110, in the process of dividing the status information in each monitoring area, in order to improve the accuracy of the division and reduce redundant information so as to improve decision efficiency, this embodiment may obtain the state category feature points corresponding to each predetermined state category to form a feature point set of each predetermined state category, and obtain the coincidence information between the number of target feature points of each monitoring area and the number of feature points in the feature point set. For example, state category feature points may be acquired separately for the clutch operation category, the steering wheel operation category, and the electrical device control category; for instance, the state category feature points under the clutch operation category may include specific feature points of the leg parts, and the state category feature points under the steering wheel operation category may include specific feature points of the hand parts, feature points of the eyes, and the like. That is, the state category feature points corresponding to each predetermined state category may be used to coordinate the operation process corresponding to that predetermined state category.
On the basis, the number of the key feature points of each target state category can be calculated according to the superposition feature point information of the target feature point number and the feature point number of the feature point set, and the state category feature points are selected from the feature point set according to the number of the key feature points of each target state category to obtain an initial feature point matrix.
If the total feature point distribution number of the initial feature point matrix is greater than the maximum total feature point distribution number that satisfies the total feature point distribution number requirement, the following first update processing is performed on the initial feature point matrix: the wide-range key feature points in the initial feature point matrix are decreased by a first set number, and the narrow-range key feature points in the initial feature point matrix are increased by the same first set number.
As one possible design, the narrow-range key feature points may be key feature points whose unit density of the detection area where the key feature points are located is less than a set level, and the wide-range key feature points may be key feature points whose unit density of the detection area where the key feature points are located is not less than the set level.
Therefore, the total feature point distribution number of the initial feature point matrix updated this time can be calculated, and then the next processing operation is executed according to the total feature point distribution number, and several possible examples will be given below to further explain the embodiment in detail.
For example, if the total feature point distribution number of the initial feature point matrix updated this time is greater than the maximum total feature point distribution number, the above processing is performed on the initial feature point matrix updated this time again.
For another example, if the total feature point distribution number of the initial feature point matrix after the current update is less than or equal to the maximum total feature point distribution number, the initial feature point matrix before the current update is used as the first update matrix, and the target state categories are sorted in the order from low priority to high priority of the state categories, so as to obtain the target state category sequence.
Then, the target state categories may be grouped according to the target state category sequence, each group including a first state category and a second state category which are on both sides of the target position of the target state category sequence and which are consistent with a difference between the target positions, the priority of the first state category being smaller than the priority of the second state category.
Then, each packet may be sequentially used as a target packet in an order from a low priority to a high priority in the gap from the target position, and the following second update processing may be performed on the target packet: and increasing one key feature point of the first state category of the target group in the first updating matrix, and decreasing one key feature point of the second state category of the target group in the first updating matrix.
On the basis, whether the total feature point distribution quantity of the updated first updating matrix meets the requirement of the total feature point distribution quantity can be further judged.
And if the total characteristic point distribution quantity of the updated first updating matrix meets the requirement of the total characteristic point distribution quantity, taking the updated first updating matrix as a final characteristic point matrix.
And if the total feature point distribution quantity of the updated first updating matrix does not meet the total feature point distribution quantity requirement, taking the next group as a new target group, and performing second updating processing on the new target group.
If the total feature point distribution quantity of the initial feature point matrix is less than the minimum total feature point distribution quantity meeting the total feature point distribution quantity requirement, the following third updating processing is performed on the initial feature point matrix: the wide-range key feature points in the initial feature point matrix are increased by a first set number, and the narrow-range key feature points in the initial feature point matrix are decreased by the same first set number.
On the basis, the total distribution quantity of the feature points of the initial feature point matrix after the updating can be further calculated.
And if the total characteristic point distribution quantity of the initial characteristic point matrix after the updating is less than the minimum total characteristic point distribution quantity, executing third updating processing on the initial characteristic point matrix after the updating again.
And if the total characteristic point distribution quantity of the initial characteristic point matrix after the updating is greater than or equal to the minimum total characteristic point distribution quantity, taking the initial characteristic point matrix before the updating as a second updating matrix, and sequencing all target state types according to the sequence of the state types from low priority to high priority to obtain a target state type sequence.
On this basis, the target state categories can be further grouped according to the target state category sequence, each group comprises a first state category and a second state category which are arranged on two sides of the target position of the target state category sequence and consistent with the difference of the target position, and the priority of the first state category is smaller than that of the second state category.
Then, according to the sequence from low priority to high priority of the difference with the target position, taking each packet as a target packet in turn, and performing the following fourth updating processing on the target packet: and reducing the key feature points of the first state category of the target group in the second updating matrix by one, and increasing the key feature points of the second state category of the target group in the second updating matrix by one.
On the basis, whether the total feature point distribution quantity of the second updated matrix after the updating meets the requirement of the total feature point distribution quantity can be further judged.
And if the total characteristic point distribution quantity of the second updated matrix after the updating meets the requirement of the total characteristic point distribution quantity, taking the second updated matrix after the updating as a final characteristic point matrix.
And if the total feature point distribution quantity of the second updated matrix after the updating does not meet the total feature point distribution quantity requirement, taking the next group as a new target group, and performing fourth updating processing on the new target group.
Therefore, the state information of each feature point in the final feature point matrix of each target state category can be classified into the state summary information of the state category.
Based on the design, the dividing accuracy in the process of dividing the state information in each monitoring area can be improved, and redundant information is reduced to improve the decision efficiency.
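The matrix-balancing procedure above can be sketched in two pieces: the first update loop (run while the total exceeds the maximum, returning the matrix from just before the update that brought it under the limit) and the symmetric grouping of target state categories. The weighted total — wide-range points counting more than narrow-range ones — is an assumption made so that the swap actually changes the total; the patent does not specify how the total is computed:

```python
def first_update(matrix, max_total, step=1, wide_weight=2, narrow_weight=1):
    """Sketch of the first update processing (weights are an assumption)."""
    def total(m):
        return m["wide"] * wide_weight + m["narrow"] * narrow_weight

    while total(matrix) > max_total and matrix["wide"] >= step:
        previous = dict(matrix)                      # matrix before this update
        matrix = {"wide": matrix["wide"] - step,     # decrease wide-range points
                  "narrow": matrix["narrow"] + step}  # increase narrow-range points
        if total(matrix) <= max_total:
            return previous                          # the "first update matrix"
    return matrix

def priority_groups(categories):
    """Pair target state categories symmetrically around the centre of the
    low-to-high priority sequence; the first member of each pair is the
    lower-priority category. `categories` is a list of (name, priority)."""
    seq = sorted(categories, key=lambda c: c[1])
    return [(seq[i][0], seq[-1 - i][0]) for i in range(len(seq) // 2)]
```

With `{"wide": 4, "narrow": 2}` and a maximum total of 8, the loop swaps twice and returns the pre-update matrix `{"wide": 3, "narrow": 3}`, mirroring the "matrix before the current update" rule in the text.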
In one possible design, for step S120, the identity authentication information may include biometric information, such as fingerprint feature information, face feature information, iris feature information, voice feature information, and the like, which is not specifically limited herein, and one or more combinations of the biometric information and the voice feature information may be flexibly selected according to actual hardware components in the automobile, and when a plurality of combinations are selected, the accuracy of identity verification may be improved. On the basis, when the driving object is recognized to sit at the driving position, the biological characteristic information of the driving object can be collected, and the preset area characteristic points in each monitoring area are obtained according to the biological characteristic information and the corresponding relation between each piece of biological characteristic information which is configured in advance and the preset area characteristic points in each detection area.
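The preconfigured correspondence between biometric identities and preset region feature points described above amounts to a lookup table. The table contents and key format below are purely illustrative:

```python
# Preconfigured mapping from a driver's biometric identity to the preset
# region feature points in each detection region (all entries illustrative).
PRESET_POINT_TABLE = {
    "fingerprint:driver_01": {
        "hand_region": ["wrist", "index_joint"],
        "head_region": ["left_eye", "right_eye"],
    },
}

def preset_points_for(biometric_id, detection_region):
    """Resolve the preset region feature points once the seated driver has
    been identified; unknown drivers or regions yield an empty list."""
    return PRESET_POINT_TABLE.get(biometric_id, {}).get(detection_region, [])
```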
In a possible design, for step S120, in order to accurately determine the first state floating change result of the preset region feature point and avoid the influence of inertial floating features on the result during position floating, in this embodiment, for the preset region feature point in each monitoring region, a three-dimensional fixed point matched with the preset region feature point is respectively obtained. When the three-dimensional fixed point continuously falls within the coordinate range corresponding to one three-dimensional space region in the monitoring region during a preset time period, that three-dimensional space region is obtained as the target three-dimensional space region.
On the basis, whether the area range of the target three-dimensional space area is the same as the area range input by the preset automatic driving control model is further judged.
If the two region ranges differ, the region range of the target three-dimensional space region is scaled to a three-dimensional space region consistent with the model input region range of the automatic driving control model, and the result is input into the automatic driving control model.
Then, an automatic driving control model is adopted to calculate the input three-dimensional space region, floating change information corresponding to the input three-dimensional space region is obtained, each floating position of a preset region feature point in the target three-dimensional space region is tracked, a floating feature vector of each floating position in the target three-dimensional space region is obtained, and then a region, with the floating position frequency being larger than a preset threshold value, in the floating change information corresponding to the input three-dimensional space region is determined as a floating region.
Then, the vector value of each floating position in the input three-dimensional space region may be converted to obtain the floating feature vector of each floating position in the input three-dimensional space region. A first floating vector mean value of the whole three-dimensional space region is calculated from the floating feature vector of each floating position in the target three-dimensional space region, and a second floating vector mean value of the floating region is calculated from the floating feature vector of each floating position in the floating region. Then, the first floating vector mean value, the second floating vector mean value, and a preset coefficient are calculated to obtain a floating reference coefficient of the floating region. The ratio of the floating feature vector of each floating position in the target three-dimensional space region to the floating reference coefficient is calculated, the first floating strength of each floating position in the target three-dimensional space region is obtained from this ratio, and the first floating strength is then calculated together with the floating change information of each floating position in the target three-dimensional space region to obtain the floating strength of each floating position in the target three-dimensional space region.
Or, in another possible design, the embodiment may further calculate a ratio of the floating feature vector of each floating position in the target three-dimensional space region to the floating reference coefficient to obtain a first floating strength of each floating position in the target three-dimensional space region, calculate the first floating strength of each floating position in the target three-dimensional space region according to a preset floating range, obtain a second floating strength of each floating position in the target three-dimensional space region, where a difference between the second floating strength and the first floating strength is smaller than the preset floating range, calculate the second floating strength and the floating change information of each floating position in the target three-dimensional space region, and obtain the floating strength of each floating position in the target three-dimensional space region.
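The floating-strength computation in the two preceding paragraphs can be sketched as below. The disclosure does not fix how the two mean values combine with the preset coefficient, so a convex combination is assumed purely for illustration, and vectors are reduced to scalars for brevity:

```python
def floating_strengths(all_vectors, floating_vectors, preset_coeff=0.5):
    """First floating strength of each floating position (illustrative).

    `all_vectors` covers every floating position in the target region;
    `floating_vectors` covers only the positions inside the floating region.
    """
    mean_all = sum(all_vectors) / len(all_vectors)                  # first mean
    mean_floating = sum(floating_vectors) / len(floating_vectors)   # second mean
    # Floating reference coefficient: assumed convex combination of the means.
    reference = preset_coeff * mean_all + (1 - preset_coeff) * mean_floating
    # First floating strength: ratio of each vector to the reference coefficient.
    return [v / reference for v in all_vectors]
```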
Therefore, the present embodiment may determine a target coefficient of each floating position in the target three-dimensional space region according to the target feature point of the specified space position, the floating strength, and the floating change information, and calculate a ratio of the floating strength of each floating position in the target three-dimensional space region to a preset constant, where the target coefficient may be a value obtained by multiplying the feature vector value of the target feature point of the specified space position by the floating strength and dividing the multiplied value by the floating change information.
And then, calculating the product of the ratio of the floating strength of each floating position to a preset constant and the corresponding target coefficient, and obtaining the first state floating result of each floating position in the target three-dimensional space region.
And then, carrying out color editing processing on the target three-dimensional space area according to the first state floating result of each floating position to output the target three-dimensional space area.
Or, in another case, this embodiment may also calculate the ratio of the floating intensity of each floating position in the target three-dimensional space region to a preset constant, and calculate the product of that ratio and the corresponding target coloring value, so as to obtain the first state floating result of each floating position in the target three-dimensional space region.
Therefore, the embodiment can calculate the first state floating result of each floating position in the target three-dimensional space region, the target three-dimensional space region and the floating change information, obtain the second state floating result of each floating position in the target three-dimensional space region, and arrange the second state floating results of each floating position to obtain the first state floating change result of the preset region feature point.
In a possible design, regarding step S130, in order to increase the accuracy of the frequent feature points, and considering that some region feature point movements may actually be abnormal movements of other items such as the driving object's clothing, this embodiment first acquires the historical driving information of the driving object; the historical driving information may include a plurality of pieces of position change information respectively corresponding to a plurality of region feature points.
Then, when it is determined that a plurality of position change information corresponding to any one of the area feature points all satisfy a preset position change condition, according to the position change information of the area feature points and the amplitude of the position change interval, determining an initial position of a first position change interval matched with the preset position change condition, wherein the preset position change condition includes: the position change amplitude is larger than a set amplitude threshold value.
On the basis, the initial positions of the plurality of position change sections matched with the preset position change condition corresponding to the area feature points can be determined according to the position change information of the area feature points, the amplitude of the position change sections, the initial position of the first position change section and the number of the preset position change sections.
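The derivation of the interval initial positions can be sketched as below. The uniform end-to-end spacing of intervals is our assumption for illustration; the patent only states that the starts follow from the first interval's start, the interval amplitude, and the preset interval count.

```python
def interval_start_positions(first_start, amplitude, count):
    """Assume `count` consecutive position change intervals of equal
    width `amplitude`, laid end to end from `first_start`."""
    return [first_start + i * amplitude for i in range(count)]

# Hypothetical values: first matching interval starts at 100,
# interval amplitude 50, preset count of 4 intervals.
starts = interval_start_positions(first_start=100, amplitude=50, count=4)
# starts: [100, 150, 200, 250]
```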
If the position of an area feature point at a tracking node matches the initial position of a target position change interval, and the tracking node is the first tracking node of the target position change interval, the area feature points matched with the previous position change interval adjacent to the target position change interval are acquired as screening area feature points, and an area feature point at the tracking node that is not among the screening area feature points is identified as the target area feature point matched with the target position change interval.
And if the tracking node is not the first tracking node of the target position change interval, the target area feature point matched with the target position change interval is acquired and identified in the tracking node, and at least one active position node of the target area feature point is identified, where an area feature point may correspond to a plurality of position change intervals.
Then, within the position change interval, according to the position information of the at least one active position node of the target area feature point at the plurality of tracking nodes, the moving spatial distance of the at least one active position node between any two adjacent tracking nodes in the interval, and the position vector of the at least one active position node within the interval, are calculated.
Then, the duration of the position change interval may be counted, and the average change frequency and the change frequency variance of the target region feature point in the position change interval may be determined according to the moving spatial distance and the position vector. Based on the average change frequency and the change frequency variance, the frequent feature parameter of the target region feature point in the position change interval may be calculated. On this basis, the frequent feature score of each region feature point may be calculated according to the frequent feature parameters of that region feature point in its matched position change intervals, and region feature points whose frequent feature score is greater than a set score are determined as frequent region feature points. For example, the frequent feature parameters of each region feature point in its matched position change intervals may be weighted and summed to obtain that region feature point's frequent feature score.
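The statistics in this step can be sketched as follows. The definitions of change frequency (distance per unit time), the variance formula, and the weights used in the weighted sum are illustrative assumptions; the patent does not fix these formulas.

```python
def interval_stats(distances, duration):
    """Per-segment change frequencies (moving distance per unit time)
    over an interval of the given duration; return their mean and
    population variance."""
    freqs = [d / duration for d in distances]
    mean = sum(freqs) / len(freqs)
    var = sum((f - mean) ** 2 for f in freqs) / len(freqs)
    return mean, var

def frequent_feature_score(params, weights):
    """Weighted sum of per-interval frequent feature parameters."""
    return sum(p * w for p, w in zip(params, weights))

# Hypothetical moving distances between adjacent tracking nodes,
# over an interval lasting 2.0 time units:
mean, var = interval_stats([2.0, 4.0, 6.0], duration=2.0)
score = frequent_feature_score([mean, var], weights=[0.7, 0.3])
```

A feature point whose `score` exceeds the set score threshold would then be classified as a frequent region feature point.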
Based on the above design, the present embodiment considers the case where some regional feature points may be abnormal movements of other components such as clothes of the driving object, and thus the accuracy and reliability of frequent feature points can be effectively improved through the above further screening process.
It should be particularly noted that after determining the frequent region feature points in each monitoring region, the present embodiment may further obtain a second state floating change result of the frequent region feature points according to a similar operation manner of obtaining the first state floating change result of the preset region feature points in the foregoing embodiment, which is not described herein again.
In a possible design, further referring to step S140, after obtaining the first state floating change result and the second state floating change result, the embodiment may match the state floating result of each first floating position in the first state floating change result against the state floating result of the corresponding matched second floating position in the second state floating change result, so as to obtain a plurality of matching degrees. Each matched second floating position is the second floating position whose rank in the arrangement order of the second state floating change result corresponds to that of the first floating position, and each matching degree is determined according to the degree of coincidence between the state floating result of the first floating position and that of its matched second floating position.
Thus, an automatic driving control command for the automobile can be generated according to the plurality of matching degrees.
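A minimal sketch of the position-wise matching is given below. Pairing positions by rank follows the text; the particular coincidence measure (ratio of the smaller to the larger magnitude, giving a similarity in [0, 1]) is our assumption, as the patent only requires some coincidence degree between the paired state floating results.

```python
def matching_degrees(first_results, second_results):
    """Pair each first floating position with the second floating position
    at the same rank, and score each pair by a coincidence measure."""
    degrees = []
    for a, b in zip(first_results, second_results):
        hi = max(abs(a), abs(b))
        # Identical results (including both zero) coincide fully.
        degrees.append(1.0 if hi == 0 else min(abs(a), abs(b)) / hi)
    return degrees

# Hypothetical state floating results for three paired positions:
degs = matching_degrees([0.3, 0.2, 0.8], [0.3, 0.4, 0.4])
# degs approximately [1.0, 0.5, 0.5]
```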
For example, in one possible design, the present embodiment may determine a first number of matching degrees among the plurality of matching degrees that are lower than a first set matching degree, a second number that are greater than a second set matching degree, and a third number that fall in the interval between the first set matching degree and the second set matching degree.
And if the first number is larger than the sum of the second number and the third number, generating a first automatic driving control instruction for the automobile, wherein the first automatic driving control instruction is used for controlling the automobile to enter a preset deceleration mode.
And if the third number is larger than the sum of the first number and the second number, generating a second automatic driving control instruction for the automobile, wherein the second automatic driving control instruction is used for controlling the automobile to enter a preset acceleration mode.
And if the second number is greater than the sum of the first number and the third number, generating a third automatic driving control instruction for the automobile, wherein the third automatic driving control instruction is used for controlling the automobile to enter a preset constant speed mode.
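The three branches above can be sketched as a single decision function. The threshold values and the string mode names are illustrative placeholders; the patent does not give numeric thresholds, and ties (no bucket dominating) are left to other driving strategies.

```python
def driving_mode(degrees, low=0.4, high=0.7):
    """Bucket matching degrees against two set matching degrees and
    compare the bucket sizes, per the three branches in the text."""
    first = sum(1 for d in degrees if d < low)    # below first set degree
    second = sum(1 for d in degrees if d > high)  # above second set degree
    third = len(degrees) - first - second         # in the interval between
    if first > second + third:
        return "deceleration"    # preset deceleration mode
    if third > first + second:
        return "acceleration"    # preset acceleration mode
    if second > first + third:
        return "constant_speed"  # preset constant speed mode
    return None  # no branch dominates; defer to other strategies

mode = driving_mode([0.1, 0.2, 0.3, 0.8, 0.5])
# mode: "deceleration" (three low degrees outnumber one high plus one middle)
```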
It should be noted that, during automatic driving, entering the preset deceleration mode does not mean that the vehicle decelerates continuously; rather, the vehicle's speed fluctuates smoothly down from the current speed within a certain speed range. Similarly, entering the preset acceleration mode does not mean that the vehicle accelerates continuously, but that its speed fluctuates smoothly up from the current speed within a certain speed range. Entering the preset constant speed mode can be understood as the vehicle's speed fluctuating smoothly within a minimal range around the current speed.
It is understood that the first number, the second number, and the third number may fluctuate in real time during actual automatic driving, and the automatic driving apparatus 100 may adaptively switch the automatic driving control command at any time. In addition, other automatic driving strategies may exist in the automatic driving process, and the scheme provided by this embodiment may be executed simultaneously with the other automatic driving strategies, or may also be executed with a certain precondition, and may be flexibly designed by a person skilled in the art according to implementation possibilities of the scheme, and is not specifically limited herein.
Fig. 3 is a schematic diagram of functional modules of an automatic driving decision processing apparatus 300 according to an embodiment of the present application, and the present embodiment may divide the functional modules of the automatic driving decision processing apparatus 300 according to the foregoing method embodiment. For example, the functional modules may be divided so that each corresponds to one function, or two or more functions may be integrated into one processing module. The integrated module can be realized in hardware, or in the form of a software functional module. It should be noted that the division of modules in the present application is schematic and is only a logical functional division; other division manners are possible in actual implementation. For example, in the case of dividing the function modules by function, the automatic driving decision processing apparatus 300 shown in fig. 3 is only a schematic device diagram. The automatic driving decision processing apparatus 300 may include an obtaining module 310, a first determining module 320, a second determining module 330, and a generating module 340, and the functions of these functional modules are described in detail below.
The obtaining module 310 is configured to obtain status information of the driving object in the monitoring area of each status monitoring device, divide the status information in each monitoring area according to a predetermined status category, and generate status summary information of each status category respectively.
The first determining module 320 is configured to determine preset region feature points in each monitoring region according to the identity authentication information of the driving object, and determine floating change information of a floating region of the preset region feature points in the state summary information of the corresponding state category for the preset region feature points, respectively, to obtain a first state floating change result of the preset region feature points, where the preset region feature points are region feature points that are pre-matched with the identity authentication information of the driving object.
The second determining module 330 is configured to determine frequent region feature points in each monitoring region according to historical driving information of the driving object, obtain floating tracks of the frequent region feature points respectively for the frequent region feature points in each monitoring region, determine floating change information of the floating tracks in the status summary information of the corresponding status category, and obtain a second status floating change result of the frequent region feature points, where the frequent region feature points are region feature points whose change frequency in the historical driving information of the driving object is greater than a set frequency threshold, and the change frequency is used to indicate a change degree of the region feature points in unit time.
And the generating module 340 is configured to generate an automatic driving control instruction for the vehicle according to a matching relationship between the first state floating change result and the second state floating change result.
Further, fig. 4 is a schematic structural diagram of an automatic driving device 100 for executing the automatic driving decision processing method according to an embodiment of the present application. As shown in fig. 4, the autopilot device 100 may include a network interface 110, a machine-readable storage medium 120, a processor 130, and a bus 140. The processor 130 may be one or more, and one processor 130 is illustrated in fig. 4 as an example. The network interface 110, the machine-readable storage medium 120, and the processor 130 may be connected by a bus 140 or otherwise, as exemplified by the connection by the bus 140 in fig. 4.
The machine-readable storage medium 120 is a computer-readable storage medium, and can be used to store software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the automatic driving decision processing method in the embodiment of the present application (for example, the obtaining module 310, the first determining module 320, the second determining module 330, and the generating module 340 of the automatic driving decision processing apparatus 300 shown in fig. 3). The processor 130 executes various functional applications and data processing of the terminal device by running the software programs, instructions and modules stored in the machine-readable storage medium 120, thereby implementing the above-mentioned automatic driving decision processing method, which is not described again here.
The machine-readable storage medium 120 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the machine-readable storage medium 120 may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. Volatile memory can be Random Access Memory (RAM), which acts as an external cache. By way of example, but not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DR RAM). It should be noted that the memories of the systems and methods described herein are intended to comprise, without being limited to, these and any other suitable types of memory. In some examples, the machine-readable storage medium 120 may further include memory located remotely from the processor 130, which may be connected to the autopilot device 100 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor 130 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 130. The processor 130 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor.
The autopilot device 100 may interact with other equipment (e.g., condition monitoring device 200) via the network interface 110. Network interface 110 may be a circuit, bus, transceiver, or any other device that may be used to exchange information. Processor 130 may send and receive information using network interface 110.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the application occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or another programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, autopilot device, or data center to another website, computer, autopilot device, or data center via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the embodiments of the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the embodiments of the application fall within the scope of the claims and their equivalents, the application is intended to embrace these alterations and modifications as well.

Claims (10)

1. An autopilot decision-making device for use with an autopilot device communicatively coupled to a plurality of condition monitoring devices within a vehicle, the device comprising:
the acquisition module is used for acquiring the state information of the driving object in the monitoring area of each state monitoring device, dividing the state information in each monitoring area according to preset state categories and respectively generating state summary information of each state category, wherein the preset categories comprise a clutch operation category, a steering wheel operation category and an electrical equipment control category;
the first determining module is used for determining preset region feature points in each monitoring region according to the identity authentication information of the driving object, and respectively determining floating change information of a floating region of the preset region feature points in state summary information of corresponding state types aiming at the preset region feature points in each monitoring region to obtain a first state floating change result of the preset region feature points, wherein the preset region feature points are region feature points matched with the identity authentication information of the driving object in advance, the identity authentication information comprises biological feature information, and the biological feature information is fingerprint feature information, face feature information, iris feature information or voice feature information;
a second determining module, configured to determine frequent region feature points in each monitored region according to historical driving information of the driving object, obtain, for the frequent region feature points in each monitored region, floating tracks of the frequent region feature points, respectively, determine floating change information of the floating tracks in state summary information of corresponding state types, and obtain a second state floating change result of the frequent region feature points, where the frequent region feature points are region feature points whose change frequency in the historical driving information of the driving object is greater than a set frequency threshold, and the change frequency is used to indicate a change degree of the region feature points in unit time;
and the generating module is used for generating an automatic driving control instruction for the automobile according to the matching relation between the first state floating change result and the second state floating change result.
2. An autopilot system comprising an autopilot device and a plurality of condition monitoring devices in an automobile communicatively coupled to the autopilot device, wherein:
the state monitoring device is used for monitoring state information of the driving object in the monitored area;
the automatic driving device is used for acquiring the state information of a driving object in the monitoring area of each state monitoring device, dividing the state information in each monitoring area according to preset state categories and respectively generating state summary information of each state category, wherein the preset categories comprise a clutch operation category, a steering wheel operation category and an electrical equipment control category;
the automatic driving device is used for determining preset region feature points in each monitoring region according to identity authentication information of the driving object, respectively determining floating change information of floating regions of the preset region feature points in state summary information of corresponding state types aiming at the preset region feature points in each monitoring region, and obtaining a first state floating change result of the preset region feature points, wherein the preset region feature points are region feature points matched with the identity authentication information of the driving object in advance, the identity authentication information comprises biological feature information, and the biological feature information is fingerprint feature information, human face feature information, iris feature information or voice feature information;
the automatic driving device is used for determining frequent region feature points in each monitoring region according to historical driving information of the driving object, respectively acquiring floating tracks of the frequent region feature points aiming at the frequent region feature points in each monitoring region, determining floating change information of the floating tracks in state summary information of corresponding state types, and obtaining a second state floating change result of the frequent region feature points, wherein the frequent region feature points are region feature points of which the change frequency in the historical driving information of the driving object is greater than a set frequency threshold value, and the change frequency is used for expressing the change degree of the region feature points in unit time;
and the automatic driving device is used for generating an automatic driving control instruction for the automobile according to the matching relation between the first state floating change result and the second state floating change result.
3. An autopilot decision processing method, for use with an autopilot device communicatively coupled to a plurality of condition monitoring devices within a vehicle, the method comprising:
acquiring state information of a driving object in a monitoring area of each state monitoring device, dividing the state information in each monitoring area according to preset state categories, and respectively generating state summary information of each state category, wherein the preset categories comprise a clutch operation category, a steering wheel operation category and an electrical equipment control category;
determining preset region feature points in each monitoring region according to the identity authentication information of the driving object, and respectively determining floating change information of a floating region of the preset region feature points in state summary information of corresponding state types aiming at the preset region feature points in each monitoring region to obtain a first state floating change result of the preset region feature points, wherein the preset region feature points are region feature points which are matched with the identity authentication information of the driving object in advance, the identity authentication information comprises biological feature information, and the biological feature information is fingerprint feature information, human face feature information, iris feature information or voice feature information;
determining frequent region feature points in each monitoring region according to historical driving information of the driving object, respectively obtaining floating tracks of the frequent region feature points aiming at the frequent region feature points in each monitoring region, determining floating change information of the floating tracks in state summarizing information of corresponding state types, and obtaining a second state floating change result of the frequent region feature points, wherein the frequent region feature points are region feature points of which the change frequency in the historical driving information of the driving object is greater than a set frequency threshold value, and the change frequency is used for expressing the change degree of the region feature points in unit time;
and generating an automatic driving control instruction for the automobile according to the matching relation between the first state floating change result and the second state floating change result.
4. The automated driving decision processing method according to claim 3, wherein the step of dividing the status information in each monitoring area according to a predetermined status category and generating status summary information for each status category respectively comprises:
acquiring state category characteristic points corresponding to each preset state category, forming a characteristic point set of each preset state category, and acquiring coincidence characteristic point information of the target characteristic points of each monitoring area and the characteristic points of the characteristic point set;
calculating the number of key feature points of each target state category according to the superposition feature point information of the target feature point number and the feature point number of the feature point set, and selecting state category feature points from the feature point set according to the number of the key feature points of each target state category to obtain an initial feature point matrix;
if the total feature point distribution quantity of the initial feature point matrix is greater than the maximum total feature point distribution quantity meeting the total feature point distribution quantity requirement, reducing the coarse-range key feature points in the initial feature point matrix by a first set quantity, and increasing the fine-range key feature points in the initial feature point matrix by the first set quantity, wherein the fine-range key feature points refer to key feature points of which the unit intensity degree of the key feature points in the detection area is less than the set degree, and the coarse-range key feature points refer to key feature points of which the unit intensity degree of the key feature points in the detection area is not less than the set degree;
calculating the total characteristic point distribution quantity of the updated initial characteristic point matrix;
if the total characteristic point distribution quantity of the updated initial characteristic point matrix is larger than the maximum total characteristic point distribution quantity, performing the above processing on the updated initial characteristic point matrix again;
if the total feature point distribution quantity of the initial feature point matrix after the updating is less than or equal to the maximum total feature point distribution quantity, taking the initial feature point matrix before the updating as a first updating matrix, and sequencing all the target state classes according to the sequence of the state classes from low priority to high priority to obtain a target state class sequence;
grouping the target state categories according to the target state category sequence, wherein each group comprises a first state category and a second state category which are arranged at two sides of a target position of the target state category sequence and consistent with the difference of the target position, and the priority of the first state category is smaller than that of the second state category;
and according to the sequence from low priority to high priority of the difference with the target position, sequentially taking each packet as a target packet, and performing the following second updating processing on the target packet: increasing the key feature points of the first state category of the target grouping in the first update matrix by one, and reducing the key feature points of the second state category of the target grouping in the first update matrix by one;
judging whether the total characteristic point distribution quantity of the updated first updating matrix meets the total characteristic point distribution quantity requirement or not;
if the total characteristic point distribution quantity of the updated first updating matrix meets the total characteristic point distribution quantity requirement, taking the updated first updating matrix as a final characteristic point matrix;
if the total characteristic point distribution quantity of the updated first updating matrix does not meet the total characteristic point distribution quantity requirement, taking the next group as a new target group, and performing the second updating processing on the new target group;
if the total feature point distribution quantity of the initial feature point matrix is less than the minimum total feature point distribution quantity meeting the total feature point distribution quantity requirement, performing the following third updating processing on the initial feature point matrix: increasing the coarse range key feature points in the initial feature point matrix by a first set number, and decreasing the fine range key feature points in the initial feature point matrix by the first set number;
calculating the total characteristic point distribution quantity of the updated initial characteristic point matrix;
if the total characteristic point distribution quantity of the initial characteristic point matrix after the updating is less than the minimum total characteristic point distribution quantity, executing the third updating treatment on the initial characteristic point matrix after the updating again;
if the total feature point distribution quantity of the initial feature point matrix after the updating is greater than or equal to the minimum total feature point distribution quantity, taking the initial feature point matrix before the updating as a second updating matrix, and sequencing all the target state classes according to the sequence of the state classes from low priority to high priority to obtain a target state class sequence;
grouping the target state categories according to the target state category sequence, wherein each group comprises a first state category and a second state category located on either side of a target position in the target state category sequence at equal distances from that target position, the priority of the first state category being lower than that of the second state category;
taking each group in turn as the target group, in order of increasing distance from the target position, and performing the following fourth updating processing on the target group: decreasing the key feature points of the first state category of the target group in the second update matrix by one, and increasing the key feature points of the second state category of the target group in the second update matrix by one;
judging whether the total feature point distribution quantity of the updated second update matrix meets the total feature point distribution quantity requirement;
if the total feature point distribution quantity of the updated second update matrix meets the requirement, taking the updated second update matrix as the final feature point matrix;
if the total feature point distribution quantity of the updated second update matrix does not meet the requirement, taking the next group as a new target group, and performing the fourth updating processing on the new target group;
and classifying, for each target state category, the state information of each feature point of that category in the final feature point matrix into the state summary information of that state category.
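The grouped rebalancing loop above (the "fourth updating processing") can be sketched as follows. The claim does not fix a data layout or say how the total distribution quantity is computed, so the dictionary representation, the priority-weighted total, and all names here are illustrative assumptions; moving a point from a lower-priority category to a higher-priority one only changes a weighted total, which is why a weighting is assumed.

```python
# Sketch of the "fourth updating processing": one key feature point is
# shifted, group by group, from each group's lower-priority state category
# to its higher-priority one, rechecking the (here priority-weighted)
# total distribution quantity after each move. The weighting is an
# illustrative assumption; the claim only fixes the move-and-recheck loop.

def fourth_update(matrix, group):
    """Move one key feature point from the group's first (lower-priority)
    state category to its second (higher-priority) state category."""
    low, high = group
    matrix[low] -= 1
    matrix[high] += 1

def weighted_total(matrix, priority):
    """Priority-weighted total feature point distribution quantity."""
    return sum(priority[cat] * count for cat, count in matrix.items())

def rebalance(matrix, groups, priority, required_total):
    """Apply the fourth update group by group (groups are assumed to be
    pre-sorted by increasing distance from the target position) until
    the weighted total meets the requirement."""
    for group in groups:
        fourth_update(matrix, group)
        if weighted_total(matrix, priority) >= required_total:
            return matrix
    return matrix  # requirement not reached after all groups
```

For example, with priorities {"idle": 1, "alert": 3} and counts {"idle": 4, "alert": 2}, one move raises the weighted total from 10 to 12 and the loop stops after the first group.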
5. The automated driving decision processing method according to claim 3, wherein the identification information includes biometric information, and the step of determining the preset area feature points in the respective monitoring areas according to the identification information of the driving object includes:
collecting the biometric information of the driving object;
and obtaining the preset area feature points in each monitoring area according to the collected biometric information and a pre-configured correspondence between each preset area feature point in each monitoring area and each piece of preset biometric information.
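Claim 5 amounts to a lookup from a biometric identity to a pre-configured set of feature points per monitoring area. A minimal sketch, in which the table contents, driver identifiers, and area names are invented for illustration and not taken from the patent:

```python
# Minimal sketch of the claim-5 lookup: biometric information selects a
# pre-configured set of preset area feature points per monitoring area.
# All identifiers and coordinates below are illustrative assumptions.

PRESET_FEATURE_POINTS = {
    # biometric id -> {monitoring area -> feature points}
    "driver_a": {"cabin": [(0.2, 0.4), (0.5, 0.1)], "wheel": [(0.9, 0.3)]},
    "driver_b": {"cabin": [(0.1, 0.7)], "wheel": [(0.8, 0.2), (0.6, 0.6)]},
}

def preset_points_for(biometric_id, area):
    """Return the pre-configured feature points for this driver and
    monitoring area, or an empty list if no correspondence exists."""
    return PRESET_FEATURE_POINTS.get(biometric_id, {}).get(area, [])
```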
6. The automatic driving decision processing method according to claim 3, wherein the step of obtaining, for each preset region feature point, a first state floating change result of the feature point by determining floating change information of a floating region of the feature point in the state summary information of the corresponding state category comprises:
for the preset region feature points in each monitoring region, respectively acquiring the three-dimensional fixed points matched with the preset region feature points, and, when the three-dimensional fixed points fall continuously within the coordinate range of one three-dimensional space region in the monitoring region for a preset time period, taking that region as the target three-dimensional space region;
judging whether the region range of the target three-dimensional space region is the same as the region range accepted as input by a preset automatic driving control model;
if the region ranges differ, scaling the target three-dimensional space region to a three-dimensional space region consistent with the input region range of the automatic driving control model, and inputting it into the automatic driving control model;
processing the input three-dimensional space region with the automatic driving control model to obtain the floating change information corresponding to the input three-dimensional space region;
tracking each floating position of the preset region feature points in the target three-dimensional space region to obtain a floating feature vector for each floating position in the target three-dimensional space region;
determining, as the floating region, the region in the floating change information corresponding to the input three-dimensional space region in which the floating position frequency is greater than a preset threshold;
converting the vector value of each floating position in the input three-dimensional space region to obtain a floating feature vector for each floating position in the input three-dimensional space region;
calculating a first floating vector mean over the whole target three-dimensional space region from the floating feature vectors of its floating positions;
calculating a second floating vector mean over the floating region from the floating feature vectors of the floating positions in the floating region;
combining the first floating vector mean, the second floating vector mean, and a preset coefficient to obtain a floating reference coefficient of the floating region; calculating the ratio of the floating feature vector of each floating position in the target three-dimensional space region to the floating reference coefficient, and obtaining a first floating strength of each floating position from that ratio;
combining the first floating strength of each floating position with the floating change information to obtain the floating strength of each floating position in the target three-dimensional space region;
or: calculating the ratio of the floating feature vector of each floating position in the target three-dimensional space region to the floating reference coefficient to obtain the first floating strength of each floating position; adjusting the first floating strength of each floating position according to a preset floating range to obtain a second floating strength whose difference from the first floating strength is smaller than the preset floating range; and combining the second floating strength of each floating position with the floating change information to obtain the floating strength of each floating position in the target three-dimensional space region;
determining a target coefficient for each floating position in the target three-dimensional space region according to a target feature point of a specified space position, the floating strength, and the floating change information, and calculating the ratio of the floating strength of each floating position to a preset constant, wherein the target coefficient is the value obtained by multiplying the feature vector value of the target feature point of the specified space position by the floating strength and dividing by the floating change information;
multiplying the ratio of the floating strength of each floating position to the preset constant by the corresponding target coefficient to obtain the first state floating result of each floating position in the target three-dimensional space region;
performing color editing processing on the target three-dimensional space region according to the first state floating result of each floating position, and outputting the target three-dimensional space region;
or: calculating the ratio of the floating strength of each floating position in the target three-dimensional space region to a preset constant;
multiplying the ratio of the floating strength of each floating position to the preset constant by the corresponding target coloring value to obtain the first state floating result of each floating position in the target three-dimensional space region;
combining the first state floating result of each floating position, the target three-dimensional space region, and the floating change information to obtain a second state floating result of each floating position in the target three-dimensional space region;
and arranging the second state floating results of the floating positions to obtain the first state floating change result of the preset region feature point.
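The core arithmetic of claim 6, reduced to scalar feature values, can be sketched as follows. The claim does not specify how the two means and the preset coefficient combine into the floating reference coefficient, so the weighted sum used here is an assumption made purely for illustration; the function and parameter names are likewise invented.

```python
# Sketch of the claim-6 floating-strength arithmetic on scalar feature
# values: a whole-region mean (first mean) and a floating-region mean
# (second mean) are combined with a preset coefficient into a reference
# coefficient, and each position's first floating strength is its
# floating feature vector divided by that reference. The weighted-sum
# combination is an illustrative assumption, not stated by the claim.

def floating_reference(vectors, region_vectors, preset_coeff):
    """Reference coefficient from the whole-region mean, the
    floating-region mean, and a preset mixing coefficient in [0, 1]."""
    first_mean = sum(vectors) / len(vectors)
    second_mean = sum(region_vectors) / len(region_vectors)
    return preset_coeff * first_mean + (1 - preset_coeff) * second_mean

def first_floating_strengths(vectors, reference):
    """First floating strength of each floating position: its floating
    feature vector divided by the floating reference coefficient."""
    return [v / reference for v in vectors]
```

With whole-region values [2.0, 4.0, 6.0], floating-region values [4.0, 6.0], and coefficient 0.5, the reference works out to 0.5·4.0 + 0.5·5.0 = 4.5.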
7. The automated driving decision processing method according to any one of claims 3 to 6, wherein the step of determining frequent region feature points within the respective monitored regions from the historical driving information of the driving object includes:
acquiring the historical driving information of the driving object, wherein the historical driving information comprises a plurality of pieces of position change information corresponding respectively to a plurality of region feature points;
when it is determined that the pieces of position change information corresponding to any region feature point all meet a preset position change condition, determining the initial position of the first position change interval matched with the condition according to the position change information of that feature point and the amplitude of the position change interval, wherein the preset position change condition is that the position change amplitude is greater than a set amplitude threshold;
determining, for the region feature point, the plurality of position change intervals matched with the preset position change condition according to the position change information of the feature point, the amplitude of the position change interval, the initial position of the first position change interval, and the preset number of position change intervals;
if the position of a region feature point at a tracking node matches the initial position of a target position change interval, and the tracking node is the first tracking node of the target position change interval, acquiring the region feature points matched with the previous position change interval adjacent to the target position change interval as screening region feature points, and identifying a region feature point at the tracking node that is not among the screening region feature points as the target region feature point matched with the target position change interval;
if the tracking node is not the first tracking node of the target position change interval, acquiring the target region feature point matched with the target position change interval, identifying that feature point at the tracking node, and identifying at least one active position node of the target region feature point, wherein a region feature point may correspond to a plurality of position change intervals;
within the position change interval, calculating, from the position information of the at least one active position node of the target region feature point at a plurality of tracking nodes, the movement space distance between any two adjacent tracking nodes of the active position node and the position vector of the active position node within the position change interval;
counting the duration of the position change interval, determining the average change frequency and the change frequency variance of the target region feature point within the position change interval according to the movement space distance and the position vector, and calculating the frequent feature parameter of the target region feature point within the position change interval from the average change frequency and the change frequency variance;
and calculating the frequent feature score of each region feature point from its frequent feature parameters in the matched position change intervals, and determining the region feature points whose frequent feature score is greater than a set score as the frequent region feature points.
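The final scoring step of claim 7 can be sketched as follows. The claim says the frequent feature parameter is computed from the average change frequency and the change frequency variance but does not fix the formula; the ratio mean / (1 + variance) used below, which rewards frequent but steady movement, is an illustrative assumption, as are all names.

```python
import statistics

# Sketch of the claim-7 frequent-region scoring: a per-interval
# parameter is derived from the mean and variance of change frequencies,
# per-point parameters are summed into a frequent feature score, and
# points above a set score are kept. The mean/(1+variance) formula is an
# illustrative assumption; the claim only names the two inputs.

def frequent_feature_parameter(change_frequencies):
    """Parameter for one position change interval from the per-node
    change frequencies observed within it."""
    mean = statistics.mean(change_frequencies)
    var = statistics.pvariance(change_frequencies)
    return mean / (1.0 + var)

def frequent_region_points(intervals_by_point, score_threshold):
    """Score each region feature point over its matched position change
    intervals and keep those whose total score exceeds the threshold."""
    frequent = []
    for point, intervals in intervals_by_point.items():
        score = sum(frequent_feature_parameter(freqs) for freqs in intervals)
        if score > score_threshold:
            frequent.append(point)
    return frequent
```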
8. The automated driving decision processing method according to claim 1, wherein the step of generating an automated driving control command for the vehicle based on a matching relationship between the first state floating change result and the second state floating change result comprises:
matching the state floating result of each first floating position in the first state floating change result against the state floating result of the matched second floating position in the second state floating change result to obtain a plurality of matching degrees, wherein each matched second floating position in the second state floating change result corresponds to the first floating position at the same place in the arrangement order of the state floating change results, and each matching degree is determined from the degree of coincidence between the state floating result of the first floating position and that of the matched second floating position;
and generating the automatic driving control instruction for the automobile according to the plurality of matching degrees.
9. The automated driving decision processing method of claim 1, wherein the step of generating automated driving control commands for the vehicle based on the plurality of matching degrees comprises:
determining, among the plurality of matching degrees, a first number that are lower than a first set matching degree, a second number that are greater than a second set matching degree, and a third number that lie between the first set matching degree and the second set matching degree;
if the first number is larger than the sum of the second number and the third number, generating a first automatic driving control instruction for the automobile, wherein the first automatic driving control instruction is used for controlling the automobile to enter a preset deceleration mode;
if the third quantity is larger than the sum of the first quantity and the second quantity, generating a second automatic driving control instruction for the automobile, wherein the second automatic driving control instruction is used for controlling the automobile to enter a preset acceleration mode;
and if the second quantity is greater than the sum of the first quantity and the third quantity, generating a third automatic driving control instruction for the automobile, wherein the third automatic driving control instruction is used for controlling the automobile to enter a preset constant speed mode.
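The decision rule of claim 9 is a simple majority test over three counts and can be sketched directly. The mode labels returned below are names chosen here for readability, not taken from the patent, and the claim leaves the no-majority case unspecified.

```python
# Sketch of the claim-9 decision rule: count matching degrees below the
# lower threshold, above the upper threshold, and in between, then pick
# the driving mode whose count exceeds the sum of the other two.
# Mode labels are illustrative; the claim names only the three commands.

def driving_command(matching_degrees, low, high):
    first = sum(1 for m in matching_degrees if m < low)    # poor matches
    second = sum(1 for m in matching_degrees if m > high)  # strong matches
    third = len(matching_degrees) - first - second         # in between
    if first > second + third:
        return "decelerate"      # first instruction: deceleration mode
    if third > first + second:
        return "accelerate"      # second instruction: acceleration mode
    if second > first + third:
        return "constant_speed"  # third instruction: constant speed mode
    return None  # no majority: behavior not specified by the claim
```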
10. An automatic driving device, comprising a processor, a machine-readable storage medium, and a network interface, the machine-readable storage medium, the network interface, and the processor being coupled by a bus system, wherein the network interface is configured to communicatively couple to at least one condition monitoring device within a vehicle, the machine-readable storage medium is configured to store a program, instructions, or code, and the processor is configured to execute the program, instructions, or code in the machine-readable storage medium to perform the automatic driving decision processing method of any one of claims 3-9.
CN202010791018.3A 2020-02-17 2020-02-17 Automatic driving device, system, automatic driving decision processing method and device Pending CN111880545A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010791018.3A CN111880545A (en) 2020-02-17 2020-02-17 Automatic driving device, system, automatic driving decision processing method and device
CN202010094861.6A CN111208821B (en) 2020-02-17 2020-02-17 Automobile automatic driving control method and device, automatic driving device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010791018.3A CN111880545A (en) 2020-02-17 2020-02-17 Automatic driving device, system, automatic driving decision processing method and device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010094861.6A Division CN111208821B (en) 2020-02-17 2020-02-17 Automobile automatic driving control method and device, automatic driving device and system

Publications (1)

Publication Number Publication Date
CN111880545A true CN111880545A (en) 2020-11-03

Family

ID=70789724

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202010791010.7A Pending CN111942396A (en) 2020-02-17 2020-02-17 Automatic driving control device and method and automatic driving system
CN202010094861.6A Active CN111208821B (en) 2020-02-17 2020-02-17 Automobile automatic driving control method and device, automatic driving device and system
CN202010791018.3A Pending CN111880545A (en) 2020-02-17 2020-02-17 Automatic driving device, system, automatic driving decision processing method and device

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN202010791010.7A Pending CN111942396A (en) 2020-02-17 2020-02-17 Automatic driving control device and method and automatic driving system
CN202010094861.6A Active CN111208821B (en) 2020-02-17 2020-02-17 Automobile automatic driving control method and device, automatic driving device and system

Country Status (1)

Country Link
CN (3) CN111942396A (en)

Family Cites Families (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1754621B1 (en) * 2005-08-18 2009-10-14 Honda Research Institute Europe GmbH Driver assistance system
CN100462047C (en) * 2007-03-21 2009-02-18 汤一平 Safe driving auxiliary device based on omnidirectional computer vision
US7839292B2 (en) * 2007-04-11 2010-11-23 Nec Laboratories America, Inc. Real-time driving danger level prediction
JP4591541B2 (en) * 2008-05-14 2010-12-01 横浜ゴム株式会社 Vehicle running condition evaluation method and evaluation apparatus therefor
EP2402226B1 (en) * 2010-07-02 2014-03-05 Harman Becker Automotive Systems GmbH Computer based system and method for providing a driver assist information
KR20120117232A (en) * 2011-04-14 2012-10-24 현대자동차주식회사 System for selecting emotional music in vehicle and method thereof
CN102433811B (en) * 2011-10-15 2013-07-31 天津市市政工程设计研究院 Method for determining minimum distance of road intersections in harbor district
JP5942761B2 (en) * 2012-10-03 2016-06-29 トヨタ自動車株式会社 Driving support device and driving support method
JP2016532948A (en) * 2013-08-22 2016-10-20 インテル コーポレイション Regional adaptive computer controlled assistance or autonomous driving of vehicles
JP6791616B2 (en) * 2015-04-27 2020-11-25 トヨタ自動車株式会社 Self-driving vehicle system
JP6237725B2 (en) * 2015-07-27 2017-11-29 トヨタ自動車株式会社 Crew information acquisition device and vehicle control system
JP6678311B2 (en) * 2015-12-24 2020-04-08 パナソニックIpマネジメント株式会社 Driving support method, driving support device using the same, information presentation device, and vehicle
JP6330842B2 (en) * 2016-03-31 2018-05-30 マツダ株式会社 Driving assistance device
CN105809152B (en) * 2016-04-06 2019-05-21 清华大学 A kind of driver's cognition based on Multi-source Information Fusion is divert one's attention monitoring method
CN105741586B (en) * 2016-04-29 2018-05-04 刘学 Road vehicle situation automatic judging method and system
CN106080590B (en) * 2016-06-12 2018-04-03 百度在线网络技术(北京)有限公司 The acquisition methods and device of control method for vehicle and device and decision model
JP6778872B2 (en) * 2016-06-28 2020-11-04 パナソニックIpマネジメント株式会社 Driving support device and driving support method
KR101851155B1 (en) * 2016-10-12 2018-06-04 현대자동차주식회사 Autonomous driving control apparatus, vehicle having the same and method for controlling the same
US20200005060A1 (en) * 2017-01-31 2020-01-02 The Regents Of The University Of California Machine learning based driver assistance
CN107168303A (en) * 2017-03-16 2017-09-15 中国科学院深圳先进技术研究院 A kind of automatic Pilot method and device of automobile
CN106950956B (en) * 2017-03-22 2020-02-14 合肥工业大学 Vehicle track prediction system integrating kinematics model and behavior cognition model
CN107628108B (en) * 2017-09-11 2020-01-03 清华大学 Vehicle full-electric transmission control system
CN107943046A (en) * 2017-12-08 2018-04-20 珠海横琴小可乐信息技术有限公司 A kind of automatic driving vehicle Human-machine Control power hand-over method and system
CN108196535A (en) * 2017-12-12 2018-06-22 清华大学苏州汽车研究院(吴江) Automated driving system based on enhancing study and Multi-sensor Fusion
KR20190073207A (en) * 2017-12-18 2019-06-26 현대모비스 주식회사 Apparatus and method for supporting safe driving
CN110146100A (en) * 2018-02-13 2019-08-20 华为技术有限公司 Trajectory predictions method, apparatus and storage medium
CN108881409A (en) * 2018-05-31 2018-11-23 北京智行者科技有限公司 The monitoring method and system of vehicle
EP3576074A1 (en) * 2018-06-01 2019-12-04 Volvo Car Corporation Method and system for assisting drivers to drive with precaution
CN108845579A (en) * 2018-08-14 2018-11-20 苏州畅风加行智能科技有限公司 A kind of automated driving system and its method of port vehicle
CN109341713A (en) * 2018-11-30 2019-02-15 北京小马智行科技有限公司 A kind of automated driving system, method and device
CN110059582A (en) * 2019-03-28 2019-07-26 东南大学 Driving behavior recognition methods based on multiple dimensioned attention convolutional neural networks
CN110103817A (en) * 2019-04-09 2019-08-09 江苏大学 Vehicle stabilization running intelligent control method under a kind of state of emergency
CN110308718A (en) * 2019-04-11 2019-10-08 长沙理工大学 A kind of pilotless automobile behaviour decision making method based on two type fuzzy comprehensive evoluations
CN110427850A (en) * 2019-07-24 2019-11-08 中国科学院自动化研究所 Driver's super expressway lane-changing intention prediction technique, system, device

Also Published As

Publication number Publication date
CN111942396A (en) 2020-11-17
CN111208821A (en) 2020-05-29
CN111208821B (en) 2020-11-03

Similar Documents

Publication Publication Date Title
CN108698595B (en) For controlling the method for vehicle movement and the control system of vehicle
JP6207723B2 (en) Collision prevention device
CN103863321B (en) Collision judgment equipment and anti-collision equipment
US10275955B2 (en) Methods and systems for utilizing information collected from multiple sensors to protect a vehicle from malware and attacks
CN104321237B (en) control system and method
US9676395B2 (en) Incapacitated driving detection and prevention
CN104540701B (en) For the method for the mode of operation for determining driver
CN102320301B (en) For the method making the ride characteristic of vehicle adapt to chaufeur conversion
US9975550B2 (en) Movement trajectory predicting device and movement trajectory predicting method
CN102997900B (en) Vehicle systems, devices, and methods for recognizing external worlds
Pilutti et al. Identification of driver state for lane-keeping tasks
US20140195093A1 (en) Autonomous Driving Merge Management System
CN105225500B (en) A kind of traffic control aid decision-making method and device
US20150274161A1 (en) Method for operating a driver assistance system of a vehicle
CN103996068B (en) Statistical method and device for passenger flow distribution
CN106004860B (en) Controlling device for vehicle running
CN107650862B (en) Automobile keyless entry system based on proximity sensing of smart phone and control method
US7177743B2 (en) Vehicle control system having an adaptive controller
Chae et al. Autonomous braking system via deep reinforcement learning
US8775359B2 (en) System and method for occupancy estimation
JP2007534041A (en) Lane change driving recognition method and apparatus for vehicles
CN105761329B (en) Driver's discriminating conduct based on driving habit
US8040247B2 (en) System for rapid detection of drowsiness in a machine operator
Hongchao et al. Analytical approach to evaluating transit signal priority
DE112010003678T5 (en) TRAFFIC EVALUATION SYSTEM, VEHICLE MOUNTED MACHINE AND INFORMATION PROCESSING CENTER

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination