CN116279504A - AR-based vehicle speed assist system and method thereof - Google Patents


Info

Publication number: CN116279504A
Authority: CN (China)
Prior art keywords: vehicle speed, feature, vector, classification, time sequence
Legal status: Pending (assumed by Google Patents; not a legal conclusion)
Application number: CN202310324918.0A
Other languages: Chinese (zh)
Inventors: 任全森, 周林
Current and original assignee: Chongqing Seres New Energy Automobile Design Institute Co Ltd
Priority application: CN202310324918.0A

Classifications

    • G08G 1/166 Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • B60W 30/0956 Predicting travel path or likelihood of collision, the prediction being responsive to traffic or environmental parameters
    • B60W 40/04 Estimation of driving parameters related to ambient conditions: traffic conditions
    • B60W 40/10 Estimation of driving parameters related to vehicle motion
    • B60W 50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06V 10/52 Scale-space analysis, e.g. wavelet analysis
    • G06V 10/62 Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; pattern tracking
    • G06V 10/764 Image or video recognition or understanding using classification, e.g. of video objects
    • G06V 10/82 Image or video recognition or understanding using neural networks
    • G06V 20/20 Scenes; scene-specific elements in augmented reality scenes
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • B60W 2050/146 Display means
    • B60W 2554/4042 Dynamic objects: characteristics: longitudinal speed
    • B60W 2554/802 Spatial relation or speed relative to objects: longitudinal distance
    • Y02T 10/40 Engine management systems (internal combustion engine based vehicles)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the field of intelligent early warning, and in particular discloses an AR-based vehicle speed assistance system and method. A deep-learning neural network model is used to mine the complex mapping between the time-sequence change of the inter-vehicle distance in front vehicle detection images and the change of the vehicle speed value, so as to give an auxiliary early warning of whether a collision may occur at the current vehicle speed and thereby ensure the driver's safety.

Description

AR-based vehicle speed assist system and method thereof
Technical Field
The present application relates to the field of intelligent early warning, and more particularly, to an AR-based vehicle speed assistance system and method thereof.
Background
In the past, a driver's judgment of other vehicles has been based on the indicator lights of surrounding vehicles and on the driver's own experience. Besides the safety hazard of misjudgment caused by lapses of attention and other subjective factors, surrounding vehicles whose indicator lights give no corresponding signal before a maneuver are another important hazard. For example, when the tail lights of a surrounding vehicle are damaged, no indicator light appears when that vehicle brakes and decelerates, so the vehicle behind receives no prompt to brake and decelerate in turn.
AR (augmented reality) display technology can obtain information about surrounding targets over the Internet within the normal field of view of the human eye and present that information on a specific display interface, increasing interaction among people within a specific range. At the present stage, the display interface is mainly either a display device worn close to the user's eyes, such as glasses, or the display screen of a mobile phone. With the development of in-vehicle projection technology, techniques have emerged for projecting the vehicle's own parameters, such as current speed and gear, onto the front windshield; fusing these two technologies would open up a new field of application for AR.
Therefore, an AR-based vehicle speed assistance system is desired.
Disclosure of Invention
The present application has been made to solve the above technical problems. Embodiments of the application provide an AR-based vehicle speed assistance system and method that use a deep-learning neural network model to mine the complex mapping between the time-sequence change of the inter-vehicle distance in front vehicle detection images and the change of the vehicle speed value, so as to give an auxiliary early warning of whether a collision may occur at the current vehicle speed and thereby ensure the driver's safety.
According to one aspect of the present application, there is provided an AR-based vehicle speed assistance system comprising:
a data acquisition module for acquiring front vehicle detection images captured by a camera at a plurality of predetermined time points within a predetermined time period, together with the vehicle speed values at those time points;
an image feature extraction module for passing the front vehicle detection images at the plurality of predetermined time points through a convolutional neural network model serving as a filter to obtain a plurality of front vehicle detection feature vectors;
a time sequence association encoding module for passing the plurality of front vehicle detection feature vectors through a transformer-based context encoder (rendered as "converter" elsewhere in this translation) to obtain a front vehicle relative position time sequence feature vector;
a vehicle speed time sequence feature extraction module for arranging the vehicle speed values at the plurality of predetermined time points into a vehicle speed input vector along the time dimension and passing it through a multi-scale neighborhood feature extraction module to obtain a vehicle speed time sequence feature vector;
an association module for association-encoding the vehicle speed time sequence feature vector with the front vehicle relative position time sequence feature vector to obtain a classification feature matrix;
an optimization module for performing feature distribution modulation on the classification feature matrix to obtain an optimized classification feature matrix;
an early warning module for passing the optimized classification feature matrix through a classifier to obtain a classification result indicating whether a vehicle speed early warning prompt should be generated; and
a display module for generating the vehicle speed early warning prompt in response to the classification result and displaying it on a vehicle-mounted screen.
In the above AR-based vehicle speed assistance system, the image feature extraction module is configured so that each layer of the convolutional neural network model serving as the filter performs the following operations on its input data during the forward pass: convolving the input data to obtain a convolution feature map; pooling the convolution feature map to obtain a pooled feature map; and applying a nonlinear activation to the pooled feature map to obtain an activation feature map. The output of the last layer of the convolutional neural network serving as the filter is the plurality of front vehicle detection feature vectors, and the input of its first layer is the front vehicle detection images at the plurality of predetermined time points.
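To make the per-layer convolve, pool, activate loop concrete, here is a minimal PyTorch sketch; the framework, layer sizes, channel counts, and the final flattening projection are illustrative assumptions, not part of the patent disclosure:

```python
import torch
import torch.nn as nn

class FrontVehicleFilter(nn.Module):
    """Illustrative CNN 'filter': each stage applies convolution, pooling,
    and a nonlinear activation; the last stage's output is flattened into
    one detection feature vector per input image."""

    def __init__(self, feature_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution feature map
            nn.MaxPool2d(2),                             # pooled feature map
            nn.ReLU(),                                   # activation feature map
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.MaxPool2d(2),
            nn.ReLU(),
        )
        self.proj = nn.LazyLinear(feature_dim)           # flatten to a feature vector

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (T, 3, H, W), one front vehicle image per predetermined time point
        x = self.backbone(images)
        return self.proj(x.flatten(start_dim=1))         # (T, feature_dim)

# usage: eight 64x64 frames -> eight front vehicle detection feature vectors
vectors = FrontVehicleFilter()(torch.randn(8, 3, 64, 64))
```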
In the above AR-based vehicle speed assistance system, the time sequence association encoding module includes: a context encoding unit for performing transformer-style global context semantic encoding on the plurality of front vehicle detection feature vectors using the context encoder, which comprises an embedding layer, to obtain a plurality of global context semantic front vehicle detection feature vectors; and a concatenation unit for concatenating the plurality of global context semantic front vehicle detection feature vectors to obtain the front vehicle relative position time sequence feature vector.
In the above AR-based vehicle speed assistance system, the context encoding unit includes: a query vector construction subunit for arranging the plurality of front vehicle detection feature vectors one-dimensionally to obtain a global front vehicle detection feature vector; a self-attention subunit for computing the product between the global front vehicle detection feature vector and the transpose of each of the front vehicle detection feature vectors to obtain a plurality of self-attention correlation matrices; a normalization subunit for normalizing each of the self-attention correlation matrices to obtain a plurality of normalized self-attention correlation matrices; an attention calculation subunit for passing each normalized self-attention correlation matrix through a Softmax classification function to obtain a plurality of probability values; an attention application subunit for weighting each front vehicle detection feature vector by the corresponding probability value to obtain a plurality of context semantic front vehicle detection feature vectors; and a concatenation subunit for concatenating the plurality of context semantic front vehicle detection feature vectors to obtain the global context semantic front vehicle detection feature vector.
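Since "converter" in this machine translation renders "transformer", the context encoder can be sketched with a standard transformer encoder layer, which computes internally the attention arithmetic the subunits above describe. A minimal PyTorch sketch follows; the dimension, head count, and layer count are illustrative assumptions:

```python
import torch
import torch.nn as nn

class FrontVehicleContextEncoder(nn.Module):
    """Transformer-based context encoder: every front vehicle detection
    feature vector attends to all the others (self-attention with Softmax
    weights), and the context-aware outputs are concatenated into the
    front vehicle relative position time sequence feature vector."""

    def __init__(self, dim: int = 128, heads: int = 4, layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, front_vecs: torch.Tensor) -> torch.Tensor:
        # front_vecs: (T, D), one vector per predetermined time point
        ctx = self.encoder(front_vecs.unsqueeze(0)).squeeze(0)  # (T, D) context-aware
        return ctx.reshape(-1)                                  # concatenate to (T*D,)

# usage: eight per-frame vectors -> one relative position time sequence vector
timing_vec = FrontVehicleContextEncoder()(torch.randn(8, 128))
```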
In the above AR-based vehicle speed assistance system, the multi-scale neighborhood feature extraction module includes a first convolution layer, a second convolution layer parallel to the first, and a multi-scale feature fusion layer connected to both, where the first convolution layer uses a one-dimensional convolution kernel of a first length and the second convolution layer uses a one-dimensional convolution kernel of a second length.
In the above AR-based vehicle speed assistance system, the vehicle speed time sequence feature extraction module includes: a first neighborhood scale feature extraction unit for inputting the vehicle speed input vector into the first convolution layer of the multi-scale neighborhood feature extraction module to obtain a first neighborhood scale vehicle speed time sequence feature vector, the first convolution layer having a first one-dimensional convolution kernel of a first length; a second neighborhood scale feature extraction unit for inputting the vehicle speed input vector into the second convolution layer of the multi-scale neighborhood feature extraction module to obtain a second neighborhood scale vehicle speed time sequence feature vector, the second convolution layer having a second one-dimensional convolution kernel of a second length different from the first length; and a multi-scale fusion unit for concatenating the first and second neighborhood scale vehicle speed time sequence feature vectors to obtain the vehicle speed time sequence feature vector. The first neighborhood scale feature extraction unit is configured to perform one-dimensional convolutional encoding of the vehicle speed input vector using the first convolution layer according to the following formula:

$$\mathrm{Cov}_1(X) = \sum_{a=1}^{w} F(a) \cdot G(X-a)$$

where a is the width of the first convolution kernel in the X direction, F(a) is the first convolution kernel parameter vector, G(X-a) is the local vector matrix operated on by the convolution kernel function, w is the size of the first convolution kernel, and X is the vehicle speed input vector. The second neighborhood scale feature extraction unit is configured to perform one-dimensional convolutional encoding of the vehicle speed input vector using the second convolution layer according to the following formula:

$$\mathrm{Cov}_2(X) = \sum_{b=1}^{m} F(b) \cdot G(X-b)$$

where b is the width of the second convolution kernel in the X direction, F(b) is the second convolution kernel parameter vector, G(X-b) is the local vector matrix operated on by the convolution kernel function, and m is the size of the second convolution kernel.
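A minimal PyTorch sketch of the module; the kernel lengths 3 and 5 and the channel count are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MultiScaleNeighborhood(nn.Module):
    """Two parallel 1-D convolutions with different kernel lengths over the
    vehicle speed sequence, fused by concatenation into one timing vector."""

    def __init__(self, channels: int = 16, k1: int = 3, k2: int = 5):
        super().__init__()
        self.conv1 = nn.Conv1d(1, channels, kernel_size=k1, padding=k1 // 2)
        self.conv2 = nn.Conv1d(1, channels, kernel_size=k2, padding=k2 // 2)

    def forward(self, speeds: torch.Tensor) -> torch.Tensor:
        # speeds: (T,) vehicle speed values ordered along the time dimension
        x = speeds.view(1, 1, -1)                    # (batch, channel, time)
        v1 = self.conv1(x)                           # first neighborhood scale
        v2 = self.conv2(x)                           # second neighborhood scale
        return torch.cat([v1, v2], dim=1).flatten()  # multi-scale fusion by concatenation

# usage: eight speed samples -> one vehicle speed time sequence feature vector
speed_vec = MultiScaleNeighborhood()(torch.tensor([52., 54., 55., 53., 50., 48., 47., 45.]))
```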
In the above AR-based vehicle speed assistance system, the association module is configured to association-encode the vehicle speed time sequence feature vector and the front vehicle relative position time sequence feature vector into the classification feature matrix according to the following formula:

$$M_1 = V_m^\top \otimes V_n$$

where $V_m$ is the vehicle speed time sequence feature vector, $V_m^\top$ is its transpose, $V_n$ is the front vehicle relative position time sequence feature vector, $M_1$ is the classification feature matrix, and $\otimes$ denotes vector multiplication.
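The formula is an outer product: each component of the speed timing features is paired with each component of the relative position timing features. A one-line sketch, assuming both vectors have already been produced by the earlier modules:

```python
import torch

def associate(v_m: torch.Tensor, v_n: torch.Tensor) -> torch.Tensor:
    """Association encoding: classification feature matrix as the outer
    product M1[i, j] = v_m[i] * v_n[j]."""
    return torch.outer(v_m, v_n)

# usage: a (32,) speed vector and a (32,) position vector -> (32, 32) matrix
m1 = associate(torch.randn(32), torch.randn(32))
```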
In the above AR-based vehicle speed assistance system, the optimization module includes: an unfolding unit for unfolding the classification feature matrix into a classification feature vector by rows or by columns; a feature optimization unit for performing vector-norm-based Hilbert probability spatialization on the classification feature vector according to the following formula:

$$v_i' = \frac{\exp\left(v_i / \|V\|_2^2\right)}{\sum_{j} \exp\left(v_j / \|V\|_2^2\right)}$$

where $V$ is the classification feature vector, $\|V\|_2$ is its two-norm, $\|V\|_2^2$ is the square of the two-norm, $v_i$ is the $i$-th feature value of the classification feature vector, $\exp(\cdot)$ denotes taking the natural exponential of the value at each position of the vector, and $v_i'$ is the $i$-th feature value of the optimized classification feature vector; and a matrix reconstruction unit for reconstructing the optimized classification feature vector into the optimized classification feature matrix.
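A short sketch of this modulation step, under the assumption that the lost formula image is the normalized-exponential form reconstructed above:

```python
import torch

def hilbert_probability_spatialization(m1: torch.Tensor) -> torch.Tensor:
    """Feature distribution modulation (reconstructed form, an assumption):
    unfold the matrix, scale each value by the squared L2 norm of the whole
    vector, apply a Softmax so the values form a probability distribution,
    then reshape back into the optimized classification feature matrix."""
    v = m1.flatten()                         # unfolding unit (by rows)
    scaled = v / v.norm(p=2).pow(2)          # v_i / ||V||_2^2
    v_opt = torch.softmax(scaled, dim=0)     # exp(.) with probability normalization
    return v_opt.view_as(m1)                 # matrix reconstruction unit

m1_opt = hilbert_probability_spatialization(torch.randn(32, 32))
```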
In the above AR-based vehicle speed assistance system, the early warning module includes: a classification feature vector generation unit for unfolding the optimized classification feature matrix into a classification feature vector by row vectors or column vectors; a fully connected encoding unit for encoding the classification feature vector with several fully connected layers of the classifier to obtain an encoded classification feature vector; and a classification result generation unit for passing the encoded classification feature vector through the Softmax classification function of the classifier to obtain the classification result.
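A minimal sketch of this early warning head; the hidden width and the meaning of label index 0 are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SpeedWarningClassifier(nn.Module):
    """Unfold the optimized classification feature matrix, encode it with
    fully connected layers, and apply Softmax over the two labels:
    'generate a vehicle speed early warning prompt' vs. 'do not generate one'."""

    def __init__(self, in_dim: int):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 2),                    # two classification labels
        )

    def forward(self, m1_opt: torch.Tensor) -> torch.Tensor:
        logits = self.fc(m1_opt.flatten())        # row-vector expansion + FC encoding
        return torch.softmax(logits, dim=-1)      # class probabilities

probs = SpeedWarningClassifier(in_dim=32 * 32)(torch.randn(32, 32))
generate_warning = bool(probs.argmax() == 0)      # assumes index 0 means 'warn'
```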
According to another aspect of the present application, there is provided an AR-based vehicle speed assistance method, comprising:
acquiring front vehicle detection images captured by a camera at a plurality of predetermined time points within a predetermined time period, together with the vehicle speed values at those time points;
passing the front vehicle detection images at the plurality of predetermined time points through a convolutional neural network model serving as a filter to obtain a plurality of front vehicle detection feature vectors;
passing the plurality of front vehicle detection feature vectors through a transformer-based context encoder to obtain a front vehicle relative position time sequence feature vector;
arranging the vehicle speed values at the plurality of predetermined time points into a vehicle speed input vector along the time dimension and passing it through a multi-scale neighborhood feature extraction module to obtain a vehicle speed time sequence feature vector;
association-encoding the vehicle speed time sequence feature vector with the front vehicle relative position time sequence feature vector to obtain a classification feature matrix;
performing feature distribution modulation on the classification feature matrix to obtain an optimized classification feature matrix;
passing the optimized classification feature matrix through a classifier to obtain a classification result indicating whether a vehicle speed early warning prompt should be generated; and
generating the vehicle speed early warning prompt in response to the classification result and displaying it on a vehicle-mounted screen.
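Read end to end, the method is a straight pipeline. The following sketch chains the illustrative modules defined in the earlier code blocks; all shapes and the label convention remain assumptions:

```python
import torch

# Eight sampled time points: front vehicle frames plus the matching speed values
frames = torch.randn(8, 3, 64, 64)
speeds = torch.tensor([52., 54., 55., 53., 50., 48., 47., 45.])

vecs      = FrontVehicleFilter()(frames)            # CNN filter per frame
pos_vec   = FrontVehicleContextEncoder()(vecs)      # transformer context encoding
speed_vec = MultiScaleNeighborhood()(speeds)        # multi-scale speed timing features
m1        = torch.outer(speed_vec, pos_vec)         # association encoding
m1_opt    = hilbert_probability_spatialization(m1)  # feature distribution modulation
probs     = SpeedWarningClassifier(m1.numel())(m1_opt)
if probs.argmax() == 0:                             # assumed 'warn' label
    print("Display AR vehicle speed early warning on the vehicle-mounted screen")
```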
According to still another aspect of the present application, there is provided an electronic apparatus including: a processor; and a memory having stored therein computer program instructions that, when executed by the processor, cause the processor to perform the AR-based vehicle speed assistance method as described above.
According to yet another aspect of the present application, there is provided a computer readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform an AR-based vehicle speed assistance method as described above.
Compared with the prior art, the AR-based vehicle speed assistance system and method provided by the application use a deep-learning neural network model to mine the complex mapping between the time-sequence change of the inter-vehicle distance in the front vehicle detection images and the change of the vehicle speed value, so as to give an auxiliary early warning of whether a collision may occur at the current vehicle speed and thereby ensure the driver's safety.
Drawings
The foregoing and other objects, features and advantages of the present application will become more apparent from the following more detailed description of embodiments of the present application, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the application, are incorporated in and constitute a part of this specification, and serve to illustrate the application without limiting it. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 is an application scenario diagram of an AR-based vehicle speed assist system according to an embodiment of the present application;
FIG. 2 is a block diagram of an AR-based vehicle speed assist system according to an embodiment of the present application;
FIG. 3 is a system architecture diagram of an AR-based vehicle speed assist system in accordance with an embodiment of the present application;
FIG. 4 is a flowchart of convolutional neural network encoding in an AR-based vehicle speed assist system in accordance with an embodiment of the present application;
FIG. 5 is a block diagram of a vehicle speed timing feature extraction module in an AR-based vehicle speed assist system in accordance with an embodiment of the present application;
FIG. 6 is a block diagram of an optimization module in an AR-based vehicle speed assist system in accordance with an embodiment of the present application;
FIG. 7 is a block diagram of an early warning module in an AR-based vehicle speed assist system in accordance with an embodiment of the present application;
FIG. 8 is a flowchart of an AR-based vehicle speed assist method according to an embodiment of the present application;
fig. 9 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Summary of the application
As described above, AR display technology can obtain information about surrounding targets over the Internet within the normal field of view of the human eye and present that information on a specific display interface, increasing interaction among people within a specific range. The display interface is mainly either a display device worn close to the user's eyes, such as glasses, or the display screen of a mobile phone. With the development of in-vehicle projection technology, techniques have emerged for projecting the vehicle's own parameters, such as current speed and gear, onto the front windshield; fusing these two technologies would open up a new field of application for AR. Therefore, an AR-based vehicle speed assistance system is desired.
Accordingly, when the AR display technology and the in-vehicle projection technology are combined to provide auxiliary early warning of the vehicle speed, the concern is the accident in which an excessive speed leads to a collision. To avoid such accidents, information such as the vehicle speed and the inter-vehicle distance must be comprehensively analyzed on the basis of the two technologies, and an early warning prompt must be generated when the current speed could cause a collision, so as to ensure the driver's safety. Specifically, the analysis can be performed by collecting front vehicle detection images, which contain the relative position of the two vehicles, together with the change of the vehicle speed value over time. However, because image data carries a large amount of information, effective information is difficult to capture. The difficulty in this process lies in establishing the mapping relationship between the time-sequence change of the inter-vehicle distance in the front vehicle detection images and the change of the vehicle speed value, so as to warn whether a collision may occur at the current speed and ensure the driver's safety.
In recent years, deep learning and neural networks have been widely used in the fields of computer vision, natural language processing, text signal processing, and the like. In addition, deep learning and neural networks have also shown levels approaching and even exceeding humans in the fields of image classification, object detection, semantic segmentation, text translation, and the like.
The development of deep learning and neural networks provides new solutions for mining the complex mapping between the time-sequence change of the inter-vehicle distance in the front vehicle detection images and the change of the vehicle speed value. Those of ordinary skill in the art will appreciate that a deep neural network model based on deep learning can adjust its parameters through an appropriate training strategy, for example gradient descent with back-propagation, so as to model complex nonlinear correlations between things, which is obviously suitable for modeling and establishing the complex mapping between the time-sequence change of the inter-vehicle distance in the front vehicle detection images and the change of the vehicle speed values.
Specifically, in the technical scheme of the application, front vehicle detection images at a plurality of predetermined time points within a predetermined time period are first acquired by a camera, together with the vehicle speed values at those time points. Because the front vehicle detection image is image data containing a large amount of information, the relative position of the two vehicles is difficult to capture effectively. Feature mining of the front vehicle detection image at each predetermined time point is therefore performed with a convolutional neural network model serving as a filter, which performs excellently at extracting implicit image features, so as to extract the implicit features describing the relative position of the vehicles, that is, the distance between the two vehicles, in the image at each predetermined time point, thereby obtaining a plurality of front vehicle detection feature vectors.
Then, consider that the relative position features of the vehicles follow a dynamic law in the time dimension: when one of the two vehicles reduces its speed, the distance between them increases, and when both keep their speeds unchanged, the distance stays unchanged, so the relative position features carry time-sequence dynamic association information. Based on this, in the technical scheme of the application, the plurality of front vehicle detection feature vectors are encoded by a transformer-based context encoder to extract the dynamic association of the relative position features over the time dimension, thereby obtaining the front vehicle relative position time sequence feature vector.
Further, for the vehicle speed values at the plurality of predetermined time points, the speed value fluctuates and is uncertain in the time dimension, so it exhibits different pattern-of-change characteristics over different time spans within the predetermined time period. Therefore, in the technical scheme of the application, to fully mine the dynamic change characteristics of the vehicle speed value over time, the speed values at the plurality of predetermined time points are arranged into a vehicle speed input vector along the time dimension and processed in the multi-scale neighborhood feature extraction module, which extracts the dynamic multi-scale neighborhood association features of the speed values across different time spans, yielding the vehicle speed time sequence feature vector.
Then, the vehicle speed time sequence feature vector and the front vehicle relative position time sequence feature vector can be association-encoded to obtain the classification feature matrix; that is, the vector product of the two is computed to represent the associated feature distribution between the time-sequence multi-scale dynamic features of the vehicle speed values and the time-sequence association features of the vehicles' relative position, and this distribution serves as the classification feature matrix. The classification feature matrix is then passed through a classifier to obtain a classification result indicating whether a vehicle speed early warning prompt should be generated. That is, in the technical scheme of the application, the labels of the classifier are "generate a vehicle speed early warning prompt" and "do not generate a vehicle speed early warning prompt", and the classifier determines which label the classification feature matrix belongs to through a soft-max function. In this way an early warning is given when an excessive speed could lead to a collision, and in response to the classification result the vehicle speed early warning prompt is displayed on a vehicle-mounted screen to ensure the driver's safety.
In particular, in the technical scheme of the application, when the vehicle speed time sequence feature vector and the front vehicle relative position time sequence feature vector are association-encoded into the classification feature matrix, the matrix fuses heterogeneous association features of different orders: the front vehicle relative position time sequence feature vector expresses high-order, cross-time-domain associations of the front vehicle image features (essentially the high-order association of the spatial relationship between the front vehicle and the ego vehicle over time), while the vehicle speed time sequence feature vector expresses the time-sequence association of the ego vehicle's speed data. This fusion improves the expressive power of the features. On the other hand, superimposing feature domains of different orders and source domains may make the overall feature distribution of the classification feature matrix, as an inter-domain fusion feature, more discrete, so that the matrix depends poorly on a single classification result during classification, which affects the accuracy of the classification result.
Therefore, the classification feature vector obtained by unfolding the classification feature matrix is preferably subjected to vector-norm-based Hilbert probability spatialization, specifically expressed as:

$$v_i' = \frac{\exp\left(v_i / \|V\|_2^2\right)}{\sum_{j} \exp\left(v_j / \|V\|_2^2\right)}$$

where $V$ is the classification feature vector, $\|V\|_2$ is its two-norm, $\|V\|_2^2$ is the square of the two-norm, that is, the inner product of the classification feature vector with itself, $v_i$ is the $i$-th feature value of the classification feature vector $V$, and $v_i'$ is the $i$-th feature value of the optimized classification feature vector $V'$. The Hilbert probability spatialization gives the classification feature vector $V$ a probabilistic interpretation in the Hilbert space defined by the vector inner product, and reduces the hidden disturbance that the class expression of each local concatenated distribution of $V$ exerts on the class expression of the overall Hilbert-space topology. This improves the robustness with which the feature distribution of $V$ converges to a single predetermined classification result, while the metric-induced probability space structure promotes long-range dependence of the feature distribution on the single classification result across the classifier. In this way, the dependence of the optimized classification feature vector $V'$ on a single classification result during classification is improved, and the accuracy of the classification result is improved, so that an auxiliary early warning of whether a collision may occur at the current vehicle speed can be given accurately in real time to ensure the driver's safety.
Based on this, the present application proposes an AR-based vehicle speed assistance system comprising: the data acquisition module is used for acquiring front vehicle detection images of a plurality of preset time points in a preset time period acquired by the camera and vehicle speed values of the preset time points; the image feature extraction module is used for respectively passing the front vehicle detection images at a plurality of preset time points through a convolutional neural network model serving as a filter to obtain a plurality of front vehicle detection feature vectors; the time sequence associated coding module is used for enabling the plurality of front vehicle detection feature vectors to pass through a context encoder based on a converter to obtain front vehicle relative position time sequence feature vectors; the vehicle speed time sequence feature extraction module is used for arranging the vehicle speed values of the plurality of preset time points into vehicle speed input vectors according to time dimensions and then obtaining vehicle speed time sequence feature vectors through the multi-scale neighborhood feature extraction module; the association module is used for carrying out association coding on the vehicle speed time sequence feature vector and the front vehicle relative position time sequence feature vector so as to obtain a classification feature matrix; the optimizing module is used for carrying out feature distribution modulation on the classification feature matrix to obtain an optimized classification feature matrix; the early warning module is used for enabling the optimized classification feature matrix to pass through a classifier to obtain a classification result, wherein the classification result is used for indicating whether a vehicle speed early warning prompt is generated or not; and the display module is used for responding to the classification result to generate a vehicle speed early warning prompt and displaying the vehicle speed early warning prompt on a vehicle-mounted screen.
Fig. 1 is an application scenario diagram of an AR-based vehicle speed assistance system according to an embodiment of the present application. As shown in fig. 1, in this application scenario, front vehicle detection images at a plurality of predetermined time points within a predetermined time period are acquired by a camera (e.g., C in fig. 1), and the vehicle speed values at those time points are acquired by a speed sensor (e.g., S1 in fig. 1). The information is then input to a server (e.g., S2 in fig. 1) on which an AR-based vehicle speed assistance algorithm is deployed; the server processes the input information with the algorithm to generate a classification result indicating whether a vehicle speed early warning prompt should be generated, and, when the classification result is that the prompt should be generated, the prompt is displayed on a vehicle-mounted screen.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Exemplary System
FIG. 2 is a block diagram of an AR-based vehicle speed assist system according to an embodiment of the present application. As shown in fig. 2, an AR-based vehicle speed assistance system 300 according to an embodiment of the present application includes: a data acquisition module 310; an image feature extraction module 320; a timing-related encoding module 330; a vehicle speed timing feature extraction module 340; an association module 350; an optimization module 360; an early warning module 370; and a display module 380.
The data acquisition module 310 is configured to acquire front vehicle detection images at a plurality of predetermined time points in a predetermined time period acquired by the camera and vehicle speed values at the plurality of predetermined time points; the image feature extraction module 320 is configured to pass the front vehicle detection images at the plurality of predetermined time points through a convolutional neural network model serving as a filter to obtain a plurality of front vehicle detection feature vectors; the timing correlation encoding module 330 is configured to pass the plurality of front car detection feature vectors through a context encoder based on a converter to obtain front car relative position timing feature vectors; the vehicle speed time sequence feature extraction module 340 is configured to arrange the vehicle speed values at the plurality of predetermined time points into a vehicle speed input vector according to a time dimension, and then obtain a vehicle speed time sequence feature vector through the multi-scale neighborhood feature extraction module; the association module 350 is configured to perform association encoding on the vehicle speed time sequence feature vector and the front vehicle relative position time sequence feature vector to obtain a classification feature matrix; the optimizing module 360 is configured to perform feature distribution modulation on the classification feature matrix to obtain an optimized classification feature matrix; the early warning module 370 is configured to pass the optimized classification feature matrix through a classifier to obtain a classification result, where the classification result is used to indicate whether a vehicle speed early warning prompt is generated; and the display module 380 is configured to respond to the classification result to generate a vehicle speed early warning prompt, and display the vehicle speed early warning prompt on a vehicle-mounted screen.
Fig. 3 is a system architecture diagram of an AR-based vehicle speed assistance system according to an embodiment of the present application. Referring to fig. 2 and 3, in the network architecture, first, a front vehicle detection image of a plurality of predetermined time points in a predetermined period of time acquired by a camera and vehicle speed values of the plurality of predetermined time points are acquired by the data acquisition module 310; next, the image feature extraction module 320 obtains a plurality of front vehicle detection feature vectors by passing the front vehicle detection images at a plurality of predetermined time points acquired by the data acquisition module 310 through a convolutional neural network model as a filter, respectively; the timing-related encoding module 330 obtains the timing feature vector of the relative position of the front vehicle by passing the plurality of front vehicle detection feature vectors obtained by the image feature extraction module 320 through a context encoder based on a converter; then, the vehicle speed time sequence feature extraction module 340 arranges the vehicle speed values at a plurality of preset time points acquired by the data acquisition module 310 into a vehicle speed input vector according to a time dimension, and then the vehicle speed input vector is passed through a multi-scale neighborhood feature extraction module to obtain a vehicle speed time sequence feature vector; the association module 350 performs association encoding on the vehicle speed time sequence feature vector obtained by the vehicle speed time sequence feature extraction module 340 and the front vehicle relative position time sequence feature vector obtained by the time sequence association encoding module 330 to obtain a classification feature matrix; the optimizing module 360 performs feature distribution modulation on the classification feature matrix calculated by the associating module 350 to obtain an optimized classification feature matrix; the early warning module 370 passes the optimized classification feature matrix obtained by the optimizing module 360 through a classifier to obtain a classification result, wherein the classification result is used for indicating whether a vehicle speed early warning prompt is generated; further, the display module 380 displays the vehicle speed warning prompt on the vehicle-mounted screen in response to the classification result being that the vehicle speed warning prompt is generated.
Specifically, during operation of the AR-based vehicle speed assistance system 300, the data acquisition module 310 is configured to acquire the front vehicle detection images captured by the camera at a plurality of predetermined time points within a predetermined time period, together with the vehicle speed values at those time points. It should be understood that, to give early warning of a vehicle collision, information such as the vehicle speed and the inter-vehicle distance can be comprehensively analyzed by combining the AR display technology with the in-vehicle projection technology, so as to generate an early warning prompt that the current speed may cause a collision.
Specifically, during operation of the AR-based vehicle speed assistance system 300, the image feature extraction module 320 is configured to pass the front vehicle detection images at the plurality of predetermined time points through a convolutional neural network model as a filter, respectively, to obtain a plurality of front vehicle detection feature vectors. In the technical solution of the present application, since the front vehicle detection image is image data, which contains a large amount of information, it is difficult to effectively capture the relative position information of two vehicles, so in the technical solution of the present application, feature mining of the front vehicle detection image at each predetermined time point is further performed by using a convolutional neural network model as a filter, which has excellent performance in terms of implicit feature extraction of images, so as to extract the implicit feature information of the relative position of the vehicle, that is, the distance feature information of two vehicles, in the front vehicle detection image at each predetermined time point, thereby obtaining a plurality of front vehicle detection feature vectors. In one particular example, the convolutional neural network includes a plurality of neural network layers that are cascaded with one another, wherein each neural network layer includes a convolutional layer, a pooling layer, and an activation layer. In the coding process of the convolutional neural network, each layer of the convolutional neural network carries out convolutional processing based on a convolutional kernel on input data by using the convolutional layer in the forward transmission process of the layer, carries out pooling processing on a convolutional feature map output by the convolutional layer by using the pooling layer and carries out activating processing on the pooled feature map output by the pooling layer by using the activating layer, wherein the input of the first layer of the convolutional neural network is a front car detection image of a plurality of preset time points, and the output of the last layer of the convolutional neural network is a plurality of front car detection feature vectors.
FIG. 4 is a flowchart of convolutional neural network coding in an AR-based vehicle speed assist system in accordance with an embodiment of the present application. As shown in fig. 4, in the encoding process of the convolutional neural network, the method includes: each layer of the convolutional neural network model used as the filter performs the following steps on input data in forward transfer of the layer: s210, carrying out convolution processing on input data to obtain a convolution characteristic diagram; s220, pooling the convolution feature map based on a feature matrix to obtain a pooled feature map; s230, carrying out nonlinear activation on the pooled feature map to obtain an activated feature map; wherein the output of the last layer of the convolutional neural network as a filter is the plurality of front vehicle detection feature vectors, and the input of the first layer of the convolutional neural network as a filter is the front vehicle detection images of the plurality of predetermined time points.
Specifically, during operation of the AR-based vehicle speed assist system 300, the timing correlation encoding module 330 is configured to pass the plurality of front vehicle detection feature vectors through a transducer-based context encoder to obtain a front vehicle relative position timing feature vector. In the technical scheme of the application, the fact that the relative position characteristic information of the vehicles has a dynamic rule in the time dimension is considered, namely, when the speed of one of the two vehicles is reduced, the distance between the two vehicles is increased, and when the speed of the two vehicles is kept unchanged, the distance between the two vehicles is kept unchanged, so that the relative position characteristic of the vehicles has time sequence dynamic associated characteristic information. Based on this, in the technical solution of the present application, the plurality of front vehicle detection feature vectors are encoded by a context encoder based on a converter, so as to extract dynamic associated feature information of the front vehicle detection image on a time dimension about a relative position feature of the vehicle, thereby obtaining a front vehicle relative position time sequence feature vector. That is, based on the transformer concept, the converter is used to capture the characteristic of long-distance context dependence, and the global context semantic-based encoding is performed on the plurality of front car detection feature vectors to obtain a context semantic association feature representation with the global semantic association of the plurality of front car detection feature vectors as the context, that is, the global context semantic front car detection feature vectors. More specifically, the front vehicle detection feature vectors are passed through a context encoder based on a converter to obtain front vehicle relative position timing feature vectors, first, global context semantic encoding based on a converter thought is performed on the front vehicle detection feature vectors by using the converter of the context encoder including an embedded layer to obtain a plurality of global context semantic front vehicle detection feature vectors, and then the plurality of global context semantic front vehicle detection feature vectors are cascaded to obtain the front vehicle relative position timing feature vectors. 
The method for performing global context semantic coding on the plurality of front car detection feature vectors based on a converter thought by using the converter of the context encoder comprising the embedded layer to obtain a plurality of global context semantic front car detection feature vectors comprises the following steps: one-dimensional arrangement is carried out on the plurality of front vehicle detection feature vectors so as to obtain global front vehicle detection feature vectors; calculating the product between the global front vehicle detection feature vector and the transpose vector of each front vehicle detection feature vector in the plurality of front vehicle detection feature vectors to obtain a plurality of self-attention association matrixes; respectively carrying out standardization processing on each self-attention correlation matrix in the plurality of self-attention correlation matrices to obtain a plurality of standardized self-attention correlation matrices; obtaining a plurality of probability values by using a Softmax classification function through each normalized self-attention correlation matrix in the normalized self-attention correlation matrices; weighting each front vehicle detection feature vector in the front vehicle detection feature vectors by taking each probability value in the probability values as a weight so as to obtain the context semantic front vehicle detection feature vectors; and cascading the context semantic front car detection feature vectors to obtain the global context semantic front car detection feature vector.
Specifically, during the operation of the AR-based vehicle speed assistance system 300, the vehicle speed time sequence feature extraction module 340 is configured to arrange the vehicle speed values at the plurality of predetermined time points into a vehicle speed input vector along the time dimension and then pass it through the multi-scale neighborhood feature extraction module to obtain a vehicle speed time sequence feature vector. Because the vehicle speed value fluctuates and is uncertain in the time dimension, it exhibits different pattern state change characteristics over different time spans within the predetermined time period. Therefore, in the technical solution of the present application, in order to fully mine the dynamic change features of the vehicle speed value over time, the vehicle speed values at the plurality of predetermined time points are arranged into the vehicle speed input vector along the time dimension and processed in the multi-scale neighborhood feature extraction module, so as to extract the dynamic multi-scale neighborhood association features of the vehicle speed values under different time spans and obtain the vehicle speed time sequence feature vector. The multi-scale neighborhood feature extraction module comprises: a first convolution layer, a second convolution layer parallel to the first convolution layer, and a multi-scale feature fusion layer connected to both, wherein the first convolution layer uses a one-dimensional convolution kernel of a first length and the second convolution layer uses a one-dimensional convolution kernel of a second length.
FIG. 5 is a block diagram of a vehicle speed timing feature extraction module in an AR-based vehicle speed assist system in accordance with an embodiment of the present application. As shown in fig. 5, the vehicle speed time sequence feature extraction module 340 includes: a first neighborhood scale feature extraction unit 341, configured to input the vehicle speed input vector into a first convolution layer of the multi-scale neighborhood feature extraction module to obtain a first neighborhood scale vehicle speed time sequence feature vector, where the first convolution layer has a first one-dimensional convolution kernel with a first length; a second neighborhood scale feature extraction unit 342 configured to input the vehicle speed input vector into a second convolution layer of the multi-scale neighborhood feature extraction module to obtain a second neighborhood scale vehicle speed timing feature vector, where the second convolution layer has a second one-dimensional convolution kernel with a second length, and the first length is different from the second length; and a multiscale fusion unit 343 configured to concatenate the first neighborhood scale vehicle speed time sequence feature vector and the second neighborhood scale vehicle speed time sequence feature vector to obtain the vehicle speed time sequence feature vector. The first neighborhood scale feature extraction unit is configured to: using a first convolution layer of the multi-scale neighborhood feature extraction module to perform one-dimensional convolution coding on the vehicle speed input vector according to the following formula so as to obtain a first neighborhood scale vehicle speed time sequence feature vector; wherein, the formula is:
$$V_1 = \sum_{a=0}^{w-1} F(a)\cdot G(X-a)$$
Wherein a is the width of the first convolution kernel in the X direction, F (a) is a first convolution kernel parameter vector, G (X-a) is a local vector matrix calculated by a convolution kernel function, w is the size of the first convolution kernel, and X represents the vehicle speed input vector; and the second neighborhood scale feature extraction unit is configured to: performing one-dimensional convolution encoding on the vehicle speed input vector by using a second convolution layer of the multi-scale neighborhood feature extraction module according to the following formula to obtain a second neighborhood scale vehicle speed time sequence feature vector; wherein, the formula is:
$$V_2 = \sum_{b=0}^{m-1} F(b)\cdot G(X-b)$$
wherein b is the width of the second convolution kernel in the X direction, F (b) is a second convolution kernel parameter vector, G (X-b) is a local vector matrix calculated by a convolution kernel function, m is the size of the second convolution kernel, and X represents the vehicle speed input vector. More specifically, the multi-scale fusion unit is configured to: fusing the first neighborhood scale vehicle speed time sequence feature vector and the second neighborhood scale vehicle speed time sequence feature vector to obtain the vehicle speed time sequence feature vector by the following formula; wherein, the formula is:
$$V_m = \mathrm{Concat}[V_1, V_2]$$
wherein $V_1$ represents the first neighborhood scale vehicle speed time sequence feature vector, $V_2$ represents the second neighborhood scale vehicle speed time sequence feature vector, $\mathrm{Concat}[\cdot,\cdot]$ represents the cascade function, and $V_m$ represents the vehicle speed time sequence feature vector.
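A minimal sketch of this two-branch module follows, again assuming PyTorch; the kernel lengths 3 and 5 are illustrative, since the application only requires the first and second lengths to differ.

```python
import torch
import torch.nn as nn

class MultiScaleNeighborhood(nn.Module):
    """Two parallel 1-D convolutions over the vehicle speed input vector,
    fused by concatenation: V_m = Concat[V_1, V_2]."""
    def __init__(self, first_len: int = 3, second_len: int = 5):
        super().__init__()
        self.conv1 = nn.Conv1d(1, 1, kernel_size=first_len, padding=first_len // 2)
        self.conv2 = nn.Conv1d(1, 1, kernel_size=second_len, padding=second_len // 2)

    def forward(self, speed_input: torch.Tensor) -> torch.Tensor:
        # speed_input: [T] speed values arranged along the time dimension.
        x = speed_input.view(1, 1, -1)       # [batch, channel, T]
        v1 = self.conv1(x).flatten()         # first neighborhood-scale vector
        v2 = self.conv2(x).flatten()         # second neighborhood-scale vector
        return torch.cat([v1, v2], dim=0)    # V_m = Concat[V_1, V_2]

# Example: eight speed samples over the predetermined time period.
speeds = torch.tensor([42.0, 43.5, 45.0, 44.0, 41.0, 39.5, 40.0, 41.5])
v_m = MultiScaleNeighborhood()(speeds)       # vector of length 16
```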
Specifically, during operation of the AR-based vehicle speed assistance system 300, the association module 350 is configured to perform association encoding on the vehicle speed time sequence feature vector and the front vehicle relative position time sequence feature vector to obtain a classification feature matrix. That is, the vector product of the vehicle speed time sequence feature vector and the front vehicle relative position time sequence feature vector is computed, so that the resulting matrix represents the association feature distribution between the time sequence multi-scale dynamic features of the vehicle speed values and the relative position time sequence association features of the vehicles. In a specific example of the present application, the two vectors are association-encoded according to the following formula to obtain the classification feature matrix; wherein, the formula is:
$$M_1 = V_m^{\top} \otimes V_n$$
wherein $V_m$ represents the vehicle speed time sequence feature vector, $V_m^{\top}$ the transpose of the vehicle speed time sequence feature vector, $V_n$ the front vehicle relative position time sequence feature vector, $M_1$ the classification feature matrix, and $\otimes$ vector multiplication.
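Under the reading that this vector multiplication is an outer product (consistent with a column vector times a row vector producing a matrix), the association encoding reduces to a single call; a sketch in Python with PyTorch, with illustrative names:

```python
import torch

def association_encode(v_m: torch.Tensor, v_n: torch.Tensor) -> torch.Tensor:
    """Association encoding M1 = Vm^T (x) Vn: the outer product pairs every
    vehicle-speed timing feature with every relative-position feature."""
    return torch.outer(v_m, v_n)   # classification feature matrix [len(v_m), len(v_n)]

m1 = association_encode(torch.randn(16), torch.randn(32))   # e.g. a 16 x 32 matrix
```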
Specifically, during operation of the AR-based vehicle speed assistance system 300, the optimization module 360 is configured to perform feature distribution modulation on the classification feature matrix to obtain an optimized classification feature matrix. In the technical solution of the present application, when the vehicle speed time sequence feature vector and the front vehicle relative position time sequence feature vector are association-encoded into the classification feature matrix, heterogeneous association features of different orders are fused: the front vehicle relative position time sequence feature vector expresses high-order, cross-time-domain association features of the front vehicle image (essentially, the high-order association of the spatial position relationship between the front vehicle and the ego vehicle over the time dimension), while the vehicle speed time sequence feature vector expresses the time sequence association features of the ego vehicle's speed data; this fusion improves the feature expression capability of the classification feature matrix. On the other hand, superimposing feature domains of different orders and source domains may make the overall feature distribution of the classification feature matrix, as an inter-domain fused feature, more discrete, which weakens its convergence to a single classification result under the classifier and thus affects the accuracy of the classification result. Therefore, the classification feature vector obtained by unfolding the classification feature matrix is preferably subjected to vector-normed Hilbert probability spatialization, expressed as:
$$v_i' = \frac{\exp(v_i)}{\|V\|_2^2}$$
wherein $V$ is the classification feature vector, $\|V\|_2$ its two-norm, $\|V\|_2^2$ the square of the two-norm (i.e., the inner product of the classification feature vector with itself), $v_i$ the ith feature value of the classification feature vector $V$, and $v_i'$ the ith feature value of the optimized classification feature vector $V'$. Here, the vector-normed Hilbert probability spatialization probabilistically interprets the classification feature vector $V$ within the Hilbert space that defines the vector inner product, through the self-norm of the classification feature vector $V$, and reduces the hidden disturbance that the class expressions of the locally concatenated distributions of $V$ exert on the class expression of the overall Hilbert-space topology. This improves the robustness of the convergence of the feature distribution of $V$ to a single predetermined classification result and, by establishing a metric-induced probability space structure, strengthens the long-range dependence of the feature distribution of $V$ on that single classification result across the classifier. In this way, the dependence of the optimized classification feature vector $V'$ on a single classification result during classification is improved, and the accuracy of the classification result is improved, so that an auxiliary early warning of whether the vehicle will collide at the current speed can be issued accurately and in real time to ensure the driver's driving safety.
FIG. 6 is a block diagram of an optimization module in an AR-based vehicle speed assist system according to an embodiment of the present application. As shown in fig. 6, the optimization module 360 includes: an unfolding unit 361, configured to unfold the classification feature matrix into a classification feature vector according to rows or columns; a feature optimization unit 362, configured to perform vector-normed Hilbert probability spatialization on the classification feature vector according to the following formula to obtain an optimized classification feature vector; wherein, the formula is:
$$v_i' = \frac{\exp(v_i)}{\|V\|_2^2}$$
wherein $V$ is the classification feature vector, $\|V\|_2$ the two-norm of the classification feature vector, $\|V\|_2^2$ the square of the two-norm, $v_i$ the ith feature value of the classification feature vector, $\exp(\cdot)$ the position-wise natural exponential of a vector (the natural exponential function raised to the power of the feature value at each position), and $v_i'$ the ith feature value of the optimized classification feature vector; and a matrix reconstruction unit 363, configured to perform matrix reconstruction on the optimized classification feature vector to obtain the optimized classification feature matrix.
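A sketch of the optimization module under the formula reconstruction above; the published formula image is not machine-readable, so the exponential-over-squared-norm form is an assumption drawn from the symbol definitions, and the code assumes roughly unit-scale features to keep the exponential bounded.

```python
import torch

def feature_distribution_modulation(m1: torch.Tensor) -> torch.Tensor:
    """Unfold the classification feature matrix by rows, apply the
    vector-normed Hilbert probabilistic mapping v_i' = exp(v_i) / ||V||_2^2
    (assumed form; see the lead-in), and reconstruct the matrix."""
    v = m1.reshape(-1)                  # unfold the matrix into a vector
    v_opt = torch.exp(v) / v.dot(v)     # ||V||_2^2 = inner product of V with itself
    return v_opt.reshape(m1.shape)      # matrix reconstruction
```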
Specifically, during the operation of the AR-based vehicle speed assistance system 300, the early warning module 370 and the display module 380 are configured to pass the optimized classification feature matrix through a classifier to obtain a classification result, where the classification result indicates whether a vehicle speed early warning prompt is generated, and to display the vehicle speed early warning prompt on the vehicle-mounted screen in response to the classification result being that the prompt is generated. That is, the optimized classification feature matrix is passed through the classifier to obtain the classification result indicating whether a vehicle speed early warning prompt is generated. In a specific example of the present application, the classifier processes the optimized classification feature matrix according to the following formula to obtain the classification result:
$$O = \mathrm{softmax}\{(W_n, B_n) : \cdots : (W_1, B_1) \mid \mathrm{Project}(F)\}$$
where $\mathrm{Project}(F)$ denotes projecting the optimized classification feature matrix as a vector, $W_1$ to $W_n$ are the weight matrices of the fully connected layers, and $B_1$ to $B_n$ are their bias vectors. Specifically, the classifier includes a plurality of fully connected layers and a Softmax layer cascaded after the last fully connected layer. In the classification process, the optimized classification feature matrix is first projected as a vector, for example by unfolding it along a row vector or a column vector into a classification feature vector; the classification feature vector is then fully connection-encoded by the plurality of fully connected layers of the classifier to obtain an encoded classification feature vector; finally, the encoded classification feature vector is input to the Softmax layer of the classifier, i.e., it is classified by the Softmax classification function to obtain a classification label. In the technical solution of the present application, the labels of the classifier are "generate a vehicle speed early warning prompt" and "do not generate a vehicle speed early warning prompt", and the classifier determines, through the soft-max function, which classification label the optimized classification feature matrix belongs to. In this way, an early warning can be issued when the vehicle speed is high enough to risk a collision, and, in response to the classification result being that a vehicle speed early warning prompt is generated, the prompt is displayed on the vehicle-mounted screen to ensure the driver's driving safety.
Fig. 7 is a block diagram of an early warning module in an AR-based vehicle speed assistance system according to an embodiment of the present application. As shown in fig. 7, the early warning module 370 includes: a classification feature vector generation unit 371, configured to unfold the optimized classification feature matrix into a classification feature vector based on a row vector or a column vector; a full-connection encoding unit 372, configured to perform full-connection encoding on the classification feature vector using the plurality of fully connected layers of the classifier to obtain an encoded classification feature vector; and a classification result generation unit 373, configured to pass the encoded classification feature vector through the Softmax classification function of the classifier to obtain the classification result.
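A minimal sketch of such a classifier, assuming PyTorch; the number and width of the fully connected layers are illustrative assumptions.

```python
import torch
import torch.nn as nn

class WarningClassifier(nn.Module):
    """Project the optimized classification feature matrix to a vector, encode
    it with fully connected layers, and apply Softmax over the two labels
    (generate / do not generate the vehicle speed early warning prompt)."""
    def __init__(self, in_features: int, hidden: int = 128):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, m_opt: torch.Tensor) -> torch.Tensor:
        vec = m_opt.reshape(1, -1)                    # Project(F): matrix -> vector
        return torch.softmax(self.fc(vec), dim=-1)    # probability per label
```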
In summary, the AR-based vehicle speed assistance system 300 according to the embodiment of the present application has been described. It uses a deep-learning neural network model to extract the complex mapping relationship between the relative time sequence change of the inter-vehicle distance in the front vehicle detection images and the change of the vehicle speed value, so as to provide an auxiliary early warning of whether a vehicle collision will occur at the current vehicle speed and thereby ensure the driver's driving safety.
As described above, the AR-based vehicle speed assistance system according to the embodiment of the present application may be implemented in various terminal devices. In one example, the AR-based vehicle speed assistance system 300 may be integrated into the terminal device as a software module and/or a hardware module. For example, it may be a software module in the operating system of the terminal device, or an application developed for the terminal device; equally, it may be one of the many hardware modules of the terminal device.
Alternatively, in another example, the AR-based vehicle speed assist system 300 and the terminal device may be separate devices, in which case the AR-based vehicle speed assist system 300 may be connected to the terminal device through a wired and/or wireless network and transmit interactive information in an agreed data format.
Exemplary method
Fig. 8 is a flowchart of an AR-based vehicle speed assistance method according to an embodiment of the present application. As shown in fig. 8, the AR-based vehicle speed assistance method according to the embodiment of the present application includes the steps of: S110, acquiring front vehicle detection images at a plurality of predetermined time points within a predetermined time period, collected by a camera, and vehicle speed values at the plurality of predetermined time points; S120, passing the front vehicle detection images at the plurality of predetermined time points through a convolutional neural network model serving as a filter to obtain a plurality of front vehicle detection feature vectors; S130, passing the plurality of front vehicle detection feature vectors through a converter-based context encoder to obtain a front vehicle relative position time sequence feature vector; S140, arranging the vehicle speed values at the plurality of predetermined time points into a vehicle speed input vector along the time dimension and then obtaining a vehicle speed time sequence feature vector through a multi-scale neighborhood feature extraction module; S150, performing association encoding on the vehicle speed time sequence feature vector and the front vehicle relative position time sequence feature vector to obtain a classification feature matrix; S160, performing feature distribution modulation on the classification feature matrix to obtain an optimized classification feature matrix; S170, passing the optimized classification feature matrix through a classifier to obtain a classification result, the classification result indicating whether a vehicle speed early warning prompt is generated; and S180, in response to the classification result being that a vehicle speed early warning prompt is generated, displaying the vehicle speed early warning prompt on a vehicle-mounted screen.
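Putting the steps together, the following is a hedged sketch of one pass of the method, where cnn, context_encode, multiscale, and classifier stand for the hypothetical components sketched in the system section above.

```python
import torch

def vehicle_speed_assist_step(frames, speeds, cnn, context_encode,
                              multiscale, classifier, warn_label: int = 0):
    """One pass of steps S110-S180; warn_label marks which output index
    means "generate the prompt" (an assumption, not stated in the patent)."""
    feats = torch.stack([cnn(f) for f in frames])        # S120: feature vectors
    v_n = context_encode(feats)                          # S130: context encoding
    v_m = multiscale(torch.as_tensor(speeds, dtype=torch.float32))  # S140
    m1 = torch.outer(v_m, v_n)                           # S150: association coding
    flat = m1.reshape(-1)
    m_opt = (torch.exp(flat) / flat.dot(flat)).reshape(m1.shape)    # S160
    probs = classifier(m_opt)                            # S170: classification
    return bool(probs.argmax(dim=-1).item() == warn_label)  # S180: show prompt?
```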
In one example, in the above-mentioned AR-based vehicle speed assistance method, the step S120 includes: each layer of the convolutional neural network model serving as the filter performs, in its forward pass, the following steps on the input data: performing convolution processing on the input data to obtain a convolution feature map; pooling the convolution feature map based on the feature matrix to obtain a pooled feature map; and performing nonlinear activation on the pooled feature map to obtain an activation feature map; wherein the output of the last layer of the convolutional neural network serving as the filter is the plurality of front vehicle detection feature vectors, and the input of the first layer is the front vehicle detection images at the plurality of predetermined time points.
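A minimal sketch of such a filter network, assuming PyTorch; channel counts, kernel sizes, and the output dimension are illustrative.

```python
import torch
import torch.nn as nn

class FilterCNN(nn.Module):
    """Each stage applies convolution, pooling, then nonlinear activation,
    mirroring the per-layer forward pass described above."""
    def __init__(self, out_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, out_dim)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: [3, H, W] front vehicle detection image for one time point.
        x = self.features(image.unsqueeze(0)).flatten(1)   # [1, 32]
        return self.proj(x).squeeze(0)                     # detection feature vector
```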
In one example, in the above-mentioned AR-based vehicle speed assistance method, the step S130 includes: performing transformer-style global context semantic encoding on the plurality of front vehicle detection feature vectors with the converter of the context encoder, which includes the embedding layer, to obtain a plurality of global context semantic front vehicle detection feature vectors; and cascading the global context semantic front vehicle detection feature vectors to obtain the front vehicle relative position time sequence feature vector. Here, the global context semantic encoding comprises: arranging the plurality of front vehicle detection feature vectors one-dimensionally to obtain a global front vehicle detection feature vector; calculating the product between the global front vehicle detection feature vector and the transpose of each front vehicle detection feature vector to obtain a plurality of self-attention association matrices; standardizing each of the self-attention association matrices to obtain a plurality of standardized self-attention association matrices; passing each standardized self-attention association matrix through a Softmax classification function to obtain a plurality of probability values; weighting each front vehicle detection feature vector by its probability value to obtain a plurality of context semantic front vehicle detection feature vectors; and cascading the context semantic front vehicle detection feature vectors to obtain the global context semantic front vehicle detection feature vector.
In one example, in the above-mentioned AR-based vehicle speed assistance method, the step S140 includes: inputting the vehicle speed input vector into a first convolution layer of the multi-scale neighborhood feature extraction module to obtain a first neighborhood scale vehicle speed time sequence feature vector, wherein the first convolution layer is provided with a first one-dimensional convolution kernel with a first length; inputting the vehicle speed input vector into a second convolution layer of the multi-scale neighborhood feature extraction module to obtain a second neighborhood scale vehicle speed time sequence feature vector, wherein the second convolution layer is provided with a second one-dimensional convolution kernel with a second length, and the first length is different from the second length; and cascading the first neighborhood scale vehicle speed time sequence feature vector and the second neighborhood scale vehicle speed time sequence feature vector to obtain the vehicle speed time sequence feature vector. Wherein, the multiscale neighborhood feature extraction module comprises: the device comprises a first convolution layer, a second convolution layer parallel to the first convolution layer and a multi-scale feature fusion layer connected with the first convolution layer and the second convolution layer, wherein the first convolution layer uses a one-dimensional convolution kernel with a first length, and the second convolution layer uses a one-dimensional convolution kernel with a second length. Specifically, inputting the vehicle speed input vector into a first convolution layer of the multi-scale neighborhood feature extraction module to obtain a first neighborhood-scale vehicle speed time sequence feature vector, including: using a first convolution layer of the multi-scale neighborhood feature extraction module to perform one-dimensional convolution coding on the vehicle speed input vector according to the following formula so as to obtain a first neighborhood scale vehicle speed time sequence feature vector; wherein, the formula is:
$$V_1 = \sum_{a=0}^{w-1} F(a)\cdot G(X-a)$$
Wherein a is the width of the first convolution kernel in the X direction, F (a) is a first convolution kernel parameter vector, G (X-a) is a local vector matrix calculated by a convolution kernel function, w is the size of the first convolution kernel, and X represents the vehicle speed input vector; and inputting the vehicle speed input vector into a second convolution layer of the multi-scale neighborhood feature extraction module to obtain a second neighborhood scale vehicle speed time sequence feature vector, comprising: performing one-dimensional convolution encoding on the vehicle speed input vector by using a second convolution layer of the multi-scale neighborhood feature extraction module according to the following formula to obtain a second neighborhood scale vehicle speed time sequence feature vector; wherein, the formula is:
$$V_2 = \sum_{b=0}^{m-1} F(b)\cdot G(X-b)$$
wherein b is the width of the second convolution kernel in the X direction, F (b) is a second convolution kernel parameter vector, G (X-b) is a local vector matrix calculated by a convolution kernel function, m is the size of the second convolution kernel, and X represents the vehicle speed input vector.
In one example, in the above-mentioned AR-based vehicle speed assistance method, the step S150 includes: performing association coding on the vehicle speed time sequence feature vector and the front vehicle relative position time sequence feature vector by using the following formula to obtain a classification feature matrix; wherein, the formula is:
$$M_1 = V_m^{\top} \otimes V_n$$
wherein $V_m$ represents the vehicle speed time sequence feature vector, $V_m^{\top}$ the transpose of the vehicle speed time sequence feature vector, $V_n$ the front vehicle relative position time sequence feature vector, $M_1$ the classification feature matrix, and $\otimes$ vector multiplication.
In one example, in the above-mentioned AR-based vehicle speed assistance method, the step S160 includes: unfolding the classification feature matrix into a classification feature vector according to rows or columns; performing vector-normed Hilbert probability spatialization on the classification feature vector using the following formula to obtain an optimized classification feature vector; wherein, the formula is:
$$v_i' = \frac{\exp(v_i)}{\|V\|_2^2}$$
wherein $V$ is the classification feature vector, $\|V\|_2$ the two-norm of the classification feature vector, $\|V\|_2^2$ the square of the two-norm, $v_i$ the ith feature value of the classification feature vector, $\exp(\cdot)$ the position-wise natural exponential of a vector, and $v_i'$ the ith feature value of the optimized classification feature vector; and performing matrix reconstruction on the optimized classification feature vector to obtain the optimized classification feature matrix.
In one example, in the above-mentioned AR-based vehicle speed assistance method, the step S170 includes: unfolding the optimized classification feature matrix into a classification feature vector based on a row vector or a column vector; performing full-connection encoding on the classification feature vector using the plurality of fully connected layers of the classifier to obtain an encoded classification feature vector; and passing the encoded classification feature vector through the Softmax classification function of the classifier to obtain the classification result.
In summary, the AR-based vehicle speed assistance method according to the embodiment of the present application has been described. It uses a deep-learning neural network model to extract the complex mapping relationship between the relative time sequence change of the inter-vehicle distance in the front vehicle detection images and the change of the vehicle speed value, so as to provide an auxiliary early warning of whether a vehicle collision will occur at the current vehicle speed and thereby ensure the driver's driving safety.
Exemplary electronic device
Next, an electronic device according to an embodiment of the present application is described with reference to fig. 9.
Fig. 9 illustrates a block diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 9, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 11 to implement the functions of the AR-based vehicle speed assistance system of the various embodiments of the present application described above and/or other desired functions. Various content, such as the optimized classification feature matrix, may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
The input means 13 may comprise, for example, a keyboard, a mouse, etc.
The output device 14 may output various information including the classification result and the like to the outside. The output means 14 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, only some of the components of the electronic device 10 relevant to the present application are shown in fig. 9 for simplicity, components such as buses, input/output interfaces, etc. being omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer readable storage Medium
In addition to the methods and apparatus described above, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the AR-based vehicle speed assistance method according to the various embodiments of the present application described in the "Exemplary Method" section of this specification.
The computer program product may have program code for performing the operations of the embodiments of the present application written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps of the AR-based vehicle speed assistance method according to the various embodiments of the present application described in the "Exemplary Method" section of this specification.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not intended to be limited to the details disclosed herein as such.
The block diagrams of the devices, apparatuses, equipment, and systems referred to in this application are illustrative examples only and are not intended to require or imply that connections, arrangements, or configurations must be made in the manner shown. As will be appreciated by one of skill in the art, such devices, apparatuses, equipment, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including but not limited to" and are used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or," unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to."
It is also noted that in the apparatus, devices, and methods of the present application, components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalent solutions of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. An AR-based vehicle speed assistance system, comprising:
the data acquisition module is used for acquiring front vehicle detection images of a plurality of preset time points in a preset time period acquired by the camera and vehicle speed values of the preset time points;
the image feature extraction module is used for respectively passing the front vehicle detection images at a plurality of preset time points through a convolutional neural network model serving as a filter to obtain a plurality of front vehicle detection feature vectors;
the time sequence associated coding module is used for enabling the plurality of front vehicle detection feature vectors to pass through a context encoder based on a converter to obtain front vehicle relative position time sequence feature vectors;
the vehicle speed time sequence feature extraction module is used for arranging the vehicle speed values of the plurality of preset time points into vehicle speed input vectors according to time dimensions and then obtaining vehicle speed time sequence feature vectors through the multi-scale neighborhood feature extraction module;
The association module is used for carrying out association coding on the vehicle speed time sequence feature vector and the front vehicle relative position time sequence feature vector so as to obtain a classification feature matrix;
the optimizing module is used for carrying out feature distribution modulation on the classification feature matrix to obtain an optimized classification feature matrix;
the early warning module is used for enabling the optimized classification feature matrix to pass through a classifier to obtain a classification result, wherein the classification result is used for indicating whether a vehicle speed early warning prompt is generated or not; and
and the display module is used for responding to the classification result to generate a vehicle speed early warning prompt and displaying the vehicle speed early warning prompt on a vehicle-mounted screen.
2. The AR-based vehicle speed assistance system of claim 1, wherein the image feature extraction module is configured to: each layer of the convolutional neural network model used as the filter performs the following steps on input data in forward transfer of the layer:
carrying out convolution processing on input data to obtain a convolution characteristic diagram;
pooling the convolution feature images based on a feature matrix to obtain pooled feature images; and
non-linear activation is carried out on the pooled feature map so as to obtain an activated feature map;
wherein the output of the last layer of the convolutional neural network as a filter is the plurality of front vehicle detection feature vectors, and the input of the first layer of the convolutional neural network as a filter is the front vehicle detection images of the plurality of predetermined time points.
3. The AR-based vehicle speed assistance system of claim 2, wherein the timing-related encoding module comprises:
a context coding unit, configured to perform global context semantic coding based on a converter concept on the plurality of front car detection feature vectors by using a converter of the context encoder including the embedded layer to obtain a plurality of global context semantic front car detection feature vectors; and
and the cascading unit is used for cascading the global context semantic front vehicle detection feature vectors to obtain the front vehicle relative position time sequence feature vector.
4. The AR-based vehicle speed assistance system according to claim 3, wherein the context encoding unit includes:
the query vector construction subunit is used for carrying out one-dimensional arrangement on the plurality of front vehicle detection feature vectors to obtain global front vehicle detection feature vectors;
a self-attention subunit, configured to calculate a product between the global front-vehicle detection feature vector and a transpose vector of each front-vehicle detection feature vector in the plurality of front-vehicle detection feature vectors to obtain a plurality of self-attention correlation matrices;
the normalization subunit is used for respectively performing normalization processing on each self-attention correlation matrix in the plurality of self-attention correlation matrices to obtain a plurality of normalized self-attention correlation matrices;
The attention calculating subunit is used for obtaining a plurality of probability values through a Softmax classification function by each normalized self-attention correlation matrix in the normalized self-attention correlation matrices;
the attention applying subunit is used for weighting each front vehicle detection feature vector in the front vehicle detection feature vectors by taking each probability value in the probability values as a weight so as to obtain the context semantic front vehicle detection feature vectors;
and the cascading subunit is used for cascading the context semantic front vehicle detection feature vectors to obtain the global context semantic front vehicle detection feature vector.
5. The AR-based vehicle speed assistance system of claim 4, wherein the multi-scale neighborhood feature extraction module comprises: the device comprises a first convolution layer, a second convolution layer parallel to the first convolution layer and a multi-scale feature fusion layer connected with the first convolution layer and the second convolution layer, wherein the first convolution layer uses a one-dimensional convolution kernel with a first length, and the second convolution layer uses a one-dimensional convolution kernel with a second length.
6. The AR-based vehicle speed assist system of claim 5 wherein the vehicle speed timing feature extraction module comprises:
A first neighborhood scale feature extraction unit, configured to input the vehicle speed input vector into a first convolution layer of the multi-scale neighborhood feature extraction module to obtain a first neighborhood scale vehicle speed time sequence feature vector, where the first convolution layer has a first one-dimensional convolution kernel with a first length;
a second neighborhood scale feature extraction unit, configured to input the vehicle speed input vector into a second convolution layer of the multi-scale neighborhood feature extraction module to obtain a second neighborhood scale vehicle speed time sequence feature vector, where the second convolution layer has a second one-dimensional convolution kernel with a second length, and the first length is different from the second length; and
and the multiscale fusion unit is used for cascading the first neighborhood scale vehicle speed time sequence feature vector and the second neighborhood scale vehicle speed time sequence feature vector to obtain the vehicle speed time sequence feature vector.
The first neighborhood scale feature extraction unit is configured to: using a first convolution layer of the multi-scale neighborhood feature extraction module to perform one-dimensional convolution coding on the vehicle speed input vector according to the following formula so as to obtain a first neighborhood scale vehicle speed time sequence feature vector;
wherein, the formula is:
$$V_1 = \sum_{a=0}^{w-1} F(a)\cdot G(X-a)$$
Wherein a is the width of the first convolution kernel in the X direction, F (a) is a first convolution kernel parameter vector, G (X-a) is a local vector matrix calculated by a convolution kernel function, w is the size of the first convolution kernel, and X represents the vehicle speed input vector; and
the second neighborhood scale feature extraction unit is configured to: performing one-dimensional convolution encoding on the vehicle speed input vector by using a second convolution layer of the multi-scale neighborhood feature extraction module according to the following formula to obtain a second neighborhood scale vehicle speed time sequence feature vector;
wherein, the formula is:
$$V_2 = \sum_{b=0}^{m-1} F(b)\cdot G(X-b)$$
wherein b is the width of the second convolution kernel in the X direction, F (b) is a second convolution kernel parameter vector, G (X-b) is a local vector matrix calculated by a convolution kernel function, m is the size of the second convolution kernel, and X represents the vehicle speed input vector.
7. The AR-based vehicle speed assistance system of claim 6, wherein the association module is configured to: performing association coding on the vehicle speed time sequence feature vector and the front vehicle relative position time sequence feature vector by using the following formula to obtain a classification feature matrix;
wherein, the formula is:
$$M_1 = V_m^{\top} \otimes V_n$$
wherein $V_m$ represents the vehicle speed time sequence feature vector, $V_m^{\top}$ the transpose of the vehicle speed time sequence feature vector, $V_n$ the front vehicle relative position time sequence feature vector, $M_1$ the classification feature matrix, and $\otimes$ vector multiplication.
8. The AR-based vehicle speed assistance system of claim 7, wherein the optimization module comprises:
the unfolding unit is used for unfolding the classification characteristic matrix into classification characteristic vectors according to rows or columns;
the feature optimization unit is used for performing vector-normed Hilbert probability spatialization on the classification feature vector according to the following formula to obtain an optimized classification feature vector;
wherein, the formula is:
$$v_i' = \frac{\exp(v_i)}{\|V\|_2^2}$$
wherein $V$ is the classification feature vector, $\|V\|_2$ the two-norm of the classification feature vector, $\|V\|_2^2$ the square of the two-norm, $v_i$ the ith feature value of the classification feature vector, $\exp(\cdot)$ the position-wise natural exponential of a vector, and $v_i'$ the ith feature value of the optimized classification feature vector; and
and the matrix reconstruction unit is used for carrying out matrix reconstruction on the optimized classification characteristic vector so as to obtain the optimized classification characteristic matrix.
9. The AR-based vehicle speed assistance system of claim 8, wherein the pre-warning module comprises:
a classification feature vector generation unit for unfolding the optimized classification feature matrix into a classification feature vector based on a row vector or a column vector;
the full-connection coding unit is used for carrying out full-connection coding on the classification characteristic vectors by using a plurality of full-connection layers of the classifier so as to obtain coded classification characteristic vectors; and
and the classification result generation unit is used for passing the coding classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
10. An AR-based vehicle speed assistance method, comprising:
acquiring front vehicle detection images of a plurality of preset time points in a preset time period acquired by a camera and vehicle speed values of the preset time points;
respectively passing the front vehicle detection images at a plurality of preset time points through a convolutional neural network model serving as a filter to obtain a plurality of front vehicle detection feature vectors;
passing the plurality of front vehicle detection feature vectors through a context encoder based on a converter to obtain front vehicle relative position time sequence feature vectors;
The vehicle speed values of the plurality of preset time points are arranged into vehicle speed input vectors according to time dimensions, and then the vehicle speed input vectors are processed through a multi-scale neighborhood feature extraction module to obtain vehicle speed time sequence feature vectors;
performing association coding on the vehicle speed time sequence feature vector and the front vehicle relative position time sequence feature vector to obtain a classification feature matrix;
performing feature distribution modulation on the classification feature matrix to obtain an optimized classification feature matrix;
passing the optimized classification feature matrix through a classifier to obtain a classification result, the classification result indicating whether a vehicle speed early warning prompt is generated; and
and responding to the classification result to generate a vehicle speed early warning prompt, and displaying the vehicle speed early warning prompt on a vehicle-mounted screen.
CN202310324918.0A 2023-03-29 2023-03-29 AR-based vehicle speed assist system and method thereof Pending CN116279504A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310324918.0A CN116279504A (en) 2023-03-29 2023-03-29 AR-based vehicle speed assist system and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310324918.0A CN116279504A (en) 2023-03-29 2023-03-29 AR-based vehicle speed assist system and method thereof

Publications (1)

Publication Number Publication Date
CN116279504A true CN116279504A (en) 2023-06-23

Family

ID=86803116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310324918.0A Pending CN116279504A (en) 2023-03-29 2023-03-29 AR-based vehicle speed assist system and method thereof

Country Status (1)

Country Link
CN (1) CN116279504A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117429419A (en) * 2023-09-13 2024-01-23 江苏大块头智驾科技有限公司 Automatic driving method applied to port and driving vehicle
CN117429419B (en) * 2023-09-13 2024-05-10 江苏大块头智驾科技有限公司 Automatic driving method applied to port and driving vehicle
CN116985793A (en) * 2023-09-26 2023-11-03 深圳市交投科技有限公司 Automatic driving safety control system and method based on deep learning algorithm
CN116985793B (en) * 2023-09-26 2023-12-22 深圳市交投科技有限公司 Automatic driving safety control system and method based on deep learning algorithm
CN117382435A (en) * 2023-10-17 2024-01-12 浙江加力仓储设备股份有限公司 Vehicle speed control method and system based on dip angle monitoring
CN117382435B (en) * 2023-10-17 2024-05-03 浙江加力仓储设备股份有限公司 Vehicle speed control method and system based on dip angle monitoring

Similar Documents

Publication Publication Date Title
CN116279504A (en) AR-based vehicle speed assist system and method thereof
US20210342997A1 (en) Computer Vision Systems and Methods for Vehicle Damage Detection with Reinforcement Learning
KR102436962B1 (en) An electronic device and Method for controlling the electronic device thereof
CN115783923B (en) Elevator fault mode identification system based on big data
CN116373732A (en) Control method and system for vehicle indicator lamp
CN114724386B (en) Short-time traffic flow prediction method and system under intelligent traffic and electronic equipment
Akai et al. Driving behavior modeling based on hidden markov models with driver's eye-gaze measurement and ego-vehicle localization
CN116015837A (en) Intrusion detection method and system for computer network information security
CN113935143A (en) Estimating collision probability by increasing severity level of autonomous vehicle
US20230230484A1 (en) Methods for spatio-temporal scene-graph embedding for autonomous vehicle applications
JP2009096365A (en) Risk recognition system
Al Mamun et al. Lane marking detection using simple encode decode deep learning technique: SegNet
CN116168243A (en) Intelligent production system and method for shaver
CN114021840A (en) Channel switching strategy generation method and device, computer storage medium and electronic equipment
CN116486622A (en) Traffic intelligent planning system and method based on road data
Mou et al. Driver emotion recognition with a hybrid attentional multimodal fusion framework
Zhang et al. Dynamic driving intention recognition of vehicles with different driving styles of surrounding vehicles
Zhang et al. Recognition method of abnormal driving behavior using the bidirectional gated recurrent unit and convolutional neural network
CN115586763A (en) Unmanned vehicle keeps away barrier test equipment
US11447127B2 (en) Methods and apparatuses for operating a self-driving vehicle
Kim et al. Explainable deep driving by visualizing causal attention
Liang et al. Multi-agent and driving behavior based rear-end collision alarm modeling and simulating
CN112115928A (en) Training method and detection method of neural network based on illegal parking vehicle labels
KR20220158187A (en) Deep learning-based simulation platform for detection of inadvertent driving linked to autonomous emergency braking system by driver state warning system
Barosan CarESP: an emotion vehicle with stress, personality and embodiment of emotions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination