CN109165562A - Neural network training method, lateral control method, apparatus, device, and medium - Google Patents
Neural network training method, lateral control method, apparatus, device, and medium Download PDF Info
- Publication number: CN109165562A
- Application number: CN201810845656.1A
- Authority
- CN
- China
- Prior art keywords
- neural network
- feature
- video sequence
- network
- application
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C5/00—Registering or indicating the working of vehicles
- G07C5/08—Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
- G07C5/0808—Diagnosing performance data
Abstract
Embodiments of the present application disclose a neural network training method, a lateral control method for intelligent vehicle driving, an apparatus, an electronic device, a computer-readable storage medium, and a computer program. The neural network training method includes: providing a video sequence sample to a neural network to be trained for vehicle lateral control prediction processing and to at least one auxiliary neural network, respectively; performing dimension transformation on a first feature map formed by at least one layer of the neural network for the video sequence sample, and/or performing dimension transformation on a second feature map formed by at least one layer of the auxiliary neural network for the video sequence sample, to obtain at least one pair of a first feature map and a second feature map having the same dimensions; and adjusting network parameters of the neural network according to the difference between the first feature map and the second feature map having the same dimensions.
Description
Technical field
The present application relates to computer vision technology, and in particular to a neural network training method, a neural network training apparatus, a lateral control method for intelligent vehicle driving, a lateral control apparatus for intelligent vehicle driving, an electronic device, a computer-readable storage medium, and a computer program.
Background art
Lateral control generally refers to control in the direction perpendicular to vehicle motion; it can be regarded as steering control of the vehicle.
Lateral control is one of the core technologies in the field of intelligent vehicle driving. How to achieve accurate lateral control of a vehicle, and in particular how to do so in the traffic environments of complex road conditions (for example, wide-angle bends, significant changes in road lighting, dimly lit tunnels, or straight roads with fast-moving traffic), is a technical problem that deserves attention.
Summary of the invention
Embodiments of the present application provide technical solutions for training a neural network and for lateral control in intelligent vehicle driving.
According to one aspect of the embodiments of the present application, a neural network training method is provided. The method includes: providing a video sequence sample to a neural network to be trained for vehicle lateral control prediction processing and to at least one auxiliary neural network, respectively; performing dimension transformation on a first feature map formed by at least one layer of the neural network for the video sequence sample, and/or performing dimension transformation on a second feature map formed by at least one layer of the auxiliary neural network for the video sequence sample, to obtain at least one pair of a first feature map and a second feature map having the same dimensions; and adjusting network parameters of the neural network according to the difference between the first feature map and the second feature map having the same dimensions.
In one embodiment of the present application, the auxiliary neural network includes a neural network for recognition, detection, segmentation, classification, and/or object tracking on traffic scene images.
In another embodiment of the present application, the auxiliary neural network is heterogeneous with the neural network to be trained.
In a further embodiment of the present application, performing dimension transformation on the first feature map formed by at least one layer of the neural network to be trained for the video sequence sample, and/or performing dimension transformation on the second feature map formed by at least one layer of the auxiliary neural network for the video sequence sample, includes: performing feature extraction on the video sequence sample by at least one layer of the neural network to obtain the first feature map; performing feature extraction on the video sequence sample by at least one layer of the auxiliary neural network to obtain the second feature map; and performing dimension transformation processing on the first feature map and/or the second feature map by a feature transfer neural network unit.
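The application does not fix the internal structure of the feature transfer neural network unit. As an illustration only, such a unit can be pictured as a 1x1 convolution (a learned channel projection) followed by spatial pooling; the NumPy sketch below uses random stand-in weights, and all names and shapes are assumptions rather than the patented implementation:

```python
import numpy as np

def feature_transfer_unit(fmap, out_channels, out_hw, rng=None):
    """Illustrative feature-transfer unit: a 1x1 convolution (channel
    projection) followed by average pooling, mapping a feature map of
    shape (C, H, W) to (out_channels, out_hw, out_hw). The 1x1 kernel
    weights here are random stand-ins for learned parameters."""
    rng = np.random.default_rng(0) if rng is None else rng
    c, h, w = fmap.shape
    weight = rng.standard_normal((out_channels, c)) / np.sqrt(c)
    projected = np.tensordot(weight, fmap, axes=([1], [0]))  # (out_channels, H, W)
    # Average-pool each spatial axis down to out_hw (assumes divisibility).
    sh, sw = h // out_hw, w // out_hw
    pooled = projected.reshape(out_channels, out_hw, sh, out_hw, sw).mean(axis=(2, 4))
    return pooled

# A "first feature map" from the network being trained and a "second
# feature map" from an auxiliary network, with mismatched dimensions:
first = np.ones((64, 32, 32))
second = np.ones((128, 16, 16))
aligned = feature_transfer_unit(first, out_channels=128, out_hw=16)
print(aligned.shape)  # (128, 16, 16) — now comparable with `second`
```

In a real unit the projection weights would be trained jointly with the network, consistent with the embodiments that adjust the parameters of each feature transfer neural network unit.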
In a further embodiment of the present application, performing feature extraction on the video sequence sample by at least one layer of the neural network to obtain the first feature map includes: performing feature extraction on the video sequence sample by each of multiple layers of the neural network, respectively, to obtain multiple first feature maps; and performing dimension transformation processing on the first feature map and/or the second feature map by a feature transfer neural network unit includes: performing dimension transformation processing on the first feature maps, respectively, by different first feature transfer neural network units separately connected to the multiple layers of the neural network. Performing feature extraction on the video sequence sample by at least one layer of the auxiliary neural network to obtain the second feature map includes: performing feature extraction on the video sequence sample by each of multiple layers of the auxiliary neural network, respectively, to obtain multiple second feature maps; and performing dimension transformation processing on the second feature maps, respectively, by different second feature transfer neural network units separately connected to the multiple layers of the auxiliary neural network.
In a further embodiment of the present application, the neural network to be trained and the auxiliary neural network are each divided into at least two feature levels, and layers in different feature levels are connected to different feature transfer neural network units, respectively.
In a further embodiment of the present application, adjusting the network parameters of the neural network according to the difference between the first feature map and the second feature map having the same dimensions includes: adjusting the network parameters of the neural network and of each feature transfer neural network unit according to the difference between the first feature map and the second feature map having the same dimensions, and according to the respective differences between the vehicle lateral control prediction information and auxiliary-task prediction information output by the neural network to be trained and the lateral control annotation information and auxiliary-task annotation information in the video sequence sample.
In a further embodiment of the present application, the vehicle lateral control prediction information includes a predicted steering wheel angle.
In a further embodiment of the present application, the auxiliary-task prediction information includes a predicted vehicle speed and/or a predicted steering wheel torque.
In a further embodiment of the present application, before the video sequence sample is respectively provided to the neural network to be trained for vehicle lateral control prediction processing and the at least one auxiliary neural network, the method further includes: pre-training the neural network based on multiple tasks, where the multiple tasks include a main task and at least one auxiliary task, the main task is a vehicle lateral control task, and the auxiliary task is another vehicle control task different from the main task.
In a further embodiment of the present application, the at least one auxiliary task includes a vehicle speed prediction task and/or a steering wheel torque prediction task.
In a further embodiment of the present application, pre-training the neural network based on multiple tasks includes: initializing the neural network; providing a video sequence sample to the neural network; and performing multi-task processing by the neural network, and adjusting the network parameters of the neural network according to the respective differences between the prediction results of the multi-task processing and the multi-task annotation information of the video sequence sample.
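As a non-authoritative sketch of such a multi-task pre-training objective, the per-task prediction/annotation differences can be combined into a single weighted loss; the task names, weights, and use of mean squared error below are illustrative assumptions, not details stated in the application:

```python
import numpy as np

def multitask_loss(preds, labels, weights):
    """Weighted sum of per-task mean squared errors between predictions
    and annotation values. The application only specifies that network
    parameters are adjusted from per-task differences."""
    return sum(
        weights[task] * np.mean((preds[task] - labels[task]) ** 2)
        for task in preds
    )

preds = {"steering_angle": np.array([0.10, -0.20]),   # main task
         "speed":          np.array([30.0, 31.0]),    # auxiliary task
         "torque":         np.array([1.0, 1.2])}      # auxiliary task
labels = {"steering_angle": np.array([0.12, -0.25]),
          "speed":          np.array([30.0, 30.0]),
          "torque":         np.array([1.1, 1.1])}
weights = {"steering_angle": 1.0, "speed": 0.1, "torque": 0.1}
loss = multitask_loss(preds, labels, weights)
print(round(loss, 6))  # 0.05245
```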
In a further embodiment of the present application, in the case where the neural network is a three-dimensional spatio-temporal residual convolutional neural network, initializing the neural network to be trained includes: initializing the neural network to be trained using a successfully trained two-dimensional residual convolutional neural network.
According to another aspect of the embodiments of the present application, a lateral control method for intelligent vehicle driving is provided. The method includes: obtaining a video sequence to be processed, where the video sequence to be processed includes a video sequence captured during vehicle travel; providing the video sequence to be processed to a neural network; and performing vehicle lateral control prediction processing on the video sequence to be processed by the neural network, and outputting vehicle lateral control prediction information, where the neural network is trained using the above neural network training method.
In one embodiment of the present application, the vehicle lateral control prediction information includes a predicted steering wheel angle.
In another embodiment of the present application, the neural network includes a three-dimensional spatio-temporal residual convolutional neural network, where the three dimensions include a time dimension.
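To illustrate what the time dimension adds, the following minimal NumPy sketch convolves a 3D kernel over a (frames, height, width) clip, so each output value mixes information across neighboring frames as well as neighboring pixels. It is a naive stand-in, not the spatio-temporal residual network of the application:

```python
import numpy as np

def conv3d_valid(video, kernel):
    """Naive 'valid' 3D convolution over a (T, H, W) clip with a
    (kT, kH, kW) kernel — the time axis T is the extra dimension a
    spatio-temporal network convolves over."""
    T, H, W = video.shape
    kT, kH, kW = kernel.shape
    out = np.zeros((T - kT + 1, H - kH + 1, W - kW + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[t, i, j] = np.sum(video[t:t+kT, i:i+kH, j:j+kW] * kernel)
    return out

clip = np.ones((8, 16, 16))          # 8 frames of 16x16 features
kernel = np.full((3, 3, 3), 1 / 27)  # averaging kernel over space and time
out = conv3d_valid(clip, kernel)
print(out.shape)  # (6, 14, 14)
```

A residual variant would add the (suitably cropped or padded) input back onto this output; the application initializes such 3D networks from a trained 2D residual network.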
According to another aspect of the embodiments of the present application, a neural network training apparatus is provided. The apparatus includes: a video sequence sample providing module, configured to respectively provide a video sequence sample to a neural network to be trained for vehicle lateral control prediction processing and to at least one auxiliary neural network; a dimension transformation module, configured to perform dimension transformation on a first feature map formed by at least one layer of the neural network for the video sequence sample, and/or perform dimension transformation on a second feature map formed by at least one layer of the auxiliary neural network for the video sequence sample, to obtain at least one pair of a first feature map and a second feature map having the same dimensions; and a network parameter adjustment module, configured to adjust network parameters of the neural network according to the difference between the first feature map and the second feature map having the same dimensions.
In one embodiment of the present application, the auxiliary neural network includes a neural network for recognition, detection, segmentation, classification, and/or object tracking on traffic scene images.
In another embodiment of the present application, the auxiliary neural network is heterogeneous with the neural network to be trained.
In a further embodiment of the present application, at least one layer of the neural network performs feature extraction on the video sequence sample to obtain a first feature map; at least one layer of the auxiliary neural network performs feature extraction on the video sequence sample to obtain a second feature map; and the dimension transformation module includes a feature transfer neural network unit configured to perform dimension transformation processing on the first feature map and/or the second feature map.
In a further embodiment of the present application, multiple layers of the neural network respectively perform feature extraction on the video sequence sample to obtain multiple first feature maps; the feature transfer neural network unit includes multiple first feature transfer neural network units separately connected to the multiple layers of the neural network, each first feature transfer neural network unit being configured to perform dimension transformation processing on the first feature map output by the corresponding layer of the neural network; multiple layers of the auxiliary neural network respectively perform feature extraction on the video sequence sample to obtain multiple second feature maps; and the feature transfer neural network unit includes multiple second feature transfer neural network units, each second feature transfer neural network unit being configured to perform dimension transformation processing on the second feature map output by the corresponding layer of the auxiliary neural network.
In a further embodiment of the present application, the neural network to be trained and the auxiliary neural network are each divided into at least two feature levels, and layers in different feature levels are connected to different feature transfer neural network units, respectively.
In a further embodiment of the present application, the network parameter adjustment module is further configured to adjust the network parameters of the neural network according to the difference between the first feature map and the second feature map having the same dimensions, and according to the respective differences between the vehicle lateral control prediction information and auxiliary-task prediction information output by the neural network to be trained and the lateral control annotation information and auxiliary-task annotation information in the video sequence sample.
In a further embodiment of the present application, the vehicle lateral control prediction information includes a predicted steering wheel angle.
In a further embodiment of the present application, the auxiliary-task prediction information includes a predicted vehicle speed and/or a predicted steering wheel torque.
In a further embodiment of the present application, the apparatus further includes a pre-training module configured to pre-train the neural network based on multiple tasks, where the multiple tasks include a main task and at least one auxiliary task, the main task is a vehicle lateral control task, and the auxiliary task is another vehicle control task different from the main task.
In a further embodiment of the present application, the at least one auxiliary task includes a vehicle speed prediction task and/or a steering wheel torque prediction task.
In a further embodiment of the present application, the pre-training module is further configured to: initialize the neural network; provide a video sequence sample to the neural network; and perform multi-task processing by the neural network, and adjust the network parameters of the neural network according to the respective differences between the prediction results of the multi-task processing and the multi-task annotation information of the video sequence sample.
In a further embodiment of the present application, in the case where the neural network is a three-dimensional spatio-temporal residual convolutional neural network, initializing the neural network to be trained includes: initializing the neural network to be trained using a successfully trained two-dimensional residual convolutional neural network.
According to another aspect of the embodiments of the present application, a lateral control apparatus for intelligent vehicle driving is provided. The apparatus includes: a video sequence obtaining module, configured to obtain a video sequence to be processed, where the video sequence to be processed includes a video sequence captured during vehicle travel; a video sequence providing module, configured to provide the video sequence to be processed to a neural network; and the neural network, configured to perform vehicle lateral control prediction processing on the video sequence to be processed and output vehicle lateral control prediction information, where the neural network is trained using the above neural network training apparatus.
In one embodiment of the present application, the vehicle lateral control prediction information includes a predicted steering wheel angle.
In another embodiment of the present application, the neural network includes a three-dimensional spatio-temporal residual convolutional neural network, where the three dimensions include a time dimension.
According to another aspect of the embodiments of the present application, an electronic device is provided, including: a memory for storing a computer program; and a processor for executing the computer program stored in the memory, where any of the method embodiments of the present application is implemented when the computer program is executed.
According to another aspect of the embodiments of the present application, a computer-readable storage medium is provided, on which a computer program is stored, where any of the method embodiments of the present application is implemented when the computer program is executed by a processor.
According to another aspect of the embodiments of the present application, a computer program is provided, including computer instructions, where any of the method embodiments of the present application is implemented when the computer instructions run in a processor of a device.
Based on the neural network training method, the neural network training apparatus, the lateral control method for intelligent vehicle driving, the lateral control apparatus for intelligent vehicle driving, the electronic device, the computer-readable storage medium, and the computer program provided by the present application, the neural network of the present application learns feature maps during training with the help of at least one successfully trained auxiliary neural network, so that through this learning the neural network acquires the ability to form feature maps quickly and accurately, which helps improve the accuracy of the lateral control information output by the neural network.
The technical solutions of the present application are described in further detail below with reference to the drawings and embodiments.
Brief description of the drawings
The drawings, which constitute a part of the specification, describe the embodiments of the present application and, together with the description, serve to explain the principles of the application.
The application can be understood more clearly from the following detailed description with reference to the drawings, in which:
Fig. 1 is a flowchart of an embodiment of the neural network training method of the present application;
Fig. 2 is a flowchart of an embodiment of the pre-training process of the present application;
Fig. 3 is a schematic diagram of an embodiment of the second-stage training of the neural network of the present application;
Fig. 4 is a flowchart of an embodiment of the lateral control method for intelligent vehicle driving of the present application;
Fig. 5 is a schematic structural diagram of an embodiment of the neural network training apparatus of the present application;
Fig. 6 is a schematic structural diagram of an embodiment of the lateral control apparatus for intelligent vehicle driving of the present application;
Fig. 7 is a block diagram of an exemplary device for implementing an embodiment of the present application.
Detailed description of the embodiments
Various exemplary embodiments of the present application are now described in detail with reference to the drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the application.
Meanwhile, it should be understood that, for ease of description, the sizes of the various parts shown in the drawings are not drawn according to actual proportional relationships.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended as a limitation on the application or on its use.
Techniques, methods, and devices known to a person of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be considered part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
Embodiments of the present application can be applied to electronic devices such as terminal devices, computer systems, and servers, which can operate together with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, distributed cloud computing environments including any of the above systems, and the like.
Electronic devices such as terminal devices, computer systems, and servers can be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. In general, program modules may include routines, programs, object programs, components, logic, data structures, and the like, which perform specific tasks or implement specific abstract data types. The computer system/server can be implemented in a distributed cloud computing environment, in which tasks are performed by remote processing devices linked through a communication network. In a distributed cloud computing environment, program modules may be located on local or remote computing system storage media including storage devices.
Exemplary embodiment
Fig. 1 is a flowchart of an embodiment of the neural network training method of the present application. As shown in Fig. 1, the method of this embodiment includes step S100, step S110, and step S120. The steps in Fig. 1 are as follows:
S100: respectively provide a video sequence sample to a neural network to be trained and to at least one auxiliary neural network, where the neural network to be trained is used for vehicle lateral control prediction processing.
S110: perform dimension transformation on a first feature map formed by at least one layer of the neural network for the video sequence sample, and/or perform dimension transformation on a second feature map formed by at least one layer of the auxiliary neural network for the video sequence sample, to obtain at least one pair of a first feature map and a second feature map having the same dimensions.
S120: adjust network parameters of the neural network according to the difference between the first feature map and the second feature map having the same dimensions.
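Steps S100 to S120 can be sketched end-to-end on toy linear "networks": a frozen auxiliary map supplies the second feature map, and the trainable parameters are adjusted by gradient descent to reduce the L2 difference between the two feature maps. Everything below (linear models, shapes, learning rate) is an illustrative assumption, not the patented architecture:

```python
import numpy as np

# Toy stand-ins: the "auxiliary network" is a frozen linear map producing
# the second feature map; the network being trained is another linear map
# producing the first feature map. S120 shrinks the L2 difference.
rng = np.random.default_rng(0)
x = rng.standard_normal((16,))          # a (flattened) video sequence sample
W_aux = rng.standard_normal((8, 16))    # frozen, "successfully trained"
W = np.zeros((8, 16))                   # network parameters to adjust
lr = 0.01

for step in range(500):
    first = W @ x                       # S110: first feature map
    second = W_aux @ x                  # S110: second feature map
    diff = first - second
    loss = np.sum(diff ** 2)            # L2 difference between the maps
    W -= lr * 2 * np.outer(diff, x)     # S120: gradient step on the difference
print(loss < 1e-6)  # True
```

In the application itself this feature-matching signal is combined with the lateral control and auxiliary-task prediction losses described below.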
The neural network in the present application is trained on the basis of learning the feature maps formed for the video sequence sample by at least one layer of at least one successfully trained auxiliary neural network. That is, during training, the neural network of the present application, with the help of at least one successfully trained auxiliary neural network, needs to learn the feature maps formed for the video sequence sample by one or more layers of each successfully trained auxiliary neural network.
As can be seen from the above description, during training the neural network of the present application learns feature maps with the help of at least one successfully trained auxiliary neural network (for example, multiple heterogeneous auxiliary neural networks or multiple homogeneous auxiliary neural networks), so that through this learning the neural network acquires the ability to form feature maps quickly and accurately for an input video sequence to be processed, which helps improve the accuracy of the lateral control information output by the neural network.
In an optional example, the neural network to be trained in step S100 can be a neural network after pre-training. For the pre-training process, reference may be made to the following description of Fig. 2, which is not detailed here.
In an optional example, the number of auxiliary neural networks used in the present application during training of the neural network to be trained is usually no fewer than two. Compared with an application scenario in which a single auxiliary neural network is used to train the neural network, using multiple auxiliary neural networks helps improve the neural network's ability to form feature maps quickly and accurately. Each auxiliary neural network in the present application is a successfully trained auxiliary neural network, that is, the feature maps formed for the video sequence sample by layers in the auxiliary neural network have good accuracy.
In an optional example, the auxiliary neural network used during training of the neural network of the present application can be a neural network heterogeneous with the neural network of the present application, which helps avoid restrictions on training the neural network. The auxiliary neural network can be a neural network for recognition, detection, segmentation, classification, and/or object tracking on traffic scene images; using such an auxiliary neural network helps improve the accuracy of the lateral control information output by the neural network. Of course, it can also be a neural network homogeneous with the neural network of the present application. The auxiliary neural network in the present application can adopt a PSPNet network structure, a FlowNet network structure, or the like. The present application does not limit the specific structure of the auxiliary neural network.
In an optional example, a successfully trained auxiliary neural network in the present application generally refers to an auxiliary neural network obtained after it has been successfully trained using image samples or video frame samples based on traffic scenes. Such an auxiliary neural network can be called a traffic-scene-based auxiliary neural network. By using traffic-scene-based auxiliary neural networks during training of the neural network to be trained, the present application helps the neural network form accurate feature maps. Since the auxiliary neural network in the present application can be heterogeneous with the neural network to be trained, the restriction of having to use a homogeneous neural network for training can be avoided, which helps improve the practicability of neural network training.
In an optional example, the present application can use the following three dimension transformation modes:
Mode 1: perform dimension transformation processing on the first feature map formed for the video sequence sample by at least one layer of the neural network to be trained, so that the transformed first feature map has the same dimensions as the second feature map formed for the video sequence sample by the corresponding layer of the auxiliary neural network;
Mode 2: perform dimension transformation processing on the second feature map formed for the video sequence sample by at least one layer of the auxiliary neural network, so that the transformed second feature map has the same dimensions as the first feature map formed for the video sequence sample by the corresponding layer of the neural network to be trained;
Mode 3: perform dimension transformation processing both on the first feature map formed for the video sequence sample by at least one layer of the neural network to be trained and on the second feature map formed for the video sequence sample by at least one layer of the auxiliary neural network, to form at least one pair of a first feature map and a second feature map having the same dimensions.
In an optional example, the present application may take the difference (for example, a similarity measure) between a first feature map and a second feature map having the same dimension as the first guidance information, and adjust the network parameters of the neural network with the goal of reducing this difference, for example by using a corresponding loss function (such as an L2 loss function). The network parameters of the neural network in the present application may include the convolution kernel parameters, the weight matrices, and the like; the present application does not limit the specific content of the network parameters.
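As a minimal illustration, the first guidance information can be measured with an L2 loss over two feature maps of identical dimension (the function name and array shapes below are hypothetical, not taken from the application):

```python
import numpy as np

def l2_feature_loss(first_feature_map, second_feature_map):
    """Mean squared (L2) difference between a first feature map and a
    second feature map of identical dimension; reducing this value is
    the goal when adjusting the network parameters."""
    assert first_feature_map.shape == second_feature_map.shape
    diff = first_feature_map - second_feature_map
    return float(np.mean(diff ** 2))

# two hypothetical feature maps of identical dimension 30x30x16
fm1 = np.zeros((30, 30, 16))
fm2 = np.ones((30, 30, 16))
print(l2_feature_loss(fm1, fm2))  # -> 1.0
```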
In an optional example, since the neural network of the present application also outputs vehicle lateral-control prediction information for the video sequence sample, the present application may, while using the above first guidance information, also take the difference between the lateral-control prediction information and the lateral-control annotation information of the video sequence sample as the second guidance information, and adjust the network parameters of the neural network with the goal of reducing all the differences, for example by using a corresponding loss function (such as an L2 loss function).
In an optional example, since the neural network of the present application also outputs auxiliary-task prediction information for the video sequence sample, the present application may, while using the above first guidance information and second guidance information, also take the differences between the predicted value of each auxiliary task and the corresponding annotation information of that auxiliary task in the video sequence sample as the third guidance information, and adjust the network parameters of the neural network with the goal of reducing all the differences, for example by using a corresponding loss function (such as an L2 loss function). Adjusting the network parameters in combination with the auxiliary-task prediction information enables the neural network to perform multi-task prediction.
In an optional example, the present application may use at least one feature transfer neural network unit to perform dimension transformation on the first feature map formed by at least one layer of the neural network to be trained for the video sequence sample and/or on the second feature map formed by at least one layer of the auxiliary neural network for the video sequence sample, so as to form multiple pairs of first and second feature maps having the same dimension. In general, a feature transfer neural network unit performs the dimension transformation of the first or second feature map in a neural-network-learning-based manner; for example, by performing an average pooling operation and a down-sampling operation on the first or second feature map, or by performing convolution processing and down-sampling processing on the first or second feature map. The present application does not limit the specific implementation of the learning-based dimension transformation of the first and second feature maps.
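A minimal numpy sketch of the average-pooling-plus-down-sampling option is given below (the pooling stride and shapes are illustrative assumptions; a learned convolution with down-sampling could serve the same role):

```python
import numpy as np

def avg_pool_downsample(feature_map, pool=2):
    """Reduce the spatial dimensions of an (H, W, C) feature map by
    averaging over non-overlapping pool x pool blocks; one of the
    dimension-transformation options described above."""
    h, w, c = feature_map.shape
    h2, w2 = h // pool, w // pool
    # group the map into pooling blocks and average over each block
    blocks = feature_map[:h2 * pool, :w2 * pool].reshape(h2, pool, w2, pool, c)
    return blocks.mean(axis=(1, 3))

fm = np.ones((80, 80, 64))            # hypothetical first feature map
print(avg_pool_downsample(fm).shape)  # -> (40, 40, 64)
```

A channel projection (for example, 64 to 16 channels) would additionally be needed to reach a target dimension such as 30×30×16; that step is omitted here.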
In an optional example, a feature transfer neural network unit may include one first feature transfer neural network unit and one second feature transfer neural network unit; such a feature transfer neural network unit may therefore be referred to as a dual feature transfer neural network unit. By using dual feature transfer neural network units, the present application enables the neural network to be trained to learn feature maps from a heterogeneous auxiliary neural network (for example, by imitation learning).
In an optional example, the first feature map output by the a1-th layer of the neural network to be trained is provided to first feature transfer neural network unit A1 in dual feature transfer neural network unit A, and first feature transfer neural network unit A1 converts the received first feature map into a first feature map of dimension N1; meanwhile, the second feature map output by the a2-th layer of the auxiliary neural network is provided to second feature transfer neural network unit A2 in dual feature transfer neural network unit A, and second feature transfer neural network unit A2 converts the received second feature map into a second feature map of dimension N1. A pair of a first feature map and a second feature map with dimension N1 is thereby obtained.
Correspondingly, the first feature map output by the b1-th layer of the neural network to be trained is provided to first feature transfer neural network unit B1 in dual feature transfer neural network unit B, and first feature transfer neural network unit B1 converts the received first feature map into a first feature map of dimension N2; meanwhile, the second feature map output by the b2-th layer of the auxiliary neural network is provided to second feature transfer neural network unit B2 in dual feature transfer neural network unit B, and second feature transfer neural network unit B2 converts the received second feature map into a second feature map of dimension N2. A pair of a first feature map and a second feature map with dimension N2 is thereby obtained.
By analogy, the present application can obtain multiple pairs of first and second feature maps with identical dimensions. It should be particularly noted that the dimensions of the feature-map pairs formed by different dual feature transfer neural network units may or may not be identical; for example, N1 and N2 above may or may not be equal.
In an optional example, the neural network to be trained in the present application is generally divided into at least two feature hierarchies, and likewise each auxiliary neural network in the present application is generally divided into the same number of feature hierarchies. Usually, one feature hierarchy in an auxiliary neural network corresponds to one dual feature transfer neural network unit; of course, the present application does not exclude the case where one feature hierarchy in an auxiliary neural network corresponds to two or more dual feature transfer neural network units. Dividing the networks into multiple feature hierarchies facilitates the feature learning of the neural network to be trained, and thus helps improve the accuracy of the lateral-control information output by the neural network.
In an optional example, in the case where the neural network to be trained, the first auxiliary neural network and the second auxiliary neural network are each divided into three feature hierarchies, namely a low feature hierarchy, a middle feature hierarchy and a high feature hierarchy:
the present application provides the first feature map output by a layer in the low feature hierarchy of the neural network to be trained and the second feature map output by a layer in the low feature hierarchy of the first auxiliary neural network to the first dual feature transfer neural network unit, respectively;
provides the first feature map output by a layer in the low feature hierarchy of the neural network to be trained and the second feature map output by a layer in the low feature hierarchy of the second auxiliary neural network to the second dual feature transfer neural network unit, respectively;
provides the first feature map output by a layer in the middle feature hierarchy of the neural network to be trained and the second feature map output by a layer in the middle feature hierarchy of the first auxiliary neural network to the third dual feature transfer neural network unit, respectively;
provides the first feature map output by a layer in the middle feature hierarchy of the neural network to be trained and the second feature map output by a layer in the middle feature hierarchy of the second auxiliary neural network to the fourth dual feature transfer neural network unit, respectively;
provides the first feature map output by a layer in the high feature hierarchy of the neural network to be trained and the second feature map output by a layer in the high feature hierarchy of the first auxiliary neural network to the fifth dual feature transfer neural network unit, respectively;
provides the first feature map output by a layer in the high feature hierarchy of the neural network to be trained and the second feature map output by a layer in the high feature hierarchy of the second auxiliary neural network to the sixth dual feature transfer neural network unit, respectively.
Thus, according to the difference between the two same-dimension first and second feature maps output by each of the first to sixth dual feature transfer neural network units, the difference between the lateral-control prediction information output by the neural network to be trained and the lateral-control annotation information of the video sequence sample, and the difference between the auxiliary-task prediction information output by the neural network to be trained and the auxiliary-task annotation information of the video sequence sample, the present application can adjust the network parameters of the neural network to be trained and of the six dual feature transfer neural network units by using an L2 loss function. The first auxiliary neural network and the second auxiliary neural network do not need to have their network parameters adjusted.
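The six-unit wiring above can be written out as a small configuration table; the level names and unit numbering below merely restate the pairing described in the text (a sketch, not the application's actual data layout):

```python
# Pairing of feature hierarchies to the six dual feature transfer neural
# network units: each entry is (hierarchy of the network to be trained,
# auxiliary network providing the second feature map, dual unit number).
DUAL_UNIT_PAIRINGS = [
    ("low",    "auxiliary network 1", 1),
    ("low",    "auxiliary network 2", 2),
    ("middle", "auxiliary network 1", 3),
    ("middle", "auxiliary network 2", 4),
    ("high",   "auxiliary network 1", 5),
    ("high",   "auxiliary network 2", 6),
]

# every hierarchy of the network to be trained is compared against both
# auxiliary networks, one dual unit per (hierarchy, auxiliary network) pair
print(len(DUAL_UNIT_PAIRINGS))  # -> 6
```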
In an optional example, the neural network to be trained in the present application is generally divided into at least two feature hierarchies, while the numbers of feature hierarchies into which the respective auxiliary neural networks are divided may differ. Usually, one feature hierarchy in an auxiliary neural network corresponds to one dual feature transfer neural network unit; of course, the present application does not exclude the case where one feature hierarchy in an auxiliary neural network corresponds to two or more dual feature transfer neural network units.
In an optional example, in the case where the neural network to be trained and the first auxiliary neural network are each divided into three feature hierarchies (a low, a middle and a high feature hierarchy) while the second auxiliary neural network is divided into two feature hierarchies (a low and a high feature hierarchy):
the present application provides the first feature map output by a layer in the low feature hierarchy of the neural network to be trained and the second feature map output by a layer in the low feature hierarchy of the first auxiliary neural network to the first dual feature transfer neural network unit, respectively;
provides the first feature map output by a layer in the low feature hierarchy of the neural network to be trained and the second feature map output by a layer in the low feature hierarchy of the second auxiliary neural network to the second dual feature transfer neural network unit, respectively;
provides the first feature map output by a layer in the middle feature hierarchy of the neural network to be trained and the second feature map output by a layer in the middle feature hierarchy of the first auxiliary neural network to the third dual feature transfer neural network unit, respectively;
provides the first feature map output by a layer in the high feature hierarchy of the neural network to be trained and the second feature map output by a layer in the high feature hierarchy of the second auxiliary neural network to the fourth dual feature transfer neural network unit, respectively;
provides the first feature map output by a layer in the high feature hierarchy of the neural network to be trained and the second feature map output by a layer in the high feature hierarchy of the first auxiliary neural network to the fifth dual feature transfer neural network unit, respectively.
Thus, according to the difference between the two same-dimension first and second feature maps output by each of the first to fifth dual feature transfer neural network units, the difference between the lateral-control prediction information output by the neural network to be trained and the lateral-control annotation information of the video sequence sample, and the difference between the auxiliary-task prediction information output by the neural network to be trained and the auxiliary-task annotation information of the video sequence sample, the present application can adjust the network parameters of the neural network to be trained and of the five dual feature transfer neural network units by using an L2 loss function. The first auxiliary neural network and the second auxiliary neural network do not need to have their network parameters adjusted.
In an optional example, the loss function used by the present application can be expressed in the form of the following formula (1):

L = L_s(p, p̂) + Σ_{l=1}^{M} α_l·L_l(b_l, b̂_l) + Σ_{k=1}^{K} β_k·Σ_{j=1}^{NA_k} L_mimic(Φ(e_j), Ψ(f_jk))    (1)

In the above formula (1), L denotes the total loss function; L_s(p, p̂) denotes the loss between the vehicle lateral-control prediction information p and the vehicle lateral-control annotation value p̂ of the video sequence sample; l denotes the l-th task among the multiple tasks performed by the neural network to be trained; M denotes the total number of tasks; α_l denotes the weight of the l-th task; L_l(b_l, b̂_l) denotes the loss between the predicted value b_l for the l-th task and the annotation value b̂_l for the l-th task in the video sequence sample; k denotes the k-th auxiliary neural network; K denotes the number of auxiliary neural networks; j denotes the j-th feature hierarchy of the k-th auxiliary neural network; NA_k denotes the number of feature hierarchies of the k-th auxiliary neural network; β_k denotes the weight of the k-th auxiliary neural network; L_mimic(Φ(e_j), Ψ(f_jk)) denotes the loss between feature map Φ(e_j) and feature map Ψ(f_jk); Φ(e_j) denotes the first feature map formed after the first feature map e_j, output by a layer in the j-th feature hierarchy of the neural network to be trained, undergoes dimension transformation in the first feature transfer neural network unit of the corresponding feature transfer neural network unit; and Ψ(f_jk) denotes the second feature map formed after the second feature map f_jk, output by a layer in the j-th feature hierarchy of the k-th auxiliary neural network, undergoes dimension transformation in the second feature transfer neural network unit of the corresponding feature transfer neural network unit.
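The loss of formula (1) can be sketched as follows; the function name, argument layout, and the use of an L2 form for every component loss are assumptions consistent with the examples in this application:

```python
import numpy as np

def formula_1_loss(p, p_hat, task_preds, task_labels, alphas,
                   mimic_pairs, betas):
    """Sketch of formula (1): a lateral-control term, weighted per-task
    terms, and weighted feature-mimic terms.

    mimic_pairs[k] holds the (Phi(e_j), Psi(f_jk)) same-dimension
    feature-map pairs for the k-th auxiliary network."""
    l2 = lambda x, y: float(np.mean((np.asarray(x) - np.asarray(y)) ** 2))
    loss = l2(p, p_hat)                           # lateral-control loss
    loss += sum(a * l2(b, b_hat)                  # M weighted task losses
                for a, b, b_hat in zip(alphas, task_preds, task_labels))
    for beta, pairs in zip(betas, mimic_pairs):   # K auxiliary networks
        loss += beta * sum(l2(phi_e, psi_f) for phi_e, psi_f in pairs)
    return loss
```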
In an optional example, the present application may divide the neural network to be trained uniformly in depth (for example, so that each feature hierarchy contains roughly the same number of layers), so that the supervision points of the neural network to be trained are distributed as evenly as possible. A supervision point in the present application refers to a point in the neural network to be trained that provides a first feature map to a dual feature transfer neural network unit. Of course, the present application may also divide the depth of the neural network to be trained and of the auxiliary neural networks non-uniformly. The present application does not limit the number of supervision points or their specific locations.
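A uniform depth division can be sketched as follows; the rounding rule and the convention that each level's last layer serves as a candidate supervision point are illustrative assumptions:

```python
def uniform_depth_division(num_layers, num_levels):
    """Split num_layers into num_levels feature hierarchies of roughly
    equal depth; returns the 1-based index of the layer closing each
    hierarchy (a candidate supervision point)."""
    return [round(num_layers * (i + 1) / num_levels)
            for i in range(num_levels)]

print(uniform_depth_division(10, 3))  # -> [3, 7, 10]
```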
In an optional example, this round of training ends when the training of the neural network to be trained reaches a predetermined iteration condition. The predetermined iteration condition in the present application may include: the difference between the first feature map formed by at least one layer of the neural network to be trained and the second feature map formed by at least one layer of the auxiliary neural network, the difference between the lateral-control prediction information and the lateral-control annotation information, and the difference between the auxiliary-task predicted values and the auxiliary-task annotation values all meeting a predetermined difference requirement. When each difference meets the predetermined difference requirement, this round of training is successfully completed. The predetermined iteration condition may also include: the number of video sequence samples used for training the neural network reaching a predetermined quantity. If the number of video sequence samples used reaches the predetermined quantity but the differences do not all meet the predetermined difference requirement, this round of training has not succeeded. A successfully trained neural network can be used for lateral-control processing in intelligent vehicle driving.
In an optional example, before performing the above training on the neural network to be trained, the present application may first perform multi-task-based pre-training on it, and execute the above training process only after the pre-training is successfully completed. The training process of the neural network to be trained can thus be divided into two stages: the first stage is the multi-task-based pre-training, and the second stage is the training based on the auxiliary neural networks.
In an optional example, the multiple tasks performed by the neural network to be trained may include: a vehicle steering-wheel angle prediction task, a vehicle speed prediction task, and a vehicle steering-wheel angle torque prediction task. The vehicle steering-wheel angle prediction task is the main task of the neural network to be trained, while the vehicle speed prediction task and the vehicle steering-wheel angle torque prediction task are its secondary tasks.
In an optional example, the first-stage training process of the neural network to be trained may be as shown in Fig. 2.
In Fig. 2, S200: initialize the neural network, i.e., initialize the neural network to be trained.
In an optional example, when the neural network to be trained is a three-dimensional spatio-temporal residual convolutional neural network, the present application may use a successfully trained two-dimensional residual convolutional neural network to initialize it. For example, the convolution kernel parameter matrix ww of the successfully trained two-dimensional residual convolutional neural network may be replicated t times, thereby expanding the convolution kernel parameter matrix ww into a three-dimensional convolution kernel parameter matrix wwt, which the present application can use as the convolution kernel parameter matrix of the neural network to be trained. The two-dimensional residual convolutional neural network in the present application may be one used for image classification or image detection, and the image samples used to train it may be traffic-scene images, or of course other kinds of images. Initializing the neural network to be trained with a successfully trained two-dimensional residual convolutional neural network facilitates the convergence of the first-stage training process.
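The replication of ww into wwt can be sketched as a simple stack along a new temporal axis:

```python
import numpy as np

def expand_to_3d(ww, t):
    """Replicate the 2-D convolution kernel parameter matrix ww t times
    along a new (temporal) axis, forming the 3-D kernel matrix wwt used
    to initialize the 3-D spatio-temporal network.

    (A common variant, not specified in the text, additionally divides
    by t so the initial 3-D response matches the 2-D one.)"""
    return np.stack([ww] * t, axis=0)

ww = np.random.rand(3, 3)  # stand-in values for a trained 3x3 2-D kernel
wwt = expand_to_3d(ww, t=3)
print(wwt.shape)           # -> (3, 3, 3)
```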
S210: provide the video sequence sample to the neural network.
S220: perform multi-task processing by the neural network, and adjust the network parameters of the neural network according to the differences between the prediction results of the respective tasks and the corresponding multi-task annotation information of the video sequence sample.
In an optional example, the present application may take the difference between the processing result of each task output by the neural network to be trained and the corresponding task annotation information of the video sequence sample as the guidance information, and, with the goal of reducing each difference, adjust the network parameters of the neural network to be trained by using an L2 loss function.
In an optional example, the loss function used in the first-stage training process of the present application can be expressed in the form of the following formula (2):

L = L_s(p, p̂) + Σ_{l=1}^{M} α_l·L_l(b_l, b̂_l)    (2)

The meaning of each symbol in formula (2) is the same as explained above for the corresponding symbol in formula (1), and is not repeated here.
An example of the second-stage training process of the neural network to be trained is shown in Fig. 3.
In Fig. 3, the neural network to be trained includes: multiple convolutional layers (Conv x1, Conv x2, Conv x3, Conv x4 and Conv x5 in Fig. 3), at least one fully connected layer, and an LSTM (Long Short-Term Memory) neural network. Fig. 3 only schematically shows five convolutional layers; this does not mean that the neural network to be trained contains exactly five convolutional layers, and in practical applications there may be multiple layers between two adjacent convolutional layers shown in Fig. 3.
The present application uses auxiliary neural network 1 and auxiliary neural network 2, and the neural network to be trained learns feature maps from these two auxiliary neural networks (for example, by imitation learning). After the video sequence sample is provided to the neural network to be trained, auxiliary neural network 1 and auxiliary neural network 2 respectively, the neural network to be trained, auxiliary neural network 1, auxiliary neural network 2 and the six dual feature transfer neural network units perform the following operations:
Feature map a output by Conv x1 of the neural network to be trained (for example, a feature map of size 80×80×64) is provided to first feature transfer neural network unit 1 in the first dual feature transfer neural network unit; after first feature transfer neural network unit 1 performs dimension-transformation processing on feature map a (for example, convolution processing and down-sampling processing), feature map a′ with a predetermined size is formed (for example, a feature map of size 30×30×16). Feature map b output by Conv y1 of auxiliary neural network 1 (for example, a feature map of size 256×320×64) is provided to second feature transfer neural network unit 1 in the first dual feature transfer neural network unit; after second feature transfer neural network unit 1 performs dimension-transformation processing on feature map b (for example, average pooling processing and down-sampling processing), feature map b′ with the predetermined size is formed (for example, a feature map of size 30×30×16).
Feature map c output by Conv x3 of the neural network to be trained (for example, a feature map of size 80×80×64) is provided to first feature transfer neural network unit 2 in the second dual feature transfer neural network unit; after first feature transfer neural network unit 2 performs dimension-transformation processing on feature map c (for example, convolution processing and down-sampling processing), feature map c′ with the predetermined size is formed (for example, a feature map of size 30×30×16). Feature map d output by Conv y3 of auxiliary neural network 1 (for example, a feature map of size 256×320×64) is provided to second feature transfer neural network unit 2 in the second dual feature transfer neural network unit; after second feature transfer neural network unit 2 performs dimension-transformation processing on feature map d (for example, average pooling processing and down-sampling processing), feature map d′ with the predetermined size is formed (for example, a feature map of size 30×30×16).
Feature map e output by Conv x5 of the neural network to be trained (for example, a feature map of size 80×80×64) is provided to first feature transfer neural network unit 3 in the third dual feature transfer neural network unit; after first feature transfer neural network unit 3 performs dimension-transformation processing on feature map e (for example, convolution processing and down-sampling processing), feature map e′ with the predetermined size is formed (for example, a feature map of size 30×30×16). Feature map f output by Conv y5 of auxiliary neural network 1 (for example, a feature map of size 256×320×64) is provided to second feature transfer neural network unit 3 in the third dual feature transfer neural network unit; after second feature transfer neural network unit 3 performs dimension-transformation processing on feature map f (for example, average pooling processing and down-sampling processing), feature map f′ with the predetermined size is formed (for example, a feature map of size 30×30×16).
Feature map g output by Conv x1 of the neural network to be trained (for example, a feature map of size 80×80×64) is provided to first feature transfer neural network unit 4 in the fourth dual feature transfer neural network unit; after first feature transfer neural network unit 4 performs dimension-transformation processing on feature map g (for example, convolution processing and down-sampling processing), feature map g′ with the predetermined size is formed (for example, a feature map of size 30×30×16). Feature map h output by Conv z1 of auxiliary neural network 2 (for example, a feature map of size 256×320×64) is provided to second feature transfer neural network unit 4 in the fourth dual feature transfer neural network unit; after second feature transfer neural network unit 4 performs dimension-transformation processing on feature map h (for example, average pooling processing and down-sampling processing), feature map h′ with the predetermined size is formed (for example, a feature map of size 30×30×16).
Feature map i output by Conv x3 of the neural network to be trained (for example, a feature map of size 80×80×64) is provided to first feature transfer neural network unit 5 in the fifth dual feature transfer neural network unit; after first feature transfer neural network unit 5 performs dimension-transformation processing on feature map i (for example, convolution processing and down-sampling processing), feature map i′ with the predetermined size is formed (for example, a feature map of size 30×30×16). Feature map j output by Conv z3 of auxiliary neural network 2 (for example, a feature map of size 256×320×64) is provided to second feature transfer neural network unit 5 in the fifth dual feature transfer neural network unit; after second feature transfer neural network unit 5 performs dimension-transformation processing on feature map j (for example, average pooling processing and down-sampling processing), feature map j′ with the predetermined size is formed (for example, a feature map of size 30×30×16).
Feature map k output by Conv x5 of the neural network to be trained (for example, a feature map of size 80×80×64) is provided to first feature transfer neural network unit 6 in the sixth dual feature transfer neural network unit; after first feature transfer neural network unit 6 performs dimension-transformation processing on feature map k (for example, convolution processing and down-sampling processing), feature map k′ with the predetermined size is formed (for example, a feature map of size 30×30×16). Feature map l output by Conv z5 of auxiliary neural network 2 (for example, a feature map of size 256×320×64) is provided to second feature transfer neural network unit 6 in the sixth dual feature transfer neural network unit; after second feature transfer neural network unit 6 performs dimension-transformation processing on feature map l (for example, average pooling processing and down-sampling processing), feature map l′ with the predetermined size is formed (for example, a feature map of size 30×30×16).
For the video sequence sample, the neural network to be trained also finally outputs a vehicle steering-wheel angle predicted value, a vehicle speed predicted value and a vehicle steering-wheel angle torque predicted value.
Using the above formula (1), this application can adjust the network parameters of the neural network to be trained and of the six pairs of feature transfer neural network units according to: the difference between the a-th feature map' and the b-th feature map', the difference between the c-th feature map' and the d-th feature map', the difference between the e-th feature map' and the f-th feature map', the difference between the g-th feature map' and the h-th feature map', the difference between the i-th feature map' and the j-th feature map', the difference between the k-th feature map' and the l-th feature map', the difference between the vehicle steering wheel angle predicted value and the steering wheel angle ground-truth value of the video sequence sample, the difference between the vehicle speed predicted value and the vehicle speed ground-truth value of the video sequence sample, and the difference between the vehicle steering wheel angle torque predicted value and the steering wheel angle torque ground-truth value of the video sequence sample.
Fig. 4 is a flow chart of one embodiment of the lateral control method for intelligent vehicle driving of this application. As shown in Fig. 4, the method of this embodiment includes step S400, step S410, and step S420. Each step in Fig. 4 is as follows:
S400, obtain a video sequence to be processed.
In an optional example, the video sequence to be processed in this application includes a video sequence shot during vehicle travel. For example, it can be a video sequence formed from video shot by a camera installed on the vehicle. The video sequence in this application can be formed by extracting video frames from the video obtained by the camera, or it can be formed from all the video frames of that video. This application does not limit the specific form of the video sequence.
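The frame-extraction option mentioned above can be sketched minimally as uniform sampling; the step size is an illustrative assumption, since the text does not fix a sampling scheme.

```python
def extract_frames(num_frames, step):
    """Indices of the frames kept when sampling every `step`-th frame from a video."""
    return list(range(0, num_frames, step))

print(extract_frames(10, 3))  # -> [0, 3, 6, 9]
```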
S410, provide the video sequence to be processed to a neural network. The neural network in this application is a neural network obtained by training with the neural network training method described in the above embodiments.
S420, perform vehicle lateral control prediction processing on the video sequence to be processed via the neural network, and output vehicle lateral control prediction information.
In an optional example, after the neural network of this application receives the video sequence to be processed and performs vehicle lateral control prediction processing, the output vehicle lateral control prediction information may include a steering wheel angle predicted value.
In an optional example, vehicle lateral control prediction may be regarded as the main task of the neural network of this application. While performing the main task, the neural network can also perform at least one auxiliary task, for example a vehicle speed prediction task and/or a vehicle steering wheel angle torque prediction task. Although the neural network of this application can output a vehicle speed predicted value and a steering wheel angle torque predicted value alongside the vehicle lateral control prediction information, in practical application scenarios these auxiliary-task predicted values can be ignored as needed, and only the vehicle lateral control prediction information is used.
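The deployment pattern just described — compute all heads but consume only the main-task output — can be sketched as follows. The output names and constant values are placeholders standing in for the trained network, not the patent's actual interface.

```python
def predict(clip):
    """Stand-in for the trained multi-task network: returns main- and auxiliary-task outputs.
    The constant values are placeholders, not real model outputs."""
    return {"steering_angle": 0.12, "speed": 28.0, "steering_torque": 0.03}

outputs = predict(clip=None)
steering = outputs["steering_angle"]   # only the main-task output is used for lateral control
print(steering)
```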
In an optional example, the neural network of this application can be a 3D spatiotemporal residual convolutional neural network, where "three-dimensional" typically means adding the time dimension on top of the two spatial dimensions of a video frame. The neural network of this application is usually a deep neural network, for example a network 50 to 100 layers deep; this application also does not exclude the possibility that the neural network is a shallow one, for example a network 10 layers deep. Existing neural networks for vehicle lateral control often suffer from low control precision under complex road conditions such as wide-angle bends, significant changes in road lighting, dimly lit tunnels, or straight roads with fast-moving traffic. By making the neural network learn from (e.g., mimic) the feature maps formed by the auxiliary neural network during training, this application can improve vehicle lateral control precision without increasing the depth of the neural network.
Fig. 5 is a structural schematic diagram of one embodiment of the neural network training device of this application. As shown in Fig. 5, the device of this embodiment mainly includes: a video-sequence-sample providing module 500, a dimension transformation module 510, and a network-parameter adjustment module 520. Optionally, the device can also include a pre-training module 530.
The video-sequence-sample providing module 500 is used to provide video sequence samples respectively to the neural network to be trained, which is used for vehicle lateral control prediction processing, and to at least one auxiliary neural network.
In an optional example, the auxiliary neural network of this application may include a neural network for performing recognition, detection, segmentation, classification, and/or target tracking on traffic scene images. The auxiliary neural network of this application is heterogeneous with respect to the neural network to be trained.
The dimension transformation module 510 is used to perform dimension transformation on the first feature map formed by at least one layer of the neural network for the video sequence sample, and/or on the second feature map formed by at least one layer of the auxiliary neural network for the video sequence sample, to obtain at least one pair of a first feature map and a second feature map with identical dimensions.
In an optional example, at least one layer in the neural network of this application performs feature extraction on the video sequence sample to obtain the first feature map, and at least one layer in the auxiliary neural network performs feature extraction on the video sequence sample to obtain the second feature map. The dimension transformation module of this application can specifically be a feature transfer neural network unit, which is used to perform dimension transformation processing on the first feature map and/or the second feature map.
In an optional example, multiple layers in the neural network of this application each perform feature extraction on the video sequence sample, to obtain multiple first feature maps. The feature transfer neural network unit then includes multiple first feature transfer neural network units, connected respectively to those layers of the neural network; each first feature transfer neural network unit performs dimension transformation processing on the first feature map output by its corresponding layer.
In an optional example, multiple layers in the auxiliary neural network of this application each perform feature extraction on the video sequence sample, to obtain multiple second feature maps. The feature transfer neural network unit then includes multiple second feature transfer neural network units; each second feature transfer neural network unit performs dimension transformation processing on the second feature map output by the corresponding layer of the auxiliary neural network.
In an optional example, the neural network to be trained and the auxiliary neural network can each be divided into at least two feature levels, and the layers in different feature levels are connected to different feature transfer neural network units.
The network-parameter adjustment module 520 is used to adjust the network parameters of the neural network according to the difference between the first feature map and the second feature map with identical dimensions.
In an optional example, the network-parameter adjustment module is further used to adjust the network parameters of the neural network according to: the difference between the first and second feature maps with identical dimensions; and the differences between the vehicle lateral control prediction information and the auxiliary-task prediction information output by the neural network to be trained and, respectively, the lateral control annotation information and the auxiliary-task annotation information of the video sequence sample. The vehicle lateral control prediction information may include a steering wheel angle predicted value. The auxiliary-task prediction information may include a vehicle speed predicted value and/or a vehicle steering wheel angle torque predicted value.
The pre-training module 530 is used for multi-task pre-training of the neural network. The multiple tasks include a main task and at least one auxiliary task: the main task can be the vehicle lateral control task, and each auxiliary task can be a vehicle control task different from the main task, for example at least one of a vehicle speed prediction task and a vehicle steering wheel angle torque prediction task.
In an optional example, the pre-training module 530 can first initialize the neural network; then provide video sequence samples to the neural network; then perform multi-task processing via the neural network and adjust the network parameters of the neural network according to the differences between each task's prediction result and the corresponding multi-task annotation information of the video sequence sample.
In an optional example, when the neural network of this application is a 3D spatiotemporal residual convolutional neural network, the above operation of initializing the neural network to be trained can be: initializing the neural network to be trained using a successfully trained 2D residual convolutional neural network.
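The text does not say how the 2D residual network's weights initialize the 3D network. One common technique, assumed here purely for illustration (in the style of I3D-type kernel inflation), replicates each 2D convolution kernel along the new temporal axis and rescales it so the 3D filter's response to a temporally constant input matches the 2D filter's response.

```python
import numpy as np

def inflate_conv_weight(w2d, t):
    """Inflate a 2D conv kernel (out, in, kh, kw) to a 3D kernel (out, in, t, kh, kw)
    by repeating it t times along the temporal axis and dividing by t."""
    return np.repeat(w2d[:, :, None, :, :], t, axis=2) / t

w2d = np.random.default_rng(0).standard_normal((64, 3, 7, 7))  # e.g. a ResNet stem kernel
w3d = inflate_conv_weight(w2d, t=3)
print(w3d.shape)                                # (64, 3, 3, 7, 7)
assert np.allclose(w3d.sum(axis=2), w2d)        # summing over time recovers the 2D kernel
```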
The specific operations performed by the video-sequence-sample providing module 500, the dimension transformation module 510, the network-parameter adjustment module 520, and the pre-training module 530 can be found in the descriptions of Figs. 1 to 3 in the above method embodiments, and are not repeated here.
Fig. 6 is a structural schematic diagram of one embodiment of the lateral control device for intelligent vehicle driving of this application. As shown in Fig. 6, the device of this embodiment mainly includes: a video-sequence obtaining module 600, a video-sequence providing module 610, and a neural network 620.
The video-sequence obtaining module 600 is used to obtain a video sequence to be processed, which includes a video sequence shot during vehicle travel.
The video-sequence providing module 610 is used to provide the video sequence to be processed to the neural network. The neural network of this application includes a 3D spatiotemporal residual convolutional neural network, where the third dimension is the time dimension.
The neural network 620 is used to perform vehicle lateral control prediction processing on the video sequence to be processed and to output vehicle lateral control prediction information, which includes a steering wheel angle predicted value. The neural network of this application can be obtained by training with the above neural network training technical scheme.
The specific operations performed by the video-sequence obtaining module 600, the video-sequence providing module 610, and the neural network 620 can be found in the description of the steps of Fig. 4 in the above method embodiment, and are not repeated here.
Example devices
Fig. 7 shows an example device 700 suitable for implementing this application. The device 700 can be a control system/electronic system configured in an automobile, a mobile terminal (e.g., a smart phone), a personal computer (PC, e.g., a desktop or notebook computer), a tablet computer, a server, or the like. In Fig. 7, the device 700 includes one or more processors, a communication part, and the like. The one or more processors can be: one or more central processing units (CPUs) 701, and/or one or more graphics processing units (GPUs) 713 that perform lateral control for intelligent vehicle driving using a neural network. The processor can perform various appropriate actions and processing according to executable instructions stored in a read-only memory (ROM) 702 or loaded from a storage section 708 into a random access memory (RAM) 703. The communication part 712 can include, but is not limited to, a network card, which can include, but is not limited to, an IB (InfiniBand) network card. The processor can communicate with the ROM 702 and/or the RAM 703 to execute the executable instructions, connect to the communication part 712 through a bus 704, and communicate with other target devices through the communication part 712, thereby completing the corresponding steps in this application.
The operations performed by each of the above instructions can be found in the relevant descriptions in the above method embodiments and are not detailed here. In addition, the RAM 703 can also store various programs and data required for the operation of the device. The CPU 701, the ROM 702, and the RAM 703 are connected to each other through the bus 704.
When there is a RAM 703, the ROM 702 is an optional module. The RAM 703 stores executable instructions, or writes executable instructions into the ROM 702 at runtime; the executable instructions cause the central processing unit 701 to perform the steps of the above methods. An input/output (I/O) interface 705 is also connected to the bus 704. The communication part 712 can be integrated, or can be configured with multiple sub-modules (e.g., multiple IB network cards) each connected to the bus.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 708 including a hard disk and the like; and a communication section 709 including a network card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as needed, so that a computer program read from it can be installed into the storage section 708 as needed.
It should be particularly noted that the architecture shown in Fig. 7 is only an optional implementation. In concrete practice, the number and types of the components in Fig. 7 can be selected, deleted, added, or replaced according to actual needs. For different functional components, separated or integrated arrangements and other implementations can also be used: for example, the GPU 713 and the CPU 701 can be arranged separately, or the GPU 713 can be integrated on the CPU 701; the communication part can be arranged separately, or integrated on the CPU 701 or the GPU 713; and so on. These interchangeable implementations all fall within the protection scope of this application.
In particular, according to the embodiments of this application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of this application includes a computer program product, which includes a computer program tangibly embodied on a machine-readable medium; the computer program includes program code for performing the steps shown in the flowcharts, and the program code may include instructions corresponding to the steps of the methods provided by this application.
In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. When the computer program is executed by the central processing unit (CPU) 701, the instructions described in this application for realizing the above corresponding steps are executed.
In one or more optional embodiments, the embodiments of the present disclosure also provide a computer program product for storing computer-readable instructions which, when executed, cause a computer to perform the lateral control method for intelligent vehicle driving or the neural network training method described in any of the above embodiments.
The computer program product can be implemented by hardware, software, or a combination thereof. In one optional example, the computer program product is embodied as a computer storage medium; in another optional example, it is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
In one or more optional embodiments, the embodiments of the present disclosure also provide another lateral control method for intelligent vehicle driving and another neural network training method, together with their corresponding devices, electronic equipment, computer storage media, computer programs, and computer program products. The method includes: a first device sends to a second device a lateral control instruction for intelligent vehicle driving or an instruction to train a neural network, the instruction causing the second device to perform the lateral control method for intelligent vehicle driving or the neural network training method in any of the above possible embodiments; the first device then receives the lateral control result or the neural network training result sent by the second device.
In some embodiments, the lateral control instruction for intelligent vehicle driving or the instruction to train a neural network can specifically be a call instruction. The first device can instruct the second device, by way of a call, to perform the lateral control operation for intelligent vehicle driving or the neural network training operation; accordingly, in response to receiving the call instruction, the second device can perform the steps and/or processes of any embodiment of the above lateral control method for intelligent vehicle driving or method for training a neural network.
It should be understood that terms such as "first" and "second" in the embodiments of the present disclosure are only for distinction and should not be construed as limiting the embodiments of the present disclosure. It should also be understood that, in the present disclosure, "multiple" can mean two or more, and "at least one" can mean one, two, or more. It should also be understood that any component, data, or structure mentioned in the present disclosure can generally be understood as one or more, unless explicitly limited otherwise or the context suggests the opposite. It should also be understood that the description of each embodiment in the present disclosure emphasizes the differences between embodiments; the same or similar parts can be referred to across embodiments and, for brevity, are not repeated one by one.
The methods and devices, electronic equipment, and computer-readable storage media of this application may be implemented in many ways, for example by software, hardware, firmware, or any combination of software, hardware, and firmware. The above order of the steps of the methods is only for illustration, and the steps of the methods of this application are not limited to the order specifically described above, unless otherwise specifically stated. In addition, in some embodiments, this application can also be implemented as programs recorded on a recording medium, these programs including machine-readable instructions for realizing the methods according to this application. Thus, this application also covers recording media storing programs for performing the methods according to this application.
The description of this application is given for the purpose of illustration and description, and is not intended to be exhaustive or to limit this application to the disclosed form. Many modifications and variations are obvious to those of ordinary skill in the art. The embodiments were selected and described in order to better illustrate the principles and practical applications of this application, and to enable those skilled in the art to understand the embodiments of this application so as to design various embodiments, with various modifications, suited to particular uses.
Claims (10)
1. A training method of a neural network, characterized in that the training method includes:
providing a video sequence sample respectively to a neural network to be trained, which is used for vehicle lateral control prediction processing, and to at least one auxiliary neural network;
performing dimension transformation on a first feature map formed by at least one layer of the neural network for the video sequence sample, and/or performing dimension transformation on a second feature map formed by at least one layer of the auxiliary neural network for the video sequence sample, to obtain at least one pair of a first feature map and a second feature map with identical dimensions;
adjusting network parameters of the neural network according to a difference between the first feature map and the second feature map with identical dimensions.
2. The method according to claim 1, characterized in that the auxiliary neural network includes: a neural network for performing recognition, detection, segmentation, classification, and/or target tracking on traffic scene images.
3. The method according to any one of claims 1 to 2, characterized in that the auxiliary neural network is heterogeneous with respect to the neural network to be trained.
4. The method according to any one of claims 1 to 3, characterized in that the performing dimension transformation on the first feature map formed by at least one layer of the neural network to be trained for the video sequence sample, and/or performing dimension transformation on the second feature map formed by at least one layer of the auxiliary neural network for the video sequence sample, includes:
performing feature extraction on the video sequence sample by at least one layer of the neural network, to obtain the first feature map;
performing feature extraction on the video sequence sample by at least one layer of the auxiliary neural network, to obtain the second feature map;
performing dimension transformation processing on the first feature map and/or the second feature map by a feature transfer neural network unit.
5. A lateral control method for intelligent vehicle driving, characterized by comprising:
obtaining a video sequence to be processed, wherein the video sequence to be processed includes: a video sequence shot during vehicle travel;
providing the video sequence to be processed to a neural network;
performing vehicle lateral control prediction processing on the video sequence to be processed via the neural network, and outputting vehicle lateral control prediction information;
wherein the neural network is trained using the neural network training method according to any one of claims 1 to 4.
6. A training device of a neural network, characterized in that the device includes:
a video-sequence-sample providing module, for providing a video sequence sample respectively to a neural network to be trained, which is used for vehicle lateral control prediction processing, and to at least one auxiliary neural network;
a dimension transformation module, for performing dimension transformation on a first feature map formed by at least one layer of the neural network for the video sequence sample, and/or performing dimension transformation on a second feature map formed by at least one layer of the auxiliary neural network for the video sequence sample, to obtain at least one pair of a first feature map and a second feature map with identical dimensions;
a network-parameter adjustment module, for adjusting network parameters of the neural network according to a difference between the first feature map and the second feature map with identical dimensions.
7. A lateral control device for intelligent vehicle driving, characterized by comprising:
a video-sequence obtaining module, for obtaining a video sequence to be processed, wherein the video sequence to be processed includes: a video sequence shot during vehicle travel;
a video-sequence providing module, for providing the video sequence to be processed to a neural network;
the neural network, for performing vehicle lateral control prediction processing on the video sequence to be processed and outputting vehicle lateral control prediction information;
wherein the neural network is trained using the training device of a neural network according to claim 6.
8. An electronic device, comprising:
a memory, for storing a computer program;
a processor, for executing the computer program stored in the memory, wherein when the computer program is executed, the method according to any one of claims 1 to 5 is realized.
9. A computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed by a processor, the method according to any one of claims 1 to 5 is realized.
10. A computer program, including computer instructions, wherein when the computer instructions are run in a processor of a device, the method according to any one of claims 1 to 5 is realized.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810845656.1A CN109165562B (en) | 2018-07-27 | 2018-07-27 | Neural network training method, lateral control method, device, equipment and medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810845656.1A CN109165562B (en) | 2018-07-27 | 2018-07-27 | Neural network training method, lateral control method, device, equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109165562A true CN109165562A (en) | 2019-01-08 |
CN109165562B CN109165562B (en) | 2021-06-04 |
Family
ID=64898463
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810845656.1A Active CN109165562B (en) | 2018-07-27 | 2018-07-27 | Neural network training method, lateral control method, device, equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109165562B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109889849A (en) * | 2019-01-30 | 2019-06-14 | 北京市商汤科技开发有限公司 | Video generation method, device, medium and equipment |
CN109934119A (en) * | 2019-02-19 | 2019-06-25 | 平安科技(深圳)有限公司 | Adjust vehicle heading method, apparatus, computer equipment and storage medium |
CN110060264A (en) * | 2019-04-30 | 2019-07-26 | 北京市商汤科技开发有限公司 | Neural network training method, video frame processing method, apparatus and system |
CN110091751A (en) * | 2019-04-30 | 2019-08-06 | 深圳四海万联科技有限公司 | Electric car course continuation mileage prediction technique, equipment and medium based on deep learning |
CN111103577A (en) * | 2020-01-07 | 2020-05-05 | 湖南大学 | End-to-end laser radar calibration method based on cyclic neural network |
CN111325343A (en) * | 2020-02-20 | 2020-06-23 | 北京市商汤科技开发有限公司 | Neural network determination, target detection and intelligent driving control method and device |
CN111738037A (en) * | 2019-03-25 | 2020-10-02 | 广州汽车集团股份有限公司 | Automatic driving method and system and vehicle |
CN113284144A (en) * | 2021-07-22 | 2021-08-20 | 深圳大学 | Tunnel detection method and device based on unmanned aerial vehicle |
WO2021169604A1 (en) * | 2020-02-28 | 2021-09-02 | 北京市商汤科技开发有限公司 | Method and device for action information recognition, electronic device, and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107679489A (en) * | 2017-09-29 | 2018-02-09 | 北京奇虎科技有限公司 | Automatic Pilot processing method, device and computing device based on scene cut |
CN107944375A (en) * | 2017-11-20 | 2018-04-20 | 北京奇虎科技有限公司 | Automatic Pilot processing method and processing device based on scene cut, computing device |
CN107958287A (en) * | 2017-11-23 | 2018-04-24 | 清华大学 | Towards the confrontation transfer learning method and system of big data analysis transboundary |
US20180129887A1 (en) * | 2016-11-07 | 2018-05-10 | Samsung Electronics Co., Ltd. | Method and apparatus for indicating lane |
CN108133484A (en) * | 2017-12-22 | 2018-06-08 | 北京奇虎科技有限公司 | Automatic Pilot processing method and processing device based on scene cut, computing device |
- 2018-07-27: application CN201810845656.1A (CN) filed; granted as patent CN109165562B, legal status Active.
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180129887A1 (en) * | 2016-11-07 | 2018-05-10 | Samsung Electronics Co., Ltd. | Method and apparatus for indicating lane |
CN107679489A (en) * | 2017-09-29 | 2018-02-09 | 北京奇虎科技有限公司 | Automatic Pilot processing method, device and computing device based on scene cut |
CN107944375A (en) * | 2017-11-20 | 2018-04-20 | 北京奇虎科技有限公司 | Automatic Pilot processing method and processing device based on scene cut, computing device |
CN107958287A (en) * | 2017-11-23 | 2018-04-24 | 清华大学 | Towards the confrontation transfer learning method and system of big data analysis transboundary |
CN108133484A (en) * | 2017-12-22 | 2018-06-08 | 北京奇虎科技有限公司 | Automatic Pilot processing method and processing device based on scene cut, computing device |
Non-Patent Citations (2)
Title |
---|
ZHENGYUAN YANG: "End-to-end Multi-Modal Multi-Task Vehicle Control for Self-Driving Cars with Visual Perception", arXiv *
Anonymous: "Transfer Learning" (迁移学习), blog post, cnblogs: https://www.cnblogs.com/wangqiang9/p/9244925.html *
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109889849B (en) * | 2019-01-30 | 2022-02-25 | 北京市商汤科技开发有限公司 | Video generation method, device, medium and equipment |
CN109889849A (en) * | 2019-01-30 | 2019-06-14 | 北京市商汤科技开发有限公司 | Video generation method, device, medium and equipment |
CN109934119A (en) * | 2019-02-19 | 2019-06-25 | 平安科技(深圳)有限公司 | Adjust vehicle heading method, apparatus, computer equipment and storage medium |
CN109934119B (en) * | 2019-02-19 | 2023-10-31 | 平安科技(深圳)有限公司 | Method, device, computer equipment and storage medium for adjusting vehicle running direction |
WO2020168660A1 (en) * | 2019-02-19 | 2020-08-27 | 平安科技(深圳)有限公司 | Method and apparatus for adjusting traveling direction of vehicle, computer device and storage medium |
CN111738037A (en) * | 2019-03-25 | 2020-10-02 | 广州汽车集团股份有限公司 | Automatic driving method and system and vehicle |
CN111738037B (en) * | 2019-03-25 | 2024-03-08 | 广州汽车集团股份有限公司 | Automatic driving method, system and vehicle thereof |
CN110060264B (en) * | 2019-04-30 | 2021-03-23 | 北京市商汤科技开发有限公司 | Neural network training method, video frame processing method, device and system |
CN110091751A (en) * | 2019-04-30 | 2019-08-06 | 深圳四海万联科技有限公司 | Deep-learning-based electric vehicle driving range prediction method, equipment and medium |
CN110060264A (en) * | 2019-04-30 | 2019-07-26 | 北京市商汤科技开发有限公司 | Neural network training method, video frame processing method, apparatus and system |
CN111103577A (en) * | 2020-01-07 | 2020-05-05 | 湖南大学 | End-to-end LiDAR calibration method based on recurrent neural network |
CN111325343A (en) * | 2020-02-20 | 2020-06-23 | 北京市商汤科技开发有限公司 | Neural network determination, target detection and intelligent driving control method and device |
CN111325343B (en) * | 2020-02-20 | 2022-09-09 | 北京市商汤科技开发有限公司 | Neural network determination, target detection and intelligent driving control method and device |
WO2021169604A1 (en) * | 2020-02-28 | 2021-09-02 | 北京市商汤科技开发有限公司 | Method and device for action information recognition, electronic device, and storage medium |
JP2022525723A (en) * | 2020-02-28 | 2022-05-19 | ベイジン センスタイム テクノロジー デベロップメント シーオー.,エルティーディー | Operation information identification method, device, electronic device and storage medium |
CN113284144A (en) * | 2021-07-22 | 2021-08-20 | 深圳大学 | Tunnel detection method and device based on unmanned aerial vehicle |
CN113284144B (en) * | 2021-07-22 | 2021-11-30 | 深圳大学 | Tunnel detection method and device based on unmanned aerial vehicle |
Also Published As
Publication number | Publication date |
---|---|
CN109165562B (en) | 2021-06-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109165562A (en) | Neural network training method, lateral control method, apparatus, equipment and medium | |
US20230229919A1 (en) | Learning to generate synthetic datasets for training neural networks | |
US11521060B2 (en) | Tensor-based computing system for quaternion operations | |
CN111797893B (en) | Neural network training method, image classification system and related equipment | |
CN106599789B (en) | Video classification recognition method and device, data processing device and electronic equipment |
CN104685516B (en) | Apparatus and method for implementing event-based updates in spiking neural networks |
CN106548192B (en) | Neural-network-based image processing method, device and electronic equipment |
CN109299716A (en) | Neural network training method, image segmentation method, device, equipment and medium |
CN109325972A (en) | Processing method, device, equipment and medium for LiDAR sparse depth map |
CN106953862A (en) | Network security situation awareness method and device, and perception model training method and device |
EP3847619B1 (en) | Unsupervised depth prediction neural networks | |
US20200410338A1 (en) | Multimodal data learning method and device | |
WO2022104178A1 (en) | Inverting neural radiance fields for pose estimation | |
CN109934247A (en) | Electronic device and its control method | |
CN108229532A (en) | Image recognition method, device and electronic equipment |
US11954755B2 (en) | Image processing device and operation method thereof | |
TW201633181A (en) | Event-driven temporal convolution for asynchronous pulse-modulated sampled signals | |
CN109816001A (en) | Deep-learning-based vehicle multi-attribute recognition method, device and equipment |
CN110298394A (en) | Image recognition method and related apparatus |
CN113326826A (en) | Network model training method and device, electronic equipment and storage medium | |
CN116071817A (en) | Network architecture and training method of gesture recognition system for automobile cabin | |
CN108229680A (en) | Neural network system, remote sensing image recognition method, device, equipment and medium |
Vogt | An overview of deep learning techniques | |
CN108830139A (en) | Depth context prediction method, device, medium and equipment for human body keypoints |
CN108229650A (en) | Convolution processing method, device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||