CN107944375A - Scene-segmentation-based automatic driving processing method and apparatus, and computing device - Google Patents
Scene-segmentation-based automatic driving processing method and apparatus, and computing device
- Publication number
- CN107944375A CN107944375A CN201711156360.0A CN201711156360A CN107944375A CN 107944375 A CN107944375 A CN 107944375A CN 201711156360 A CN201711156360 A CN 201711156360A CN 107944375 A CN107944375 A CN 107944375A
- Authority
- CN
- China
- Prior art keywords
- layer
- scene segmentation
- second neural network
- network
- intermediate layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W2050/0001—Details of the control system
- B60W2050/0043—Signal treatments, identification of variables or parameters, parameter estimation or state estimation
Abstract
The invention discloses a scene-segmentation-based automatic driving processing method and apparatus, and a computing device. The method includes: acquiring, in real time, a current frame image from a video captured and/or recorded by an image capture device during vehicle driving; inputting the current frame image into a second neural network to obtain the scene segmentation result corresponding to the current frame image, where the second neural network is obtained through guided training using the output data of at least one intermediate layer of a pre-trained first neural network, and the number of layers of the first neural network is greater than that of the second neural network; determining a travel route and/or a driving instruction according to the scene segmentation result; and performing automatic driving control on the host vehicle according to the determined travel route and/or driving instruction. In the present invention, the trained neural network with fewer layers computes the scene segmentation result quickly and accurately, the travel route and/or driving instruction is determined accurately from the segmentation result, and the safety of automatic driving is thereby improved.
Description
Technical field
The present invention relates to the field of deep learning, and in particular to a scene-segmentation-based automatic driving processing method and apparatus, and a computing device.
Background art
In the prior art, image scene segmentation is mainly based on fully convolutional neural networks in deep learning. These methods use the idea of transfer learning: a network pre-trained on a large-scale classification dataset is migrated to an image segmentation dataset for further training, yielding a segmentation network that is then used to perform scene segmentation on images.
Automatic driving based on scene segmentation places high demands on both the timeliness and the accuracy of segmentation, so as to guarantee driving safety. The fully convolutional networks used in the prior art usually have many intermediate layers, which yields more accurate segmentation results, but computation over many intermediate layers is slow, so scenes cannot be segmented quickly and the situation around the driving vehicle cannot be obtained in time. Conversely, when a neural network with fewer intermediate layers is used, its computation is fast, but it is constrained by its depth: its computing capacity is limited, its fitting ability is poor, and its results may be inaccurate.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a scene-segmentation-based automatic driving processing method and apparatus, and a computing device, which overcome the above problems or at least partially solve them.
According to an aspect of the invention, there is provided a scene-segmentation-based automatic driving processing method, which includes:
acquiring, in real time, a current frame image from a video captured and/or recorded by an image capture device during vehicle driving;
inputting the current frame image into a second neural network to obtain the scene segmentation result corresponding to the current frame image, where the second neural network is obtained through guided training using the output data of at least one intermediate layer of a pre-trained first neural network, and the number of layers of the first neural network is greater than that of the second neural network;
determining a travel route and/or a driving instruction according to the scene segmentation result; and
performing automatic driving control on the host vehicle according to the determined travel route and/or driving instruction.
Optionally, determining the travel route and/or driving instruction according to the scene segmentation result further includes:
determining the contour information of a specific object according to the scene segmentation result;
calculating the relative positional relationship between the host vehicle and the specific object according to the contour information of the specific object; and
determining the travel route and/or driving instruction according to the calculated relative positional relationship.
Optionally, the relative positional relationship between the host vehicle and the specific object includes distance information and/or angle information between the host vehicle and the specific object.
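The contour-to-distance/angle step above can be sketched in plain Python. This is only an illustrative sketch under assumed pinhole-camera parameters: `focal_px`, `cam_height_m` and the flat-ground assumption are hypothetical and do not come from the patent.

```python
import math

def relative_position(contour, img_w, img_h,
                      focal_px=800.0, cam_height_m=1.5):
    """Estimate distance and bearing of an object from its contour.

    Assumes a forward-facing pinhole camera at height cam_height_m
    above a flat road; the contour's lowest pixel is taken as the
    point where the object touches the ground.
    """
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    bottom_y = max(ys)                # lowest contour point (image row)
    center_x = sum(xs) / len(xs)      # horizontal centroid

    # Rows below the principal point map to ground distance:
    # distance = f * H / (y - cy) for y > cy.
    cy = img_h / 2.0
    dy = bottom_y - cy
    distance_m = focal_px * cam_height_m / dy if dy > 0 else float("inf")

    # Bearing: horizontal offset from the optical axis.
    angle_rad = math.atan2(center_x - img_w / 2.0, focal_px)
    return distance_m, math.degrees(angle_rad)

# A contour whose lowest point lies 100 rows below the image center:
d, a = relative_position([(960, 540), (960, 640), (1000, 640)],
                         img_w=1920, img_h=1080)
```

With the assumed parameters this yields a distance of 12 m and a bearing slightly to the right of the optical axis; a real system would calibrate these parameters per camera.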
Optionally, determining the travel route and/or driving instruction according to the scene segmentation result further includes:
determining the travel route and/or driving instruction of the host vehicle according to road sign information contained in the scene segmentation result.
Optionally, determining the travel route and/or driving instruction according to the scene segmentation result further includes:
determining the travel route and/or driving instruction according to traffic light information contained in the scene segmentation result.
Optionally, the training process of the second neural network includes:
inputting the training sample data for scene segmentation into the trained first neural network, and obtaining the output data of at least one first intermediate layer of the first neural network;
inputting the training sample data for scene segmentation into the second neural network to be trained, and obtaining the output data of at least one second intermediate layer of the second neural network as well as its final output data, where the at least one second intermediate layer corresponds to the at least one first intermediate layer; and
training the second neural network using the loss between the output data of the at least one second intermediate layer and the output data of the at least one first intermediate layer, and the loss between the final output data and pre-annotated output data.
Optionally, the at least one first intermediate layer includes the bottleneck layer of the first neural network, and the at least one second intermediate layer includes the bottleneck layer of the second neural network.
Optionally, training the second neural network using the loss between the output data of the at least one second intermediate layer and the output data of the at least one first intermediate layer, and the loss between the final output data and the pre-annotated output data, further includes:
updating the weight parameters of the second neural network according to the loss between the output data of the at least one second intermediate layer and the output data of the at least one first intermediate layer, and updating the weight parameters of the second neural network according to the loss between the final output data and the pre-annotated output data, thereby training the second neural network.
Optionally, before the training sample input data is input into the second neural network to be trained and the output data of the at least one second intermediate layer and the final output data are obtained, the method further includes:
performing down-sampling on the training sample data for scene segmentation, and using the processed data as the second neural network's training sample data for scene segmentation.
Optionally, training the second neural network using the loss between the output data of the at least one second intermediate layer and the output data of the at least one first intermediate layer, and the loss between the final output data and the pre-annotated output data, further includes:
training the second neural network using the loss between the output data of the at least one second intermediate layer and the output data of the at least one first intermediate layer, and the loss between the final output data and the output data pre-annotated for the down-sampled training sample data for scene segmentation.
Optionally, the method further includes:
collecting the current frame image as training sample input data for scene segmentation, manually annotating the current frame image, and using the annotated image as the pre-annotated output data.
According to another aspect of the present invention, there is provided a scene-segmentation-based automatic driving processing apparatus, which includes:
an acquisition module, adapted to acquire, in real time, a current frame image from a video captured and/or recorded by an image capture device during vehicle driving;
a recognition module, adapted to input the current frame image into a second neural network to obtain the scene segmentation result corresponding to the current frame image, where the second neural network is obtained through guided training using the output data of at least one intermediate layer of a pre-trained first neural network, and the number of layers of the first neural network is greater than that of the second neural network;
a determining module, adapted to determine a travel route and/or a driving instruction according to the scene segmentation result; and
a control module, adapted to perform automatic driving control on the host vehicle according to the determined travel route and/or driving instruction.
Optionally, the determining module is further adapted to:
determine the contour information of a specific object according to the scene segmentation result; calculate the relative positional relationship between the host vehicle and the specific object according to the contour information; and determine the travel route and/or driving instruction according to the calculated relative positional relationship.
Optionally, the relative positional relationship between the host vehicle and the specific object includes distance information and/or angle information between the host vehicle and the specific object.
Optionally, the determining module is further adapted to:
determine the travel route and/or driving instruction of the host vehicle according to road sign information contained in the scene segmentation result.
Optionally, the determining module is further adapted to:
determine the travel route and/or driving instruction according to traffic light information contained in the scene segmentation result.
Optionally, the apparatus further includes a scene segmentation guided-training module.
The scene segmentation guided-training module includes:
a first output unit, adapted to input the training sample data for scene segmentation into the trained first neural network and obtain the output data of at least one first intermediate layer of the first neural network;
a second output unit, adapted to input the training sample data for scene segmentation into the second neural network to be trained and obtain the output data of at least one second intermediate layer of the second neural network as well as its final output data, where the at least one second intermediate layer corresponds to the at least one first intermediate layer; and
a guided-training unit, adapted to train the second neural network using the loss between the output data of the at least one second intermediate layer and the output data of the at least one first intermediate layer, and the loss between the final output data and pre-annotated output data.
Optionally, the at least one first intermediate layer includes the bottleneck layer of the first neural network, and the at least one second intermediate layer includes the bottleneck layer of the second neural network.
Optionally, the guided-training unit is further adapted to:
update the weight parameters of the second neural network according to the loss between the output data of the at least one second intermediate layer and the output data of the at least one first intermediate layer, and update the weight parameters of the second neural network according to the loss between the final output data and the pre-annotated output data, thereby training the second neural network.
Optionally, the scene segmentation guided-training module further includes:
a down-sampling unit, adapted to perform down-sampling on the training sample data for scene segmentation and use the processed data as the second neural network's training sample data for scene segmentation.
Optionally, the guided-training unit is further adapted to:
train the second neural network using the loss between the output data of the at least one second intermediate layer and the output data of the at least one first intermediate layer, and the loss between the final output data and the output data pre-annotated for the down-sampled training sample data for scene segmentation.
Optionally, the apparatus further includes:
a collection module, adapted to collect the current frame image as training sample input data for scene segmentation, manually annotate the current frame image, and use the annotated image as the pre-annotated output data.
According to yet another aspect of the invention, there is provided a computing device, including a processor, a memory, a communication interface and a communication bus, where the processor, the memory and the communication interface communicate with one another through the communication bus, and the memory is configured to store at least one executable instruction that causes the processor to perform the operations corresponding to the above scene-segmentation-based automatic driving processing method.
According to a further aspect of the present invention, there is provided a computer storage medium storing at least one executable instruction that causes a processor to perform the operations corresponding to the above scene-segmentation-based automatic driving processing method.
According to the scene-segmentation-based automatic driving processing method and apparatus and the computing device provided by the present invention, a current frame image is acquired in real time from a video captured and/or recorded by an image capture device during vehicle driving; the current frame image is input into a second neural network to obtain the corresponding scene segmentation result, where the second neural network is obtained through guided training using the output data of at least one intermediate layer of a pre-trained first neural network whose number of layers is greater than that of the second neural network; a travel route and/or a driving instruction is determined according to the scene segmentation result; and automatic driving control is performed on the host vehicle accordingly. In the present invention, the output data of at least one intermediate layer of the deeper first neural network guides the training of the shallower second neural network, so that the trained second neural network retains its fast computation while its accuracy is greatly improved. The second neural network can therefore process the current frame image quickly and accurately to obtain the scene segmentation result, from which the travel route and/or driving instruction is determined accurately, helping to improve the safety of automatic driving.
The above is only an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the specification, and that the above and other objects, features and advantages of the present invention may become more apparent, specific embodiments of the present invention are set forth below.
Brief description of the drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, identical components are denoted by identical reference numerals. In the drawings:
Fig. 1 shows a flow chart of a scene-segmentation-based automatic driving processing method according to an embodiment of the present invention;
Fig. 2 shows a flow chart of a scene segmentation guided-training method according to another embodiment of the present invention;
Fig. 3 shows a flow chart of a scene-segmentation-based automatic driving processing method according to another embodiment of the present invention;
Fig. 4 shows a functional block diagram of a scene-segmentation-based automatic driving processing apparatus according to an embodiment of the present invention;
Fig. 5 shows a functional block diagram of a scene-segmentation-based automatic driving processing apparatus according to another embodiment of the present invention;
Fig. 6 shows a schematic structural diagram of a computing device according to an embodiment of the present invention.
Detailed description of the embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the disclosure are shown in the drawings, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and its scope will be fully conveyed to those skilled in the art.
Fig. 1 shows a flow chart of a scene-segmentation-based automatic driving processing method according to an embodiment of the present invention. As shown in Fig. 1, the method specifically includes the following steps.
Step S101: acquiring, in real time, a current frame image from a video captured and/or recorded by an image capture device during vehicle driving.
In this embodiment, the image capture device is illustrated by taking a camera mounted on an autonomous vehicle as an example. To realize automatic driving, road information around the vehicle can be collected by the camera mounted on the autonomous vehicle; in step S101, the current frame of the video being recorded by the camera, or the current frame image at the moment of capture, is then obtained in real time.
Step S102: inputting the current frame image into the second neural network to obtain the scene segmentation result corresponding to the current frame image.
The second neural network is a shallow neural network: it has few layers, computes quickly, and is generally suitable for mobile devices, lightweight computing devices and the like. The number of layers of the first neural network is greater than that of the second neural network, and the first neural network has higher accuracy. Therefore, the output data of at least one intermediate layer of the pre-trained first neural network is used to guide the training of the second neural network, so that the final output data of the second neural network approaches that of the first neural network; while retaining the computation speed of the second neural network, its computational performance is greatly improved. The second neural network is obtained through guided training using the output data of at least one intermediate layer of the pre-trained first neural network, where the samples used to train both networks are training samples for scene segmentation.
Inputting the current frame image into the second neural network yields the scene segmentation result corresponding to the current frame image.
Step S103: determining a travel route and/or a driving instruction according to the scene segmentation result.
The scene segmentation result contains various objects. According to the relationships between these objects and the host vehicle, the prompting information the objects convey to the vehicle, and so on, the travel route of the host vehicle within a preset time interval can be determined, and/or a driving instruction can be determined. Specifically, a driving instruction may include instructions such as starting to drive, stopping, driving at a certain speed, or accelerating or decelerating at a certain rate. Those skilled in the art may set the preset time interval according to actual needs, which is not limited herein.
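As an illustration of how a segmentation result might be mapped to a driving instruction, the following sketch uses hypothetical class labels and thresholds; none of these names or values come from the patent, and a real system would use a far richer rule set or planner.

```python
def decide_instruction(segmentation_classes, obstacle_distance_m=None):
    """Map detected scene classes to a coarse driving instruction.

    segmentation_classes: set of class labels found in the frame.
    obstacle_distance_m: distance to the nearest obstacle, if known.
    """
    if "red_light" in segmentation_classes:
        return "stop"
    if obstacle_distance_m is not None and obstacle_distance_m < 10.0:
        return "decelerate"          # brake before the obstacle
    if "speed_limit_30" in segmentation_classes:
        return "drive_at_30_kph"     # obey the detected road sign
    return "keep_lane"

cmd = decide_instruction({"road", "red_light", "pedestrian"})
```

Traffic lights take priority over obstacle distance here; that ordering is a design choice of this sketch, not a claim of the patent.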
Step S104: performing automatic driving control on the host vehicle according to the determined travel route and/or driving instruction.
After the travel route and/or driving instruction is determined, automatic driving control can be performed on the host vehicle accordingly. Suppose the determined driving instruction is to decelerate at 6 m/s²; then in step S104, automatic driving control is performed on the host vehicle and the braking system of the host vehicle is controlled so that the host vehicle decelerates at 6 m/s².
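For the braking example above, the control logic must know how speed and stopping distance evolve under the commanded deceleration. A minimal sketch of the underlying constant-deceleration kinematics follows; the initial speed of 18 m/s is an assumed value for illustration.

```python
def braking_profile(v0_mps, decel_mps2):
    """Time and distance to come to a full stop under constant deceleration.

    v0_mps: initial speed in m/s; decel_mps2: deceleration in m/s^2.
    """
    t_stop = v0_mps / decel_mps2                 # from v = v0 - a*t = 0
    d_stop = v0_mps ** 2 / (2.0 * decel_mps2)    # from v0^2 = 2*a*d
    return t_stop, d_stop

# A vehicle at 18 m/s (about 65 km/h) braking at 6 m/s^2:
t, d = braking_profile(18.0, 6.0)
# Stops in 3 s over 27 m.
```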
According to the scene-segmentation-based automatic driving processing method provided by the present invention, a current frame image is acquired in real time from a video captured and/or recorded by an image capture device during vehicle driving; the current frame image is input into a second neural network to obtain the corresponding scene segmentation result, where the second neural network is obtained through guided training using the output data of at least one intermediate layer of a pre-trained first neural network whose number of layers is greater than that of the second neural network; a travel route and/or a driving instruction is determined according to the scene segmentation result; and automatic driving control is performed on the host vehicle accordingly. The present invention uses the output data of at least one intermediate layer of the deeper first neural network to guide the training of the shallower second neural network, so that the trained second neural network retains fast computation while its accuracy is greatly improved. The second neural network can thus process the current frame image quickly and accurately to obtain the scene segmentation result, from which the travel route and/or driving instruction is determined accurately, helping to improve the safety of automatic driving.
Fig. 2 shows a flow chart of a scene segmentation guided-training method according to an embodiment of the present invention. As shown in Fig. 2, the guided training of the scene segmentation network includes the following steps.
Step S201: inputting the training sample data for scene segmentation into the trained first neural network, and obtaining the output data of at least one first intermediate layer of the first neural network.
The first neural network is a neural network whose training has already been completed and fixed. Specifically, the first neural network has been pre-trained with the training sample data of many scene segmentation samples, so it is already well suited to scene segmentation. The first neural network is preferably a deep neural network, such as one deployed on a cloud server: its performance is good and its accuracy is high, but its computation is heavy and may be slow. The first neural network can output the output data of multiple first intermediate layers. For example, the first neural network may contain four first intermediate layers, namely the 4th, 3rd, 2nd and 1st first intermediate layers, where the 1st first intermediate layer is the bottleneck layer of the first neural network.
Inputting the training sample data for scene segmentation into the first neural network yields the output data of at least one first intermediate layer. Here, the output data of only one first intermediate layer may be obtained, or the output data of several adjacent first intermediate layers, or of several spaced-apart first intermediate layers; this is configured according to the actual circumstances of the implementation and is not limited herein.
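Capturing intermediate-layer outputs alongside the final output can be sketched with a toy layered network. The network below is a deliberately trivial stand-in (scalar "layers" with fixed weights chosen for illustration), not the patent's actual model; in a deep-learning framework the same effect is typically achieved with forward hooks.

```python
def run_layers(x, weights):
    """Run a scalar input through a chain of linear 'layers',
    recording every intermediate output along the way."""
    intermediates = []
    for w, b in weights:
        x = w * x + b          # one toy layer
        intermediates.append(x)
    return x, intermediates

# A 4-layer "first network" and the outputs of all its layers:
teacher_weights = [(2.0, 0.0), (1.0, 1.0), (0.5, 0.0), (1.0, -1.0)]
final, feats = run_layers(3.0, teacher_weights)
# Any element of feats can play the role of a selected first
# intermediate layer's output data.
```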
Step S202: inputting the training sample data for scene segmentation into the second neural network to be trained, and obtaining the output data of at least one second intermediate layer of the second neural network as well as its final output data.
The second neural network is the network to be trained in the guided training for scene segmentation. It is a shallow neural network, such as one deployed on a mobile terminal, whose computing capacity is limited and whose performance is modest. The number of layers of the first neural network is greater than that of the second neural network. For example, the first neural network may have four layers, namely the 4th, 3rd, 2nd and 1st first intermediate layers, while the second neural network has two layers, namely the 2nd and 1st second intermediate layers.
Inputting the training sample data for scene segmentation into the second neural network yields the output data of at least one second intermediate layer of the second neural network, where the at least one second intermediate layer corresponds to the at least one first intermediate layer. For example, the 1st first intermediate layer of the first neural network corresponds to the 1st second intermediate layer of the second neural network, and the 2nd first intermediate layer corresponds to the 2nd second intermediate layer.
The output data of the second intermediate layers obtained from the second neural network must correspond to the output data of the first intermediate layers obtained from the first neural network: if the output data of two first intermediate layers is obtained, the output data of two second intermediate layers must also be obtained. For example, if the output data of the 1st and 2nd first intermediate layers of the first neural network is obtained, the output data of the 1st and 2nd second intermediate layers of the second neural network is obtained correspondingly.
Preferably, the at least one first intermediate layer may include the bottleneck layer of the first neural network, i.e. its 1st first intermediate layer, and the at least one second intermediate layer may include the bottleneck layer of the second neural network, i.e. its 1st second intermediate layer. The bottleneck layer is the topmost hidden layer of a neural network, i.e. the intermediate layer whose output vector has the smallest dimension. Using the bottleneck layer helps ensure, in the subsequent training, that the final output data is more accurate and a better training result is obtained.
When the training sample data for scene segmentation is input into the second neural network to be trained, besides the output data of the at least one second intermediate layer, the final output data of the second neural network also needs to be obtained, so that the loss can be computed from the final output data and the second neural network can be trained.
Considering that the second neural network is a shallow neural network, directly using the training sample data for scene segmentation may slow down the second neural network when those data are large. Optionally, the training sample data for scene segmentation may first be down-sampled; for example, when the training sample data are pictures, down-sampling may first reduce the picture resolution, and the processed data are then used as the training sample data for scene segmentation input into the second neural network. In this way, the second neural network is trained with the low-resolution, down-sampled training sample data while the first neural network is trained with the high-resolution training sample data, so that when the output data of the two neural networks are used for training, the second neural network can produce high-resolution output results even from low-resolution training sample data for scene segmentation.
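The down-sampling step described above might be sketched as simple average pooling, under the assumption that the training pictures are dense arrays; the patent does not prescribe a particular down-sampling method.

```python
import numpy as np

def downsample(image, factor=2):
    """Reduce the resolution of an (H, W, C) image by average-pooling
    factor x factor blocks. Assumes H and W are divisible by `factor`."""
    h, w, c = image.shape
    blocks = image.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))

# A hypothetical 4x4 single-channel picture reduced to 2x2.
img = np.arange(16, dtype=float).reshape(4, 4, 1)
small = downsample(img, factor=2)
```

The second neural network would then be fed `small`, while the first neural network still sees the full-resolution `img`.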
Step S203: train the second neural network using the loss between the output data of the at least one second intermediate layer and the output data of the at least one first intermediate layer, and the loss between the final output data and the pre-annotated output data.
Based on the loss between the output data of the at least one second intermediate layer and the output data of the at least one first intermediate layer, the weight parameters of the second neural network can be updated so that the output data of the at least one second intermediate layer of the second neural network approach, as closely as possible, the output data of the at least one first intermediate layer of the first neural network.
Meanwhile according to the loss between the final output data of nervus opticus network and the output data marked in advance, can be with
The weight parameter of nervus opticus network is updated, nervus opticus network final output data is gone as far as possible defeated close to marking in advance
Go out data, ensure the accuracy of nervus opticus network final output data.In the above manner, complete to nervus opticus network into
Row training.Alternatively, when the training sample data of the scene cut after the processing of the second Web vector graphic down-sampling, it is also necessary to under
The training sample data of scene cut after sampling processing are marked in advance, obtain the training sample of scene cut after down-sampling processing
The output data of the pre- mark of notebook data.According to the pre- mark after the final output data of nervus opticus network and down-sampling processing
Output data between loss, the weight parameter of nervus opticus network can be updated, make nervus opticus network final output number
According to the output data gone as far as possible close to the pre- mark of data after down-sampling processing, ensure nervus opticus network final output number
According to accuracy.
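The two losses of step S203 — an intermediate-layer loss pulling each second intermediate layer toward its corresponding first intermediate layer, and a final-output loss against the pre-annotated data — can be sketched as a single weighted objective. This is a minimal NumPy illustration assuming mean-squared-error losses and a hypothetical weighting factor `alpha`; the patent does not fix the loss functions or the weighting.

```python
import numpy as np

def mse(a, b):
    """Mean-squared-error loss between two arrays."""
    return float(np.mean((a - b) ** 2))

def guidance_loss(second_mid_outputs, first_mid_outputs,
                  final_output, annotated_output, alpha=0.5):
    """Total loss for the second neural network: intermediate-layer losses
    against the corresponding first-network layers, plus the final-output
    loss against the pre-annotated output data, weighted by `alpha`."""
    mid_loss = sum(mse(s, f) for s, f in
                   zip(second_mid_outputs, first_mid_outputs))
    out_loss = mse(final_output, annotated_output)
    return alpha * mid_loss + (1.0 - alpha) * out_loss

# Hypothetical outputs for two corresponding intermediate-layer pairs:
# the second network's features sit 0.1 away from the first network's.
first_mids = [np.zeros(8), np.zeros(4)]
second_mids = [m + 0.1 for m in first_mids]
loss = guidance_loss(second_mids, first_mids,
                     final_output=np.zeros(6), annotated_output=np.zeros(6))
```

Minimizing this quantity with respect to the second network's weight parameters realizes both update directions described above at once.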
According to the scene segmentation guidance network training method provided by the present invention, the training sample data for scene segmentation are input into the trained first neural network to obtain the output data of at least one first intermediate layer of the first neural network; the training sample data for scene segmentation are input into the second neural network to be trained to obtain the output data of at least one second intermediate layer of the second neural network and the final output data, the at least one second intermediate layer corresponding to the at least one first intermediate layer; and the second neural network is trained using the loss between the output data of the at least one second intermediate layer and the output data of the at least one first intermediate layer, and the loss between the final output data and the pre-annotated output data. By training the at least one second intermediate layer of the second neural network against the output data of the corresponding at least one first intermediate layer of the first neural network, the performance of the second neural network can be greatly improved while its computation amount remains unchanged, the training time of the second neural network is effectively reduced, and the training efficiency of the second neural network is improved.
Fig. 3 shows a flowchart of a scene-segmentation-based automatic driving processing method according to another embodiment of the present invention. As shown in Fig. 3, the scene-segmentation-based automatic driving processing method specifically comprises the following steps:
Step S301: acquire, in real time, the current frame image in the video captured and/or recorded by an image capture device during vehicle driving.
Step S302: input the current frame image into the second neural network to obtain the scene segmentation result corresponding to the current frame image.
The above steps are described with reference to steps S101-S102 in the embodiment of Fig. 1 and are not repeated here.
Step S303: determine the profile information of specific objects according to the scene segmentation result.
Specifically, the specific objects may include vehicles, pedestrians, roads, obstacles, and other objects. Those skilled in the art may set the specific objects according to actual needs, which is not limited here. After the scene segmentation result corresponding to the current frame image has been obtained, the profile information of specific objects such as vehicles, pedestrians, and roads can be determined from that scene segmentation result, so that the relative position relationship between the ego vehicle and the specific objects can subsequently be calculated.
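As a minimal sketch of deriving profile information from a segmentation result, assume the result is a label mask in which each pixel holds a class id; the class id and the bounding-box form of the "profile" below are hypothetical simplifications (a real implementation might trace full contours instead).

```python
import numpy as np

VEHICLE = 1  # hypothetical class id assigned to "vehicle" pixels

def profile_bbox(seg_mask, class_id):
    """Return (row_min, row_max, col_min, col_max) for a class in the
    segmentation mask, or None if the class does not appear."""
    rows, cols = np.nonzero(seg_mask == class_id)
    if rows.size == 0:
        return None
    return int(rows.min()), int(rows.max()), int(cols.min()), int(cols.max())

mask = np.zeros((6, 6), dtype=int)
mask[2:4, 1:5] = VEHICLE  # a hypothetical vehicle region in the mask
box = profile_bbox(mask, VEHICLE)
```

One such box (or contour) per detected object is enough input for the relative-position calculation of the next step.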
Step S304: calculate the relative position relationship between the ego vehicle and the specific objects according to the profile information of the specific objects.
Suppose the profile information of vehicle 1 and the profile information of vehicle 2 were obtained in step S303; then in step S304 the relative position relationship between the ego vehicle and vehicle 1 and the relative position relationship between the ego vehicle and vehicle 2 can be calculated from the profile information of vehicle 1 and the profile information of vehicle 2.
The relative position relationship between the ego vehicle and a specific object includes the distance information between them, e.g., the straight-line distance between the ego vehicle and vehicle 1 is 200 meters. The relative position relationship further includes the angle information between the ego vehicle and the specific object, e.g., the ego vehicle is in the direction 10 degrees to the right rear of vehicle 1.
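The distance and angle information of step S304 can be sketched from object positions in a common ground-plane coordinate frame. This is a hypothetical simplification: real systems recover such positions from calibrated cameras or other sensors.

```python
import math

def relative_position(ego_xy, obj_xy):
    """Straight-line distance (meters) and bearing (degrees) from the ego
    vehicle to an object; 0 degrees is straight ahead along +y, and
    positive angles are to the right."""
    dx = obj_xy[0] - ego_xy[0]
    dy = obj_xy[1] - ego_xy[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dx, dy))

# Vehicle 1 hypothetically 200 m directly ahead of the ego vehicle.
dist, angle = relative_position((0.0, 0.0), (0.0, 200.0))
```

The pair (distance, bearing) is exactly the distance-plus-angle form of relative position relationship used in the examples of this step.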
Step S305: determine the travel route and/or driving instruction according to the calculated relative position relationship.
According to the calculated relative position relationship between the ego vehicle and the specific objects, the travel route of the ego vehicle within a preset time interval and/or the driving instruction can be determined. Specifically, the driving instruction may include instructions such as starting, stopping, traveling at a certain speed, or accelerating or decelerating at a certain acceleration. Those skilled in the art may set the preset time interval according to actual needs, which is not limited here.
For example, if the calculated relative position relationship shows a pedestrian 10 meters directly ahead of the ego vehicle, the driving instruction may be determined as decelerating at an acceleration of 6 m/s²; or, if the calculated relative position relationship shows vehicle 1 at a distance of 200 meters directly ahead of the ego vehicle and vehicle 2 at a distance of 2 meters in the direction 45 degrees to the left of the ego vehicle, the travel route may be determined as traveling along the route ahead.
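A toy rule-based version of this determination might look as follows. The thresholds are hypothetical, and the 6 m/s² deceleration merely mirrors the example above rather than being a prescribed value.

```python
def driving_instruction(objects):
    """Choose a driving instruction from (kind, distance_m, angle_deg)
    tuples describing specific objects relative to the ego vehicle."""
    for kind, distance, angle in objects:
        # Hypothetical rule: a pedestrian within 15 m, roughly straight
        # ahead, triggers deceleration at 6 m/s^2 as in the example above.
        if kind == "pedestrian" and distance <= 15 and abs(angle) < 20:
            return ("decelerate", 6.0)
    return ("follow_route_ahead", None)
```

With a pedestrian 10 m directly ahead this yields the deceleration instruction; with only distant vehicles it keeps the route ahead.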
Step S306: determine the travel route and/or driving instruction of the ego vehicle according to the road sign information contained in the scene segmentation result.
The scene segmentation result contains various road sign information, such as warning signs (roundabout, sharp left turn, consecutive curves, tunnel ahead, etc.), prohibition signs (no straight ahead, no entry), instruction signs (speed limit, directional driving lanes, U-turn permitted), road construction safety signs (construction ahead, left road closed, etc.), as well as guide signs, tourist signs, auxiliary signs, and so on. According to this specific road sign information, the travel route and/or driving instruction of the ego vehicle can be determined.
For example, with a current vehicle speed of 100 km/h, a deceleration instruction for the ego vehicle is determined according to the road sign information, contained in the scene segmentation result, that the speed limit is 80 km/h 500 m ahead; or, the ego vehicle is determined to drive to the right-hand road according to the road sign information, contained in the scene segmentation result, that the left road is closed 200 m ahead.
Step S307: determine the travel route and/or driving instruction according to the traffic light information contained in the scene segmentation result.
The scene segmentation result contains traffic light information, such as red and green light information. According to the traffic light information, travel routes and/or driving instructions such as whether to continue traveling along the current route or to decelerate and stop can be determined.
For example, the ego vehicle is determined to decelerate and stop according to the red light information 10 m ahead in the scene segmentation result; alternatively, the ego vehicle is determined to continue traveling along the current road according to the green light information 10 m ahead in the scene segmentation result.
Further, the above steps S305, S306 and S307 may be performed in parallel, and the travel route and/or driving instruction is determined by comprehensively considering the relative position relationship calculated from the scene segmentation result, the road sign information it contains, and/or the traffic light information it contains.
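The parallel combination of steps S305-S307 could be sketched as a simple priority rule; the priorities below (traffic lights over road signs over position-derived instructions) are a hypothetical choice, since the patent leaves the combination strategy open.

```python
def combine_decisions(position_instr, sign_instr, light_instr):
    """Pick the final instruction, preferring traffic-light information,
    then road-sign information, then position-derived instructions."""
    for instr in (light_instr, sign_instr, position_instr):
        if instr is not None:
            return instr
    return "follow_route_ahead"
```

For example, a red light overrides a speed-limit sign, which in turn overrides a position-based instruction.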
Step S308: perform automatic driving control on the ego vehicle according to the determined travel route and/or driving instruction.
After the travel route and/or driving instruction is determined, automatic driving control can be performed on the ego vehicle according to the determined travel route and/or driving instruction.
Step S309: collect the current frame image as training sample input data for scene segmentation, and manually annotate the current frame image, using the annotated image as the pre-annotated output data.
The current frame image and the annotated image can serve as the training sample input data and output data for scene segmentation in the sample library. The second neural network can be further optimized by training with the collected current frame images and annotated images, so that the output results of the second neural network become more accurate.
According to the scene-segmentation-based automatic driving processing method provided by the present invention, the scene segmentation result corresponding to the current frame image in the video can be obtained quickly and accurately using the trained second neural network, effectively improving the accuracy of image scene segmentation while ensuring the processing efficiency of the second neural network. Further, based on the obtained scene segmentation result, the relative position relationships between the ego vehicle and specific objects such as other vehicles, pedestrians, and roads can be calculated more accurately, and the travel route and/or driving instruction can be determined more precisely from the calculated relative position relationships. Based on the road sign information and traffic light information contained in the obtained scene segmentation result, the ego vehicle can better observe traffic laws and regulations and drive automatically in a safe, accurate, and law-abiding manner, improving the safety of automatic driving and optimizing the automatic driving processing mode.
Fig. 4 shows a functional block diagram of a scene-segmentation-based automatic driving processing device according to an embodiment of the present invention. As shown in Fig. 4, the device includes:
an acquisition module 410, adapted to acquire, in real time, the current frame image in the video captured and/or recorded by an image capture device during vehicle driving.
In this embodiment, the image capture device is illustrated by taking a camera mounted on an automatic driving vehicle as an example. To realize automatic driving, the traffic information around the vehicle can be collected by the camera mounted on the automatic driving vehicle, and the acquisition module 410 acquires in real time the current frame image of the video recorded by the camera, or the current frame image while the camera is shooting video.
An identification module 420, adapted to input the current frame image into the second neural network to obtain the scene segmentation result corresponding to the current frame image.
The second neural network is a shallow neural network with fewer layers and a fast computation speed, generally suitable for mobile devices and other lightweight computing devices. The number of layers of the first neural network is greater than that of the second neural network, and the first neural network has higher accuracy. Therefore, the output data of at least one intermediate layer of the pre-trained first neural network is used to perform guidance training on the second neural network, so that the final output data of the second neural network are consistent with the final output data of the first neural network, greatly improving the computational performance of the second neural network while retaining its computation speed. The second neural network is obtained by guidance training using the output data of at least one intermediate layer of the pre-trained first neural network, where the samples used to train the first neural network and the second neural network are training samples for scene segmentation.
The identification module 420 inputs the current frame image into the second neural network and can obtain the scene segmentation result corresponding to the current frame image.
A determining module 430, adapted to determine the travel route and/or driving instruction according to the scene segmentation result.
The scene segmentation result contains various objects. According to the relationships between the various objects and the ego vehicle, the prompt information the various objects provide to the ego vehicle, and so on, the determining module 430 can determine the travel route of the ego vehicle within a preset time interval and/or determine the driving instruction. Specifically, the driving instruction may include instructions such as starting, stopping, traveling at a certain speed, or accelerating or decelerating at a certain acceleration. Those skilled in the art may set the preset time interval according to actual needs, which is not limited here.
The determining module 430 is further adapted to determine the profile information of specific objects according to the scene segmentation result; calculate the relative position relationship between the ego vehicle and the specific objects according to the profile information of the specific objects; and determine the travel route and/or driving instruction according to the calculated relative position relationship.
Specifically, the specific objects may include vehicles, pedestrians, roads, obstacles, and other objects. Those skilled in the art may set the specific objects according to actual needs, which is not limited here. After the identification module 420 has obtained the scene segmentation result corresponding to the current frame image, the determining module 430 can determine the profile information of specific objects such as vehicles, pedestrians, and roads from that scene segmentation result. For example, if the determining module 430 has obtained the profile information of vehicle 1 and the profile information of vehicle 2, it calculates the relative position relationship between the ego vehicle and vehicle 1 and the relative position relationship between the ego vehicle and vehicle 2 from the profile information of vehicle 1 and the profile information of vehicle 2.
The relative position relationship between the ego vehicle and a specific object includes the distance information between them; for example, the determining module 430 determines that the straight-line distance between the ego vehicle and vehicle 1 is 200 meters. The relative position relationship further includes the angle information between the ego vehicle and the specific object; for example, the determining module 430 determines that the ego vehicle is in the direction 10 degrees to the right rear of vehicle 1.
According to the calculated relative position relationship between the ego vehicle and the specific objects, the determining module 430 can determine the travel route of the ego vehicle within a preset time interval and/or determine the driving instruction. For example, if the calculated relative position relationship shows a pedestrian 10 meters directly ahead of the ego vehicle, the determining module 430 determines that the driving instruction may be to decelerate at an acceleration of 6 m/s²; or, if the calculated relative position relationship shows vehicle 1 at a distance of 200 meters directly ahead of the ego vehicle and vehicle 2 at a distance of 2 meters in the direction 45 degrees to the left of the ego vehicle, the determining module 430 determines that the travel route may be traveling along the route ahead.
The determining module 430 is also further adapted to determine the travel route and/or driving instruction of the ego vehicle according to the road sign information contained in the scene segmentation result.
The scene segmentation result contains various road sign information, such as warning signs (roundabout, sharp left turn, consecutive curves, tunnel ahead, etc.), prohibition signs (no straight ahead, no entry), instruction signs (speed limit, directional driving lanes, U-turn permitted), road construction safety signs (construction ahead, left road closed, etc.), as well as guide signs, tourist signs, auxiliary signs, and so on. According to this specific road sign information, the determining module 430 can determine the travel route and/or driving instruction of the ego vehicle.
For example, with a current vehicle speed of 100 km/h, the determining module 430 determines a deceleration instruction for the ego vehicle according to the road sign information, contained in the scene segmentation result, that the speed limit is 80 km/h 500 m ahead; or the determining module 430 determines that the ego vehicle drives to the right-hand road according to the road sign information, contained in the scene segmentation result, that the left road is closed 200 m ahead.
The determining module 430 is further adapted to determine the travel route and/or driving instruction according to the traffic light information contained in the scene segmentation result.
The scene segmentation result contains traffic light information, such as red and green light information. According to the red and green light information, the determining module 430 can determine travel routes and/or driving instructions such as whether to continue traveling along the current route or to decelerate and stop.
For example, the determining module 430 determines that the ego vehicle decelerates and stops according to the red light information 10 m ahead in the scene segmentation result; alternatively, the determining module 430 determines that the ego vehicle continues traveling along the current road according to the green light information 10 m ahead in the scene segmentation result.
A control module 440, adapted to perform automatic driving control on the ego vehicle according to the determined travel route and/or driving instruction.
After the determining module 430 has determined the travel route and/or driving instruction, the control module 440 can perform automatic driving control on the ego vehicle according to the determined travel route and/or driving instruction. Suppose the driving instruction determined by the determining module 430 is to decelerate at an acceleration of 6 m/s²; the control module 440 performs automatic driving control on the ego vehicle and controls the braking system of the ego vehicle so that the ego vehicle decelerates at an acceleration of 6 m/s².
According to the scene-segmentation-based automatic driving processing device provided by the present invention, the current frame image in the video captured and/or recorded by the image capture device during vehicle driving is acquired in real time; the current frame image is input into the second neural network to obtain the scene segmentation result corresponding to the current frame image, where the second neural network is obtained by guidance training using the output data of at least one intermediate layer of the pre-trained first neural network, and the number of layers of the first neural network is greater than that of the second neural network; the travel route and/or driving instruction is determined according to the scene segmentation result; and automatic driving control is performed on the ego vehicle according to the determined travel route and/or driving instruction. The present invention uses the output data of at least one intermediate layer of the first neural network, which has more layers, to perform guidance training on the second neural network, which has fewer layers, so that the trained second neural network greatly improves its accuracy while maintaining fast computation. Using the second neural network, the current frame image can be processed quickly and accurately to obtain the scene segmentation result, and the travel route and/or driving instruction can be accurately determined from the scene segmentation result, helping to improve the safety of automatic driving. Further, based on the obtained scene segmentation result, the relative position relationships between the ego vehicle and specific objects such as other vehicles, pedestrians, and roads can be calculated more accurately, and the travel route and/or driving instruction can be determined more precisely from the calculated relative position relationships. Based on the road sign information and traffic light information contained in the obtained scene segmentation result, the ego vehicle can better observe traffic laws and regulations and drive automatically in a safe, accurate, and law-abiding manner.
Fig. 5 shows a functional block diagram of a scene-segmentation-based automatic driving processing device according to another embodiment of the present invention. As shown in Fig. 5, compared with Fig. 4, the device further includes:
a scene segmentation guidance training module 450, which includes a first output unit 451, a second output unit 452 and a guidance training unit 453, and may also include a down-sampling unit 454.
The first output unit 451 is adapted to input the training sample data for scene segmentation into the trained first neural network to obtain the output data of at least one first intermediate layer of the first neural network.
The first neural network is a neural network that has been trained and fixed in advance. Specifically, the first neural network has been trained in advance using the training sample data of multiple scene segmentations, so that it is already well suited to scene segmentation. The first neural network preferably uses a deep neural network, such as a neural network applied to a cloud server, which has good performance, a large computation amount, and high accuracy, but may be slower. The first neural network can output the output data of multiple first intermediate layers; for example, the first neural network includes 4 first intermediate layers, namely the 4th, 3rd, 2nd, and 1st first intermediate layers, where the 1st first intermediate layer is the bottleneck layer of the first neural network.
The first output unit 451 inputs the training sample data for scene segmentation into the first neural network and can obtain the output data of at least one first intermediate layer of the first neural network. Here, the first output unit 451 may obtain only the output data of one first intermediate layer, may obtain the output data of multiple adjacent first intermediate layers, or may obtain the output data of multiple spaced first intermediate layers, which is configured according to the actual conditions of the implementation and is not limited here.
The second output unit 452 is adapted to input the training sample data for scene segmentation into the second neural network to be trained, obtaining the output data of at least one second intermediate layer of the second neural network and the final output data, the at least one second intermediate layer corresponding to the at least one first intermediate layer.
The second neural network is the neural network to be trained in the guidance training of the scene segmentation network; it is a shallow neural network, such as a neural network applied to a mobile terminal, whose computing capability is limited and whose performance is modest. The number of layers of the first neural network is greater than that of the second neural network. For example, the first neural network has 4 layers, namely the 4th, 3rd, 2nd, and 1st first intermediate layers; the second neural network has 2 layers, namely the 2nd and 1st second intermediate layers.
The second output unit 452 inputs the training sample data for scene segmentation into the second neural network and obtains the output data of at least one second intermediate layer of the second neural network, where the at least one second intermediate layer corresponds to the at least one first intermediate layer: the 1st first intermediate layer of the first neural network corresponds to the 1st second intermediate layer of the second neural network, and the 2nd first intermediate layer of the first neural network corresponds to the 2nd second intermediate layer of the second neural network.
The output data of the second intermediate layers of the second neural network obtained by the second output unit 452 must correspond to the output data of the first intermediate layers of the first neural network that are obtained. If the first output unit 451 obtains the output data of two first intermediate layers of the first neural network, the second output unit 452 also needs to obtain the output data of two second intermediate layers of the second neural network. For example, if the first output unit 451 obtains the output data of the 1st and 2nd first intermediate layers of the first neural network, the second output unit 452 correspondingly obtains the output data of the 1st and 2nd second intermediate layers of the second neural network.
Preferably, the at least one first intermediate layer may include the bottleneck layer of the first neural network, i.e., the 1st first intermediate layer of the first neural network, and the at least one second intermediate layer may include the bottleneck layer of the second neural network, i.e., the 1st second intermediate layer of the second neural network. The bottleneck layer is the topmost hidden layer of a neural network, that is, the intermediate layer whose output vector has the smallest dimension. Using the bottleneck layer helps to ensure that, when the guidance training unit 453 subsequently performs training, the final output data are more accurate, obtaining a better training result.
When the second output unit 452 inputs the training sample data for scene segmentation into the second neural network to be trained, in addition to obtaining the output data of the at least one second intermediate layer of the second neural network, the second output unit 452 also needs to obtain the final output data of the second neural network, so that the loss can be computed from the final output data and the second neural network can be trained.
A down-sampling unit 454, adapted to down-sample the training sample data for scene segmentation and use the processed data as the training sample data for scene segmentation of the second neural network.
Considering that the second neural network is a shallow neural network, directly using the training sample data for scene segmentation may slow down the second neural network when those data are large. Optionally, the down-sampling unit 454 may first down-sample the training sample data for scene segmentation; for example, when the training sample data are pictures, the down-sampling unit 454 may first reduce the picture resolution, and the processed training sample data are then used as the training sample data for scene segmentation input into the second neural network. In this way, the second output unit 452 trains with the low-resolution, down-sampled training sample data for scene segmentation, the first output unit 451 trains with the high-resolution training sample data for scene segmentation, and when the guidance training unit 453 performs training using the output data of the two neural networks, the second neural network can produce high-resolution output results even from low-resolution training sample data for scene segmentation.
The guidance training unit 453 is adapted to train the second neural network using the loss between the output data of the at least one second intermediate layer and the output data of the at least one first intermediate layer, and the loss between the final output data and the pre-annotated output data.
Based on the loss between the output data of the at least one second intermediate layer and the output data of the at least one first intermediate layer, the guidance training unit 453 can update the weight parameters of the second neural network so that the output data of the at least one second intermediate layer of the second neural network approach, as closely as possible, the output data of the at least one first intermediate layer of the first neural network.
Meanwhile, based on the loss between the final output data of the second neural network and the pre-annotated output data, the guidance training unit 453 can update the weight parameters of the second neural network so that the final output data of the second neural network approach the pre-annotated output data as closely as possible, ensuring the accuracy of the final output data of the second neural network. Training of the second neural network is completed by executing the above units. Optionally, when the scene segmentation guidance training module 450 includes the down-sampling unit 454, the down-sampling unit 454 also needs to pre-annotate the down-sampled training sample data for scene segmentation, obtaining the pre-annotated output data of the down-sampled training sample data. Based on the loss between the final output data of the second neural network and the pre-annotated output data after down-sampling, the guidance training unit 453 can update the weight parameters of the second neural network so that the final output data of the second neural network approach the pre-annotated output data of the down-sampled data as closely as possible, ensuring the accuracy of the final output data of the second neural network.
A collection module 460, adapted to collect the current frame image as training sample input data for scene segmentation, and to manually annotate the current frame image, using the annotated image as the pre-annotated output data.
The current frame image and the annotated image can serve as the training sample input data and output data for scene segmentation in the sample library. Using the current frame images and annotated images collected by the collection module 460, the second neural network can be further optimized by training, so that the output results of the second neural network become more accurate.
According to the scene-segmentation-based autonomous driving processing apparatus provided by the present invention, the trained second neural network can quickly and accurately obtain the scene segmentation result corresponding to the current frame image in the video, effectively improving the accuracy of image scene segmentation while ensuring the processing efficiency of the second neural network. Further, by collecting current frame images, labeling them manually, and placing the current frame images and labeled images into the sample library, optimization training can be performed on the second neural network, so that the output results of the second neural network become more accurate.
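The collect-and-label loop performed by the collection module can be sketched as below. The class and method names are hypothetical stand-ins; the patent only specifies that each current frame and its manually labeled counterpart are stored as an input/output training pair in the sample library.

```python
class SampleLibrary:
    """Stores (input image, manually labeled segmentation) pairs that
    are later used for optimization training of the second network."""

    def __init__(self):
        self.samples = []

    def collect(self, frame, manual_label):
        # The current frame is the training sample input data; the
        # manually labeled image is the pre-labeled output data.
        self.samples.append((frame, manual_label))

    def __len__(self):
        return len(self.samples)
```

A fine-tuning pass would then iterate over `library.samples` and apply the same guided-training losses used in the initial training.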
The present invention also provides a non-volatile computer storage medium storing at least one executable instruction, and the computer executable instruction can execute the scene-segmentation-based autonomous driving processing method in any of the above method embodiments.
Fig. 6 shows a schematic structural diagram of a computing device according to an embodiment of the present invention. The specific embodiments of the present invention do not limit the specific implementation of the computing device.
As shown in Fig. 6, the computing device may include: a processor (processor) 602, a communications interface (Communications Interface) 604, a memory (memory) 606, and a communication bus 608.
Wherein:
The processor 602, the communications interface 604, and the memory 606 communicate with each other via the communication bus 608.
The communications interface 604 is used for communicating with network elements of other devices, such as clients or other servers.
The processor 602 is used to execute the program 610, and may specifically perform the relevant steps in the above embodiments of the scene-segmentation-based autonomous driving processing method.
Specifically, the program 610 may include program code, and the program code includes computer operation instructions.
The processor 602 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC, Application Specific Integrated Circuit), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the computing device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 606 is used for storing the program 610. The memory 606 may include high-speed RAM memory, and may further include non-volatile memory (non-volatile memory), for example at least one disk memory.
The program 610 may specifically be used to cause the processor 602 to perform the scene-segmentation-based autonomous driving processing method in any of the above method embodiments. For the specific implementation of each step in the program 610, reference may be made to the corresponding descriptions of the corresponding steps and units in the above embodiments of scene-segmentation-based autonomous driving processing, which will not be repeated here. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the devices and modules described above may refer to the corresponding process descriptions in the foregoing method embodiments, and are not repeated here.
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system, or other equipment. Various general-purpose systems may also be used with the teachings herein. As described above, the structure required to construct such systems is obvious. In addition, the present invention is not directed to any particular programming language. It should be understood that various programming languages may be used to implement the invention described herein, and the above descriptions of specific languages are made to disclose the best mode of the invention.
In the specification provided here, numerous specific details are set forth. It is to be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention, features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art will appreciate that the modules in the devices of an embodiment can be adaptively changed and arranged in one or more devices different from the embodiment. The modules, units, or components in an embodiment can be combined into one module, unit, or component, and furthermore can be divided into a plurality of sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent, or similar purpose.
In addition, those skilled in the art will understand that, although some embodiments described herein include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components according to embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claims. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means can be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any order. These words may be interpreted as names.
Claims (10)
1. A scene-segmentation-based autonomous driving processing method, comprising:
acquiring, in real time, a current frame image in a video captured and/or recorded by an image acquisition device during vehicle driving;
inputting the current frame image into a second neural network to obtain a scene segmentation result corresponding to the current frame image; wherein the second neural network is obtained through guided training using the output data of at least one intermediate layer of a pre-trained first neural network, and the number of layers of the first neural network is greater than the number of layers of the second neural network;
determining a travel route and/or a driving instruction according to the scene segmentation result;
performing automatic driving control on the vehicle according to the determined travel route and/or driving instruction.
2. The method according to claim 1, wherein determining a travel route and/or a driving instruction according to the scene segmentation result further comprises:
determining profile information of a specific object according to the scene segmentation result;
calculating the relative position relationship between the vehicle and the specific object according to the profile information of the specific object;
determining the travel route and/or driving instruction according to the calculated relative position relationship.
3. The method according to claim 2, wherein the relative position relationship between the vehicle and the specific object includes distance information and/or angle information between the vehicle and the specific object.
4. The method according to claim 1, wherein determining a travel route and/or a driving instruction according to the scene segmentation result further comprises:
determining the vehicle travel route and/or driving instruction according to road sign information contained in the scene segmentation result.
5. The method according to claim 1, wherein determining a travel route and/or a driving instruction according to the scene segmentation result further comprises:
determining the travel route and/or driving instruction according to traffic light information contained in the scene segmentation result.
6. The method according to any one of claims 1-5, wherein the training process of the second neural network includes:
inputting training sample data for scene segmentation into the trained first neural network to obtain output data of at least one first intermediate layer of the first neural network;
inputting the training sample data for scene segmentation into the second neural network to be trained to obtain output data of at least one second intermediate layer and final output data of the second neural network, the at least one second intermediate layer having a correspondence with the at least one first intermediate layer;
training the second neural network using the loss between the output data of the at least one second intermediate layer and the output data of the at least one first intermediate layer, and the loss between the final output data and pre-labeled output data.
7. The method according to claim 6, wherein the at least one first intermediate layer includes a bottleneck layer of the first neural network, and the at least one second intermediate layer includes a bottleneck layer of the second neural network.
8. A scene-segmentation-based autonomous driving processing apparatus, comprising:
an acquisition module, adapted to acquire, in real time, a current frame image in a video captured and/or recorded by an image acquisition device during vehicle driving;
an identification module, adapted to input the current frame image into a second neural network to obtain a scene segmentation result corresponding to the current frame image; wherein the second neural network is obtained through guided training using the output data of at least one intermediate layer of a pre-trained first neural network, and the number of layers of the first neural network is greater than the number of layers of the second neural network;
a determining module, adapted to determine a travel route and/or a driving instruction according to the scene segmentation result;
a control module, adapted to perform automatic driving control on the vehicle according to the determined travel route and/or driving instruction.
9. A computing device, comprising: a processor, a memory, a communications interface, and a communication bus, wherein the processor, the memory, and the communications interface communicate with each other via the communication bus;
the memory is used to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the scene-segmentation-based autonomous driving processing method according to any one of claims 1-7.
10. A computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the scene-segmentation-based autonomous driving processing method according to any one of claims 1-7.
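The processing flow recited in claim 1 above — acquire the current frame, obtain a scene segmentation result from the second (smaller) network, determine a travel route and/or driving instruction from the result, then apply automatic driving control — can be sketched as follows. This is purely illustrative: the function names, and the use of a dictionary to represent a segmentation result (here carrying the traffic-light and lane information mentioned in claims 4-5), are hypothetical and not part of the patent.

```python
def plan(segmentation):
    """Toy planner: derive a (route, instruction) pair from a
    hypothetical dictionary-shaped scene segmentation result,
    e.g. stop on a red light, otherwise follow the detected lane."""
    if segmentation.get("traffic_light") == "red":
        return None, "stop"
    return segmentation.get("lane", "straight"), "go"

def drive_step(frame, segment, control):
    """One step of the claimed method: segment the current frame,
    determine a travel route and/or driving instruction from the
    segmentation result, then apply automatic driving control."""
    segmentation = segment(frame)          # second neural network inference
    route, instruction = plan(segmentation)
    control(route, instruction)            # automatic driving control
    return route, instruction
```

In the patented method, `segment` would be the guided-trained second neural network, whose reduced layer count keeps this per-frame loop efficient.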
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711156360.0A CN107944375A (en) | 2017-11-20 | 2017-11-20 | Automatic Pilot processing method and processing device based on scene cut, computing device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107944375A true CN107944375A (en) | 2018-04-20 |
Family
ID=61929155
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711156360.0A Pending CN107944375A (en) | 2017-11-20 | 2017-11-20 | Automatic Pilot processing method and processing device based on scene cut, computing device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107944375A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150352999A1 (en) * | 2014-06-06 | 2015-12-10 | Denso Corporation | Driving context generation system |
CN106548190A (en) * | 2015-09-18 | 2017-03-29 | 三星电子株式会社 | Model training method and equipment and data identification method |
CN105488534A (en) * | 2015-12-04 | 2016-04-13 | 中国科学院深圳先进技术研究院 | Method, device and system for deeply analyzing traffic scene |
CN107247989A (en) * | 2017-06-15 | 2017-10-13 | 北京图森未来科技有限公司 | A kind of neural network training method and device |
Non-Patent Citations (2)
Title |
---|
VIJAY BADRINARAYANAN ET AL.: "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Robust Semantic Pixel-Wise Labelling", arXiv *
ZHANG MAOYU: "Industry Patent Analysis Report, Vol. 58: Autonomous Driving (《产业专利分析报告 第58册 自动驾驶》)", 30 June 2017 *
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108897313A (en) * | 2018-05-23 | 2018-11-27 | 清华大学 | A kind of end-to-end Vehicular automatic driving system construction method of layer-stepping |
CN110531754A (en) * | 2018-05-24 | 2019-12-03 | 通用汽车环球科技运作有限责任公司 | Control system, control method and the controller of autonomous vehicle |
CN108717536A (en) * | 2018-05-28 | 2018-10-30 | 深圳市易成自动驾驶技术有限公司 | Driving instruction and methods of marking, equipment and computer readable storage medium |
CN108681825A (en) * | 2018-05-28 | 2018-10-19 | 深圳市易成自动驾驶技术有限公司 | Driving instruction and methods of marking, equipment and computer readable storage medium |
CN108960304A (en) * | 2018-06-20 | 2018-12-07 | 东华大学 | A kind of deep learning detection method of network trading fraud |
CN108960304B (en) * | 2018-06-20 | 2022-07-15 | 东华大学 | Deep learning detection method for network transaction fraud behaviors |
CN109165562A (en) * | 2018-07-27 | 2019-01-08 | 深圳市商汤科技有限公司 | Training method, crosswise joint method, apparatus, equipment and the medium of neural network |
CN109165562B (en) * | 2018-07-27 | 2021-06-04 | 深圳市商汤科技有限公司 | Neural network training method, lateral control method, device, equipment and medium |
CN110857100A (en) * | 2018-08-09 | 2020-03-03 | 通用汽车环球科技运作有限责任公司 | Method for embedded coding of context information using neural network |
CN111204346A (en) * | 2018-11-05 | 2020-05-29 | 通用汽车环球科技运作有限责任公司 | Method and system for end-to-end learning of control commands for autonomous vehicles |
CN109455178B (en) * | 2018-11-13 | 2023-11-17 | 吉林大学 | Road traffic vehicle driving active control system and method based on binocular vision |
CN109455178A (en) * | 2018-11-13 | 2019-03-12 | 吉林大学 | A kind of road vehicles traveling active control system and method based on binocular vision |
CN109558942A (en) * | 2018-11-20 | 2019-04-02 | 电子科技大学 | A kind of neural network moving method based on either shallow study |
CN109558942B (en) * | 2018-11-20 | 2021-11-26 | 电子科技大学 | Neural network migration method based on shallow learning |
CN111311646B (en) * | 2018-12-12 | 2023-04-07 | 杭州海康威视数字技术股份有限公司 | Optical flow neural network training method and device |
CN111311646A (en) * | 2018-12-12 | 2020-06-19 | 杭州海康威视数字技术股份有限公司 | Optical flow neural network training method and device |
CN109649255A (en) * | 2019-01-11 | 2019-04-19 | 福建天眼视讯网络科技有限公司 | Intelligent automotive light control system and its method neural network based |
CN109991978A (en) * | 2019-03-19 | 2019-07-09 | 莫日华 | A kind of method and device of network-based multi-information fusion |
CN109991978B (en) * | 2019-03-19 | 2021-04-02 | 莫日华 | Intelligent automatic driving method and device based on network |
CN111923919A (en) * | 2019-05-13 | 2020-11-13 | 广州汽车集团股份有限公司 | Vehicle control method, vehicle control device, computer equipment and storage medium |
CN110276322A (en) * | 2019-06-26 | 2019-09-24 | 湖北亿咖通科技有限公司 | A kind of image processing method and device of the unused resource of combination vehicle device |
CN110276322B (en) * | 2019-06-26 | 2022-01-07 | 湖北亿咖通科技有限公司 | Image processing method and device combined with vehicle machine idle resources |
CN112146680A (en) * | 2019-06-28 | 2020-12-29 | 百度(美国)有限责任公司 | Determining vanishing points based on feature maps |
CN112146680B (en) * | 2019-06-28 | 2024-03-29 | 百度(美国)有限责任公司 | Determining vanishing points based on feature maps |
CN110852325A (en) * | 2019-10-31 | 2020-02-28 | 上海商汤智能科技有限公司 | Image segmentation method and device, electronic equipment and storage medium |
CN110852325B (en) * | 2019-10-31 | 2023-03-31 | 上海商汤智能科技有限公司 | Image segmentation method and device, electronic equipment and storage medium |
WO2021082517A1 (en) * | 2019-10-31 | 2021-05-06 | 上海商汤智能科技有限公司 | Neural network training method and apparatus, image segmentation method and apparatus, device, medium, and program |
CN114830204A (en) * | 2019-12-23 | 2022-07-29 | 罗伯特·博世有限公司 | Training neural networks through neural networks |
CN112686457B (en) * | 2021-01-04 | 2022-06-03 | 腾讯科技(深圳)有限公司 | Route arrival time estimation method and device, electronic equipment and storage medium |
CN112686457A (en) * | 2021-01-04 | 2021-04-20 | 腾讯科技(深圳)有限公司 | Route arrival time estimation method and device, electronic equipment and storage medium |
CN113609980A (en) * | 2021-08-04 | 2021-11-05 | 东风悦享科技有限公司 | Lane line sensing method and device for automatic driving vehicle |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107944375A (en) | Automatic Pilot processing method and processing device based on scene cut, computing device | |
JP7150846B2 (en) | Object interaction prediction system and method for autonomous vehicles | |
CN108216229B (en) | Vehicle, road line detection and driving control method and device | |
RU2734744C1 (en) | Operational control of autonomous vehicle, including operation of model instance of partially observed markov process of decision making | |
US11693409B2 (en) | Systems and methods for a scenario tagger for autonomous vehicles | |
RU2733015C1 (en) | Real-time vehicle control | |
CN108133484A (en) | Automatic Pilot processing method and processing device based on scene cut, computing device | |
CN109389838A (en) | Unmanned crossing paths planning method, system, equipment and storage medium | |
CN108475057A (en) | The method and system of one or more tracks of situation prediction vehicle based on vehicle periphery | |
CN110377025A (en) | Sensor aggregation framework for automatic driving vehicle | |
CN109902899B (en) | Information generation method and device | |
CN109426256A (en) | The lane auxiliary system based on driver intention of automatic driving vehicle | |
CN108139884A (en) | The method simulated the physical model of automatic driving vehicle movement and combine machine learning | |
CN109520744A (en) | The driving performance test method and device of automatic driving vehicle | |
CN108921200A (en) | Method, apparatus, equipment and medium for classifying to Driving Scene data | |
US20140058652A1 (en) | Traffic information processing | |
CN109213134A (en) | The method and apparatus for generating automatic Pilot strategy | |
KR102007181B1 (en) | Method for transmitting route data for traffic telematics | |
CN107389080A (en) | A kind of vehicle route air navigation aid and electronic equipment | |
CN111369783B (en) | Method and system for identifying intersection | |
CN108733046A (en) | The system and method that track for automatic driving vehicle is planned again | |
CN110349416A (en) | The traffic light control system based on density for automatic driving vehicle (ADV) | |
EP2743898A3 (en) | A method of and a navigation device for time-dependent route planning | |
CN108646752A (en) | The control method and device of automated driving system | |
CN107894237A (en) | Method and apparatus for showing navigation information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20180420 |