CN110210619A - Training method and apparatus for a neural network, electronic device, and storage medium - Google Patents

Training method and apparatus for a neural network, electronic device, and storage medium Download PDF

Info

Publication number
CN110210619A
CN110210619A (application CN201910430085.XA)
Authority
CN
China
Prior art keywords
value
fixed-point number
neural network
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910430085.XA
Other languages
Chinese (zh)
Inventor
李润东
王岩
秦红伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN201910430085.XA priority Critical patent/CN110210619A/en
Publication of CN110210619A publication Critical patent/CN110210619A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a training method and apparatus for a neural network, an electronic device, and a storage medium. The method includes: mapping each parameter of a trained floating-point neural network to a fixed-point number according to a bit width determined by the deployment environment of the fixed-point neural network, to obtain a first neural network in which every parameter is a fixed-point number; and training the first neural network with a sample image set to obtain the fixed-point neural network. With the present disclosure, the fixed-point neural network obtained by mapping floating-point numbers to fixed-point numbers and then training can meet the precision requirements of the deployment environment of the electronic device on which it is deployed.

Description

Training method and apparatus for a neural network, electronic device, and storage medium
Technical field
The present disclosure relates to the field of computer vision, and in particular to a training method and apparatus for a neural network, an electronic device, and a storage medium.
Background technique
In the related art, detection in image processing scenarios can be performed with deep learning models. Deep learning models generally use floating-point numbers for storage and computation, so the storage and computation overhead of a floating-point deep learning model is high. To reduce this overhead, a low-precision model can be used. However, when a low-precision model is deployed and run on an electronic device, it may fail to reach the precision required by the deployment environment, and the related art offers no effective solution to this problem.
Summary of the invention
The present disclosure proposes a technical solution for training a neural network.
According to an aspect of the present disclosure, a training method for a neural network is provided. The method includes:
mapping each parameter of a trained floating-point neural network to a fixed-point number according to a bit width determined by the deployment environment of the fixed-point neural network, to obtain a first neural network in which every parameter is a fixed-point number;
training the first neural network with a sample image set to obtain the fixed-point neural network;
wherein, while training the first neural network, the floating-point parameter values of the first neural network appearing in each forward pass are mapped to fixed-point numbers, and the floating-point values of the intermediate variables of the first neural network appearing in each forward pass are likewise mapped to fixed-point numbers; in each backward pass, the parameter values are updated using the gradients received by the parameters of the first neural network.
In a possible implementation, mapping each parameter of the trained floating-point neural network to a fixed-point number according to the bit width includes:
determining a first effective numerical range from the parameter values of the floating-point neural network;
linearly quantizing the parameter values of the trained floating-point neural network to fixed-point numbers according to the first effective numerical range and the bit width.
Mapping the floating-point parameter values of the first neural network appearing in each forward pass to fixed-point numbers while training the first neural network includes:
linearly quantizing the floating-point parameter values of the first neural network appearing in each forward pass to fixed-point numbers according to the first effective numerical range and the bit width.
In a possible implementation, mapping the floating-point values of the intermediate variables of the first neural network appearing in each forward pass to fixed-point numbers while training the first neural network includes:
linearly quantizing the floating-point values of the intermediate variables of the first neural network appearing in each forward pass to fixed-point numbers according to a second effective numerical range and the bit width;
wherein the second effective numerical range is determined from the values of the intermediate variables of the first neural network.
In a possible implementation, the values of the intermediate variables of the first neural network are obtained by the following steps:
inputting each sample image in an image set of the deployment environment of the fixed-point neural network into the first neural network for forward computation;
taking the output values of every layer of the first neural network except the last layer as the values of the intermediate variables of the first neural network.
In a possible implementation, after the second effective numerical range is determined and before the first neural network is trained, the method further includes:
linearly quantizing the values of the intermediate variables of the first neural network to fixed-point numbers according to the second effective numerical range and the bit width.
In a possible implementation, linearly quantizing the parameter values of the trained floating-point neural network to fixed-point numbers according to the first effective numerical range and the bit width includes:
determining a first quantization coefficient, represented as a floating-point number, according to the first effective numerical range and the bit width;
converting the parameter values of the trained floating-point neural network to fixed-point numbers according to the first quantization coefficient.
Linearly quantizing the floating-point parameter values of the first neural network appearing in each forward pass to fixed-point numbers according to the first effective numerical range and the bit width includes:
converting the floating-point parameter values of the first neural network appearing in each forward pass to fixed-point numbers according to the first quantization coefficient.
In a possible implementation, linearly quantizing the floating-point values of the intermediate variables of the first neural network appearing in each forward pass to fixed-point numbers according to the second effective numerical range and the bit width includes:
determining a second quantization coefficient, represented as a floating-point number, according to the second effective numerical range and the bit width;
converting the floating-point values of the intermediate variables of the first neural network appearing in each forward pass to fixed-point numbers according to the second quantization coefficient.
In a possible implementation, when batch normalization is performed in a forward pass while training the first neural network, the mean used for the batch normalization is the same as the mean used for the corresponding batch normalization in the floating-point neural network, and the variance statistic used for the batch normalization is the same as the variance statistic used for the corresponding batch normalization in the floating-point neural network.
In a possible implementation, the first effective numerical range is determined according to a first statistical reference parameter, which is the ratio of the number of parameters of the floating-point neural network whose values fall within the first effective numerical range to the total number of parameters of the floating-point neural network.
In a possible implementation, the second effective numerical range is determined according to a second statistical reference parameter, which is the ratio of the number of intermediate variables of the first neural network whose values fall within the second effective numerical range to the total number of intermediate variables of the first neural network.
In a possible implementation, the method further includes:
after the first effective numerical range is determined and before the first neural network is trained, adjusting the first effective numerical range until the remainder of dividing the minimum value of the first effective numerical range by the first quantization coefficient (represented as a floating-point number) is 0.
In a possible implementation, the method further includes:
after the second effective numerical range is determined and before the first neural network is trained, adjusting the second effective numerical range until the remainder of dividing the minimum value of the second effective numerical range by the second quantization coefficient (represented as a floating-point number) is 0.
In a possible implementation, the sample image set used to train the first neural network is the same as the image set used to train the floating-point neural network.
According to an aspect of the present disclosure, an image processing method is provided, which performs image processing using a neural network trained by any of the methods described above.
In a possible implementation, performing image processing with the neural network includes:
while performing forward computation on an image to be processed with the neural network, linearly quantizing the input of each layer to a fixed-point number according to the effective numerical range of that layer's input values.
In a possible implementation, performing image processing with the fixed-point neural network includes:
replacing each floating-point multiplication in the image processing with an unsigned integer multiplication followed by a corresponding arithmetic right shift, and using the result in place of the floating-point product.
In a possible implementation, performing image processing with the fixed-point neural network includes:
for each floating-point addition in the image processing, determining a scaling factor according to the bit width and, according to the scaling factor, scaling all addends to the same floating-point scale as the output.
In a possible implementation, performing image processing with the fixed-point neural network includes:
for interpolation operations in the image processing, performing nearest-neighbor interpolation in the layers of the fixed-point neural network that perform interpolation;
wherein the nearest-neighbor interpolation includes: when an intermediate variable needs to be enlarged to N times its original size, taking, for each output pixel coordinate y, the input pixel value at the position nearest to y/N.
According to an aspect of the present disclosure, a video surveillance method is provided, including:
capturing a video image of a monitored area;
processing the captured video image using any of the image processing methods described above to obtain an image processing result;
outputting prompt information according to the obtained image processing result.
According to an aspect of the present disclosure, an intelligent driving method is provided, including:
capturing a video image of the surroundings of a vehicle;
processing the captured video image using any of the image processing methods described above to obtain an image processing result;
outputting indication information according to the obtained image processing result, so as to control the travel of the vehicle.
According to an aspect of the present disclosure, a training apparatus for a neural network is provided. The apparatus includes:
a first processing unit configured to map each parameter of a trained floating-point neural network to a fixed-point number according to a bit width determined by the deployment environment of the fixed-point neural network, to obtain a first neural network in which every parameter is a fixed-point number;
a second processing unit configured to train the first neural network with a sample image set to obtain the fixed-point neural network;
wherein, while training the first neural network, the floating-point parameter values of the first neural network appearing in each forward pass are mapped to fixed-point numbers, and the floating-point values of the intermediate variables of the first neural network appearing in each forward pass are likewise mapped to fixed-point numbers; in each backward pass, the parameter values are updated using the gradients received by the parameters of the first neural network.
In a possible implementation, the first processing unit is configured to:
determine a first effective numerical range from the parameter values of the floating-point neural network;
linearly quantize the parameter values of the trained floating-point neural network to fixed-point numbers according to the first effective numerical range and the bit width.
When mapping the floating-point parameter values of the first neural network appearing in each forward pass to fixed-point numbers while training the first neural network, the second processing unit is configured to linearly quantize those values to fixed-point numbers according to the first effective numerical range and the bit width.
In a possible implementation, when mapping the floating-point values of the intermediate variables of the first neural network appearing in each forward pass to fixed-point numbers while training the first neural network, the second processing unit is configured to linearly quantize those values to fixed-point numbers according to a second effective numerical range and the bit width;
wherein the second effective numerical range is determined from the values of the intermediate variables of the first neural network.
In a possible implementation, the apparatus further includes an acquiring unit that obtains the values of the intermediate variables of the first neural network by the following steps.
The acquiring unit is configured to:
input each sample image in an image set of the deployment environment of the fixed-point neural network into the first neural network for forward computation;
take the output values of every layer of the first neural network except the last layer as the values of the intermediate variables of the first neural network.
In a possible implementation, the apparatus further includes a third processing unit configured to:
after the second effective numerical range is determined and before the first neural network is trained, linearly quantize the values of the intermediate variables of the first neural network to fixed-point numbers according to the second effective numerical range and the bit width.
In a possible implementation, the first processing unit is configured to:
determine a first quantization coefficient, represented as a floating-point number, according to the first effective numerical range and the bit width;
convert the parameter values of the trained floating-point neural network to fixed-point numbers according to the first quantization coefficient.
When linearly quantizing the floating-point parameter values of the first neural network appearing in each forward pass to fixed-point numbers according to the first effective numerical range and the bit width, the second processing unit is configured to convert those values to fixed-point numbers according to the first quantization coefficient.
In a possible implementation, when linearly quantizing the floating-point values of the intermediate variables of the first neural network appearing in each forward pass to fixed-point numbers according to the second effective numerical range and the bit width, the second processing unit is configured to:
determine a second quantization coefficient, represented as a floating-point number, according to the second effective numerical range and the bit width;
convert the floating-point values of the intermediate variables of the first neural network appearing in each forward pass to fixed-point numbers according to the second quantization coefficient.
In a possible implementation, the second processing unit is further configured to:
when batch normalization is performed in a forward pass while training the first neural network, use for the batch normalization the same mean as that used for the corresponding batch normalization in the floating-point neural network, and the same variance statistic as that used for the corresponding batch normalization in the floating-point neural network.
In a possible implementation, the first processing unit is configured to determine the first effective numerical range according to a first statistical reference parameter, which is the ratio of the number of parameters of the floating-point neural network whose values fall within the first effective numerical range to the total number of parameters of the floating-point neural network.
In a possible implementation, the second processing unit is configured to determine the second effective numerical range according to a second statistical reference parameter, which is the ratio of the number of intermediate variables of the first neural network whose values fall within the second effective numerical range to the total number of intermediate variables of the first neural network.
In a possible implementation, the apparatus further includes:
a first adjustment unit configured to:
after the first effective numerical range is determined and before the first neural network is trained, adjust the first effective numerical range until the remainder of dividing the minimum value of the first effective numerical range by the first quantization coefficient (represented as a floating-point number) is 0.
In a possible implementation, the apparatus further includes:
a second adjustment unit configured to:
after the second effective numerical range is determined and before the first neural network is trained, adjust the second effective numerical range until the remainder of dividing the minimum value of the second effective numerical range by the second quantization coefficient (represented as a floating-point number) is 0.
In a possible implementation, the sample image set used to train the first neural network is the same as the sample image set used to train the floating-point neural network.
According to an aspect of the present disclosure, an image processing apparatus is provided, which performs image processing using a neural network trained by any of the methods described above.
In a possible implementation, the apparatus is configured to:
while performing forward computation on an image to be processed with the neural network, linearly quantize the input of each layer to a fixed-point number according to the effective numerical range of that layer's input values.
According to an aspect of the present disclosure, a video surveillance system is provided, including:
a first image sensor configured to capture a video image of a monitored area;
a first processor configured to process the captured video image using any of the image processing methods described above to obtain an image processing result;
a first output unit configured to output prompt information according to the obtained image processing result.
According to an aspect of the present disclosure, an intelligent driving system is provided, including:
a second image sensor configured to capture a video image of the surroundings of a vehicle;
a second processor configured to process the captured video image using any of the image processing methods described above to obtain an image processing result;
a second output unit configured to output indication information according to the obtained image processing result, so as to control the travel of the vehicle.
According to an aspect of the present disclosure, an electronic device is provided, including:
a third image sensor configured to capture a video image of the surroundings of the electronic device;
a third processor configured to process the captured video image using any of the image processing methods described above to obtain an image processing result;
a controller configured to control the travel of the electronic device according to the result generated by the third processor.
According to the present disclosure, an electronic device is provided, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform any of the neural network training methods described above.
According to an aspect of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored. When the computer program instructions are executed by a processor, any of the neural network training methods described above is implemented.
In the embodiments of the present disclosure, each parameter of a trained floating-point neural network is mapped to a fixed-point number according to a bit width determined by the deployment environment of the fixed-point neural network, yielding a first neural network in which every parameter is a fixed-point number; the first neural network is then trained with a sample image set to obtain the fixed-point neural network. While training the first neural network, the floating-point parameter values and the floating-point intermediate-variable values appearing in each forward pass are mapped to fixed-point numbers, and in each backward pass the parameter values are updated using the gradients received by the parameters of the first neural network. Because the mapping from floating-point to fixed-point numbers follows the bit width determined by the deployment environment of the fixed-point neural network, every parameter of the resulting first neural network is a fixed-point number, and training it with the sample image set yields the fixed-point neural network. The fixed-point neural network obtained by mapping floating-point numbers to fixed-point numbers and then training therefore reduces storage and computation overhead while meeting the precision requirements of the deployment environment of the electronic device on which it is deployed.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the present disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Detailed description of the invention
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure.
Fig. 1 shows a flowchart of a neural network training method according to an embodiment of the present disclosure.
Fig. 2 shows a flowchart of a neural network training method according to an embodiment of the present disclosure.
Fig. 3 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
Fig. 4 shows a schematic diagram of image processing results according to an embodiment of the present disclosure.
Fig. 5 shows a block diagram of a neural network training apparatus according to an embodiment of the present disclosure.
Fig. 6 shows a block diagram of a video surveillance system according to the present disclosure.
Fig. 7 shows a block diagram of an intelligent driving system according to the present disclosure.
Fig. 8 shows a block diagram of an electronic device according to the present disclosure.
Fig. 9 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Specific embodiment
Various exemplary embodiments, features, and aspects of the present disclosure are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise indicated.
The word "exemplary" as used herein means "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not to be construed as preferred over or superior to other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the term "at least one" herein means any one of multiple items or any combination of at least two of multiple items; for example, "at least one of A, B, and C" may mean any one or more elements selected from the set consisting of A, B, and C.
Furthermore, numerous specific details are given in the following detailed description to better explain the present disclosure. Those skilled in the art will understand that the present disclosure can also be practiced without certain of these details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the present disclosure.
Deep learning neural networks generally use floating-point numbers (IEEE 754 standard) for storage and computation in order to achieve high accuracy. To address the resulting high storage and computation overhead, the training process of a deep learning neural network can be modified, or an existing deep learning neural network can be fine-tuned (a technique that may be called low-precision training), so that the storage and/or computation of the neural network can be completed with fixed-point numbers or integers that occupy less storage and are cheaper to compute. The resulting fixed-point neural network has higher storage and computation efficiency and lower overhead than a floating-point neural network, and is better suited to deployment on end devices such as smartphones and security cameras. However, when the numerical precision of the storage and computation of a fixed-point neural network is severely limited (for example, a bit width of 4 or fewer bits per number), its accuracy drops significantly on complex tasks such as object detection and face recognition; that is, the precision required by the deployment environment of the electronic device cannot be reached. Moreover, when the obtained fixed-point neural network is deployed, some operations may still need to be executed with floating-point numbers, which degrades performance on end devices or makes the network impossible to deploy on custom hardware that does not support floating-point operations. In addition, the training process that maps floating-point numbers to fixed-point numbers, and/or the computation of the resulting fixed-point neural network, may rely on techniques that are difficult to implement efficiently on hardware devices; that is, hardware-specific support is still required and cannot be fully replaced by software, which reduces the universality of the deployment devices.
To address the above problems, the present disclosure converts a floating-point neural network into a fixed-point first neural network (the first neural network denotes the network before training), and then trains, quantizes, and fine-tunes the first neural network to obtain a fixed-point neural network whose precision approaches, or even recovers, that of the floating-point neural network. Finally, the fixed-point neural network is deployed and run on a device, and can be applied to scenarios such as object detection in image processing, face recognition, security video surveillance, and autonomous driving. Thus, even when the numerical precision of the deployment environment is severely limited, the resulting fixed-point neural network maintains an accuracy acceptable for the production environment (the precision required by the deployment-environment business). It is ensured that the fixed-point neural network trained with the present disclosure does not depend on any additional floating-point operations during deployment, which improves operating efficiency in the deployment environment, and that its numerical operations can be implemented efficiently on most hardware devices without depending on hardware-specific support, which improves the universality of the deployment devices.
Fig. 1 shows a flowchart of a training method for a neural network according to an embodiment of the present disclosure. The training method is applied to a training apparatus for a neural network; for example, the training apparatus may be executed by a terminal device or other processing device, where the terminal device may be a user equipment (UE), a mobile device, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the training method may be implemented by a processor invoking computer-readable instructions stored in a memory. As shown in Fig. 1, the flow includes:
Step S101: according to a bit width determined by the deployment environment of the fixed-point neural network, map each parameter of a trained floating-point neural network to a fixed-point number, obtaining a first neural network in which every parameter is a fixed-point number.
Step S102: train the first neural network with a sample image set to obtain the fixed-point neural network; wherein, while training the first neural network, the floating-point parameter values of the first neural network appearing in each forward pass are mapped to fixed-point numbers, and the floating-point values of the intermediate variables of the first neural network appearing in each forward pass are likewise mapped to fixed-point numbers; in each backward pass, the parameter values are updated using the gradients received by the parameters of the first neural network.
The neural network obtained by training with the present disclosure is a fixed-point neural network whose storage and computation overhead is modest. This fixed-point neural network is trained according to the bit width determined by its deployment environment, with each parameter of the trained floating-point neural network mapped to a fixed-point number. In other words, the fixed-point neural network obtained by mapping floating-point numbers to fixed-point numbers and then training both reduces storage and computation overhead and meets the precision requirements of the deployment environment of the electronic device on which it is deployed. Consequently, the fixed-point neural network does not depend on any additional floating-point operations during deployment, which improves operating efficiency in the deployment environment, and it is suitable for application scenarios such as object detection in image processing, face recognition, security video surveillance, and autonomous driving. Even when the numerical precision of the deployment environment is limited, the resulting fixed-point neural network maintains, compared with the floating-point neural network, an accuracy acceptable for the production environment (the precision required by the deployment-environment business).
Fig. 2 shows a flowchart of a training method for a neural network according to an embodiment of the present disclosure. The training method is applied to a training apparatus for a neural network; for example, the training apparatus may be executed by a terminal device or other processing device, where the terminal device may be a user equipment (UE), a mobile device, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the training method may be implemented by a processor invoking computer-readable instructions stored in a memory. As shown in Fig. 2, the flow includes:
Step S201: determine a first effective numerical range from the parameter values of the floating-point neural network.
Step S202: according to the first effective numerical range and the bit width (the bit width determined by the deployment environment of the fixed-point neural network), linearly quantize the parameter values of the trained floating-point neural network to fixed-point numbers, obtaining a first neural network in which every parameter is a fixed-point number.
Step S203: train the first neural network with a sample image set to obtain the fixed-point neural network; wherein, according to the first effective numerical range and the bit width (the bit width determined by the deployment environment of the fixed-point neural network), the floating-point parameter values of the first neural network appearing in each forward pass are linearly quantized to fixed-point numbers, and the floating-point values of the intermediate variables of the first neural network appearing in each forward pass are mapped to fixed-point numbers; in each backward pass, the parameter values are updated using the gradients received by the parameters of the first neural network.
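The forward-quantize / backward-update loop of step S203 can be illustrated with a minimal sketch for a single fully connected layer. This is an assumption-laden illustration (a min/max linear quantizer, a mean-squared-error loss, and a straight-through gradient through the quantization step); the function and variable names are illustrative and not taken from the patent.

    import numpy as np

    def fake_quantize(x, x_min, x_max, bit_width):
        # Linearly map floats in [x_min, x_max] onto a fixed-point grid and back.
        coeff = (x_max - x_min) / (2 ** bit_width - 1)   # quantization coefficient (a float)
        code = np.round((np.clip(x, x_min, x_max) - x_min) / coeff)
        return code * coeff + x_min                       # value used in the forward pass

    def train_step(w_float, x, y_true, w_range, act_range, bit_width=8, lr=1e-3):
        # Forward pass: both the parameters and the intermediate variable are quantized.
        w_q = fake_quantize(w_float, *w_range, bit_width)
        act = x @ w_q
        act_q = fake_quantize(act, *act_range, bit_width)
        loss = np.mean((act_q - y_true) ** 2)
        # Backward pass: the gradient received by the parameters updates the
        # floating-point master copy of the parameter values.
        grad_w = 2.0 * x.T @ (act_q - y_true) / x.shape[0]
        return w_float - lr * grad_w, loss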
In a possible implementation of the present disclosure, while training the first neural network, mapping the floating-point values of the intermediate variables of the first neural network appearing in each forward pass to fixed-point numbers includes: linearly quantizing those values to fixed-point numbers according to a second effective numerical range and the bit width. The second effective numerical range is determined from the values of the intermediate variables of the first neural network. Specifically, the first neural network contains multiple layers; for example, in an image processing scenario, the first neural network may be a convolutional neural network that contains multiple convolutional layers, and the intermediate variable output after each layer's convolution operation serves as the input of the next layer. A second effective numerical range can be determined from the values of each layer's intermediate variables; that is, every layer of the first neural network has its own second effective numerical range.
In a possible implementation of the present disclosure, the values of the intermediate variables of the first neural network are obtained as follows: each sample image in an image set of the deployment environment of the fixed-point neural network is input into the first neural network for forward computation, and the output values of every layer of the first neural network except the last layer are taken as the values of the intermediate variables of the first neural network. It should be noted that the output of the last layer can also be linearly quantized; for example, the output parameters of the last layer are obtained and, if they are floating-point numbers, a quantization coefficient represented as a floating-point number is determined from the effective numerical range of the current layer and the bit width, and the floating-point output parameters are converted to fixed-point numbers according to that quantization coefficient.
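A hypothetical sketch of this collection step, under the assumption that the network is available as a list of per-layer callables; none of the names below come from the patent.

    def collect_intermediates(layers, calibration_images):
        # Record every layer's output except the last layer's while calibration
        # images run through the first neural network.
        records = [[] for _ in layers[:-1]]
        for image in calibration_images:
            h = image
            for i, layer in enumerate(layers):
                h = layer(h)
                if i < len(layers) - 1:        # the last layer's output is excluded
                    records[i].append(h)
        return records                          # per-layer values for range calibration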
In a possible implementation of the present disclosure, after the first neural network is obtained and the second effective numerical range has been determined, and before the first neural network is trained, the method further includes: linearly quantizing the values of the intermediate variables of the first neural network to fixed-point numbers according to the second effective numerical range and the bit width.
In a possible implementation of the present disclosure, a corresponding quantization coefficient, represented as a floating-point number, can be computed from the quantization range (such as the first effective numerical range or the second effective numerical range) and the fixed-point bit width, and the input floating-point numbers are then mapped to fixed-point numbers according to this quantization coefficient.
First, for the parameters of the first neural network, a first quantization coefficient represented as a floating-point number is determined according to the first effective numerical range and the bit width, and the parameter values of the trained floating-point neural network are converted to fixed-point numbers according to the first quantization coefficient.
Likewise, the floating-point parameter values of the first neural network appearing in each forward pass are converted to fixed-point numbers according to the first quantization coefficient.
Second, for the inputs and outputs of each layer during computation in the first neural network, i.e. for the intermediate variables, a second quantization coefficient represented as a floating-point number is determined according to the second effective numerical range and the bit width, and the floating-point values of the intermediate variables of the first neural network appearing in each forward pass are converted to fixed-point numbers according to the second quantization coefficient.
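A rough sketch of how such quantization coefficients might be computed and applied; the exact formula (for example the use of 2**bit_width - 1 quantization levels and a minimum-value offset) is an assumption rather than something stated in the patent.

    def quantization_coeff(v_min, v_max, bit_width):
        # The first/second quantization coefficient: a floating-point scale derived
        # from an effective numerical range and the deployment bit width.
        return (v_max - v_min) / (2 ** bit_width - 1)

    def to_fixed_point(x, v_min, coeff):
        # Integer code obtained by dividing the offset value by the coefficient.
        return int(round((x - v_min) / coeff))

    # For example, with range [-4.0, 4.0] and an 8-bit deployment target:
    # coeff = quantization_coeff(-4.0, 4.0, 8)   -> about 0.0314
    # to_fixed_point(1.0, -4.0, coeff)           -> 159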
In a possible implementation of the present disclosure, when batch normalization is performed in a forward pass while training the first neural network, the mean used for the batch normalization is the same as the mean used for the corresponding batch normalization in the floating-point neural network, and the variance statistic used for the batch normalization is the same as the variance statistic used for the corresponding batch normalization in the floating-point neural network. For example, the mean and variance statistic of the first batch normalization when the first neural network processes data are the same as those of the first batch normalization when the floating-point neural network processes data; the mean and variance statistic of the second batch normalization of the first neural network are the same as those of the second batch normalization of the floating-point neural network; and so on, up to the N-th batch normalization, where N is a positive integer greater than 2.
With the present disclosure, to meet the precision required by the deployment-environment business, when a training sample from the floating-point data set is input into the fixed-point neural network for training, the floating-point parameters and intermediate variables are first converted directly into fixed-point format. Then, when the trained fixed-point neural network runs, the mean and variance statistics obtained during its own operation are not updated; instead, the mean and variance statistics obtained during the operation of the floating-point neural network are used directly in their place, so as to ensure the operating stability of the trained fixed-point neural network.
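A minimal sketch of a batch normalization layer with frozen statistics, assuming the usual affine batch-norm formula; the parameter names are illustrative.

    import numpy as np

    def frozen_batch_norm(x, mean_fp, var_fp, gamma, beta, eps=1e-5):
        # mean_fp / var_fp are the statistics recorded by the corresponding batch
        # normalization layer of the floating-point network; they are used directly
        # and never updated while the fixed-point network is trained or run.
        return gamma * (x - mean_fp) / np.sqrt(var_fp + eps) + beta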
In a possible implementation of the present disclosure, the first effective numerical range is determined according to a first statistical reference parameter, which is the ratio of the number of parameters of the floating-point neural network whose values fall within the first effective numerical range to the total number of parameters of the floating-point neural network.
In a possible implementation of the present disclosure, the second effective numerical range is determined according to a second statistical reference parameter, so as to guarantee, for example, that 99.9% of the intermediate variables fall within the effective numerical range. The second statistical reference parameter is the ratio of the number of intermediate variables of the first neural network whose values fall within the second effective numerical range to the total number of intermediate variables of the first neural network. Here the total number is the number of intermediate variables of one layer, and the second effective numerical range differs from layer to layer.
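One plausible way to calibrate such a range from recorded intermediate values is a percentile cut, sketched below; the symmetric treatment of the two tails is an assumption, not something the patent specifies.

    import numpy as np

    def calibrate_range(values, coverage=0.999):
        # "coverage" plays the role of the statistical reference parameter: the
        # returned range is chosen so that roughly this fraction of the values
        # (e.g. 99.9% of one layer's intermediate variables) falls inside it.
        lo = np.percentile(values, (1.0 - coverage) / 2.0 * 100.0)
        hi = np.percentile(values, (1.0 + coverage) / 2.0 * 100.0)
        return lo, hi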
It should be noted that, with regard to the above effective numerical ranges, in each layer of the neural network the effective numerical range of a layer's input (i.e., the second effective numerical range) differs from layer to layer, the effective numerical range of a layer's parameters (i.e., the first effective numerical range) differs from layer to layer, and the effective numerical range of a layer's input also differs from the effective numerical range of that same layer's parameters.
With the present disclosure, the effective numerical ranges of all model parameters, and of the intermediate variables produced during computation, can be calibrated by analyzing the statistical distributions of the model parameters and of the intermediate variables during computation. By calibrating the numerical ranges of the parameters and intermediate variables of the floating-point network model and mapping the floating-point numbers to fixed-point numbers according to those ranges, a low-precision network model composed of "fixed-point numbers approximating floating-point precision" is obtained, which realizes a stable quantization strategy for converting a floating-point network model into a low-precision network model.
While the first neural network is being trained, its parameters are linearly quantized according to the first effective numerical range and the bit width of the deployment environment, and each intermediate variable is linearly quantized according to the second effective numerical range and the bit width of the deployment environment, so that all parameters and intermediate variables of the network are in fixed-point format. Then, in the fine-tuning stage, the quantized neural network is adjusted (fine-tuned) until convergence, yielding the trained fixed-point neural network.
With the linear quantization strategy of the present disclosure, which is easy to implement on most terminal hardware, all model parameters and intermediate variables represented and stored as floating-point numbers can be mapped, according to the numerical ranges obtained from the above calibration, to the low-precision integers supported by the deployment environment, thereby converting the floating-point network model into a low-precision network model.
In a possible implementation of the present disclosure, for the first effective numerical range, zero-point alignment (i.e., mapping the floating-point 0 onto the integer 0) includes: after the first effective numerical range is determined and before the first neural network is trained, adjusting the first effective numerical range until the remainder of dividing the minimum value of the first effective numerical range by the first quantization coefficient (represented as a floating-point number) is 0.
In a possible implementation of the present disclosure, for the second effective numerical range, zero-point alignment (i.e., mapping the floating-point 0 onto the integer 0) includes: after the second effective numerical range is determined and before the first neural network is trained, adjusting the second effective numerical range until the remainder of dividing the minimum value of the second effective numerical range by the second quantization coefficient (represented as a floating-point number) is 0.
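A sketch of what such a zero-point alignment could look like, under the assumption that only the lower bound of the range is moved while the quantization coefficient is held fixed.

    def align_zero_point(v_min, v_max, coeff):
        # Nudge the lower bound so that v_min is an exact multiple of the
        # quantization coefficient; the remainder of v_min / coeff then becomes 0
        # and the floating-point value 0 lands exactly on an integer code.
        v_min_aligned = round(v_min / coeff) * coeff
        return v_min_aligned, v_max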
In a possible implementation of the present disclosure, the sample image set used to train the first neural network is the same as the image set used to train the floating-point neural network; that is, the image set used to train the floating-point neural network can be reused to train the first neural network.
Fig. 3 shows a flowchart of an image processing method according to an embodiment of the present disclosure. The image processing method is applied to an image processing apparatus; for example, the image processing apparatus may be executed by a terminal device or other processing device, where the terminal device may be a user equipment (UE), a mobile device, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the image processing method may be implemented by a processor invoking computer-readable instructions stored in a memory. As shown in Fig. 3, the flow includes:
Step S301: obtain an image set and input it into the fixed-point neural network obtained after training.
The fixed-point neural network is a neural network obtained by the training method described above in the present disclosure.
Step S302: perform image processing using the fixed-point neural network to obtain an image processing result.
Performing image processing with the neural network includes: while performing forward computation on the image to be processed with the neural network, linearly quantizing the input of each layer to a fixed-point number according to the effective numerical range of that layer's input values (such as the first effective numerical range and the second effective numerical range).
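A minimal sketch of this per-layer input quantization at inference time, assuming the network is a list of callables and that each layer's effective numerical range has already been calibrated; the names are illustrative.

    import numpy as np

    def quantize_input(x, lo, hi, bit_width):
        coeff = (hi - lo) / (2 ** bit_width - 1)
        return np.round((np.clip(x, lo, hi) - lo) / coeff).astype(np.int32)

    def run_inference(layers, input_ranges, image, bit_width=8):
        # Before each layer runs, its input is linearly quantized with that layer's
        # own calibrated effective numerical range.
        h = image
        for layer, (lo, hi) in zip(layers, input_ranges):
            h = layer(quantize_input(h, lo, hi, bit_width))
        return h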
With the present disclosure, floating-point numbers are mapped to fixed-point numbers through training, and the fixed-point neural network obtained after training can process an image set with substantially the same effect as the floating-point neural network. Fig. 4 shows neural network processing results according to the present disclosure. In Fig. 4, F, P, and C denote the computation procedures proposed in the training method above: F indicates that the batch normalization layers do not update their mean and variance statistics, i.e. μ and σ are not updated; P indicates that the numerical range of the intermediate variables is calibrated with upper and lower quantiles (percentiles), where the upper and lower γ-quantiles give the distribution range, i.e. the second effective numerical range, of the intermediate variables obtained according to the chosen second statistical reference parameter; C indicates that for the weight parameters Theta of the neural network, the minimum-maximum range is computed per channel. AP, AP0.5, and AP0.75 are metrics used to evaluate detection accuracy in the object detection task of image processing; higher values are better.
In Fig. 4, each row is as follows in the meaning of the chart:
#0: it indicates to use based on the attainable accuracy of floating number network model institute, the baseline compared as experiment;
#1: in the case where indicating the training method proposed without using the disclosure, directly adopt what first nerves network reached Accuracy;
#2~8: after respectively indicating the training method for having used the disclosure to propose, using the mind based on fixed-point number after training The accuracy obtained through network.
From the processing-effect chart in Fig. 4 it can be seen that the neural network based on fixed-point numbers obtained after applying the above training method of the disclosure can achieve accuracy close to that of the neural network based on floating-point numbers. For example, rows #2~8 show that each computation procedure (improved algorithm) proposed in the disclosure, whether F, P or C, has a positive influence on the accuracy of the neural network based on fixed-point numbers, and the obtained values gradually increase as the F, P and C operations are applied.
In a possible implementation of the disclosure, for multiplications of floating-point numbers in the image processing process, the product of unsigned integers followed by a corresponding arithmetic right-shift operation is used, and the obtained result replaces the result of the floating-point multiplication.
In a possible implementation of the disclosure, for additions of floating-point numbers in the image processing process, a scaling factor is determined according to the bit width, and the floating-point scaling of all addends is brought, according to that scaling factor, into agreement with the floating-point scaling of the output.
In a possible implementation of the disclosure, for interpolation operations in the image processing process, the layers of the neural network based on fixed-point numbers that perform interpolation use nearest-neighbor interpolation. Nearest-neighbor interpolation includes: when the size of some intermediate variable needs to be enlarged to N times the original, the output pixel coordinate y is divided by N and the input pixel value nearest to the resulting position is taken as the interpolated value, where N is a positive integer greater than 1. A sketch of this is given below.
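As an illustration only, the following is a minimal sketch of the nearest-neighbor upsampling described above; the function name and the use of NumPy are assumptions for the example and are not part of the disclosure.

```python
import numpy as np

def nearest_neighbor_upsample(x, n):
    """Enlarge a 2-D intermediate variable to n times its size.

    For each output pixel coordinate, divide by n and take the nearest
    input pixel value, as described above (sketch only).
    """
    h, w = x.shape
    out = np.empty((h * n, w * n), dtype=x.dtype)
    for yo in range(h * n):
        for xo in range(w * n):
            # nearest input position after dividing the output coordinate by n
            out[yo, xo] = x[min(round(yo / n), h - 1), min(round(xo / n), w - 1)]
    return out
```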
A video monitoring method of the disclosure includes: acquiring video images in a monitored area; processing the acquired video images with the image processing method described above to obtain image processing results; and outputting prompt information according to the obtained image processing results. The video monitoring method can be applied to the security field, face-recognition gate systems, and the like.
An intelligent driving method of the disclosure includes: acquiring video images of the vehicle surroundings; processing the acquired video images with the image processing method described above to obtain image processing results; and outputting indication information according to the obtained image processing results, so as to control the travel of the vehicle.
The intelligent driving method can be applied to an advanced driver-assistance system or to automatic driving. When used in an advanced driver-assistance system, the output indication information is used to prompt the driver, for example whether the driver needs to turn at the next intersection or change lanes at the next moment; when used in automatic driving, the output indication information can be used to control the central control system of the vehicle, for example whether the central control system needs to turn at the next intersection or change lanes at the next moment.
In addition, the executing subject of the above intelligent driving method may also be a robot or another smart device, such as a guide device for the blind.
Application example:
1. For an input large-capacity, high-accuracy neural network M based on floating-point numbers, the effective numerical ranges are calibrated with the following steps and the first neural network is trained, yielding after training a neural network based on fixed-point numbers that meets the business precision of the deployment environment (or whose precision is close to that of the neural network based on floating-point numbers):
a) Uniformly (equal-interval) quantize all parameters in the neural network M based on floating-point numbers.
Copy all saved parameters of the floating-point neural network M to the first neural network. Mark the first effective numerical range of each parameter θl (for example, its minimum and maximum values, counted per channel); this first effective numerical range can be denoted [lb1, ub1]. According to the first effective numerical range and the numerical precision (or bit width) k of the deployment environment, θl is uniformly quantized. The uniform quantization formula is shown in formula (1), the formula for calculating the first quantization coefficient is shown in formula (2), the formula for calculating the integer is shown in formula (3), and the formula for calculating the integer zero point is shown in formula (4). Only after the parameters have been linearly quantized to fixed-point numbers can the intermediate variables based on the fixed-point parameters be obtained in step c).
Here, x denotes the parameters θl of the floating-point neural network M, where θl is the parameter matrix or bias matrix of each convolutional layer or fully connected layer; the mapped values are the parameters of the first neural network, i.e., the parameters obtained by mapping and converting the floating-point neural network M into the first neural network; δ is the floating-point scaling (step) coefficient from integers to real data (floating-point numbers), i.e., the first quantization coefficient; I is the integer, a low-precision integer representation with the same size as x and bit width k; Z is the zero point of the integer, represented with bit width k.
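Formulas (1)-(4) themselves are not reproduced in this excerpt. As a hedged illustration only, the following sketch assumes a standard asymmetric uniform quantizer consistent with the description above (scaling coefficient δ, k-bit integer I, zero point Z); the exact formulas in the disclosure may differ.

```python
import numpy as np

def uniform_quantize(x, lb, ub, k):
    """Equal-interval quantization of a floating-point tensor x whose
    effective numerical range is [lb, ub], with bit width k.

    Assumed reconstruction of formulas (1)-(4): delta is the quantization
    coefficient, i the k-bit integer, z the integer zero point, and x_hat
    the fixed-point value mapped back to the real axis.
    """
    levels = 2 ** k - 1
    delta = (ub - lb) / levels                       # assumed formula (2)
    z = int(round(-lb / delta))                      # assumed formula (4)
    i = np.clip(np.round(x / delta) + z, 0, levels).astype(np.int64)  # assumed formula (3)
    x_hat = delta * (i - z)                          # assumed formula (1)
    return delta, i, z, x_hat
```

For example, under these assumptions a weight tensor with [lb1, ub1] = [-0.8, 1.2] and k = 8 would be represented by 8-bit integers with δ ≈ 0.00784 and Z = 102.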
b) Prepare a sample image set D similar to the deployment environment and apply the same preprocessing as M. For example, if M subtracts the mean/variance from an image A before processing it, then the same preprocessing must be applied to the sample images in D, i.e., the mean/variance must also be subtracted from the sample images in D.
c) Feed all the sample images in D in turn into the first neural network for forward computation, and set the parameter γ. The specific value of γ is not restricted; γ = 0.999 is generally used. During the computation, for each intermediate variable al, count and save its upper and lower γ quantiles as the second effective numerical range, which can be denoted [lb2, ub2]; this effective numerical range is used subsequently to fine-tune the first neural network.
Here D comes from a data set of an environment similar to the deployment environment, and the data in D are floating-point numbers. In this example, al is each intermediate variable obtained during the computation over the parameters; γ is the set statistical-reference parameter, taken as 0.999; the upper and lower γ quantiles give the distribution range of the intermediate variables obtained according to the set statistical-reference parameter, i.e., the second effective numerical range, which is distinct from the first effective numerical range formed by the parameters described above.
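A minimal sketch of this percentile calibration is shown below; it assumes that "upper and lower γ quantiles" means the γ-th and (1−γ)-th quantiles of the values collected for one intermediate variable, which is one reasonable reading of the description. The function name is illustrative only.

```python
import numpy as np

def calibrate_second_range(collected_values, gamma=0.999):
    """Estimate the second effective numerical range [lb2, ub2] of one
    intermediate variable a_l from the values collected while running the
    sample image set D forward through the first neural network (sketch)."""
    values = np.concatenate([v.ravel() for v in collected_values])
    lb2 = np.quantile(values, 1.0 - gamma)   # lower gamma quantile
    ub2 = np.quantile(values, gamma)         # upper gamma quantile
    return lb2, ub2
```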
2. After the first neural network is obtained, further training is performed on the data set used for M in order to restore its accuracy:
a) In the forward pass of training, the inputs of each layer of the network model are uniformly quantized according to their effective numerical ranges (e.g., the first effective numerical range, the second effective numerical range):
i. Formulas (1)-(4) above are still used; the difference is that, for all floating-point parameters and intermediate variables x used here, x refers to the parameters θl and the intermediate variables al of the neural network based on floating-point numbers, each uniformly quantized according to its corresponding effective numerical range (the first effective numerical range for parameters, the second effective numerical range for intermediate variables); the corresponding quantized values then serve as the parameters and intermediate variables of the first neural network. Here the first effective numerical range corresponding to θl is, for example, its minimum-maximum range. The specific value of γ is not restricted; γ = 0.999 is generally used. During computation, for each intermediate variable al, its upper and lower γ quantiles are counted and saved as the second effective numerical range.
For all parameters and intermediate variables of the first neural network, uniform quantization is done according to the above effective numerical ranges (the first effective numerical range for parameters, the second effective numerical range for intermediate variables) and the numerical precision (or bit width) k of the deployment environment; the floating-point scaling (the first quantization coefficient or the second quantization coefficient), the low-precision integer of the same size as x with bit width k, and the zero point represented with bit width k are obtained as in formulas (1)-(4).
ii. If the first neural network contains a layer that performs batch normalization, the mean and variance statistics μ, σ of the intermediate variables output by that layer are not updated. That is, when a batch normalization operation is performed before the output of a certain layer of the first neural network, the mean and the variance statistic used in that batch normalization operation are kept respectively equal to the mean and the variance statistic of the corresponding batch normalization operation in the neural network based on floating-point numbers. The layer may be a convolutional layer or another layer in the network.
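As an illustration of keeping μ and σ frozen, the following sketch normalizes with statistics taken from the floating-point network M instead of recomputing batch statistics; the function and parameter names (and treating σ as the variance statistic) are assumptions made for the example.

```python
import numpy as np

def frozen_batchnorm(x, mu, sigma, weight, bias, eps=1e-5):
    """Batch normalization whose mean/variance statistics (mu, sigma) stay
    fixed at the values copied from the floating-point network M; they are
    not updated while training the first neural network (sketch only)."""
    return weight * (x - mu) / np.sqrt(sigma + eps) + bias
```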
b) In the backward pass of training:
In the backward pass, when a parameter of the first neural network (a fixed-point number) is updated with the gradient value it receives (a floating-point number), the parameter becomes a floating-point number again. After a parameter has become a floating-point number, it is uniformly quantized once more.
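Under the assumptions of the uniform_quantize sketch given earlier, one parameter update could look as follows; the learning-rate handling and function names are illustrative and not taken from the disclosure.

```python
def update_parameter(theta_fixed, grad, lr, lb1, ub1, k):
    """One backward-pass update of a parameter of the first neural network.

    The fixed-point parameter is updated with the floating-point gradient,
    which turns it back into a floating-point number; it is then uniformly
    re-quantized with the uniform_quantize sketch above (formulas (1)-(4))."""
    theta_float = theta_fixed - lr * grad          # the update makes the parameter a float again
    _, _, _, theta_fixed = uniform_quantize(theta_float, lb1, ub1, k)
    return theta_fixed
```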
c) Repeat processes a) and b) until the accuracy reaches the requirement, i.e., until it meets the precision demanded by the deployment-environment business.
With the disclosure, by improving the stability of the evolution of the model parameters of the first neural network during training on detection tasks, the accuracy of the neural network based on fixed-point numbers obtained after training can be significantly improved on tasks such as detection and classification.
3. When deploying the obtained neural network based on fixed-point numbers:
a) Use higher numerical precision, for example unsigned integers with bit width 2k, to replace the floating-point scaling δ of each layer, and use unsigned-integer multiplication together with corresponding arithmetic right-shift operations instead of the floating-point multiplications involving δ. That is, for a floating-point multiplication, the product of unsigned integers followed by the corresponding arithmetic right shift is used, and the obtained result replaces the result of the floating-point multiplication.
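A minimal sketch of this replacement is given below: the floating-point scaling δ is approximated by an unsigned integer multiplier of roughly 2k bits together with a right shift. The choice of shift amount and the helper name are assumptions made for the example.

```python
def scale_by_delta_fixed_point(acc, delta, k):
    """Replace a floating-point multiplication by delta with an
    unsigned-integer multiplication and an arithmetic right shift (sketch).

    acc   : integer accumulator value to be rescaled
    delta : floating-point scaling of the layer
    k     : bit width of the deployment environment
    """
    shift = 2 * k
    multiplier = int(round(delta * (1 << shift)))   # unsigned ~2k-bit approximation of delta
    return (acc * multiplier) >> shift              # integer product + arithmetic right shift
```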
b) For addition operations in the neural network based on fixed-point numbers, the addends and the scaling factor between them are represented with bit width 2k, and the same strategy as in a) is used to rescale all addends to the output scaling δout. Here δin is the scaling factor of the input of the k-th layer of the neural network based on fixed-point numbers, i.e., the first or second quantization coefficient of the k-th layer, and δout is the scaling factor of the input of the (k+1)-th layer, i.e., the first or second quantization coefficient of the (k+1)-th layer, where k is an integer smaller than the total number of layers of the neural network based on fixed-point numbers. "Addend" refers to either of the two addends in a floating-point addition operation, and "sum" refers to the output obtained by adding the two addends. In other words, for additions of floating-point numbers, a scaling factor is determined according to the bit width, and the scaling factors of all "addends" are rescaled to be consistent with the scaling factor of the "sum". A sketch follows.
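The following sketch illustrates rescaling two fixed-point addends so that both carry the output scaling δout before the integer addition; the rounding scheme reuses the right-shift idea above, zero points are omitted, and all names are assumptions for the example.

```python
def fixed_point_add(i_a, delta_a, i_b, delta_b, delta_out, k):
    """Add two fixed-point values whose real values are i_a*delta_a and
    i_b*delta_b, producing an integer result on the output scale delta_out.

    Each addend is rescaled by delta_in/delta_out with an unsigned-integer
    multiplier and an arithmetic right shift (sketch; zero points omitted)."""
    shift = 2 * k
    m_a = int(round(delta_a / delta_out * (1 << shift)))
    m_b = int(round(delta_b / delta_out * (1 << shift)))
    return ((i_a * m_a) >> shift) + ((i_b * m_b) >> shift)
```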
c) Interpolation operations in the neural network based on fixed-point numbers are realized with nearest-neighbor interpolation: for the layers that need to perform interpolation, interpolation is carried out in the nearest-neighbor manner. Nearest-neighbor interpolation includes: when the size of some intermediate variable needs to be enlarged to N times the original, the output pixel coordinate y is divided by N and the input pixel value nearest to the resulting position is taken as the interpolated value, where N is a positive integer greater than 1.
With the disclosure, the accuracy of the deployed neural network based on fixed-point numbers can be maintained even when the numerical precision of the deployment is limited, and running the deployed fixed-point neural network does not depend on specialized hardware. Therefore, large-capacity floating-point models with heavy computation demands and high accuracy can, as the neural networks based on fixed-point numbers of the disclosure, be widely deployed on terminal devices with limited computing power, and the computation required in the deployment environment can be completed efficiently at small cost while the accuracy remains acceptable.
For example, suppose a public security department plans to deploy a pedestrian detection and identification system on a batch of newly purchased intelligent security cameras, using the cameras' own computing power to detect and identify, with low latency and without cloud computing resources, suspicious persons in public places. After spending heavily on purchasing and installing the equipment, the department's IT experts find that the computing chips in these smart cameras are too weak to run the pedestrian detection and identification system in real time, while purchasing and deploying new equipment again would bring an unaffordable expense. By deploying the neural network based on fixed-point numbers of the disclosure, the department's IT experts can easily and substantially reduce the computation demand of the model used by the pedestrian detection and identification system, and can implement, with the universal instruction set of the camera's computing chip, all the basic operations needed to run the low-precision deep learning model of the disclosure, while the final accuracy of the system will not decrease noticeably.
Besides security, the disclosure is also suitable for application scenarios such as mobile phones and automatic driving, and one or more of the following characteristics can be realized: keeping detection/recognition accuracy while the bit width of stored and computed values is ≤ 4; keeping detection/recognition accuracy while operational efficiency/throughput is improved by roughly 27 times.
Those skilled in the art will understand that, in the above methods of the specific embodiments, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
The method embodiments mentioned above in the disclosure can be combined with one another, without violating principles and logic, to form combined embodiments; owing to limited space, the disclosure does not repeat them.
In addition, the disclosure further provides a training apparatus for a neural network, an image processing apparatus, a video monitoring system, an intelligent driving system, an electronic device, a computer-readable storage medium, and a program. As to how the neural network is trained and how image processing is carried out with the neural network based on fixed-point numbers obtained after training, any of these can be used to realize any training method of a neural network and image processing method provided by the disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding records in the method part, which are not repeated here.
Fig. 5 shows a block diagram of the target object detection apparatus according to an embodiment of the present disclosure. As shown in Fig. 5, the training apparatus for a neural network of the embodiment of the disclosure includes: a first processing unit 31, configured to map the parameters of the trained neural network based on floating-point numbers to fixed-point numbers according to the bit width determined by the deployment environment of the neural network based on fixed-point numbers, obtaining a first neural network each parameter of which is a fixed-point number; and a second processing unit 32, configured to train the first neural network with the sample image set to obtain the neural network based on fixed-point numbers. When the first neural network is trained, the values of the parameters of the first neural network occurring in the forward pass of each training iteration, which are floating-point numbers, are each mapped to fixed-point numbers, and the values of the intermediate variables of the first neural network occurring in the forward pass of each training iteration, which are floating-point numbers, are each mapped to fixed-point numbers; in the backward pass of each training iteration, the value of each parameter is updated with the gradient value received by that parameter of the first neural network.
In a possible implementation of the disclosure, the first processing unit is further configured to: determine the first effective numerical range according to the parameter values of the parameters of the neural network based on floating-point numbers; and, according to the first effective numerical range and the bit width, uniformly quantize the parameter values of the parameters of the trained neural network based on floating-point numbers into fixed-point numbers. When training the first neural network and mapping the floating-point parameter values of the first neural network occurring in the forward pass of each training iteration to fixed-point numbers, the second processing unit is configured to uniformly quantize those floating-point parameter values into fixed-point numbers according to the first effective numerical range and the bit width.
In a possible implementation of the disclosure, when training the first neural network and mapping the floating-point values of the intermediate variables of the first neural network occurring in the forward pass of each training iteration to fixed-point numbers, the second processing unit is further configured to uniformly quantize those values into fixed-point numbers according to the second effective numerical range and the bit width; the second effective numerical range is determined according to the variable values of the intermediate variables of the first neural network.
In a possible implementation of the disclosure, the apparatus further includes an acquiring unit that obtains the variable values of the intermediate variables of the first neural network with the following steps: inputting each sample image from the image set of the deployment environment of the neural network based on fixed-point numbers into the first neural network for forward computation; and taking the output values of each layer of the first neural network other than the last layer as the variable values of the intermediate variables of the first neural network.
In a possible implementation of the disclosure, the apparatus further includes a third processing unit configured to, after the second effective numerical range is determined and before the first neural network is trained, uniformly quantize the variable values of the intermediate variables of the first neural network into fixed-point numbers according to the second effective numerical range and the bit width.
In a possible implementation of the disclosure, the first processing unit is further configured to: determine, according to the first effective numerical range and the bit width, a first quantization coefficient represented as a floating-point number; and convert the parameter values of the parameters of the trained neural network based on floating-point numbers into fixed-point numbers according to the first quantization coefficient. When uniformly quantizing, according to the first effective numerical range and the bit width, the floating-point parameter values of the first neural network occurring in the forward pass of each training iteration into fixed-point numbers, the second processing unit is configured to convert those values into fixed-point numbers according to the first quantization coefficient.
In a possible implementation of the disclosure, when uniformly quantizing, according to the second effective numerical range and the bit width, the floating-point values of the intermediate variables of the first neural network occurring in the forward pass of each training iteration into fixed-point numbers, the second processing unit is further configured to: determine, according to the second effective numerical range and the bit width, a second quantization coefficient represented as a floating-point number; and convert those intermediate-variable values into fixed-point numbers according to the second quantization coefficient.
In a possible implementation of the disclosure, the second processing unit is further configured to: when training the first neural network, in the forward pass of each training iteration, when performing batch normalization, keep the mean used in the batch normalization identical to the mean of the corresponding batch normalization in the neural network based on floating-point numbers, and keep the variance statistic used in the batch normalization identical to the variance statistic of the corresponding batch normalization in the neural network based on floating-point numbers.
In a possible implementation of the disclosure, the first processing unit is further configured to determine the first effective numerical range according to a first statistical-reference parameter, where the first statistical-reference parameter is the ratio of the number of parameters of the neural network based on floating-point numbers whose values fall within the first effective numerical range to the total number of parameters of the neural network based on floating-point numbers.
In a possible implementation of the disclosure, the second processing unit is further configured to determine the second effective numerical range according to a second statistical-reference parameter, where the second statistical-reference parameter is the ratio of the number of intermediate variables of the first neural network whose values fall within the second effective numerical range to the total number of intermediate variables of the first neural network.
In a possible implementation of the disclosure, the apparatus further includes a first adjustment unit configured to: after the first effective numerical range is determined and before the first neural network is trained, adjust the first effective numerical range until the remainder of dividing the minimum value in the first effective numerical range by the first quantization coefficient represented as a floating-point number is 0.
In a possible implementation of the disclosure, the apparatus further includes a second adjustment unit configured to: after the second effective numerical range is determined and before the first neural network is trained, adjust the second effective numerical range until the remainder of dividing the minimum value in the second effective numerical range by the second quantization coefficient represented as a floating-point number is 0.
In a possible implementation of the disclosure, the sample image set for training the first neural network is identical to the sample image set for training the neural network based on floating-point numbers.
An image processing apparatus of the disclosure can perform image processing using the neural network trained with any of the methods described above.
In a possible implementation of the disclosure, the apparatus is further configured to: during forward computation on the image to be processed with the neural network, linearly quantize the input of each layer to fixed-point numbers according to the effective numerical range of that layer's input values.
Fig. 6 shows a block diagram of the video monitoring system according to the disclosure. In Fig. 6, a video monitoring system 500 of the disclosure includes: a first image sensor 51, configured to acquire video images in a monitored area; a first processor 54, configured to process the acquired video images using the image processing method described above to obtain image processing results; and a first output unit 55, configured to output prompt information according to the obtained image processing results. When the first output unit is a speech output, it outputs speech prompt information according to the obtained image processing result; when the first output unit is an image output, it outputs picture prompt information according to the obtained image processing result. The video monitoring system 500 further includes a memory 53 for storing instructions and a bus 52 connecting the above devices (the first image sensor 51, the memory 53, the first processor 54 and the first output unit 55), and the first processor 54 performs the above processing of the video images by reading the instructions stored in the memory 53.
Fig. 7 shows a block diagram of the intelligent driving system according to the disclosure. In Fig. 7, an intelligent driving system 600 of the disclosure includes: a second image sensor 61, configured to acquire video images of the vehicle surroundings; a second processor 64, configured to process the acquired video images using the image processing method described above to obtain image processing results; and a second output unit 65, configured to output indication information according to the obtained image processing results, so as to control the travel of the vehicle. The intelligent driving system covers advanced driver assistance and automatic driving, i.e., it can be applied to an advanced driver-assistance system or to automatic driving. When used in an advanced driver-assistance system, the output indication information may be output to the driver of the vehicle to prompt the driver, for example whether a turn is needed at the next intersection or a lane change at the next moment; when used in automatic driving, the output indication information may be output to the central control system of the vehicle to control it, for example whether the central control system needs to turn at the next intersection or change lanes at the next moment. The intelligent driving system 600 further includes a memory 63 for storing instructions and a bus 62 connecting the above devices (the second image sensor 61, the memory 63, the second processor 64 and the second output unit 65), and the second processor 64 performs the above processing of the video images by reading the instructions stored in the memory 63.
In addition, the executing subject of the above intelligent driving system may also be a robot or another smart device, such as a guide device for the blind.
Fig. 8 shows a block diagram of an electronic device according to the disclosure. In Fig. 8, an electronic device 700 of the disclosure includes: a third image sensor 71, configured to acquire video images around the electronic device; a third processor 74, configured to process the acquired video images using the image processing method described above to obtain image processing results; and a controller 75, configured to control the travel of the electronic device (such as a robot or a guide device) according to the results generated by the third processor. The electronic device 700 further includes a memory 73 for storing instructions and a bus 72 connecting the above devices (the third image sensor 71, the memory 73, the third processor 74 and the controller 75), and the third processor 74 performs the above processing of the video images by reading the instructions stored in the memory 73.
In some embodiments, the functions or modules of the apparatus provided by the embodiments of the disclosure can be used to perform the methods described in the method embodiments above; for the specific implementation, refer to the description of the method embodiments above, which is not repeated here for brevity.
An embodiment of the disclosure also proposes a computer-readable storage medium on which computer program instructions are stored, the computer program instructions implementing the above methods when executed by a processor. The computer-readable storage medium may be a non-volatile computer-readable storage medium.
An embodiment of the disclosure also proposes an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above methods. The electronic device may be provided as a terminal or a device of another form.
Fig. 9 is a block diagram of an electronic device 800 according to an exemplary embodiment. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, or a personal digital assistant.
Referring to Fig. 9, electronic equipment 800 may include following one or more components: processing component 802, memory 804, Power supply module 806, multimedia component 808, audio component 810, the interface 812 of input/output (I/O), sensor module 814, And communication component 816.
The processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communication, camera operation and recording. The processing component 802 may include one or more processors 820 to execute instructions so as to perform all or part of the steps of the methods described above. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and the other components; for example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation on the electronic device 800. Examples of such data include instructions of any application or method operated on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
The power supply component 806 provides electric power for the various components of the electronic device 800. The power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing electric power for the electronic device 800.
The multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC); when the electronic device 800 is in an operation mode, such as a call mode, a recording mode or a voice recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 further includes a loudspeaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button and a lock button.
The sensor component 814 includes one or more sensors for providing state assessments of various aspects for the electronic device 800. For example, the sensor component 814 can detect the open/closed state of the electronic device 800 and the relative positioning of components, for example the display and keypad of the electronic device 800; the sensor component 814 can also detect a position change of the electronic device 800 or of one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and temperature changes of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near-field communication (NFC) module to promote short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, for performing the above methods.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, for example the memory 804 including computer program instructions, and the above computer program instructions can be executed by the processor 820 of the electronic device 800 to complete the above methods.
The disclosure may be a system, a method and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to realize various aspects of the disclosure.
The computer-readable storage medium may be a tangible device that can hold and store instructions used by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. The computer-readable storage medium used herein is not to be interpreted as a transient signal per se, such as a radio wave or another freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or another transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein can be downloaded from the computer-readable storage medium to each computing/processing device, or downloaded to an external computer or external storage device via a network such as the Internet, a local area network, a wide area network and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions to be stored in the computer-readable storage medium in each computing/processing device.
The computer program instructions for performing the operations of the disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet by using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA) or a programmable logic array (PLA), is personalized by utilizing state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to realize various aspects of the disclosure.
Various aspects of the disclosure are described herein with reference to flowcharts and/or block diagrams of methods, apparatuses (systems) and computer program products according to the embodiments of the disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer or another programmable data processing apparatus, thereby producing a machine, so that when these instructions are executed by the processor of the computer or of the other programmable data processing apparatus, an apparatus is produced that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus and/or other devices to work in a specific manner, so that the computer-readable medium storing the instructions includes an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus or another device, so that a series of operation steps are performed on the computer, the other programmable data processing apparatus or the other device to produce a computer-implemented process, such that the instructions executed on the computer, the other programmable data processing apparatus or the other device implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the drawings show the possible architectures, functions and operations of the systems, methods and computer program products according to multiple embodiments of the disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment or a part of an instruction, and the module, program segment or part of an instruction contains one or more executable instructions for realizing the specified logical function. In some alternative implementations, the functions marked in the blocks may also occur in an order different from that marked in the drawings. For example, two consecutive blocks may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with a dedicated hardware-based system that performs the specified functions or actions, or can be realized with a combination of dedicated hardware and computer instructions.
The embodiments of the disclosure have been described above; the above description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and changes are obvious to those of ordinary skill in the art without departing from the scope and spirit of the illustrated embodiments. The terms used herein are chosen to best explain the principles of the embodiments, the practical application, or the technological improvement over technologies in the market, or to enable other persons of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A training method for a neural network, characterized in that the method comprises:
mapping, according to a bit width determined by a deployment environment of a neural network based on fixed-point numbers, the parameters of a trained neural network based on floating-point numbers to fixed-point numbers, to obtain a first neural network each parameter of which is a fixed-point number;
training the first neural network with a sample image set to obtain the neural network based on fixed-point numbers;
wherein, when training the first neural network, the values of the parameters of the first neural network occurring in the forward pass of each training iteration, which are floating-point numbers, are each mapped to fixed-point numbers, and the values of the intermediate variables of the first neural network occurring in the forward pass of each training iteration, which are floating-point numbers, are each mapped to fixed-point numbers; in the backward pass of each training iteration, the value of each parameter is updated with the gradient value received by that parameter of the first neural network.
2. The method according to claim 1, characterized in that mapping, according to the bit width, the parameters of the trained neural network based on floating-point numbers to fixed-point numbers comprises:
determining a first effective numerical range according to the parameter values of the parameters of the neural network based on floating-point numbers;
uniformly quantizing, according to the first effective numerical range and the bit width, the parameter values of the parameters of the trained neural network based on floating-point numbers into fixed-point numbers;
and that, when training the first neural network, mapping the floating-point parameter values of the first neural network occurring in the forward pass of each training iteration to fixed-point numbers comprises:
uniformly quantizing, according to the first effective numerical range and the bit width, the floating-point parameter values of the first neural network occurring in the forward pass of each training iteration into fixed-point numbers.
3. The method according to claim 1, characterized in that, when training the first neural network, mapping the floating-point values of the intermediate variables of the first neural network occurring in the forward pass of each training iteration to fixed-point numbers comprises:
uniformly quantizing, according to a second effective numerical range and the bit width, the floating-point values of the intermediate variables of the first neural network occurring in the forward pass of each training iteration into fixed-point numbers;
wherein the second effective numerical range is determined according to the variable values of the intermediate variables of the first neural network.
4. An image processing method, characterized in that image processing is performed using the neural network trained with the method according to any one of claims 1-3.
5. A video monitoring method, characterized by comprising:
acquiring video images in a monitored area;
processing the acquired video images using the method according to claim 4 to obtain image processing results;
outputting prompt information according to the obtained image processing results.
6. An intelligent driving method, characterized by comprising:
acquiring video images of the vehicle surroundings;
processing the acquired video images using the method according to claim 4 to obtain image processing results;
outputting indication information according to the obtained image processing results, so as to control the travel of a vehicle.
7. A training apparatus for a neural network, characterized in that the apparatus comprises:
a first processing unit, configured to map, according to a bit width determined by a deployment environment of a neural network based on fixed-point numbers, the parameters of a trained neural network based on floating-point numbers to fixed-point numbers, to obtain a first neural network each parameter of which is a fixed-point number;
a second processing unit, configured to train the first neural network with a sample image set to obtain the neural network based on fixed-point numbers;
wherein, when training the first neural network, the values of the parameters of the first neural network occurring in the forward pass of each training iteration, which are floating-point numbers, are each mapped to fixed-point numbers, and the values of the intermediate variables of the first neural network occurring in the forward pass of each training iteration, which are floating-point numbers, are each mapped to fixed-point numbers; in the backward pass of each training iteration, the value of each parameter is updated with the gradient value received by that parameter of the first neural network.
8. An image processing apparatus, characterized in that it performs image processing using the neural network trained with the method according to any one of claims 1-3.
9. A video monitoring system, characterized by comprising:
a first image sensor, configured to acquire video images in a monitored area;
a first processor, configured to process the acquired video images using the method according to claim 4 to obtain image processing results;
a first output unit, configured to output prompt information according to the obtained image processing results.
10. An intelligent driving system, characterized by comprising:
a second image sensor, configured to acquire video images of the vehicle surroundings;
a second processor, configured to process the acquired video images using the method according to claim 4 to obtain image processing results;
a second output unit, configured to output indication information according to the obtained image processing results, so as to control the travel of a vehicle.
CN201910430085.XA 2019-05-22 2019-05-22 The training method and device of neural network, electronic equipment and storage medium Pending CN110210619A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910430085.XA CN110210619A (en) 2019-05-22 2019-05-22 The training method and device of neural network, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910430085.XA CN110210619A (en) 2019-05-22 2019-05-22 The training method and device of neural network, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110210619A true CN110210619A (en) 2019-09-06

Family

ID=67788151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910430085.XA Pending CN110210619A (en) 2019-05-22 2019-05-22 The training method and device of neural network, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110210619A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111191789A (en) * 2020-01-20 2020-05-22 上海依图网络科技有限公司 Model training method, system, chip, electronic device and medium
CN111310890A (en) * 2020-01-19 2020-06-19 深圳云天励飞技术有限公司 Deep learning model optimization method and device and terminal equipment
CN111401518A (en) * 2020-03-04 2020-07-10 杭州嘉楠耘智信息科技有限公司 Neural network quantization method and device and computer readable storage medium
CN111738427A (en) * 2020-08-14 2020-10-02 电子科技大学 Operation circuit of neural network
CN111985495A (en) * 2020-07-09 2020-11-24 珠海亿智电子科技有限公司 Model deployment method, device, system and storage medium
CN112508166A (en) * 2019-09-13 2021-03-16 富士通株式会社 Information processing apparatus and method, and recording medium storing information processing program
CN113298244A (en) * 2021-04-21 2021-08-24 上海安路信息科技股份有限公司 Neural network post-processing implementation method, device, terminal and medium in target detection
WO2021239006A1 (en) * 2020-05-27 2021-12-02 支付宝(杭州)信息技术有限公司 Secret sharing-based training method and apparatus, electronic device, and storage medium
CN111401518B (en) * 2020-03-04 2024-06-04 北京硅升科技有限公司 Neural network quantization method, device and computer readable storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107766939A (en) * 2017-11-07 2018-03-06 维沃移动通信有限公司 A kind of data processing method, device and mobile terminal

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107766939A (en) * 2017-11-07 2018-03-06 维沃移动通信有限公司 A kind of data processing method, device and mobile terminal

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508166A (en) * 2019-09-13 2021-03-16 富士通株式会社 Information processing apparatus and method, and recording medium storing information processing program
CN111310890B (en) * 2020-01-19 2023-10-17 深圳云天励飞技术有限公司 Optimization method and device of deep learning model and terminal equipment
CN111310890A (en) * 2020-01-19 2020-06-19 深圳云天励飞技术有限公司 Deep learning model optimization method and device and terminal equipment
CN111191789A (en) * 2020-01-20 2020-05-22 上海依图网络科技有限公司 Model training method, system, chip, electronic device and medium
CN111191789B (en) * 2020-01-20 2023-11-28 上海依图网络科技有限公司 Model optimization deployment system, chip, electronic equipment and medium
CN111401518A (en) * 2020-03-04 2020-07-10 杭州嘉楠耘智信息科技有限公司 Neural network quantization method and device and computer readable storage medium
CN111401518B (en) * 2020-03-04 2024-06-04 北京硅升科技有限公司 Neural network quantization method, device and computer readable storage medium
WO2021239006A1 (en) * 2020-05-27 2021-12-02 支付宝(杭州)信息技术有限公司 Secret sharing-based training method and apparatus, electronic device, and storage medium
CN111985495A (en) * 2020-07-09 2020-11-24 珠海亿智电子科技有限公司 Model deployment method, device, system and storage medium
CN111985495B (en) * 2020-07-09 2024-02-02 珠海亿智电子科技有限公司 Model deployment method, device, system and storage medium
CN111738427A (en) * 2020-08-14 2020-10-02 电子科技大学 Operation circuit of neural network
CN111738427B (en) * 2020-08-14 2020-12-29 电子科技大学 Operation circuit of neural network
CN113298244A (en) * 2021-04-21 2021-08-24 上海安路信息科技股份有限公司 Neural network post-processing implementation method, device, terminal and medium in target detection
CN113298244B (en) * 2021-04-21 2023-11-24 上海安路信息科技股份有限公司 Neural network post-processing implementation method, device, terminal and medium in target detection

Similar Documents

Publication Publication Date Title
CN110210619A (en) The training method and device of neural network, electronic equipment and storage medium
CN111114554B (en) Method, device, terminal and storage medium for predicting travel track
CN108256555A (en) Picture material recognition methods, device and terminal
CN109829501A (en) Image processing method and device, electronic equipment and storage medium
CN109801270A (en) Anchor point determines method and device, electronic equipment and storage medium
CN109766954A (en) A kind of target object processing method, device, electronic equipment and storage medium
CN108010060A (en) Object detection method and device
US20200051564A1 (en) Artificial intelligence device
CN107798669A (en) Image defogging method, device and computer-readable recording medium
CN109919300A (en) Neural network training method and device and image processing method and device
CN108764069A (en) Biopsy method and device
CN107832836A (en) Model-free depth enhancing study heuristic approach and device
CN106477038A (en) Image capturing method and device, unmanned plane
CN110298310A (en) Image processing method and device, electronic equipment and storage medium
CN109087238A (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109819229A (en) Image processing method and device, electronic equipment and storage medium
CN110060215A (en) Image processing method and device, electronic equipment and storage medium
CN110909815A (en) Neural network training method, neural network training device, neural network processing device, neural network training device, image processing device and electronic equipment
CN109543537A (en) Weight identification model increment training method and device, electronic equipment and storage medium
CN106778773A (en) The localization method and device of object in picture
CN106203306A (en) The Forecasting Methodology at age, device and terminal
CN110532956A (en) Image processing method and device, electronic equipment and storage medium
CN107992848A (en) Obtain the method, apparatus and computer-readable recording medium of depth image
CN110245757A (en) A kind of processing method and processing device of image pattern, electronic equipment and storage medium
CN110188865A (en) Information processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190906