CN107909147A - A kind of data processing method and device - Google Patents

A kind of data processing method and device

Info

Publication number
CN107909147A
CN107909147A (application number CN201711137531.5A)
Authority
CN
China
Prior art keywords
network model
atlas
compression
first feature
weights
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711137531.5A
Other languages
Chinese (zh)
Inventor
廖振生
曾儿孟
吴伟华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN HARZONE TECHNOLOGY Co Ltd
Original Assignee
SHENZHEN HARZONE TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHENZHEN HARZONE TECHNOLOGY Co Ltd
Priority to CN201711137531.5A priority Critical patent/CN107909147A/en
Publication of CN107909147A publication Critical patent/CN107909147A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

An embodiment of the present invention provides a data processing method and device. The method includes: obtaining a first neural network model and determining a first feature map set of the first neural network model; determining a redundancy parameter of the first feature map set according to sparse representation theory; performing first compression processing on the first feature map set according to the redundancy parameter to obtain a second feature map set, and generating a second neural network model from the second feature map set; and training data to be processed according to the second neural network model to obtain a training result. With the embodiments of the present invention, a neural network model can be compressed according to its redundancy parameter, which on the one hand achieves compression of the model and acceleration of its forward computation, and on the other hand also improves data processing speed.

Description

A kind of data processing method and device
Technical field
The present invention relates to the technical field of data processing, and in particular to a data processing method and device.
Background technology
Deep neural networks have shown enormous application power in fields such as computer vision, natural language processing, speech recognition and machine translation. Their strong expressive ability has led to wide use in mobile devices such as smartphones, embedded systems and autonomous driving. This rise in accuracy, however, is produced by stacking network models ever deeper and wider, which makes hard-disk storage, memory consumption and GPU video-memory consumption rise steeply. In addition, as the technology develops, deploying deep neural network models on mobile devices has attracted extensive attention, but the computing and storage capabilities of devices such as notebooks, mobile phones, embedded hardware and automotive embedded devices are very limited. Moreover, applications of deep neural networks such as video surveillance and scene analysis demand real-time performance. How to compress deep neural network models and accelerate their forward computation is therefore an urgent problem.
Summary of the invention
Embodiments of the present invention provide a data processing method and device, which can compress a deep neural network model, accelerate its forward computation, and improve data processing speed.
A first aspect of the embodiments of the present invention provides a data processing method, including:
obtaining a first neural network model, and determining a first feature map set of the first neural network model;
determining a redundancy parameter of the first feature map set according to sparse representation theory;
performing first compression processing on the first feature map set according to the redundancy parameter to obtain a second feature map set, and generating a second neural network model from the second feature map set;
training data to be processed according to the second neural network model to obtain a training result.
A second aspect of the embodiments of the present invention provides a data processing device, including:
an acquiring unit, configured to obtain a first neural network model and determine a first feature map set of the first neural network model;
a determination unit, configured to determine a redundancy parameter of the first feature map set according to sparse representation theory;
a first compression unit, configured to perform first compression processing on the first feature map set according to the redundancy parameter to obtain a second feature map set, and to generate a second neural network model from the second feature map set;
a training unit, configured to train data to be processed according to the second neural network model to obtain a training result.
In a third aspect, an embodiment of the present invention provides a mobile terminal, including a processor and a memory, and one or more programs stored in the memory and configured to be executed by the processor, the programs including instructions for some or all of the steps described in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium for storing a computer program, the computer program causing a computer to perform instructions for some or all of the steps described in the first aspect of the embodiments of the present invention.
In a fifth aspect, an embodiment of the present invention provides a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps described in the first aspect of the embodiments of the present invention. The computer program product may be a software installation package.
Implementing the embodiments of the present invention provides the following beneficial effects:
It can be seen that, according to the embodiments of the present invention, a first neural network model is obtained and its first feature map set is determined; a redundancy parameter of the first feature map set is determined according to sparse representation theory; first compression processing is performed on the first feature map set according to the redundancy parameter to obtain a second feature map set; a second neural network model is generated from the second feature map set; and data to be processed is trained according to the second neural network model to obtain a training result. A neural network model can thus be compressed according to its redundancy parameter, which on the one hand achieves compression of the model and acceleration of its forward computation, and on the other hand also improves data processing speed.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a first embodiment of a data processing method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a second embodiment of a data processing method provided by an embodiment of the present invention;
Fig. 3a is a schematic structural diagram of an embodiment of a data processing device provided by an embodiment of the present invention;
Fig. 3b is a schematic structural diagram of the first compression unit of the data processing device of Fig. 3a;
Fig. 3c is a schematic structural diagram of the determination unit of the data processing device of Fig. 3a;
Fig. 3d is another schematic structural diagram of the data processing device of Fig. 3a;
Fig. 3e is a schematic structural diagram of the second compression unit of the data processing device of Fig. 3d;
Fig. 4 is a schematic structural diagram of an embodiment of a data processing device provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The terms "first", "second", "third", "fourth" and so on in the description, claims and drawings of this specification are used to distinguish different objects rather than to describe a particular order. In addition, the terms "comprising" and "having" and any variants thereof are intended to cover non-exclusive inclusion: a process, method, system, product or device that contains a series of steps or units is not limited to the listed steps or units, but optionally also includes unlisted steps or units, or other steps or units inherent to the process, method, product or device.
Reference herein to an "embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment may be included in at least one embodiment of the present invention. The appearances of the phrase in various places in the specification do not necessarily all refer to the same embodiment, nor to independent or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The data processing device described in the embodiments of the present invention may include a smartphone (such as an Android phone, an iOS phone or a Windows Phone), a tablet computer, a video matrix, a monitoring platform, a vehicle-mounted device, a satellite, a palmtop computer, a notebook computer, a mobile internet device (MID) or a wearable device. The above are merely examples, not an exhaustive list, and the data processing device may also be a server. The data to be processed in the embodiments of the present invention may be at least one of the following: video data, image data, audio data, and so on. The attribute information of the data to be processed may include at least one of the following: memory size, data type, data format, data source, and so on. It should be noted that neural network models show powerful capability in many machine vision tasks such as classification, recognition and detection, and experiments show that as network depth and width increase, the expressive ability of the model improves greatly, but the amount of computation and the number of model parameters also surge. The neural network model in the embodiments of the present invention may be used to realize at least one of the following functions: face recognition, license plate recognition, vehicle recognition, and so on.
Taking video data as an example, the data processing device in the embodiments of the present invention may be connected with multiple cameras, each of which can be used to capture video images and may have a corresponding position mark or number. Under normal conditions, cameras may be set in public places, for example schools, museums, crossroads, pedestrian streets, office buildings, garages, airports, hospitals, subway stations, stations, bus platforms, supermarkets, hotels and entertainment venues. After a camera captures a video image, the image may be saved in the memory of the system to which the data processing device belongs. Each frame of video image captured by a camera corresponds to attribute information; when the data to be processed is video data, the attribute information may be at least one of the following: the shooting time of the video image, the position of the video image, the property parameters of the video image (format, size, resolution, etc.), the number of the video image, and character feature attributes in the video image. The character feature attributes may include but are not limited to: the number of persons in the video image, their positions, their angles, and so on. The angle information may include but is not limited to: horizontal rotation angle, pitch angle or tilt angle. For example, the horizontal rotation angle may be no more than ±30°, the pitch angle no more than ±20° and the tilt angle no more than ±45°; the recommended horizontal rotation angle is no more than ±15°, the pitch angle no more than ±10° and the tilt angle no more than ±15°. Face images may also be screened for occlusion by other objects; normally, accessories such as dark sunglasses, masks and exaggerated jewellery should not block the main region of the face, although the camera itself may also be covered with dust, causing the face image to be blocked. The picture format of the video image in the embodiments of the present invention may include but is not limited to BMP, JPEG, JPEG2000, PNG, etc., and its size may be between 10 and 40 KB.
Referring to Fig. 1, which is a schematic flowchart of the first embodiment of a data processing method provided by an embodiment of the present invention. The data processing method described in this embodiment includes the following steps:
101. Obtain a first neural network model, and determine a first feature map set of the first neural network model.
The first neural network model may be a deep neural network model. A first feature map set of the first neural network model may be determined; the first feature map set may include multiple first feature maps and may be a map set of feature points or a map set of feature contours.
Optionally, the first neural network model is a network architecture including modules such as convolutional layers, normalization layers, activation layers and pooling layers. Convolution kernel parameters may also be chosen, for example kernels of size 1×1 or 3×3 and the activation function PReLU.
Optionally, before step 101 is performed, data augmentation may be applied to the training set, which is then used as the network input. The neural network is trained with mini-batch stochastic gradient descent; when the training error is stable and reaches a specified threshold, training ends and the original network model, i.e. the first neural network model, is obtained.
102. Determine a redundancy parameter of the first feature map set according to sparse representation theory.
Since there is a certain amount of redundancy in the first feature map set, the first feature map set can be compressed by exploiting this redundancy. In the embodiments of the present invention, a redundancy parameter of the first feature map set is determined using sparse representation theory; the redundancy parameter is used to represent the redundancy of the first feature map set.
Optionally, in the embodiments of the present invention, redundant channel information can be removed by sparse coding of the channels, reducing the number of feature channels. While keeping the output feature map C unchanged, the channels with the most expressive features in feature map B are obtained by sparse representation, and the redundant features and their corresponding convolution kernels are removed, achieving the purpose of reducing channel features. Feature map C and feature map B here are any maps in the first feature map set.
Suppose a layer's original input data X has size N × c × k_h × k_w and the convolution kernel parameters W have dimension n × c × k_h × k_w; the output feature map Y then has dimension N × n. Here N is the number of samples, n is the number of convolution filters (and of output channels), c is the number of input channels, and k_h, k_w are the height and width of the convolution kernel. To reduce the number of input channels from c to c' (0 ≤ c' ≤ c), the input feature channels are sparsely represented using sparse coding theory, expressed as:
min_α (1/2N) ‖ Y − Σ_{i=1}^{c} α_i X_i W_i^T ‖_F²  s.t. ‖α‖_0 ≤ c'   (1)
‖·‖_F denotes the Frobenius norm, X_i denotes the N × k_h k_w matrix of the i-th channel (i = 1, …, c), and W_i denotes the n × k_h k_w matrix of the i-th channel. α denotes the channel sparse vector of length c. If α_i = 0, the channel has little influence on the output channel features and can be removed, together with the corresponding convolution kernel channel. This reduces, to a certain degree, the correlation of the channel features while also reducing the correlation of the retained convolution kernel features, without changing the extracted feature information. It reduces the number of input channels and convolution kernels, achieving channel sparsity.
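As a sanity check on the channel-removal idea above, the following NumPy sketch (with assumed toy shapes; X, W and α here are illustrative stand-ins for the patent's per-channel matrices) verifies that dropping channels whose α_i = 0 leaves the output feature map Y unchanged:

```python
import numpy as np

# Toy shapes mirroring the text: N samples, c input channels, n filters,
# kh x kw kernels. Channel i contributes alpha_i * X_i @ W_i.T to Y.
N, c, n, kh, kw = 8, 4, 6, 3, 3

rng = np.random.default_rng(0)
X = rng.standard_normal((c, N, kh * kw))   # X_i: N x (kh*kw) per channel
W = rng.standard_normal((c, n, kh * kw))   # W_i: n x (kh*kw) per channel

alpha = np.array([1.0, 0.0, 1.0, 0.0])     # channel sparse vector of length c

# Full output vs. output with the zero-coefficient channels removed.
Y_full = sum(alpha[i] * X[i] @ W[i].T for i in range(c))        # N x n
keep = [i for i in range(c) if alpha[i] != 0.0]
Y_pruned = sum(X[i] @ W[i].T for i in keep)

assert np.allclose(Y_full, Y_pruned)       # pruned channels change nothing
print(f"kept {len(keep)} of {c} channels")
```

Both the input channels and the matching kernel slices are discarded, which is what shrinks the layer's computation.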
To solve for the channel sparse coefficients α, formula (1) is relaxed to the optimization form:
min_{α,W} (1/2N) ‖ Y − Σ_{i=1}^{c} α_i X_i W_i^T ‖_F² + γ‖α‖_1  s.t. ∀i, ‖W_i‖_F = 1   (2)
The penalty parameter γ controls the number of zeros in α and so determines the compression ratio; the constraint ‖W_i‖_F = 1 prevents trivial solutions. In optimization theory, such a sparse representation is solved in two alternating steps: fix W and update α, then fix α and update W. But this kind of solution requires many iterations, constantly adjusting the two sets of parameters, which is time-consuming and may still not reach the optimal solution.
In the embodiments of the present invention, therefore, the channel sparse representation is solved in the following steps A and B:
A. First, the convolution kernel weights W of the original, uncompressed model are fixed, and the sparse solution coefficients α serve as channel-screening coefficients. α is solved by multiple iterations of the LASSO regression algorithm:
α̂ = argmin_α (1/2N) ‖ Y − Σ_{i=1}^{c} α_i X_i W_i^T ‖_F² + γ‖α‖_1  s.t. ‖α‖_0 ≤ c'   (3)
If some element α_i = 0, the channel features corresponding to i can be removed, and the corresponding convolution kernel can be removed as well. In the actual sparse solution, γ is first initialized to 0 and then gradually increased while α is iteratively updated, until the number of non-zeros in α is stable and satisfies ‖α‖_0 ≤ c'.
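A minimal NumPy sketch of step A under stated assumptions: the patent does not name a solver, so a plain coordinate-descent LASSO is used here, and the design matrix Z (whose i-th column stacks channel i's contribution X_i W_i^T, flattened) and all sizes are illustrative. γ starts at 0 and is raised until ‖α‖_0 ≤ c', as described above:

```python
import numpy as np

def lasso_cd(Z, y, gamma, iters=300):
    """Coordinate-descent LASSO: min_a 0.5*||y - Z a||^2 + gamma*||a||_1."""
    a = np.zeros(Z.shape[1])
    col_sq = (Z ** 2).sum(axis=0)
    for _ in range(iters):
        for i in range(Z.shape[1]):
            # correlation of column i with the residual excluding channel i
            rho = Z[:, i] @ (y - Z @ a + Z[:, i] * a[i])
            a[i] = np.sign(rho) * max(abs(rho) - gamma, 0.0) / col_sq[i]
    return a

rng = np.random.default_rng(1)
c, m = 6, 128                      # channels, flattened output size (toy)
Z = rng.standard_normal((m, c))
Z[:, 1] *= 0.01                    # channels 1 and 4 contribute very little
Z[:, 4] *= 0.01
y = Z @ np.ones(c)                 # target: flattened output feature map Y

c_prime = 4                        # keep at most c' channels
gamma = 0.0
alpha = lasso_cd(Z, y, gamma)
while np.count_nonzero(alpha) > c_prime:
    gamma = max(2 * gamma, 1e-3)   # gradually increase the penalty
    alpha = lasso_cd(Z, y, gamma)

print("selected channels:", np.flatnonzero(alpha).tolist())
```

The low-contribution channels are zeroed first as γ grows, which matches the screening behaviour the text describes.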
B. After the channel sparse coefficients α are obtained, to improve the efficiency of the solution the weight update needs only one iteration before the model can be compressed. The convolution weights are updated as:
W = {α_i W_i}   (4)
Optionally, determining the redundancy parameter of the first feature map set according to sparse representation theory in step 102 may include the following steps:
21. obtaining the attribute information of the data to be processed;
22. choosing, from multiple pre-stored sparse representation algorithms, a target sparse representation algorithm corresponding to the attribute information;
23. determining the redundancy parameter of the first feature map set according to the target sparse representation algorithm.
The attribute information may be at least one of the following: the memory size of the data to be processed, its data type, its duration, its format, and so on. Mapping relations between attribute information and sparse representation algorithms may be pre-stored in the data processing device; the sparse representation algorithm corresponding to the attribute information of the data to be processed can then be determined according to the mapping relations, and the redundancy parameter of the first feature map set determined according to that algorithm.
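The table lookup in steps 21-23 can be sketched as below. The attribute keys, algorithm names and the table contents are all hypothetical; the patent only states that such mapping relations are pre-stored:

```python
# Hypothetical pre-stored mapping from attribute information to a sparse
# representation algorithm; every entry is illustrative, not from the patent.
ALGORITHM_TABLE = {
    "video": "lasso",
    "image": "omp",    # orthogonal matching pursuit
    "audio": "ista",   # iterative shrinkage-thresholding
}

def pick_sparse_algorithm(attributes: dict) -> str:
    """Choose the target sparse representation algorithm from attribute info."""
    return ALGORITHM_TABLE.get(attributes.get("data_type"), "lasso")

print(pick_sparse_algorithm({"data_type": "image", "memory_kb": 120}))  # → omp
```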
103. Perform first compression processing on the first feature map set according to the redundancy parameter to obtain a second feature map set, and generate a second neural network model from the second feature map set.
Different redundancy parameters lead to different degrees of compression of the first feature map set. In the embodiments of the present invention, first compression processing is performed on the first feature map set according to the redundancy parameter to obtain a second feature map set, and a second neural network model is generated from the second feature map set.
Optionally, performing first compression processing on the first feature map set according to the redundancy parameter in step 103 may include the following steps:
31. determining a first threshold and a second threshold according to the redundancy parameter;
32. performing a screening operation on the first feature map set according to the first threshold, performing a deletion operation on the screened first feature map set according to the second threshold, and performing a reorganization operation on the first feature map set after the deletion operation.
First mapping relations between the first threshold and the redundancy parameter, and second mapping relations between the second threshold and the redundancy parameter, may be pre-stored in the data processing device; the first threshold corresponding to the redundancy parameter in step 103 can then be determined according to the first mapping relations, and the second threshold according to the second mapping relations. Further, the screening operation according to the first threshold aims to screen out maps with few features; the deletion operation according to the second threshold aims to delete redundant maps; finally, the remaining first feature map set is reorganized.
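A minimal sketch of steps 31-32, assuming concrete (and purely illustrative) threshold rules: the patent keeps the threshold-to-redundancy mappings in a lookup table and does not specify them, so a low-energy test stands in for "few features" and a correlation test for "redundant":

```python
import numpy as np

rng = np.random.default_rng(2)
maps = rng.standard_normal((8, 5, 5))   # first feature map set: 8 maps of 5x5
redundancy = 0.5                        # redundancy parameter (assumed in [0, 1])

t1 = 0.2 * redundancy                   # first threshold: minimum map energy
t2 = 0.99                               # second threshold: near-duplicate cutoff

# Screening: drop maps carrying few features (low mean absolute activation).
energy = np.abs(maps).mean(axis=(1, 2))
kept = maps[energy > t1]

# Deletion: drop maps nearly identical (highly correlated) to a survivor.
survivors = []
for m in kept:
    dup = any(abs(np.corrcoef(m.ravel(), s.ravel())[0, 1]) > t2
              for s in survivors)
    if not dup:
        survivors.append(m)

# Reorganization: repack the survivors into the second feature map set.
second_set = np.stack(survivors)
print(second_set.shape[0], "maps kept of", maps.shape[0])
```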
104. Train the data to be processed according to the second neural network model to obtain a training result.
Since the second neural network model is based on the first neural network model after a series of compression operations, its redundancy is low, which can improve data processing speed. In the embodiments of the present invention, the data to be processed is trained according to the second neural network model to obtain a training result.
Optionally, between step 103 and step 104, the following step may also be included:
performing second compression processing on the second neural network model using an incremental, stepwise quantization method.
Step 104, training the data to be processed according to the second neural network model, may then be implemented as:
training the data to be processed according to the second neural network model after the second compression processing.
Using the incremental, stepwise quantization method, second compression processing can be performed on the second neural network model, greatly reducing its redundancy and improving data processing speed.
Optionally, performing second compression processing on the second neural network model using the incremental, stepwise quantization method may include the following steps S1-S5:
S1. obtaining the running environment parameters of the processor;
S2. determining a quantization compression acceleration ratio according to the running environment parameters;
S3. obtaining the weights of the second neural network model, obtaining multiple weights;
S4. iteratively performing the following steps A and B:
A. grouping the multiple weights according to the quantization compression acceleration ratio to obtain multiple groups of weights;
B. quantizing and fixing one group of weights among the multiple groups, retraining the weights other than that group, and compensating the precision loss after the retraining according to specified parameters;
S5. when the precision after the retraining is within a preset range, performing the step of training the data to be processed according to the second neural network model after the second compression processing.
The running environment parameters of the processor may include but are not limited to: memory size, base frequency, external frequency, frequency multiplier, interface, cache, multimedia instruction set, manufacturing process, voltage, package type, core count, and so on. Mapping relations between running environment parameters and quantization compression acceleration ratios may be pre-stored in the data processing device, so that the quantization compression acceleration ratio can be determined from the running environment parameters; the weights of the second neural network model are then obtained as multiple weights, and steps A and B above are iterated. The specified parameters may be set by the user or defaulted by the system, and may be at least one of the following: sigmoid activation function, number of neurons, convolution kernel size, convolution kernel type, sampling area, sampling type, and so on. The preset range may likewise be set by the user or defaulted by the system.
Specifically, relatively rich original features can still be retained after the channel features are pruned. But for network models that are both deep and wide, the forward computation and storage still incur large overhead, and to further reduce the model's demands the model must be compressed further: after channel sparsification, the model weights still consume considerable storage and forward-computation resources, so the network model weights are quantized stepwise and dynamically here. This can proceed as follows: according to the quantization compression acceleration ratio, a dynamic weight group is selected; then, according to the quantization level n (n being an integer greater than 1), one group of weights is quantized and fixed, while the remaining unquantized weight parameter groups compensate, through retraining, the precision loss produced by quantization. The two steps are iterated until the weights have been retrained and quantized, so that the full-precision weights are quantized to a specific precision while the accuracy is maintained or the precision loss stays within an acceptable range.
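Under stated assumptions (a toy linear layer, a hand-picked 7-level quantization grid, and plain gradient descent standing in for the retraining the patent leaves unspecified), the iterate-A-then-B quantization loop can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 16
X = rng.standard_normal((64, d))      # toy "training data"
w0 = rng.standard_normal(d)           # full-precision weights of the model
y = X @ w0                            # outputs the compressed model should preserve

levels = np.array([-1.0, -0.5, -0.25, 0.0, 0.25, 0.5, 1.0])  # assumed grid

def quantize(x):
    """Snap each weight to its nearest quantization level."""
    return levels[np.abs(levels[None, :] - x[:, None]).argmin(axis=1)]

w = w0.copy()
frozen = np.zeros(d, dtype=bool)
groups = np.array_split(np.argsort(-np.abs(w0)), 4)  # 4 groups, largest first

for g in groups:
    w[g] = quantize(w[g])             # step A: quantize and fix this group
    frozen[g] = True
    free = ~frozen
    for _ in range(300):              # step B: retrain the remaining weights
        grad = X.T @ (X @ w - y) / len(X)   # gradient of mean squared error
        w[free] -= 0.05 * grad[free]

err_one_shot = np.linalg.norm(X @ quantize(w0) - y)
err_incremental = np.linalg.norm(X @ w - y)
print(f"one-shot error {err_one_shot:.3f}, incremental error {err_incremental:.3f}")
```

The still-free weights absorb the error each frozen group introduces, which is the compensation mechanism the text attributes to retraining.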
It can be seen that the embodiments of the present invention propose a model compression method based on sparse representation theory and reconstruction of deep network feature maps: a first neural network model is obtained and its first feature map set determined; a redundancy parameter of the first feature map set is determined according to sparse representation theory; first compression processing is performed on the first feature map set according to the redundancy parameter to obtain a second feature map set; a second neural network model is generated from the second feature map set; and the data to be processed is trained according to the second neural network model to obtain a training result. A neural network model can thus be compressed according to its redundancy parameter, which on the one hand achieves compression of the model and acceleration of its forward computation and on the other hand improves data processing speed; combining the incremental stepwise quantization method to deeply compress and accelerate the weight model can further improve data processing speed.
Consistent with the above, refer to Fig. 2, which is a schematic flowchart of a second embodiment of a data processing method provided by an embodiment of the present invention. The data processing method described in this embodiment comprises the following steps:
201. Obtain data to be processed.
202. When the memory size of the data to be processed is greater than a preset memory threshold, obtain a first neural network model, and determine a first feature map set of the first neural network model.
The preset memory threshold may be set by the user or defaulted by the system. For example, if the data to be processed occupies a large amount of memory, processing takes longer, and the method provided by the embodiments of the present invention may therefore be used.
203. Determine redundancy parameters of the first feature map set according to sparse representation theory.
204. Perform a first compression process on the first feature map set according to the redundancy parameters to obtain a second feature map set, and generate a second neural network model from the second feature map set.
205. Train the data to be processed according to the second neural network model to obtain a training result.
For detailed descriptions of steps 202 to 205 above, reference may be made to the corresponding steps 101 to 104 of the data processing method described in Fig. 1; details are not repeated here.
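Steps 201 to 205 can be sketched as a simple pipeline. The memory threshold value, the mean-activation importance measure standing in for the sparse-representation analysis, and the keep ratio below are all illustrative assumptions, not values from the patent.

```python
import numpy as np

MEMORY_THRESHOLD = 1 << 20  # preset memory threshold in bytes (assumed value)

def redundancy_parameters(feature_maps):
    """Placeholder for the sparse-representation analysis (step 203):
    use mean absolute activation per channel as an importance score."""
    return np.abs(feature_maps).mean(axis=(1, 2))

def first_compression(feature_maps, energy, keep_ratio=0.5):
    """Step 204 (sketch): keep the most important channels, drop the rest."""
    k = max(1, int(len(feature_maps) * keep_ratio))
    keep = np.argsort(energy)[::-1][:k]     # screen by importance
    return feature_maps[np.sort(keep)]      # delete the rest, reorganize

def process(data, feature_maps):
    if data.nbytes <= MEMORY_THRESHOLD:     # step 202 gate on memory size
        return feature_maps                 # below threshold: no compression
    energy = redundancy_parameters(feature_maps)        # step 203
    return first_compression(feature_maps, energy)      # step 204

data = np.zeros(2 << 20, dtype=np.uint8)    # exceeds the assumed threshold
fmaps = np.random.default_rng(0).normal(size=(16, 4, 4))
compressed = process(data, fmaps)
```

The gate mirrors step 202: compression is only attempted when the data is large enough for the overhead to pay off.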
It can be seen that, in the embodiment of the present invention, data to be processed is obtained; when the memory size of the data to be processed is greater than the preset memory threshold, a first neural network model is obtained and a first feature map set of the first neural network model is determined; redundancy parameters of the first feature map set are determined according to sparse representation theory; a first compression process is performed on the first feature map set according to the redundancy parameters to obtain a second feature map set; a second neural network model is generated from the second feature map set; and the data to be processed is trained according to the second neural network model to obtain a training result. Thus the neural network model can be compressed according to its redundancy parameters, which on the one hand achieves compression of the model and acceleration of forward computation, and on the other hand also improves data processing speed.
Consistent with the above, an apparatus for implementing the above data processing method is described below, as follows:
Refer to Fig. 3a, which is a schematic structural diagram of an embodiment of a data processing apparatus provided by an embodiment of the present invention. The data processing apparatus described in this embodiment includes an acquiring unit 301, a determining unit 302, a first compression unit 303 and a training unit 304, as follows:
the acquiring unit 301 is configured to obtain a first neural network model and determine a first feature map set of the first neural network model;
the determining unit 302 is configured to determine redundancy parameters of the first feature map set according to sparse representation theory;
the first compression unit 303 is configured to perform a first compression process on the first feature map set according to the redundancy parameters to obtain a second feature map set, and to generate a second neural network model from the second feature map set;
the training unit 304 is configured to train data to be processed according to the second neural network model to obtain a training result.
Optionally, as shown in Fig. 3b, which is a detailed structure of the first compression unit 303 in the data processing apparatus of Fig. 3a, the first compression unit 303 may include a first determining module 3031 and a processing module 3032, as follows:
the first determining module 3031 is configured to determine a first threshold and a second threshold according to the redundancy parameters;
the processing module 3032 is configured to perform a screening operation on the first feature map set according to the first threshold, perform a deletion operation on the screened first feature map set according to the second threshold, and perform a reorganization operation on the first feature map set after the deletion operation.
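A minimal sketch of the screening, deletion and reorganization sequence performed by the processing module 3032. The two thresholds and the importance measures (mean and peak absolute activation) are illustrative assumptions, not the patent's specific criteria.

```python
import numpy as np

def compress_feature_maps(fmaps, t1, t2):
    """fmaps: (channels, H, W). Screen channels whose mean |activation|
    reaches the first threshold, then delete screened channels whose peak
    activation is below the second threshold, then reorganize the
    survivors into a contiguous set."""
    mean_act = np.abs(fmaps).mean(axis=(1, 2))
    screened = fmaps[mean_act >= t1]         # screening operation (t1)
    peak = np.abs(screened).max(axis=(1, 2))
    kept = screened[peak >= t2]              # deletion operation (t2)
    return np.ascontiguousarray(kept)        # reorganization operation

fmaps = np.stack([np.full((2, 2), v, dtype=float) for v in (0.0, 0.2, 0.9, 1.5)])
out = compress_feature_maps(fmaps, t1=0.1, t2=0.5)
```

With these toy inputs the all-zero channel fails screening, the 0.2 channel is deleted by the second threshold, and only the 0.9 and 1.5 channels survive reorganization.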
Optionally, as shown in Fig. 3c, which is a detailed structure of the determining unit 302 in the data processing apparatus of Fig. 3a, the determining unit 302 may include a first acquiring module 3021, a selecting module 3022 and a second determining module 3023, as follows:
the first acquiring module 3021 is configured to obtain attribute information of the data to be processed;
the selecting module 3022 is configured to select, from a plurality of pre-stored sparse representation algorithms, a target sparse representation algorithm corresponding to the attribute information;
the second determining module 3023 is configured to determine the redundancy parameters of the first feature map set according to the target sparse representation algorithm.
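The attribute-driven selection performed by modules 3021 to 3023 might look like the following dispatch table. The attribute key, the algorithm names, and the redundancy measures are invented for illustration and do not come from the patent.

```python
import numpy as np

def omp_style_redundancy(fm):
    """Stand-in for a matching-pursuit-style analysis: fraction of
    near-zero entries in the feature maps."""
    return float((np.abs(fm) < 1e-3).mean())

def l1_style_redundancy(fm):
    """Stand-in for an l1-based analysis: mean absolute activation."""
    return float(np.abs(fm).sum() / fm.size)

# Pre-stored sparse representation algorithms, keyed by a data attribute
# (hypothetical keys: "image", "video").
PRESTORED = {"image": omp_style_redundancy, "video": l1_style_redundancy}

def redundancy_for(attr_info, feature_maps):
    """Select the target algorithm matching the attribute information,
    then apply it to the first feature map set."""
    algo = PRESTORED.get(attr_info["type"], omp_style_redundancy)
    return algo(feature_maps)

fm = np.zeros((4, 4))
r = redundancy_for({"type": "image"}, fm)
```

The point of the dispatch is that different data types can be analyzed with different pre-stored algorithms without changing the calling code.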
Optionally, as shown in Fig. 3d, which is another modified structure of the data processing apparatus of Fig. 3a, compared with Fig. 3a it further includes a second compression unit 305, as follows:
the second compression unit 305 is configured to perform a second compression process on the second neural network model using an incremental progressive quantization method;
the training unit 304 is then specifically configured to:
train the data to be processed according to the second neural network model after the second compression process.
Optionally, as shown in Fig. 3e, which is a detailed structure of the second compression unit 305 in the data processing apparatus of Fig. 3d, the second compression unit 305 may include a second acquiring module 3051, a third determining module 3052 and an iteration module 3053, as follows:
the second acquiring module 3051 is configured to obtain running environment parameters of the processor;
the third determining module 3052 is configured to determine a quantization compression acceleration ratio according to the running environment parameters;
the second acquiring module 3051 is further configured to obtain the weights of the second neural network model to obtain a plurality of weights;
the iteration module 3053 is configured to iteratively perform the following steps A and B:
A. group the plurality of weights according to the quantization compression acceleration ratio to obtain a plurality of weight groups;
B. quantize and fix one weight group of the plurality of weight groups, retrain the weights other than that group, and compensate for the precision loss after retraining according to a specified parameter. When the precision after retraining is within a preset range, the training unit 304 performs the step of training the data to be processed according to the second neural network model after the second compression process.
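Steps A and B of the iteration module can be sketched as follows. The mapping from the quantization compression acceleration ratio to a group count, the fixed-point quantizer, and the simulated precision loss are assumptions for illustration; real retraining is only indicated by a comment.

```python
import numpy as np

def group_weights(weights, ratio):
    """Step A: partition the flattened weights into groups; here a higher
    quantization compression acceleration ratio is assumed to mean fewer,
    larger groups."""
    n_groups = max(1, int(round(1.0 / ratio)))
    return np.array_split(weights.ravel(), n_groups)

def quantize_groups(groups, n=4):
    """Step B (sketch): quantize-and-fix one group per iteration; between
    iterations the remaining groups would be retrained, with the residual
    precision loss compensated according to a specified parameter."""
    quantized, losses = [], []
    for g in groups:
        q = np.round(g * n) / n            # fix this group to 1/n resolution
        losses.append(float(np.abs(q - g).mean()))  # stand-in precision loss
        quantized.append(q)
        # ... retraining of the not-yet-fixed groups would go here ...
    return np.concatenate(quantized), max(losses)

w = np.linspace(-1.0, 1.0, 16)
wq, worst_loss = quantize_groups(group_weights(w, ratio=0.25))
```

When the measured precision loss stays within the preset range, the compressed model is handed to the training unit, which corresponds to the final step of the iteration.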
It can be seen that, with the data processing apparatus described in the embodiment of the present invention, a first neural network model is obtained and a first feature map set of the first neural network model is determined; redundancy parameters of the first feature map set are determined according to sparse representation theory; a first compression process is performed on the first feature map set according to the redundancy parameters to obtain a second feature map set; a second neural network model is generated from the second feature map set; and data to be processed is trained according to the second neural network model to obtain a training result. Thus the neural network model can be compressed according to its redundancy parameters, which on the one hand achieves compression of the model and acceleration of forward computation, and on the other hand also improves data processing speed.
Consistent with the above, refer to Fig. 4, which is a schematic structural diagram of an embodiment of a data processing apparatus provided by an embodiment of the present invention. The data processing apparatus described in this embodiment includes: at least one input device 1000; at least one output device 2000; at least one processor 3000, such as a CPU; and a memory 4000. The input device 1000, output device 2000, processor 3000 and memory 4000 are connected through a bus 5000.
The input device 1000 may specifically be a touch panel, a physical button or a mouse.
The output device 2000 may specifically be a display screen.
The memory 4000 may be a high-speed RAM memory or a non-volatile memory, such as a magnetic disk memory. The memory 4000 is used to store a set of program code, and the input device 1000, output device 2000 and processor 3000 are used to call the program code stored in the memory 4000 to perform the following operations:
The processor 3000 is configured to:
obtain a first neural network model, and determine a first feature map set of the first neural network model;
determine redundancy parameters of the first feature map set according to sparse representation theory;
perform a first compression process on the first feature map set according to the redundancy parameters to obtain a second feature map set, and generate a second neural network model from the second feature map set;
train data to be processed according to the second neural network model to obtain a training result.
Optionally, the processor 3000 performing the first compression process on the first feature map set according to the redundancy parameters includes:
determining a first threshold and a second threshold according to the redundancy parameters;
performing a screening operation on the first feature map set according to the first threshold, performing a deletion operation on the screened first feature map set according to the second threshold, and performing a reorganization operation on the first feature map set after the deletion operation.
Optionally, the processor 3000 determining the redundancy parameters of the first feature map set according to sparse representation theory includes:
obtaining attribute information of the data to be processed;
selecting, from a plurality of pre-stored sparse representation algorithms, a target sparse representation algorithm corresponding to the attribute information;
determining the redundancy parameters of the first feature map set according to the target sparse representation algorithm.
Optionally, the processor 3000 is further specifically configured to:
perform a second compression process on the second neural network model using an incremental progressive quantization method;
the training of the data to be processed according to the second neural network model then including:
training the data to be processed according to the second neural network model after the second compression process.
Optionally, the processor 3000 performing the second compression process on the second neural network model using the incremental progressive quantization method includes:
obtaining running environment parameters of the processor;
determining a quantization compression acceleration ratio according to the running environment parameters;
obtaining the weights of the second neural network model to obtain a plurality of weights;
iteratively performing the following steps A and B:
A. grouping the plurality of weights according to the quantization compression acceleration ratio to obtain a plurality of weight groups;
B. quantizing and fixing one weight group of the plurality of weight groups, retraining the weights other than that group, and compensating for the precision loss after retraining according to a specified parameter;
when the precision after retraining is within a preset range, performing the step of training the data to be processed according to the second neural network model after the second compression process.
An embodiment of the present invention also provides a computer storage medium, wherein the computer storage medium may store a program which, when executed, performs some or all of the steps of any data processing method described in the above method embodiments.
An embodiment of the present invention also provides a computer program product, the computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps of any data processing method described in the above method embodiments.
Although the present invention has been described herein in conjunction with various embodiments, those skilled in the art will, in practicing the claimed invention, understand and effect other variations of the disclosed embodiments by studying the drawings, the disclosure and the appended claims. In the claims, the word "comprising" does not exclude other components or steps, and "a" or "an" does not exclude a plurality. A single processor or other unit may fulfil several functions recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Those skilled in the art will understand that embodiments of the present invention may be provided as a method, an apparatus (device) or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk memory, CD-ROM and optical memory) containing computer-usable program code. The computer program may be stored in or distributed on a suitable medium, provided together with other hardware or as part of the hardware, or distributed in other forms, for example via the Internet or other wired or wireless telecommunication systems.
The present invention is described with reference to flowcharts and/or block diagrams of the method, apparatus (device) and computer program product according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, the instruction apparatus realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operation steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the present invention has been described with reference to specific features and embodiments, it is evident that various modifications and combinations may be made without departing from the spirit and scope of the present invention. Accordingly, the specification and drawings are merely exemplary illustrations of the invention defined by the appended claims, and are deemed to cover any and all modifications, variations, combinations or equivalents falling within the scope of the invention. Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. Thus, if these modifications and variations fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include them.

Claims (10)

  1. A data processing method, characterized in that it comprises:
    obtaining a first neural network model, and determining a first feature map set of the first neural network model;
    determining redundancy parameters of the first feature map set according to sparse representation theory;
    performing a first compression process on the first feature map set according to the redundancy parameters to obtain a second feature map set, and generating a second neural network model from the second feature map set;
    training data to be processed according to the second neural network model to obtain a training result.
  2. The method according to claim 1, characterized in that the performing of the first compression process on the first feature map set according to the redundancy parameters comprises:
    determining a first threshold and a second threshold according to the redundancy parameters;
    performing a screening operation on the first feature map set according to the first threshold, performing a deletion operation on the screened first feature map set according to the second threshold, and performing a reorganization operation on the first feature map set after the deletion operation.
  3. The method according to claim 1 or 2, characterized in that the determining of the redundancy parameters of the first feature map set according to sparse representation theory comprises:
    obtaining attribute information of the data to be processed;
    selecting, from a plurality of pre-stored sparse representation algorithms, a target sparse representation algorithm corresponding to the attribute information;
    determining the redundancy parameters of the first feature map set according to the target sparse representation algorithm.
  4. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
    performing a second compression process on the second neural network model using an incremental progressive quantization method;
    the training of the data to be processed according to the second neural network model comprising:
    training the data to be processed according to the second neural network model after the second compression process.
  5. The method according to claim 4, characterized in that the performing of the second compression process on the second neural network model using the incremental progressive quantization method comprises:
    obtaining running environment parameters of a processor;
    determining a quantization compression acceleration ratio according to the running environment parameters;
    obtaining the weights of the second neural network model to obtain a plurality of weights;
    iteratively performing the following steps A and B:
    A. grouping the plurality of weights according to the quantization compression acceleration ratio to obtain a plurality of weight groups;
    B. quantizing and fixing one weight group of the plurality of weight groups, retraining the weights other than that group, and compensating for the precision loss after retraining according to a specified parameter;
    when the precision after retraining is within a preset range, performing the step of training the data to be processed according to the second neural network model after the second compression process.
  6. A data processing apparatus, characterized in that it comprises:
    an acquiring unit, configured to obtain a first neural network model and determine a first feature map set of the first neural network model;
    a determining unit, configured to determine redundancy parameters of the first feature map set according to sparse representation theory;
    a first compression unit, configured to perform a first compression process on the first feature map set according to the redundancy parameters to obtain a second feature map set, and to generate a second neural network model from the second feature map set;
    a training unit, configured to train data to be processed according to the second neural network model to obtain a training result.
  7. The apparatus according to claim 6, characterized in that the first compression unit comprises:
    a first determining module, configured to determine a first threshold and a second threshold according to the redundancy parameters;
    a processing module, configured to perform a screening operation on the first feature map set according to the first threshold, perform a deletion operation on the screened first feature map set according to the second threshold, and perform a reorganization operation on the first feature map set after the deletion operation.
  8. The apparatus according to claim 6 or 7, characterized in that the determining unit comprises:
    a first acquiring module, configured to obtain attribute information of the data to be processed;
    a selecting module, configured to select, from a plurality of pre-stored sparse representation algorithms, a target sparse representation algorithm corresponding to the attribute information;
    a second determining module, configured to determine the redundancy parameters of the first feature map set according to the target sparse representation algorithm.
  9. The apparatus according to any one of claims 6 to 8, characterized in that the apparatus further comprises:
    a second compression unit, configured to perform a second compression process on the second neural network model using an incremental progressive quantization method;
    the training unit being specifically configured to:
    train the data to be processed according to the second neural network model after the second compression process.
  10. The apparatus according to claim 9, characterized in that the second compression unit comprises:
    a second acquiring module, configured to obtain running environment parameters of a processor;
    a third determining module, configured to determine a quantization compression acceleration ratio according to the running environment parameters;
    the second acquiring module being further configured to obtain the weights of the second neural network model to obtain a plurality of weights;
    an iteration module, configured to iteratively perform the following steps A and B:
    A. grouping the plurality of weights according to the quantization compression acceleration ratio to obtain a plurality of weight groups;
    B. quantizing and fixing one weight group of the plurality of weight groups, retraining the weights other than that group, and compensating for the precision loss after retraining according to a specified parameter; when the precision after retraining is within a preset range, the training unit performing the step of training the data to be processed according to the second neural network model after the second compression process.
CN201711137531.5A 2017-11-16 2017-11-16 A kind of data processing method and device Pending CN107909147A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711137531.5A CN107909147A (en) 2017-11-16 2017-11-16 A kind of data processing method and device


Publications (1)

Publication Number Publication Date
CN107909147A true CN107909147A (en) 2018-04-13

Family

ID=61844400

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711137531.5A Pending CN107909147A (en) 2017-11-16 2017-11-16 A kind of data processing method and device

Country Status (1)

Country Link
CN (1) CN107909147A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778918A (en) * 2017-01-22 2017-05-31 北京飞搜科技有限公司 A kind of deep learning image identification system and implementation method for being applied to mobile phone terminal
CN107316079A (en) * 2017-08-08 2017-11-03 珠海习悦信息技术有限公司 Processing method, device, storage medium and the processor of terminal convolutional neural networks


Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109299782A (en) * 2018-08-02 2019-02-01 北京奇安信科技有限公司 A kind of data processing method and device based on deep learning model
CN109145798A (en) * 2018-08-13 2019-01-04 浙江零跑科技有限公司 A kind of Driving Scene target identification and travelable region segmentation integrated approach
WO2020125236A1 (en) * 2018-12-17 2020-06-25 腾讯科技(深圳)有限公司 Data processing method and device, storage medium, and electronic device
CN109685202A (en) * 2018-12-17 2019-04-26 腾讯科技(深圳)有限公司 Data processing method and device, storage medium and electronic device
US11689607B2 (en) 2018-12-17 2023-06-27 Tencent Technology (Shenzhen) Company Limited Data processing method and apparatus, storage medium, and electronic device
CN113168554A (en) * 2018-12-29 2021-07-23 华为技术有限公司 Neural network compression method and device
CN113168554B (en) * 2018-12-29 2023-11-28 华为技术有限公司 Neural network compression method and device
CN111723901B (en) * 2019-03-19 2024-01-12 百度在线网络技术(北京)有限公司 Training method and device for neural network model
CN111723901A (en) * 2019-03-19 2020-09-29 百度在线网络技术(北京)有限公司 Training method and device of neural network model
CN109978142A (en) * 2019-03-29 2019-07-05 腾讯科技(深圳)有限公司 The compression method and device of neural network model
CN109978142B (en) * 2019-03-29 2022-11-29 腾讯科技(深圳)有限公司 Neural network model compression method and device
CN110730347A (en) * 2019-04-24 2020-01-24 合肥图鸭信息科技有限公司 Image compression method and device and electronic equipment
WO2020258071A1 (en) * 2019-06-26 2020-12-30 Intel Corporation Universal loss-error-aware quantization for deep neural networks with flexible ultra-low-bit weights and activations
CN110516806A (en) * 2019-08-30 2019-11-29 苏州思必驰信息科技有限公司 The rarefaction method and apparatus of neural network parameter matrix
CN110619310A (en) * 2019-09-19 2019-12-27 北京达佳互联信息技术有限公司 Human skeleton key point detection method, device, equipment and medium
CN110781690B (en) * 2019-10-31 2021-07-13 北京理工大学 Fusion and compression method of multi-source neural machine translation model
CN110781690A (en) * 2019-10-31 2020-02-11 北京理工大学 Fusion and compression method of multi-source neural machine translation model
CN110874270B (en) * 2019-11-18 2022-03-11 郑州航空工业管理学院 Deep learning offline reasoning load balancing method for Internet of vehicles
CN110874270A (en) * 2019-11-18 2020-03-10 郑州航空工业管理学院 Deep learning offline reasoning load balancing method for Internet of vehicles
CN111178447A (en) * 2019-12-31 2020-05-19 北京市商汤科技开发有限公司 Model compression method, image processing method and related device
CN111178447B (en) * 2019-12-31 2024-03-08 北京市商汤科技开发有限公司 Model compression method, image processing method and related device
CN112307968A (en) * 2020-10-30 2021-02-02 天地伟业技术有限公司 Face recognition feature compression method

Similar Documents

Publication Publication Date Title
CN107909147A (en) A kind of data processing method and device
CN110473141B (en) Image processing method, device, storage medium and electronic equipment
CN109671020B (en) Image processing method, device, electronic equipment and computer storage medium
CN112614213B (en) Facial expression determining method, expression parameter determining model, medium and equipment
CN111178183B (en) Face detection method and related device
CN111488985B (en) Deep neural network model compression training method, device, equipment and medium
CN106650615B (en) A kind of image processing method and terminal
EP4322056A1 (en) Model training method and apparatus
CN109919085B (en) Human-human interaction behavior identification method based on light-weight convolutional neural network
CN113065635A (en) Model training method, image enhancement method and device
CN113408570A (en) Image category identification method and device based on model distillation, storage medium and terminal
US20220156987A1 (en) Adaptive convolutions in neural networks
CN114595799A (en) Model training method and device
CN111105017A (en) Neural network quantization method and device and electronic equipment
CN111767947A (en) Target detection model, application method and related device
CN114359289A (en) Image processing method and related device
CN115512005A (en) Data processing method and device
CN111950700A (en) Neural network optimization method and related equipment
CN111738403A (en) Neural network optimization method and related equipment
CN112529149A (en) Data processing method and related device
EP4290459A1 (en) Augmented reality method and related device thereof
CN114821096A (en) Image processing method, neural network training method and related equipment
CN110728186A (en) Fire detection method based on multi-network fusion
CN112241934A (en) Image processing method and related equipment
CN115238883A (en) Neural network model training method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180413)