CN110378479A - Picture input method, device and terminal device based on deep learning - Google Patents
- Publication number
- CN110378479A (application CN201910500560.6A)
- Authority
- CN
- China
- Prior art keywords
- picture
- trained
- threshold value
- queue
- value index
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/546—Message passing systems or structures, e.g. queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Image Analysis (AREA)
Abstract
The present invention, which is applicable to the technical field of data processing, provides a picture input method, apparatus, terminal device and computer-readable storage medium based on deep learning, comprising: obtaining a preset threshold index; repeatedly selecting training pictures under a candidate path until the selected training pictures reach the threshold index; creating a buffer queue, and adding the selected training pictures to the buffer queue; creating a queued reading object, reading the training pictures in the buffer queue through the queued reading object, and decoding the read training pictures to obtain training tensor data; and creating a thread-manager object, and adding all the training tensor data to a call queue through the thread-manager object, the training tensor data in the call queue being for calling by the deep learning network. By inputting pictures in batches, the present invention improves the stability of picture training.
Description
Technical field
The invention belongs to the technical field of data processing, and more particularly relates to a picture input method, apparatus, terminal device and computer-readable storage medium based on deep learning.
Background technique
With the development of computer technology and mathematical methods, deep learning, as a form of representation learning on data, has evolved rapidly and is now widely applied, replacing hand-crafted feature extraction. When training a deep learning network, the data serving as training objects must first be input into the network.
In the prior art, when the training objects are pictures, all training pictures are converted into data matrices and input into the deep learning network at once; that is, all of the image-matrix data are called during training. This places a heavy load on the computer and easily exhausts memory. In short, inputting pictures into a deep learning network in the prior art is prone to memory crashes, and training stability is poor.
Summary of the invention
In view of this, embodiments of the present invention provide a picture input method, apparatus, terminal device and computer-readable storage medium based on deep learning, to solve the prior-art problems of memory crashes and poor stability during picture input.
A first aspect of the embodiments of the present invention provides a picture input method based on deep learning, comprising:
obtaining a preset threshold index, the threshold index being an index of the pictures that the deep learning network can support as input in one batch of training;
repeatedly selecting training pictures under a candidate path until the selected training pictures reach the threshold index, wherein candidate pictures are stored under the candidate path;
creating a buffer queue, and adding the selected training pictures to the buffer queue;
creating a queued reading object, reading the training pictures in the buffer queue through the queued reading object, and decoding the read training pictures to obtain training tensor data, wherein each piece of training tensor data corresponds to one training picture;
creating a thread-manager object, and adding all the training tensor data to a call queue through the thread-manager object, the training tensor data in the call queue being for calling by the deep learning network.
A second aspect of the embodiments of the present invention provides a picture input apparatus based on deep learning, comprising:
an acquiring unit, configured to obtain a preset threshold index, the threshold index being an index of the pictures that a deep learning network can support as input in one batch of training;
a selecting unit, configured to repeatedly select training pictures under a candidate path until the selected training pictures reach the threshold index, wherein candidate pictures are stored under the candidate path;
a creating unit, configured to create a buffer queue and add the selected training pictures to the buffer queue;
a decoding unit, configured to create a queued reading object, read the training pictures in the buffer queue through the queued reading object, and decode the read training pictures to obtain training tensor data, wherein each piece of training tensor data corresponds to one training picture;
an adding unit, configured to create a thread-manager object and add all the training tensor data to a call queue through the thread-manager object, the training tensor data in the call queue being for calling by the deep learning network.
A third aspect of the embodiments of the present invention provides a terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of the first aspect described above.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of the first aspect described above.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects: a batch of training pictures is determined according to the threshold index; the training pictures are first stored in a buffer queue, then read from the buffer queue and decoded; finally, the decoded training tensor data are added to a call queue for calling by the deep learning network. By inputting pictures batch by batch, the embodiments improve the stability of picture input and picture training, and reduce the risk of memory crashes.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a picture input method based on deep learning provided by an embodiment of the present invention;
Fig. 2 is a flowchart of adjusting the threshold index according to resource occupancy, provided by an embodiment of the present invention;
Fig. 3 is a flowchart of determining a new threshold index, provided by an embodiment of the present invention;
Fig. 4 is a flowchart of creating a buffer queue, provided by an embodiment of the present invention;
Fig. 5 is another flowchart of creating a buffer queue, provided by an embodiment of the present invention;
Fig. 6 is a structural block diagram of a picture input apparatus based on deep learning provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of a terminal device provided by an embodiment of the present invention.
Specific embodiment
In the following description, specific details such as particular system structures and techniques are set forth for purposes of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present invention. However, it will be apparent to those skilled in the art that the present invention may also be implemented in other embodiments without these specific details. In other instances, detailed descriptions of well-known systems, apparatuses, circuits and methods are omitted so that unnecessary detail does not obscure the description of the invention.
To illustrate the technical solutions of the present invention, specific embodiments are described below.
Fig. 1 shows the flow of the picture input method based on deep learning provided by an embodiment of the present invention, detailed as follows:
In S101, a preset threshold index is obtained, the threshold index being an index of the pictures that the deep learning network can support as input in one batch of training.
TensorFlow is an open-source deep learning framework based on dataflow programming. It has a multi-layer structure, supports high-performance numerical computing on graphics processing units (Graphics Processing Unit, GPU) and tensor processing units (Tensor Processing Unit, TPU), is widely used to implement all kinds of deep learning algorithms, and can be deployed on servers, personal computers and web pages. The embodiments of the present invention address the picture input needs of the TensorFlow framework by providing a picture input method based on deep learning.
When inputting pictures, a preset threshold index is first obtained. The threshold index is an index of the pictures that a TensorFlow-based deep learning network can support as input in one batch of training (i.e., one training pass), and can be customized according to the actual application scenario. Note that the embodiments of the present invention place no limitation on the specific construction or operation of the deep learning network, nor on the unit of the threshold index: the threshold index may be a picture count (e.g., 100 pictures) or a data volume (e.g., 10 MB of pictures), and other units such as resolution may also be added on top of count or volume (e.g., 100 pictures with a resolution of 1280 × 720).
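The patent leaves the concrete form of the threshold index open. As a hypothetical illustration only (none of these names appear in the patent), a composite index over picture count and data volume might be represented and checked like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ThresholdIndex:
    """Hypothetical sketch of a composite threshold index."""
    max_count: Optional[int] = None  # e.g. 100 pictures
    max_bytes: Optional[int] = None  # e.g. 10 * 1024 * 1024 (10 MB)

    def reached(self, count, total_bytes):
        # The batch is complete once any configured limit is met.
        if self.max_count is not None and count >= self.max_count:
            return True
        if self.max_bytes is not None and total_bytes >= self.max_bytes:
            return True
        return False

idx = ThresholdIndex(max_count=100, max_bytes=10 * 1024 * 1024)
print(idx.reached(100, 5_000_000))  # True  (picture-count limit hit)
print(idx.reached(40, 4_000_000))   # False (neither limit hit)
```

A resolution constraint, as in the 1280 × 720 example above, could be added as a further optional field in the same way.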
In S102, training pictures are repeatedly selected under a candidate path until the selected training pictures reach the threshold index, wherein candidate pictures are stored under the candidate path.
After the threshold index is obtained, the training pictures to be input into the deep learning network are selected according to it: training pictures are repeatedly selected under a candidate path until the selected pictures reach the threshold index. Candidate pictures are stored under the candidate path, which can be configured in advance; in the embodiments of the present invention at least one candidate path exists. Since a candidate path may hold two or more candidate pictures, a selection strategy can also be configured, such as random selection under the candidate path, or selection by file name in alphabetical order from a to z. In addition, to prevent the same picture from being selected repeatedly, a "selected" flag can be set on each chosen picture, and pictures carrying this flag are skipped in subsequent selections.
Optionally, all pictures matching a preset picture format are retrieved, and the paths of the retrieved pictures are determined as candidate paths. Besides manual configuration, the embodiments of the present invention can also determine candidate paths automatically: the entire file directory of the terminal device is searched according to a preset picture format (such as jpg, png or jpeg, configurable according to the input needs of the deep learning network), and the paths of the retrieved pictures are determined as candidate paths. For example, if the preset picture format is jpg and a picture named "example.jpg" is retrieved under the path "/usr/admin/example1", then "/usr/admin/example1" is determined as a candidate path. This reduces manual operations and increases the degree of automation in configuring candidate paths.
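The optional automatic discovery can be sketched with the standard library; the function and constant names below are illustrative, not from the patent:

```python
import os
import tempfile

PICTURE_FORMATS = {".jpg", ".jpeg", ".png"}  # preset picture formats

def find_candidate_paths(root):
    """Collect every directory under `root` containing at least one
    picture whose extension matches a preset picture format."""
    candidates = set()
    for dirpath, _dirnames, filenames in os.walk(root):
        if any(os.path.splitext(name)[1].lower() in PICTURE_FORMATS
               for name in filenames):
            candidates.add(dirpath)
    return candidates

# Demo mirroring the text: "example.jpg" under ".../example1" makes
# that directory a candidate path.
with tempfile.TemporaryDirectory() as root:
    sub = os.path.join(root, "example1")
    os.makedirs(sub)
    open(os.path.join(sub, "example.jpg"), "w").close()
    matched = find_candidate_paths(root) == {sub}

print(matched)  # True
```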
In S103, a buffer queue is created, and the selected training pictures are added to the buffer queue.
To manage the selected training pictures and feed them into the deep learning network in order, the embodiments of the present invention use a dual-queue approach for picture input. In this step, a buffer queue is created, and the selected training pictures are added to its tail, preferably in the same order as they were selected. In addition, the maximum number of pictures the buffer queue can hold may be set according to the threshold index, preventing excessive picture input from exceeding the load capacity of the terminal device.
In S104, a queued reading object is created, the training pictures in the buffer queue are read through the queued reading object, and the read training pictures are decoded to obtain training tensor data, wherein each piece of training tensor data corresponds to one training picture.
Since what the buffer queue stores are still training pictures, and in the TensorFlow framework a deep learning network can only process tensor data, a queued reading object is created in this step, specifically via the WholeFileReader function in TensorFlow. Once created, the queued reading object reads the training pictures from the buffer queue, and the read pictures are decoded into training tensor data recognizable by the deep learning network, each piece of training tensor data corresponding to one training picture. During decoding, the specific decoding function is determined by the picture format of the training picture: a gif picture can be decoded with the decode_gif function, and a jpeg picture with the decode_jpeg function.
Optionally, a read threshold is set for the buffer queue, and the training pictures in the buffer queue are read according to the read threshold. Since training pictures may need to be trained on repeatedly, for example when their number is small, a read threshold can also be set when the buffer queue is created. The read threshold indicates how many times the training pictures in the buffer queue may be read repeatedly; reading stops once the read count of all training pictures in the buffer queue reaches the read threshold. This ensures the training effect of the deep learning network in special scenarios, such as those with few samples.
In S105, a thread-manager object is created, and all the training tensor data are added to a call queue through the thread-manager object, the training tensor data in the call queue being for calling by the deep learning network.
The embodiments of the present invention use a dual-queue approach for picture input: besides the buffer queue established in step S103, a call queue, which is an in-memory queue, is also set up for the deep learning network to call. For the training tensor data decoded in step S104, a thread-manager object is created, specifically via the train.Coordinator function in TensorFlow, and all the training tensor data are added to the call queue through the thread-manager object. The deep learning network can then obtain the training tensor data for training by accessing the in-memory queue. In one implementation, the thread-manager object manages at least two threads that cooperatively push the training tensor data into the queue — for example, thread A adds training tensor data A to the call queue while thread B adds training tensor data B — thereby improving push efficiency.
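A TensorFlow-free sketch of the cooperative push in S105, with Python's standard queue and threading modules standing in for the framework's thread manager (all names and the ten-item batch are hypothetical):

```python
import queue
import threading

call_queue = queue.Queue()
tensor_data = ["tensor_%d" % i for i in range(10)]  # mock decoded batch

def push_worker(items):
    # Each thread pushes its share of the batch into the call queue.
    for item in items:
        call_queue.put(item)

# Two cooperating threads, mirroring "thread A" and "thread B" above.
t_a = threading.Thread(target=push_worker, args=(tensor_data[:5],))
t_b = threading.Thread(target=push_worker, args=(tensor_data[5:],))
t_a.start(); t_b.start()
t_a.join(); t_b.join()

# The deep learning network would drain the call queue during training.
pushed = [call_queue.get() for _ in range(call_queue.qsize())]
print(len(pushed))  # 10
```

queue.Queue is thread-safe, so the two workers can push concurrently without extra locking — the property the thread-manager object relies on.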
As can be seen from the embodiment shown in Fig. 1, in the embodiments of the present invention, training pictures are selected under a candidate path according to a preset threshold index until the selected pictures reach the threshold index; a buffer queue is created and the selected training pictures are added to it; the training pictures in the buffer queue are then read through a queued reading object and converted into training tensor data; finally, the training tensor data are added to a call queue through a thread-manager object. By setting a threshold index, the embodiments input candidate pictures into the deep learning network batch by batch, and the dual-queue approach improves the stability of picture input and reduces the risk of memory crashes.
Fig. 2 shows the flow, provided by an embodiment of the present invention, of adjusting the threshold index according to resource occupancy, which may comprise the following steps:
In S201, the occupancy of device resources while the deep learning network calls the training tensor data is obtained.
In the method above, an inaccurate preset threshold index may cause the training process of a single batch of pictures to mismatch the load capacity of the terminal device, degrading the training effect of the deep learning network. Therefore, in the embodiments of the present invention, the occupancy of device resources while the deep learning network calls all the training tensor data of one batch is obtained. The device resources are resources of the terminal device and include at least one resource type — for example the central processing unit (Central Processing Unit, CPU), the memory, or a combination of both — which is not limited in the embodiments of the present invention. How the resource occupancy is obtained depends on the actual application scenario: the occupancies at different moments during the period in which one batch of training tensor data is called may be averaged into a final resource occupancy, or the maximum occupancy observed during that period may be taken as the final resource occupancy.
In S202, the threshold index is adjusted according to the resource occupancy.
After the resource occupancy is obtained, the threshold index is adjusted according to it. One adjustment method is to scale the threshold index by the ratio between a preset occupancy and the measured resource occupancy. For example, suppose the resource occupancy is 50%, the preset occupancy is 70%, and the original threshold index is 100 MB. The ratio between the preset occupancy and the actual occupancy is 1.4; multiplying this ratio by the original threshold index gives the new threshold index, i.e., 100 × 1.4 = 140 MB. The training pictures of the next batch can then be selected according to the updated threshold index, so that the training load of the terminal device better matches expectations and batches that are too small or too large are avoided.
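The arithmetic of this worked example can be checked directly (function and variable names are illustrative, not from the patent):

```python
def adjust_threshold(threshold_mb, preset_occupancy, measured_occupancy):
    # Scale the threshold by the preset/measured occupancy ratio.
    ratio = preset_occupancy / measured_occupancy
    return round(threshold_mb * ratio, 2)

# Worked example from the text: 70% preset, 50% measured, 100 MB.
new_threshold_mb = adjust_threshold(100, 0.70, 0.50)
print(new_threshold_mb)  # 140.0
```

An over-expected measured occupancy shrinks the next batch symmetrically: adjust_threshold(100, 0.70, 0.875) gives 80.0.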
As can be seen from the embodiment shown in Fig. 2, the embodiments of the present invention obtain the resource occupancy while the deep learning network calls the training tensor data and adjust the threshold index accordingly. Adjusting the threshold index to the load of the terminal device makes the resource consumption of training a single batch of training tensor data meet expectations, achieving adaptive sizing of the picture input.
Fig. 3 shows the flow, provided by an embodiment of the present invention, of determining a new threshold index, which, as shown in Fig. 3, may comprise the following steps:
In S301, at least one occupancy interval is determined, each occupancy interval corresponding to a load ratio.
The embodiments of the present invention also provide another way of adjusting the threshold index. Specifically, at least one occupancy interval is determined, each covering a numerical range of resource occupancy and corresponding to a preset load ratio, which indicates the adjustment ratio applied to the threshold index. The occupancy intervals and load ratios can be set according to the equipment conditions of the terminal device in the actual application scenario.
In S302, among the at least one occupancy interval, the target occupancy interval matching the resource occupancy is found; the load ratio corresponding to the target occupancy interval is multiplied by the threshold index, and the product is determined as the new threshold index.
In this step, the occupancy interval matching the obtained resource occupancy is found, the load ratio corresponding to that interval is multiplied by the current threshold index, and the product is determined as the new threshold index, according to which the training pictures of the next batch are selected. As an example, suppose the device resources include two resource types, CPU and memory, the current threshold index is 100 MB, and the resource occupancy measured while the training tensor data were called is {CPU usage 40%, memory usage 70%}. Suppose occupancy interval A is preset as {(CPU usage 20%, CPU usage 30%], (memory usage 40%, memory usage 60%]} with corresponding load ratio Index_A = 2, and occupancy interval B as {(CPU usage 30%, CPU usage 40%], (memory usage 60%, memory usage 70%]} with corresponding load ratio Index_B = 1.5. The occupancy interval matching the resource occupancy is then interval B, so load ratio Index_B is multiplied by the current threshold index, and the product is the new threshold index, i.e., 100 MB × 1.5 = 150 MB.
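The interval lookup of this example can be sketched as a table scan (the interval table and function names are illustrative, not from the patent):

```python
# Each entry: ((cpu_low, cpu_high), (mem_low, mem_high)), load_ratio.
# Intervals are open on the left, closed on the right, as in the text.
OCCUPANCY_INTERVALS = [
    (((0.20, 0.30), (0.40, 0.60)), 2.0),  # interval A
    (((0.30, 0.40), (0.60, 0.70)), 1.5),  # interval B
]

def new_threshold(threshold_mb, cpu, mem):
    for (cpu_range, mem_range), load_ratio in OCCUPANCY_INTERVALS:
        if (cpu_range[0] < cpu <= cpu_range[1]
                and mem_range[0] < mem <= mem_range[1]):
            return threshold_mb * load_ratio
    return threshold_mb  # no interval matched: keep the current threshold

# Worked example from the text: CPU 40%, memory 70% -> interval B.
print(new_threshold(100, 0.40, 0.70))  # 150.0
```

The fallback when no interval matches is an assumption of this sketch; the patent does not specify that case.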
As can be seen from the embodiment shown in Fig. 3, the embodiments of the present invention determine at least one occupancy interval, each corresponding to a load ratio; find the occupancy interval matching the resource occupancy; multiply the load ratio corresponding to that interval by the threshold index; and determine the product as the new threshold index. Adjusting the threshold index through the determined occupancy intervals realizes, from another angle, adaptive adjustment of the threshold index while ensuring that the resource consumption of the terminal device meets expectations.
Fig. 4 shows the flow, provided by an embodiment of the present invention, of creating a buffer queue, which, as shown in Fig. 4, may comprise the following steps:
In S401, the file names of the selected training pictures are obtained, and a file name list is established based on the file names.
Compared with presetting a buffer queue of fixed capacity, the embodiments of the present invention can also configure the buffer queue according to the training pictures of each batch. Specifically, the file names of the training pictures selected for the current batch are obtained, and a file name list (list) is created based on them.
In S402, the buffer queue is created based on the file name list.
After the file name list corresponding to the current batch is created, the buffer queue is created from it, specifically through a creation function in the TensorFlow framework, such as the train.string_input_producer function, with the file name list added as a call parameter of the creation function. Once created, the capacity of the buffer queue equals the picture count of the current batch.
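The capacity-matches-batch property of S402 can be illustrated with Python's standard queue module standing in for the TensorFlow creation function (file names hypothetical):

```python
import queue

# File name list for the current batch (hypothetical names).
filename_list = ["cat_01.jpg", "cat_02.jpg", "dog_01.jpg"]

# Buffer queue sized to the batch: capacity equals the picture count.
buffer_queue = queue.Queue(maxsize=len(filename_list))
for name in filename_list:
    buffer_queue.put(name)

print(buffer_queue.full())   # True: capacity matches the batch exactly
print(buffer_queue.qsize())  # 3
```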
As can be seen from the embodiment shown in Fig. 4, the embodiments of the present invention obtain the file names of the selected training pictures, establish a file name list based on them, and create the buffer queue from the list. Creating the buffer queue according to the file name list matches its capacity to the training pictures of the current batch and improves the accuracy of buffer queue creation.
Fig. 5 shows another flow, provided by an embodiment of the present invention, of creating a buffer queue, which, as shown in Fig. 5, may comprise the following steps:
In S501, the paths of the selected training pictures are obtained, and the paths of the training pictures are converted into a path tensor.
Besides establishing a file name list, the embodiments of the present invention can also obtain the paths of the training pictures selected for the current batch and convert them into tensor data — named the path tensor here for ease of distinction. The paths can specifically be converted into the path tensor through the convert_to_tensor function in the TensorFlow framework.
In S502, the buffer queue is created based on the path tensor.
The buffer queue is created based on the path tensor obtained in step S501, specifically by adding the path tensor as a call parameter of the creation function, so that the capacity of the created buffer queue equals the number of entries in the added path tensor.
As can be seen from the embodiment shown in Fig. 5, the embodiments of the present invention obtain the paths of the selected training pictures, convert them into a path tensor, and create the buffer queue based on the path tensor. This realizes, in another way, the creation of a buffer queue matching the training pictures of the current batch, improving the accuracy and flexibility of buffer queue creation.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
Corresponding to the picture input method described in foregoing embodiments based on deep learning, Fig. 6 shows implementation of the present invention
The structural block diagram for the picture input unit based on deep learning that example provides, referring to Fig. 6, which includes:
an acquiring unit 61, configured to obtain a preset threshold value index, the threshold value index being the number of pictures that the deep learning network can support as input in one batch of training;
a selecting unit 62, configured to repeatedly select training pictures under a candidate path until the number of selected training pictures reaches the threshold value index, wherein candidate pictures are stored under the candidate path;
a creating unit 63, configured to create a buffering queue and add the selected training pictures to the buffering queue;
a decoding unit 64, configured to create a queue-reading object, read the training pictures in the buffering queue through the queue-reading object, and decode the read training pictures to obtain training tensor data, wherein each piece of training tensor data corresponds to one training picture;
an adding unit 65, configured to create a thread manager object and add all the training tensor data to a call queue through the thread manager object, the training tensor data in the call queue being available for the deep learning network to call.
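The cooperation of units 61 to 65 can be sketched as a small pipeline. The following is an illustrative pure-Python model under assumed names: the decode step is a placeholder string operation, and a single worker thread stands in for the thread manager object; it shows the data flow only, not the embodiment's TensorFlow implementation.

```python
import queue
import threading

def picture_input_pipeline(candidate_paths, threshold_index):
    # Unit 62: repeatedly select training pictures under the candidate
    # path until the threshold value index is reached.
    selected = candidate_paths[:threshold_index]

    # Unit 63: create the buffering queue and add the selected pictures.
    buffer_queue = queue.Queue(maxsize=len(selected))
    for path in selected:
        buffer_queue.put(path)

    # The call queue that the deep learning network would read from.
    call_queue = queue.Queue()

    def reader():
        # Unit 64: read pictures from the buffering queue and decode
        # each one into training tensor data (a placeholder string here).
        while not buffer_queue.empty():
            path = buffer_queue.get()
            call_queue.put("tensor:" + path)  # one tensor per picture

    # Unit 65: a worker thread stands in for the thread manager object
    # that moves all training tensor data into the call queue.
    worker = threading.Thread(target=reader)
    worker.start()
    worker.join()
    return [call_queue.get() for _ in range(call_queue.qsize())]
```

With three candidate pictures and a threshold value index of two, only the first two pictures enter the buffering queue and emerge from the call queue as decoded tensor data.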
Optionally, the adding unit 65 further includes:
an occupancy acquiring unit, configured to obtain the occupancy rate of device resources when the deep learning network calls the training tensor data;
an adjustment unit, configured to adjust the threshold value index according to the resource occupancy rate.
Optionally, the adjustment unit is configured to:
determine at least one occupancy interval, each occupancy interval corresponding to a load percentage;
search, among the at least one occupancy interval, for a target occupancy interval that matches the resource occupancy rate; and
multiply the load percentage corresponding to the target occupancy interval by the threshold value index, and determine the result of the product calculation as the new threshold value index.
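The interval lookup and product calculation can be written directly. The sketch below assumes illustrative interval boundaries and load percentages, which the embodiment leaves to the implementer; only the lookup-then-multiply logic comes from the description.

```python
def adjust_threshold_index(threshold_index, occupancy_rate, intervals):
    # `intervals` pairs an occupancy interval [low, high) with a load
    # percentage; the boundaries used below are illustrative assumptions.
    for (low, high), load_percentage in intervals:
        if low <= occupancy_rate < high:
            # Product calculation: the new threshold value index is the
            # matched load percentage times the current index.
            return int(round(threshold_index * load_percentage))
    return threshold_index  # no interval matched: keep the old index

# Example intervals: light load raises the index, heavy load halves it.
EXAMPLE_INTERVALS = [((0.0, 0.5), 1.2), ((0.5, 0.8), 1.0), ((0.8, 1.01), 0.5)]
```

For instance, with these example intervals an occupancy rate of 0.9 falls in the heavy-load interval, so a threshold value index of 100 is adjusted to 50.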
Optionally, the creating unit 63 is configured to:
obtain the file names of the selected training pictures, and establish a file name list based on the file names; and
create the buffering queue based on the file name list.
Optionally, the creating unit 63 is configured to:
obtain the paths of the selected training pictures, and convert the paths of the training pictures into a path tensor; and
create the buffering queue based on the path tensor.
Optionally, the selecting unit 62 further includes:
a retrieval unit, configured to retrieve all pictures that meet a preset picture format, and determine the paths of the retrieved pictures as the candidate path.
In summary, the deep-learning-based picture input apparatus provided by the embodiments of the present invention performs picture input in batches through double queues, which improves the stability of picture input and picture training and reduces the risk of memory collapse.
Fig. 7 is a schematic diagram of a terminal device provided by an embodiment of the present invention. As shown in Fig. 7, the terminal device 7 of this embodiment includes: a processor 70, a memory 71, and a computer program 72 stored in the memory 71 and executable on the processor 70, for example a deep-learning-based picture input program. When executing the computer program 72, the processor 70 implements the steps in each of the above deep-learning-based picture input method embodiments, such as steps S101 to S105 shown in Fig. 1. Alternatively, when executing the computer program 72, the processor 70 implements the functions of the units in each of the above deep-learning-based picture input apparatus embodiments, such as the functions of units 61 to 65 shown in Fig. 6.
Illustratively, the computer program 72 may be divided into one or more units, which are stored in the memory 71 and executed by the processor 70 to implement the present invention. The one or more units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program 72 in the terminal device 7. For example, the computer program 72 may be divided into an acquiring unit, a selecting unit, a creating unit, a decoding unit, and an adding unit, the specific functions of which are as follows:
an acquiring unit, configured to obtain a preset threshold value index, the threshold value index being the number of pictures that the deep learning network can support as input in one batch of training;
a selecting unit, configured to repeatedly select training pictures under a candidate path until the number of selected training pictures reaches the threshold value index, wherein candidate pictures are stored under the candidate path;
a creating unit, configured to create a buffering queue and add the selected training pictures to the buffering queue;
a decoding unit, configured to create a queue-reading object, read the training pictures in the buffering queue through the queue-reading object, and decode the read training pictures to obtain training tensor data, wherein each piece of training tensor data corresponds to one training picture;
an adding unit, configured to create a thread manager object and add all the training tensor data to a call queue through the thread manager object, the training tensor data in the call queue being available for the deep learning network to call.
The terminal device 7 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, the processor 70 and the memory 71. Those skilled in the art will understand that Fig. 7 is merely an example of the terminal device 7 and does not constitute a limitation on the terminal device 7, which may include more or fewer components than illustrated, combine certain components, or use different components; for example, the terminal device may also include input/output devices, network access devices, buses, and the like.
The processor 70 may be a central processing unit (Central Processing Unit, CPU), or may be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 71 may be an internal storage unit of the terminal device 7, such as a hard disk or memory of the terminal device 7. The memory 71 may also be an external storage device of the terminal device 7, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) equipped on the terminal device 7. Further, the memory 71 may include both an internal storage unit and an external storage device of the terminal device 7. The memory 71 is used to store the computer program and other programs and data required by the terminal device. The memory 71 may also be used to temporarily store data that has been output or is to be output.
It is clear to those skilled in the art that, for convenience and brevity of description, the division into the above functional units is used only as an example. In practical applications, the above functions may be allocated to different functional units as needed; that is, the internal structure of the terminal device may be divided into different functional units to complete all or part of the functions described above. The functional units in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the above integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units are only for the convenience of distinguishing them from one another and are not intended to limit the protection scope of the present application. For the specific working process of the units in the above system, reference may be made to the corresponding process in the foregoing method embodiments, and details are not repeated here.
In the above embodiments, the description of each embodiment has its own emphasis; for parts that are not described or recorded in detail in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed terminal device and method may be implemented in other ways. For example, the terminal device embodiments described above are merely illustrative; the division of the units is only a division by logical function, and in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the processes in the methods of the above embodiments by instructing relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, certain intermediate forms, or the like. The computer-readable medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments, or replace some of the technical features with equivalents; such modifications or replacements do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention in essence, and shall all be included within the protection scope of the present invention.
Claims (10)
1. A picture input method based on deep learning, characterized by comprising:
obtaining a preset threshold value index, the threshold value index being the number of pictures that a deep learning network can support as input in one batch of training;
repeatedly selecting training pictures under a candidate path until the number of selected training pictures reaches the threshold value index, wherein candidate pictures are stored under the candidate path;
creating a buffering queue, and adding the selected training pictures to the buffering queue;
creating a queue-reading object, reading the training pictures in the buffering queue through the queue-reading object, and decoding the read training pictures to obtain training tensor data, wherein each piece of training tensor data corresponds to one training picture; and
creating a thread manager object, and adding all the training tensor data to a call queue through the thread manager object, the training tensor data in the call queue being available for the deep learning network to call.
2. The picture input method according to claim 1, characterized in that, after the adding all the training tensor data to the call queue through the thread manager object, the method further comprises:
obtaining the occupancy rate of device resources when the deep learning network calls the training tensor data; and
adjusting the threshold value index according to the resource occupancy rate.
3. The picture input method according to claim 2, characterized in that the adjusting the threshold value index according to the resource occupancy rate comprises:
determining at least one occupancy interval, each occupancy interval corresponding to a load percentage; and
searching, among the at least one occupancy interval, for a target occupancy interval that matches the resource occupancy rate, multiplying the load percentage corresponding to the target occupancy interval by the threshold value index, and determining the result of the product calculation as the new threshold value index.
4. The picture input method according to claim 1, characterized in that the creating a buffering queue comprises:
obtaining the file names of the selected training pictures, and establishing a file name list based on the file names; and
creating the buffering queue based on the file name list.
5. The picture input method according to claim 1, characterized in that the creating a buffering queue comprises:
obtaining the paths of the selected training pictures, and converting the paths of the training pictures into a path tensor; and
creating the buffering queue based on the path tensor.
6. The picture input method according to claim 1, characterized in that, before the repeatedly selecting training pictures under a candidate path, the method further comprises:
retrieving all pictures that meet a preset picture format, and determining the paths of the retrieved pictures as the candidate path.
7. A picture input apparatus based on deep learning, characterized by comprising:
an acquiring unit, configured to obtain a preset threshold value index, the threshold value index being the number of pictures that a deep learning network can support as input in one batch of training;
a selecting unit, configured to repeatedly select training pictures under a candidate path until the number of selected training pictures reaches the threshold value index, wherein candidate pictures are stored under the candidate path;
a creating unit, configured to create a buffering queue and add the selected training pictures to the buffering queue;
a decoding unit, configured to create a queue-reading object, read the training pictures in the buffering queue through the queue-reading object, and decode the read training pictures to obtain training tensor data, wherein each piece of training tensor data corresponds to one training picture; and
an adding unit, configured to create a thread manager object and add all the training tensor data to a call queue through the thread manager object, the training tensor data in the call queue being available for the deep learning network to call.
8. A terminal device, characterized in that the terminal device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the following steps:
obtaining a preset threshold value index, the threshold value index being the number of pictures that a deep learning network can support as input in one batch of training;
repeatedly selecting training pictures under a candidate path until the number of selected training pictures reaches the threshold value index, wherein candidate pictures are stored under the candidate path;
creating a buffering queue, and adding the selected training pictures to the buffering queue;
creating a queue-reading object, reading the training pictures in the buffering queue through the queue-reading object, and decoding the read training pictures to obtain training tensor data, wherein each piece of training tensor data corresponds to one training picture; and
creating a thread manager object, and adding all the training tensor data to a call queue through the thread manager object, the training tensor data in the call queue being available for the deep learning network to call.
9. The terminal device according to claim 8, characterized in that, after the adding all the training tensor data to the call queue through the thread manager object, the steps further comprise:
obtaining the occupancy rate of device resources when the deep learning network calls the training tensor data; and
adjusting the threshold value index according to the resource occupancy rate.
10. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the picture input method according to any one of claims 1 to 6 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910500560.6A CN110378479B (en) | 2019-06-11 | 2019-06-11 | Image input method and device based on deep learning and terminal equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110378479A true CN110378479A (en) | 2019-10-25 |
CN110378479B CN110378479B (en) | 2023-04-14 |
Family
ID=68250087
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910500560.6A Active CN110378479B (en) | 2019-06-11 | 2019-06-11 | Image input method and device based on deep learning and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110378479B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN108182469A (en) * | 2017-12-27 | 2018-06-19 | Zhengzhou Yunhai Information Technology Co., Ltd. | Neural network model training method, system, apparatus and storage medium
CN108711138A (en) * | 2018-06-06 | 2018-10-26 | Beijing Institute of Graphic Communication | Grayscale picture colorization method based on a generative adversarial network
CN108885571A (en) * | 2016-04-05 | 2018-11-23 | Google LLC | Input of a batch machine learning model
US10210860B1 (en) * | 2018-07-27 | 2019-02-19 | Deepgram, Inc. | Augmented generalized deep learning with special vocabulary
Non-Patent Citations (4)
Title |
---|
JIECHENG ZHAO: "How should the batch size be chosen when training a neural network?", Zhihu, https://www.zhihu.com/question/61607442/answer/204586969 * |
一个人的游弋: "Introduction to TensorFlow, Part 13: Reading picture files", Zhihu, https://zhuanlan.zhihu.com/p/53630398/ * |
月半RAI: "Classic network reproduction series (2): SegNet", CSDN blog, https://blog.csdn.net/zlrai5895/article/details/80579094 * |
-牧野-: "The coordinator tf.train.Coordinator and the enqueue-thread starter tf.train.start_queue_runners in TensorFlow", CSDN blog, https://blog.csdn.net/dcrmg/article/details/79780331 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112085166A (en) * | 2020-09-10 | 2020-12-15 | 江苏提米智能科技有限公司 | Convolutional neural network model accelerated training method and device, electronic equipment and storage medium |
CN112085166B (en) * | 2020-09-10 | 2024-05-10 | 江苏提米智能科技有限公司 | Convolutional neural network model acceleration training method and device, electronic equipment and storage medium |
CN114237918A (en) * | 2022-02-28 | 2022-03-25 | 之江实验室 | Graph execution method and device for neural network model calculation |
CN114237918B (en) * | 2022-02-28 | 2022-05-27 | 之江实验室 | Graph execution method and device for neural network model calculation |
US11941514B2 (en) | 2022-02-28 | 2024-03-26 | Zhejiang Lab | Method for execution of computational graph in neural network model and apparatus thereof |
Also Published As
Publication number | Publication date |
---|---|
CN110378479B (en) | 2023-04-14 |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |