CN109242109A - Management method of depth model and server - Google Patents
- Publication number: CN109242109A (application CN201810739543.3A)
- Authority: CN (China)
- Prior art keywords: depth model, spare, server, check results, depth
- Prior art date: 2018-07-06
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
Embodiments of the present invention relate to the field of computer technology and disclose a management method and server for a depth model. In the embodiments, the management method is applied to a server and comprises: processing a client's service request with a spare depth model, and verifying the spare depth model's processing result of the service request to obtain a check result, wherein the spare depth model is a newly obtained depth model running in parallel with the currently used depth model; judging, according to the check result, whether the spare depth model meets the standard; if the judgment result is no, discarding the spare depth model; and if the judgment result is yes, discarding the currently used depth model. Embodiments of the present invention also provide a server. With these embodiments, the server can detect and verify a newly obtained depth model, which provides a foundation for guaranteeing the stability of service quality.
Description
Technical field
Embodiments of the present invention relate to the field of computer technology, and in particular to a management method and server for a depth model.
Background art
Deep learning is a new field in machine learning research. Its motivation is to establish neural networks that simulate the analytical learning of the human brain, interpreting data by imitating the brain's mechanisms. With the rapid development of computers and the Internet, deep learning occupies an increasingly important position in big data processing and related areas. Currently, model training and external service are often bound together inside a server: while providing service, the server also continuously trains, adjusting the parameters of the depth model to generate new depth models. When a new depth model is produced, it directly replaces the depth model currently in use, so that service is provided with the new model.
However, the inventor of the present application has found that the service accuracy of a depth model does not increase monotonically with the number of training iterations. In the prior art, directly replacing the currently used depth model with a new one easily causes a drop in service accuracy after a model update, so the stability of the server's service quality is poor.
Summary of the invention
The purpose of embodiments of the present invention is to provide a management method and server for a depth model that can detect and verify a newly obtained depth model and avoid drops in service accuracy caused by model updates, thereby providing a foundation for guaranteeing the stability of the server's service quality.
To solve the above technical problem, embodiments of the present invention provide a management method for a depth model, applied to a server, the method comprising:
processing a client's service request with a spare depth model, and verifying the spare depth model's processing result of the service request to obtain a check result, wherein the spare depth model is a newly obtained depth model running in parallel with the currently used depth model;
judging, according to the check result, whether the spare depth model meets the standard; if the judgment result is no, discarding the spare depth model; and if the judgment result is yes, discarding the currently used depth model.
Embodiments of the present invention also provide a server, comprising:
at least one processor; and
a memory communicatively connected with the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can carry out the above management method for a depth model.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program that, when executed by a processor, implements the above management method for a depth model.
Compared with the prior art, in embodiments of the present invention the server can take a newly obtained depth model as a spare depth model and let the currently used depth model and the spare depth model process service requests in parallel. The server can then detect and verify whether the spare depth model meets the standard. If it does not, the server can continue to provide external service with the currently used depth model; if it does, the server can discard the currently used depth model and provide external service with the spare depth model. In this way, drops in service accuracy caused by model updates are avoided, which provides a foundation for guaranteeing the stability of the server's service quality.
In addition, verifying the spare depth model's processing result of the service request to obtain the check result specifically includes: returning the spare depth model's processing result of the service request to the client; receiving the client's feedback information on the processing result; and obtaining the check result according to the feedback information. In this way, the check result is obtained from client feedback and thus matches user demand more closely, which improves the accuracy of the subsequent judgment of whether the spare depth model meets the standard and provides a foundation for improving the server's service quality.
In addition, the management method also processes the service request with the currently used depth model and returns the currently used depth model's processing result of the service request to the client. In this way, the client obtains as much relevant information as possible, which effectively increases the possibility that the content displayed by the client satisfies user demand and thus helps guarantee the stability of the server's service quality.
In addition, the management method also judges whether the feedback information is negative feedback information; if it is, the step of processing the service request with the currently used depth model is executed. In this way, the server returns the currently used depth model's processing result to the client only when the spare depth model's processing result fails to satisfy user demand, which not only guarantees the stability of the server's service quality but also avoids confusing the user by pushing excessive invalid information.
In addition, judging whether the spare depth model meets the standard according to the check results specifically includes: if the check result is the N-th check result of the spare depth model, calculating the spare depth model's verification pass rate and judging whether the pass rate is greater than or equal to a preset pass rate, where N is a positive integer. This provides one specific way of judging, according to the check results, whether the spare depth model meets the standard, increasing the flexibility of the embodiments.
In addition, if the check result is the N-th check result of the spare depth model and the check result is a pass, the method judges whether the M check results before it were consecutive passes, where M is a positive integer. This provides another specific way of judging, according to the check results, whether the spare depth model meets the standard, increasing the flexibility of the embodiments.
In addition, before processing the client's service request with the spare depth model, the method further includes: receiving a depth model pushed by a training-end server and taking the received depth model as the spare depth model. In this way, the model training end and the model service end are placed on two separate servers, which avoids wasting computing resources and improves their utilization.
In addition, the training-end server pushes depth models periodically, which provides a foundation for rapid model updates and for steady growth in service quality.
Brief description of the drawings
One or more embodiments are illustrated by the figures in the corresponding drawings. These exemplary illustrations do not constitute a limitation on the embodiments; elements with the same reference numerals in the drawings represent similar elements, and unless otherwise stated, the figures in the drawings are not drawn to scale.
Fig. 1 is a flowchart of the management method for a depth model according to the first embodiment;
Fig. 2 is a flowchart of the management method for a depth model according to the fifth embodiment;
Fig. 3 is a schematic diagram of the correspondence between the model training end and model service ends according to the fifth embodiment;
Fig. 4 is a schematic diagram of the server according to the sixth embodiment.
Detailed description of embodiments
To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, each embodiment is explained in detail below with reference to the drawings. However, those skilled in the art will understand that many technical details are set forth in each embodiment so that the reader may better understand the application; the claimed technical solutions of the application can still be realized without these technical details, and with various changes and modifications based on the following embodiments.
The first embodiment of the present invention relates to a management method for a depth model, whose flow is shown in Fig. 1. The management method in this embodiment is applied to a server and is described in detail below:
Step 101: process the client's service request with the spare depth model, and verify the spare depth model's processing result of the service request to obtain a check result.
Specifically, the server already has a currently used depth model. When the server obtains a new depth model, it does not immediately replace the currently used depth model with it; instead, it takes the newly obtained depth model as a spare depth model, so that the spare depth model and the currently used depth model process the client's service requests in parallel.
In one embodiment, check data can be pre-stored in the server. When the server receives the client's service request, it can process the request with the spare depth model and then verify the spare depth model's processing result against the preset check data on its own, thereby obtaining the check result.
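A minimal sketch of this self-check, assuming the pre-stored check data maps a request to its expected processing result (the dictionary contents and function names are hypothetical; the patent does not specify the format of the check data):

```python
# Hypothetical pre-stored check data: service request -> expected result.
CHECK_DATA = {
    "label_shipping": "answer_shipping",
    "label_refund": "answer_refund",
}

def self_check(spare_model, request):
    """Verify the spare model's processing result against preset check data.

    Returns True (pass) only when check data exists for the request and
    the spare model's result matches it; otherwise False (fail).
    """
    expected = CHECK_DATA.get(request)
    return expected is not None and spare_model(request) == expected
```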
In another embodiment, the server obtains the check result from the client's feedback information. The check result then matches user demand more closely, which improves the accuracy of the subsequent judgment of whether the spare depth model meets the standard and provides a foundation for improving the server's service quality.
Specifically, the server returns the spare depth model's processing result of the service request to the client, receives the client's feedback information on the processing result, and obtains the check result from the feedback. Take an intelligent customer-service server as an example: when the user enters a label corresponding to a question on the client, the client sends the current label to the server as a service request. The spare depth model in the server then retrieves question answers related to the label in the service request and returns them to the client as the processing result, so that the client can display the received result for the user to view. Note that when displaying the processing result, the client also displays inquiry information (e.g., "Did the above answer solve your problem?") and provides "Yes" and "No" buttons. If the user selects "Yes", the feedback information is yes; if the user selects "No", the feedback information is no. If the server detects that the feedback is yes, the check result of the spare depth model is considered a pass; if the feedback is no, the check result is considered a fail.
However, yes/no feedback is only an easy-to-understand example; in practice, the feedback information can take other forms. That is, when obtaining the spare depth model's check result from the feedback information, the server can judge whether the feedback information is negative feedback: if it is, the server considers the check result a fail; if it is not, the server considers the check result a pass.
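The mapping from feedback information to a check result can be sketched as below. The concrete set of negative feedback forms is a hypothetical example; the point is only that the server tests for negative feedback rather than for a literal "yes":

```python
# Hypothetical forms that the server treats as negative feedback.
NEGATIVE_FEEDBACK = {"no", "unsolved", "dislike"}

def check_result_from_feedback(feedback):
    """Return True (verification pass) unless the feedback is negative."""
    return feedback.strip().lower() not in NEGATIVE_FEEDBACK
```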
Step 102: judge, according to the check results, whether the spare depth model meets the standard. If the judgment result is yes, execute step 103; otherwise, execute step 104.
Specifically, the server judges whether this check result is the N-th check result of the spare depth model. If it is, the server calculates the spare depth model's verification pass rate and judges whether the pass rate is greater than or equal to a preset pass rate. If the verification pass rate is greater than or equal to the preset pass rate, the output of step 102 is yes; otherwise, the output of step 102 is no. The value of N and the preset pass rate can be set in advance by technicians and stored in the server.
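The pass-rate rule of step 102 can be sketched as follows (a minimal illustration; returning `None` before the N-th check result arrives, to mean "no judgment yet", is an assumption of this sketch):

```python
def meets_standard_by_pass_rate(check_results, n, preset_pass_rate):
    """Judge the spare model once its N-th check result has arrived.

    check_results is the pass/fail history (True = verification pass);
    n and the preset pass rate are chosen in advance by technicians.
    """
    if len(check_results) < n:
        return None  # the N-th check result has not arrived yet
    pass_rate = sum(check_results[:n]) / n
    return pass_rate >= preset_pass_rate
```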
Step 103: discard the currently used depth model.
Step 104: discard the spare depth model.
Compared with the prior art, in embodiments of the present invention the server can take a newly obtained depth model as the spare depth model and let the currently used depth model and the spare depth model process service requests in parallel. The server can then detect and verify whether the spare depth model meets the standard. If it does not, the server can continue to provide external service with the currently used depth model; if it does, the server can discard the currently used depth model and provide external service with the spare depth model. In this way, drops in service accuracy caused by model updates are avoided, which provides a foundation for guaranteeing the stability of the server's service quality.
The second embodiment of the present invention relates to a management method for a depth model. The second embodiment improves on the first; the main improvement is that in the second embodiment the server also returns the currently used depth model's processing result to the client, which effectively guarantees the stability of the server's service quality.
Specifically, after receiving the client's service request, the server can also process it with the currently used depth model and return the currently used depth model's processing result of the service request to the client. In this way, the client obtains as much relevant information as possible, which effectively increases the possibility that the content displayed by the client satisfies user demand and thus helps guarantee the stability of the server's service quality. For example, with an intelligent customer-service server, the client displays both the spare depth model's processing result and the current depth model's processing result, making it more likely that the question the user entered on the client is resolved.
In this embodiment, the server feeds back the spare depth model's processing result (denoted the first processing result below) together with the current depth model's processing result (denoted the second processing result below) to the client, so that when viewing the content displayed by the client, the user can select the processing result that better matches his or her demand. The client can also take the user's selection as the feedback information and upload it to the server. For example, if the user clicks the first processing result to view it, the client uploads the first processing result as the selected feedback, and the server obtains a check result of pass for the spare depth model; if the user clicks the second processing result to view it, the client uploads the second processing result as the selected feedback, and the server obtains a check result of fail for the spare depth model. The client can determine the user's selection from information such as the user's click operations and the time spent viewing each processing result.
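One way to sketch this dual-result scheme (the dictionary keys and the convention that selecting the first result means a pass are hypothetical renderings of the description above):

```python
def respond_with_both(request, current_model, spare_model):
    """Return the spare model's result (first) and the current model's
    result (second) together, so the user can choose between them."""
    return {
        "first": spare_model(request),     # first processing result
        "second": current_model(request),  # second processing result
    }

def check_result_from_selection(selected_key):
    """The user's selection is the feedback: choosing the spare model's
    result is a verification pass, choosing the current model's a fail."""
    return selected_key == "first"
```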
The third embodiment of the present invention relates to a management method for a depth model. It is roughly the same as the second embodiment; the main difference is the timing with which the server pushes the first and second processing results. It is described in detail below:
Specifically, upon receiving the client's service request, the server processes it with the spare depth model, returns the spare depth model's processing result to the client, and then waits for the client's feedback. After receiving the feedback, the server judges whether it is negative feedback; only when the feedback is negative does the server process the service request with the currently used depth model and return the currently used depth model's processing result to the client. In this way, when the spare depth model's processing result fails to satisfy user demand, the server promptly returns the currently used depth model's processing result to the client, which not only guarantees the stability of the server's service quality but also avoids confusing the user by pushing excessive invalid information.
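The third embodiment's timing can be sketched as follows. The `get_feedback` callable and the literal "negative" stand in for the client round-trip and are hypothetical:

```python
def serve_spare_first(request, current_model, spare_model, get_feedback):
    """Send the spare model's result first; fall back to the currently
    used model only when the client's feedback is negative."""
    spare_result = spare_model(request)
    feedback = get_feedback(spare_result)  # e.g. blocks until the client replies
    check_passed = feedback != "negative"
    # Only on negative feedback is the current model's result returned too.
    fallback = None if check_passed else current_model(request)
    return spare_result, fallback, check_passed
```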
The fourth embodiment of the present invention relates to a management method for a depth model. It is roughly the same as the first embodiment; the main difference is the way of judging, according to the check results, whether the spare depth model meets the standard. It is described in detail below:
In this embodiment, the server judges whether the spare depth model meets the standard as follows: the server detects whether this check result is the N-th check result of the spare depth model. If it is, the server judges whether this check result is a pass. If it is a pass, the server then detects whether the M check results immediately before it were consecutive passes. If there were M consecutive passes before this check result, the server determines that the spare depth model meets the standard; otherwise, the server determines that the spare depth model does not meet the standard. The values of N and M can be set in advance by technicians and stored in the server.
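The consecutive-pass rule can be sketched as follows (treating "fewer than M results before the N-th" as not meeting the standard is an assumption of this sketch):

```python
def meets_standard_consecutive(check_results, n, m):
    """Fourth embodiment: at the N-th check result, require that result to
    be a pass preceded immediately by M consecutive passes."""
    if len(check_results) < n:
        return None              # the N-th check result has not arrived yet
    if not check_results[n - 1]:
        return False             # the N-th check result itself failed
    if n - 1 < m:
        return False             # fewer than M results exist before it
    return all(check_results[n - 1 - m : n - 1])
```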
The fifth embodiment of the present invention relates to a management method for a depth model, whose flow is shown in Fig. 2. The fifth embodiment improves on any one of the above embodiments; the main improvement is that the model training end and the model service end are placed on two separate servers, which avoids wasting computing resources and improves their utilization. It is described in detail below:
Steps 202 to 205 in this embodiment are substantially the same as steps 101 to 104 in the first embodiment; to reduce repetition they are not described again, and only the differing parts are explained below:
Step 201: receive the depth model pushed by the training-end server, and take the received depth model as the spare depth model.
Specifically, technicians can set the training-end server to push depth models periodically, which provides a foundation for rapid model updates and for steady growth in service quality. For example, the period can be one week.
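The periodic push can be sketched as a loop on the training-end server (the `train_once` and `push_to_service_end` callables, and the `max_pushes` cap added so the loop can terminate, are hypothetical):

```python
import time

def training_end_loop(train_once, push_to_service_end, period_seconds, max_pushes=None):
    """Training-end server: retrain and push a new depth model each period
    (e.g. one week). The service end takes each pushed model as its spare."""
    pushes = 0
    while max_pushes is None or pushes < max_pushes:
        push_to_service_end(train_once())  # push the freshly trained model
        pushes += 1
        time.sleep(period_seconds)         # wait for the next period
    return pushes
```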
More specifically, if the model training end and model service end are both deployed on every server, each server is an independent unit that does its own model training while also providing external service, and model training often consumes a great deal of computing resources. Yet servers with the same function perform identical training, and training is not shared between servers, so deploying both ends on one server leads to low utilization of computing resources. In this embodiment, the server running the management method is the model service end. By placing the model training end and model service end on separate servers, one model training end can correspond to multiple model service ends, as shown in Fig. 3, which reduces repeated training of the same model on different servers, guarantees the utilization of computing resources, and makes the model service end lightweight.
The division of the above methods into steps is merely for clarity of description. In implementation, steps may be merged into one step, or a step may be split into multiple steps; as long as the same logical relationship is included, they are within the protection scope of this patent. Adding insignificant modifications to an algorithm or flow, or introducing insignificant designs, without changing the core design of the algorithm and flow, is also within the protection scope of this patent.
The sixth embodiment of the present invention relates to a server, shown in Fig. 4, comprising: at least one processor 301; and a memory 302 communicatively connected with the at least one processor 301. The memory 302 stores instructions executable by the at least one processor 301, and the instructions are executed by the at least one processor 301 so that the at least one processor 301 can carry out the management method for a depth model in the above method embodiments.
The memory 302 and the processor 301 are connected by a bus. The bus may comprise any number of interconnected buses and bridges linking the various circuits of the one or more processors 301 and the memory 302. The bus may also link various other circuits such as peripheral devices, voltage regulators, and power-management circuits, all of which are well known in the art and therefore not described further here. A bus interface provides the interface between the bus and a transceiver. The transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other apparatuses over a transmission medium. Data processed by the processor is transmitted over a wireless medium through an antenna; the antenna also receives data and transfers it to the processor.
The processor 301 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 302 may be used to store data used by the processor 301 when performing operations.
Compared with the prior art, embodiments of the present invention avoid drops in service accuracy caused by model updates, which provides a foundation for guaranteeing the stability of the server's service quality.
The seventh embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements the above method embodiments.
Compared with the prior art, embodiments of the present invention avoid drops in service accuracy caused by model updates, which provides a foundation for guaranteeing the stability of the server's service quality.
That is, those skilled in the art will understand that all or part of the steps of the above embodiment methods can be completed by a program instructing the relevant hardware. The program is stored in a storage medium and includes instructions to cause a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods described in each embodiment of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), and magnetic or optical disks.
Those skilled in the art will understand that the above embodiments are specific embodiments for realizing the present invention, and that in practical applications various changes can be made to them in form and detail without departing from the spirit and scope of the present invention.
Claims (10)
1. A management method for a depth model, characterized in that it is applied to a server and comprises:
processing a client's service request with a spare depth model, and verifying the spare depth model's processing result of the service request to obtain a check result, wherein the spare depth model is a newly obtained depth model running in parallel with a currently used depth model;
judging, according to the check result, whether the spare depth model meets the standard; if the judgment result is no, discarding the spare depth model; and if the judgment result is yes, discarding the currently used depth model.
2. The management method for a depth model according to claim 1, characterized in that verifying the spare depth model's processing result of the service request to obtain a check result specifically comprises:
returning the spare depth model's processing result of the service request to the client;
receiving the client's feedback information on the processing result; and
obtaining the check result according to the feedback information.
3. The management method for a depth model according to claim 2, characterized by further comprising:
processing the service request with the currently used depth model, and returning the currently used depth model's processing result of the service request to the client.
4. The management method for a depth model according to claim 3, characterized by further comprising:
judging whether the feedback information is negative feedback information;
wherein, if the judgment result is yes, the step of processing the service request with the currently used depth model is executed.
5. The management method for a depth model according to claim 1, characterized in that judging, according to the check result, whether the spare depth model meets the standard specifically comprises:
if the check result is the N-th check result of the spare depth model, calculating the spare depth model's verification pass rate; and
judging whether the verification pass rate is greater than or equal to a preset pass rate; wherein N is a positive integer.
6. The management method for a depth model according to claim 1, characterized in that judging, according to the check result, whether the spare depth model meets the standard specifically comprises:
if the check result is the N-th check result of the spare depth model and the check result is a pass, judging whether there were M consecutive passes before the check result; wherein M is a positive integer.
7. The management method for a depth model according to claim 1, characterized in that before processing the client's service request with the spare depth model, the method further comprises:
receiving a depth model pushed by a training-end server, and taking the received depth model as the spare depth model.
8. The management method for a depth model according to claim 7, characterized in that the training-end server pushes the depth model periodically.
9. A server, characterized by comprising:
at least one processor; and
a memory communicatively connected with the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can carry out the management method for a depth model according to any one of claims 1 to 8.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the management method for a depth model according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201810739543.3A (granted as CN109242109B) | 2018-07-06 | 2018-07-06 | Management method of depth model and server
Publications (2)
Publication Number | Publication Date |
---|---|
CN109242109A true CN109242109A (en) | 2019-01-18 |
CN109242109B CN109242109B (en) | 2022-05-10 |
Family
ID=65071881
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810739543.3A Active CN109242109B (en) | 2018-07-06 | 2018-07-06 | Management method of depth model and server |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109242109B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101001183A (en) * | 2007-01-10 | 2007-07-18 | 网之易信息技术(北京)有限公司 | Test method and system for network application software |
US20160299755A1 (en) * | 2013-12-18 | 2016-10-13 | Huawei Technologies Co., Ltd. | Method and System for Processing Lifelong Learning of Terminal and Apparatus |
CN106227792A (en) * | 2016-07-20 | 2016-12-14 | 百度在线网络技术(北京)有限公司 | Method and apparatus for pushed information |
CN106448670A (en) * | 2016-10-21 | 2017-02-22 | 竹间智能科技(上海)有限公司 | Dialogue automatic reply system based on deep learning and reinforcement learning |
CN106610854A (en) * | 2015-10-26 | 2017-05-03 | 阿里巴巴集团控股有限公司 | Model update method and device |
CN106789595A (en) * | 2017-01-17 | 2017-05-31 | 北京诸葛找房信息技术有限公司 | Information-pushing method and device |
CN107273436A (en) * | 2017-05-24 | 2017-10-20 | 北京京东尚科信息技术有限公司 | The training method and trainer of a kind of recommended models |
CN107330522A (en) * | 2017-07-04 | 2017-11-07 | 北京百度网讯科技有限公司 | Method, apparatus and system for updating deep learning model |
CN107563280A (en) * | 2017-07-24 | 2018-01-09 | 南京道熵信息技术有限公司 | Face identification method and device based on multi-model |
Also Published As
Publication number | Publication date |
---|---|
CN109242109B (en) | 2022-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wang et al. | Fast adaptive task offloading in edge computing based on meta reinforcement learning | |
CN106020715B (en) | Storage pool capacity management | |
CN105045831B (en) | A kind of information push method and device | |
CN109583904A (en) | Training method, impaired operation detection method and the device of abnormal operation detection model | |
CN109101624A (en) | Dialog process method, apparatus, electronic equipment and storage medium | |
CN104899315A (en) | Method and device for pushing user information | |
CN106453608B (en) | A kind of background request adaptive scheduling algorithm of the mobile application based on cloud | |
CN108805611A (en) | Advertisement screening technique and device | |
CN110363427A (en) | Model quality evaluation method and apparatus | |
CN108133390A (en) | For predicting the method and apparatus of user behavior and computing device | |
CN101465752A (en) | Method and system for ordering linkman | |
CN109918574A (en) | Item recommendation method, device, equipment and storage medium | |
CN110414763A (en) | Talent's selection device, the talent select system, talent's selection method and program | |
CN109102142A (en) | A kind of personnel evaluation methods and system based on evaluation criterion tree | |
CN103561085B (en) | A kind of service cloud evaluation method based on service level agreement constraint | |
CN110069602A (en) | Corpus labeling method, device, server and storage medium | |
CN109741818A (en) | Resource allocation management method and device are intervened in medical inferior health based on artificial intelligence | |
CN109635192A (en) | Magnanimity information temperature seniority among brothers and sisters update method and platform towards micro services | |
Fuller et al. | Learning-agent-based simulation for queue network systems | |
Xu et al. | Distributed no-regret learning in multiagent systems: Challenges and recent developments | |
Tang et al. | Digital twin assisted resource allocation for network slicing in industry 4.0 and beyond using distributed deep reinforcement learning | |
CN107608781A (en) | A kind of load predicting method, device and network element | |
CN111046156B (en) | Method, device and server for determining rewarding data | |
CN109242109A (en) | The management method and server of depth model | |
CN110348807A (en) | A kind of information processing method and relevant apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||