CN109948803A - Algorithm model optimization method, device and equipment - Google Patents

Algorithm model optimization method, device and equipment

Info

Publication number
CN109948803A
CN109948803A
Authority
CN
China
Prior art keywords
algorithm model
optimization
equipment
optimized
optimal parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910182371.9A
Other languages
Chinese (zh)
Inventor
金玲玲
饶东升
罗腾法
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Lingtu Huishi Technology Co Ltd
Original Assignee
Shenzhen Lingtu Huishi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Lingtu Huishi Technology Co Ltd filed Critical Shenzhen Lingtu Huishi Technology Co Ltd
Priority to CN201910182371.9A priority Critical patent/CN109948803A/en
Publication of CN109948803A publication Critical patent/CN109948803A/en
Pending legal-status Critical Current

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

This application discloses an algorithm model optimization method, device, and equipment. The method comprises: inserting real-time data into a first historical data set to form a first training set; training an algorithm model to be optimized on the first training set to obtain first optimal parameters; judging whether the precision of the optimized algorithm model meets a preset precision requirement; and, if so, optimizing the algorithm model to be optimized according to the first optimal parameters, otherwise sending an optimization request to a second device. The method provides an optimization service at the near end and resolves the conflict between demand and insufficient hardware resources.

Description

Algorithm model optimization method, device and equipment
Technical field
This application relates to the field of deep learning technology, and in particular to an algorithm model optimization method, device, and equipment.
Background technique
With the arrival of the big data era, machine learning algorithms, and especially deep learning algorithms suited to large-scale data, are receiving ever wider attention and application, including in speech recognition, image recognition, and natural language processing. The biggest difference between deep learning and traditional pattern recognition methods is that deep learning can continuously and automatically optimize an algorithm model using big data; as the amount of data grows, continuous optimization makes the algorithm model more and more accurate.
Conventionally, because a cloud site aggregates a large amount of physical hardware resources, both computation and storage are usually placed in a remote cloud: the cloud optimizes the algorithm model using big data and provides computing services to users with the optimized model. However, because a cloud center analyzes a large volume of data, the machine learning workload is excessive, which slows down the cloud server; bandwidth limits further prevent the cloud from serving real-time business. Providing an optimization service at the near end therefore becomes a demand, but near-end devices have insufficient hardware resources, so how to resolve the conflict between this demand and the resource shortage becomes an urgent problem.
Summary of the invention
In view of the above problems, embodiments of the present invention provide an algorithm model optimization method, device, and equipment that can solve the technical problems mentioned in the background section.
An algorithm model optimization method according to an embodiment of the invention is applied to a first device and comprises: inserting real-time data into a first historical data set to form a first training set; training an algorithm model to be optimized on the first training set to obtain first optimal parameters; judging whether the precision of the optimized algorithm model meets a preset precision requirement; and, if so, optimizing the algorithm model to be optimized according to the first optimal parameters, otherwise sending an optimization request to a second device.
An algorithm model optimization method according to an embodiment of the invention is applied to a second device and comprises: receiving an optimization request for an algorithm model to be optimized sent by a first device; and responding to the optimization request by returning second optimal parameters, so that the first device optimizes the algorithm model to be optimized according to the second optimal parameters, wherein the second optimal parameters are obtained by training the algorithm model to be optimized on a second training set, and the second training set is formed by inserting the real-time data into a second historical data set.
An algorithm model optimization device according to an embodiment of the invention is applied to a first device and comprises: an insertion module for inserting real-time data into a first historical data set to form a first training set; a training module for training an algorithm model to be optimized on the first training set to obtain first optimal parameters; a judgment module for judging whether the precision of the optimized algorithm model meets a preset precision requirement, calling an optimization module if so and a sending module otherwise; the optimization module, for optimizing the algorithm model to be optimized according to the first optimal parameters; and the sending module, for sending an optimization request to a second device.
An algorithm model optimization device according to an embodiment of the invention is applied to a second device and comprises: a second receiving module for receiving an optimization request for an algorithm model to be optimized sent by a first device; and a return module for responding to the optimization request by returning second optimal parameters, so that the first device optimizes the algorithm model to be optimized according to the second optimal parameters, wherein the second optimal parameters are obtained by training the algorithm model to be optimized on a second training set, and the second training set is formed by inserting real-time data into a second historical data set.
An electronic device according to an embodiment of the invention comprises: a processor; and a memory on which executable instructions are stored, wherein the processor is configured to execute the executable instructions to implement the aforementioned algorithm model optimization method.
A computer-readable storage medium according to an embodiment of the invention stores a computer program comprising executable instructions which, when executed by a processor, implement the aforementioned algorithm model optimization method.
It can be seen from the above that, in the scheme of the embodiments of the invention, the first device first optimizes the algorithm model using a first training set composed of real-time data and a locally stored first historical data set. If the precision of the optimized algorithm model meets the preset precision requirement, the optimized model can directly provide computing services, satisfying most real-time business demands. If the precision does not meet the requirement, the first device can request the second device to optimize the algorithm model using a second training set composed of the real-time data and a second historical data set; the second device's hardware resources can exceed the first device's, which resolves the conflict between demand and insufficient hardware resources.
Detailed description of the invention
Fig. 1 is an exemplary system architecture diagram to which the present invention may be applied;
Fig. 2 is the flow chart of the algorithm model optimization method of one embodiment of the invention;
Fig. 3 is the flow chart of the algorithm model optimization method of another embodiment of the present invention;
Fig. 4 is the flow chart of the algorithm model optimization method of further embodiment of this invention;
Fig. 5 is a schematic diagram of an algorithm model optimization device of one embodiment of the invention;
Fig. 6 is a schematic diagram of an algorithm model optimization device of another embodiment of the invention;
Fig. 7 is the schematic diagram of the electronic equipment of one embodiment of the invention.
Specific embodiment
The subject matter described herein is now discussed with reference to example embodiments. It should be understood that these embodiments are discussed only to help those skilled in the art better understand and implement the subject matter described herein, not to limit the protection scope, applicability, or examples set forth in the claims. The function and arrangement of the elements discussed can be changed without departing from the protection scope of the present disclosure. Each example may omit, substitute, or add various processes or components as needed. For example, the described methods may be executed in an order different from the one described, and steps may be added, omitted, or combined. In addition, features described in relation to some examples may also be combined in other examples.
As used herein, the term "comprising" and its variants are open terms meaning "including but not limited to". The term "based on" means "based at least in part on". The terms "one embodiment" and "an embodiment" mean "at least one embodiment". The term "another embodiment" means "at least one other embodiment". The terms "first", "second", and so on may refer to different or identical objects. Other definitions, whether explicit or implicit, may be included here; unless the context clearly indicates otherwise, the definition of a term is consistent throughout the specification.
In the following description, specific details such as particular system structures, interfaces, and techniques are set forth for illustration rather than limitation, in order to provide a thorough understanding of the invention. However, it will be clear to those skilled in the art that the invention may also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known devices, circuits, and methods are omitted so that unnecessary detail does not obscure the description of the invention.
It should be noted that, in the absence of conflict, the embodiments of the invention and the features in the embodiments can be combined with one another. The invention is described in detail below with reference to the accompanying drawings and embodiments.
Fig. 1 shows an exemplary system architecture to which an embodiment of the algorithm model optimization method or device of the invention can be applied.
As shown in Fig. 1, the system 100 may include a terminal 102, a proximal device 104, a middle-layer device 106, and a cloud device 108. The number of devices shown merely exemplifies the principle of the invention and does not limit it. The system 100 can be organized in a hierarchical structure, in which the cloud device 108 is at the top layer, the proximal device 104 is at the bottom layer, and the middle-layer device 106 can be arranged between the cloud device 108 and the proximal device 104. It should be noted that the topology may differ in other systems, which may, for example, include multiple middle layers or no middle layer at all.
The terminal 102 sends a computation request to the system 100 in response to a user operation; that is, the user can send a computation request to the system 100 through the terminal 102 and receive the calculated result the system 100 returns. The terminal 102 may be a STB (Set-Top Box), a router, a UE (User Equipment), or the like. In the embodiments of the application, the terminal 102 can be implemented in various forms, including mobile terminals such as mobile phones, smartphones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), and PMPs (portable media players), as well as fixed terminals such as desktop computers.
The proximal device 104 can be an edge server. The proximal device 104 connects to the terminal 102, receiving the computation requests the terminal 102 sends and returning calculated results to the terminal 102. The proximal device 104 is loaded with the algorithm model that provides the computing service; it stores locally the real-time data corresponding to the computation requests collected from the terminal 102 and, together with local historical data, optimizes the algorithm model through deep learning.
The middle-layer device 106 can be a node server. The middle-layer device 106 connects to proximal devices 104, receiving the optimization requests they send and returning optimization results to them. The middle-layer device 106 is loaded with the algorithm model; it can store locally the real-time data corresponding to the computation requests sent by the terminal 102, collected through the proximal device 104, and, upon receiving an optimization request from the proximal device 104, it uses the collected real-time data together with local historical data to optimize the algorithm model through deep learning.
The cloud device 108 can be a cloud central server, which can connect to middle-layer devices 106, receiving the optimization requests they send and returning optimization results to them. The cloud device 108 is loaded with the algorithm model; it can store locally the real-time data corresponding to the computation requests of the terminal 102, collected through the middle-layer device 106 and the proximal device 104, and, upon receiving an optimization request from the middle-layer device 106, it uses the collected real-time data together with local historical data to optimize the algorithm model through deep learning. In other implementations, the cloud device 108 can also connect to the proximal device 104 directly and receive the optimization requests the proximal device 104 sends.
In the embodiments of the application, the proximal device 104 optimizes the algorithm model first. If the optimization result meets the preset condition, the proximal device 104 continues to provide the computing service to the terminal 102 with the optimized algorithm model; if not, it sends an optimization request to the middle-layer device 106, which performs the optimization. If the middle-layer device 106's optimization result meets the preset condition, the middle-layer device 106 sends the result to the proximal device 104, which optimizes the algorithm model accordingly and continues serving the terminal 102 with the optimized model; if not, the middle-layer device 106 sends an optimization request to the cloud device 108, which performs the optimization, and the cloud device 108's optimization result is finally sent back to the proximal device 104 through the middle-layer device 106.
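The escalation flow just described can be sketched in a few lines of Python. This is an illustrative reduction under stated assumptions, not the patent's implementation: the tier callables, the names, and the precision values are hypothetical stand-ins for the training runs performed by the proximal device 104, the middle-layer device 106, and the cloud device 108.

```python
# Hypothetical sketch of the three-tier escalation: each tier attempts to
# optimize the model locally; if its precision misses the preset requirement,
# the optimization request escalates to the next tier up.

def optimize_with_escalation(tiers, required_precision):
    """tiers: callables ordered nearest-first; each returns (precision, params)."""
    precision, params = 0.0, None
    for tier in tiers:
        precision, params = tier()
        if precision >= required_precision:
            break  # this tier's optimal parameters flow back to the proximal device
    return params  # the top tier's result is used even if still below the target

# Toy stand-ins for the proximal, middle-layer, and cloud training runs.
proximal = lambda: (0.80, "proximal-params")
middle = lambda: (0.90, "middle-params")
cloud = lambda: (0.97, "cloud-params")

tiers = [proximal, middle, cloud]
best = optimize_with_escalation(tiers, required_precision=0.95)
```

With a requirement of 0.95 only the cloud tier suffices, so `best` is the cloud result; a looser requirement of 0.85 would stop the escalation at the middle layer.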
It should be noted that the algorithm model optimization method applied to the first device in the embodiments of the invention is generally executed by the proximal device 104 or the middle-layer device 106, while the method applied to the second device is generally executed by the middle-layer device 106 or the cloud device 108. Correspondingly, the algorithm model optimization device applied to the first device is generally located in the proximal device 104 or the middle-layer device 106, and the algorithm model optimization device applied to the second device is generally located in the middle-layer device 106 or the cloud device 108.
Fig. 2 shows a flow chart of the algorithm model optimization method provided by one embodiment of the application; the method can be applied to the system architecture shown in Fig. 1 and may comprise the following steps.
S202: The terminal generates a computation request.
In the embodiments of the application, the terminal can generate a computation request for a target object according to the data corresponding to the target object provided by the user.
S204: The terminal sends the computation request to the proximal device.
In the embodiments of the application, the computation request can be an image recognition, speech recognition, or natural language processing computation request; the request carries the target object to be computed, such as image data or voice data.
S206: The proximal device obtains a calculated result from the algorithm model according to the computation request.
In the embodiments of the application, the proximal device can extract feature data from the target object and use the feature data as the input of the algorithm model, whose output is the calculated result. The algorithm model in this application can, for example, be a detection model, such as but not limited to a CNN (Convolutional Neural Network), an RNN (Recurrent Neural Network), or another type of neural network model.
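One way to picture S206 is the minimal sketch below. It uses a linear model as a stand-in for the CNN/RNN inference the text mentions; `extract_features` and `AlgorithmModel` are hypothetical names, and real feature extraction would involve image or audio preprocessing rather than a simple cast to floats.

```python
def extract_features(target_object):
    # Placeholder featurizer: real embodiments would preprocess image/voice data.
    return [float(v) for v in target_object]

class AlgorithmModel:
    """Linear stand-in for the neural-network model loaded on the proximal device."""

    def __init__(self, weights):
        self.weights = weights

    def predict(self, features):
        # Feature data goes in as the model input; the output is the calculated result.
        return sum(w * f for w, f in zip(self.weights, features))

model = AlgorithmModel(weights=[0.5, 0.5])
result = model.predict(extract_features([2.0, 4.0]))  # 0.5*2 + 0.5*4 = 3.0
```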
S208: The proximal device returns the calculated result to the terminal.
S210: The terminal obtains the user's feedback on the calculated result.
In the embodiments of the application, the user can give feedback on the calculated result. In specific implementations the feedback may be positive or negative: positive feedback indicates the user accepts the calculated result, while negative feedback indicates the user does not accept it and can also include corrected information.
S212: The terminal sends the feedback information to the proximal device.
S214: The proximal device determines the real-time data corresponding to the computation request according to the feedback information.
In the embodiments of the application, the real-time data includes the target object and the associated calculated result. When the feedback information is negative, the calculated result is the result corrected according to the feedback, so the real-time data can be used to optimize the algorithm model.
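The feedback handling of S210 through S214 can be sketched as below. The function name, the feedback dictionary shape, and the sample values are all assumptions made for illustration; the point is only that negative feedback may carry a correction that replaces the model's output before the sample is stored as real-time data.

```python
def realtime_sample(target, model_result, feedback):
    """Build the (target object, calculated result) pair stored as real-time data."""
    if feedback.get("positive"):
        return (target, model_result)
    # Negative feedback: if the user supplied a correction, store that as the label.
    return (target, feedback.get("correction", model_result))

# A user rejects the model's "cat" label and corrects it to "dog".
sample = realtime_sample("img-001", "cat", {"positive": False, "correction": "dog"})
```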
S216: The proximal device inserts the real-time data into the first historical data set to form the first training set.
In the embodiments of the application, the real-time data can include the data corresponding to a single computation request or to multiple computation requests. For example, it can be configured by the user that the proximal device optimizes the algorithm model after receiving a preset number of computation requests, or using the real-time data collected within a preset time period. The first historical data set is the training set the proximal device stored from the last optimization of the algorithm model: in the last optimization, the real-time data collected at that time was inserted into the previous historical data set to form the first historical data set, and the first training set formed by inserting this round's real-time data into the first historical data set will in turn serve as the first historical data set for the next optimization of the algorithm model.
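The rotation described in S216, where each round's training set becomes the next round's historical data set, can be sketched as follows. The class and attribute names are illustrative, not taken from the patent.

```python
class ProximalStore:
    """Minimal sketch of the proximal device's data-set rotation (S216)."""

    def __init__(self, history=None):
        self.history = list(history or [])  # first historical data set
        self.realtime = []                  # real-time data awaiting the next round

    def collect(self, sample):
        self.realtime.append(sample)

    def build_training_set(self):
        training_set = self.history + self.realtime  # first training set
        self.history = training_set                  # becomes next round's history
        self.realtime = []
        return training_set

store = ProximalStore(history=["h1", "h2"])
store.collect("r1")
first_round = store.build_training_set()   # ["h1", "h2", "r1"]
store.collect("r2")
second_round = store.build_training_set()  # ["h1", "h2", "r1", "r2"]
```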
S218: The proximal device trains the algorithm model to be optimized on the first training set to obtain the first optimal parameters.
In the embodiments of the application, the proximal device can divide the data in the first training set into two parts, one part as training data and the other as test data. It trains the algorithm model to be optimized on the training data, tests the precision of the trained model on the test data, and, according to the precision, keeps adjusting the model's parameters by gradient descent until convergence, at which point the optimal precision and the corresponding first optimal parameters are obtained.
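Under strong simplifying assumptions, S218 can be sketched with a one-parameter linear model: the training set is split into training and test portions, the parameter is fitted by gradient descent on the training portion, and the test portion measures the precision that S220 then compares against the threshold. Real embodiments would train a CNN or RNN instead; everything here is a toy stand-in.

```python
import random

random.seed(0)

# Synthetic "first training set" with ground-truth relation y = 3*x;
# the single weight w plays the role of the first optimal parameters.
data = [(x, 3.0 * x) for x in (random.uniform(-1, 1) for _ in range(100))]
train, test = data[:80], data[80:]  # one part training data, one part test data

w = 0.0
for _ in range(300):  # gradient descent on mean squared error until convergence
    grad = sum(2 * (w * x - y) * x for x, y in train) / len(train)
    w -= 0.5 * grad

# Test error stands in for the optimized model's precision checked in S220.
test_mse = sum((w * x - y) ** 2 for x, y in test) / len(test)
```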
S220: The proximal device judges whether the precision of the optimized algorithm model meets the preset precision requirement; if so, step S222 is executed; if not, step S224 is executed.
In the embodiments of the application, a preset precision threshold can be set, and judging whether the preset precision requirement is met can be judging whether the precision is greater than or equal to the preset precision threshold.
S222: The proximal device optimizes the algorithm model to be optimized according to the first optimal parameters, and the process then ends.
In the embodiments of the application, the parameters of the algorithm model to be optimized can be adjusted to the first optimal parameters.
S224: The proximal device sends an optimization request to the middle-layer device.
S226: The middle-layer device inserts the real-time data into the second historical data set to form the second training set.
In the embodiments of the application, the second historical data set is the training set the middle-layer device stored from the last optimization of the algorithm model; that is, in the last optimization, the real-time data collected at that time was inserted into the previous historical data set to form the second historical data set, and this round's second training set will serve as the second historical data set for the next optimization. Preferably, the first historical data set can be a subset of the second historical data set: the second historical data set has more training samples, so training the algorithm model to be optimized on the second training set can yield a model of higher precision. The middle-layer device can collect the data of the proximal devices connected to it, so the more proximal devices it connects, the more data the middle-layer device holds.
S228: The middle-layer device trains the algorithm model to be optimized on the second training set to obtain the second optimal parameters.
In the embodiments of the application, the training of the algorithm model to be optimized can refer to step S218.
S230: The middle-layer device judges whether the precision of the optimized algorithm model meets the preset precision requirement; if so, steps S232-S234 are executed; if not, step S236 is executed.
S232: The middle-layer device sends the second optimal parameters to the proximal device.
S234: The proximal device optimizes the algorithm model to be optimized according to the second optimal parameters, and the process then ends.
S236: The middle-layer device sends an optimization request to the cloud device.
S238: The cloud device inserts the real-time data into the third historical data set to form the third training set.
In the embodiments of the application, the third historical data set is the training set the cloud device stored from the last optimization of the algorithm model. Preferably, the second historical data set can be a subset of the third historical data set: the data in the third historical data set is all the data from the algorithm model's formation and application process, so training on it can yield the model of the highest precision. The cloud device collects the data of every middle-layer device and proximal device, so the cloud device's data is the largest in volume and the most complete.
S240: The cloud device trains the algorithm model to be optimized on the third training set to obtain the third optimal parameters.
S242: The cloud device returns the third optimal parameters to the proximal device through the middle-layer device.
S244: The proximal device optimizes the algorithm model to be optimized according to the third optimal parameters.
It can be seen from the above that the scheme of the embodiments of the application has the following beneficial effects: (1) optimization of the algorithm model is achieved at the proximal device, and the optimized model can provide the computing service, so the scheme can be applied to real-time business; (2) the scheme does not need to store the cloud device's full data on the proximal device and can supply partial data according to the user demand at the proximal device, which prevents the full data from being stolen or leaked and guarantees data security; (3) the proximal device optimizes the algorithm model using only partial data, so its running speed does not become too slow; (4) the proximal device can send optimization requests to an upper-layer device or the cloud device, and by using the better hardware resources and larger data volume of the upper layers it can obtain a better-optimized algorithm model, resolving the conflict between resources and demand; (5) by deploying proximal devices and middle-layer devices, the load on the cloud device is reduced.
Fig. 3 shows a flow chart of the algorithm model optimization method according to another embodiment of the application. The method shown in Fig. 3 is applied to the first device.
As shown in Fig. 3, method 300 may comprise the following steps:
S302: Insert the real-time data into the first historical data set to form the first training set.
S304: Train the algorithm model to be optimized on the first training set to obtain the first optimal parameters.
S306: Judge whether the precision of the optimized algorithm model meets the preset precision requirement; if so, execute step S308; otherwise, execute step S310.
S308: Optimize the algorithm model to be optimized according to the first optimal parameters.
S310: Send an optimization request to the second device.
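The steps of method 300 condense into a short decision function. The helper names below (`train`, `send_request`) are hypothetical stand-ins for the first device's training and messaging machinery; the sketch only shows the S302-S310 control flow.

```python
def method_300(realtime, history, required_precision, train, send_request):
    """Condensed sketch of method 300 (S302-S310) on the first device."""
    training_set = history + realtime              # S302: form first training set
    precision, params = train(training_set)        # S304: obtain optimal parameters
    if precision >= required_precision:            # S306: check precision requirement
        return ("optimized", params)               # S308: apply first optimal parameters
    return ("requested", send_request())           # S310: escalate to second device

# Toy training run: fixed precision 0.9, "parameters" derived from set size.
train = lambda ts: (0.9, {"w": len(ts)})
local_ok = method_300(["r"], ["h"], 0.8, train, lambda: "sent")
escalated = method_300(["r"], ["h"], 0.95, train, lambda: "sent")
```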
In one aspect, method 300 can also include: receiving the second optimal parameters returned by the second device in response to the optimization request, wherein the second optimal parameters are obtained by training the algorithm model to be optimized on a second training set formed by inserting the real-time data into a second historical data set; and optimizing the algorithm model to be optimized according to the second optimal parameters.
In another aspect, the first historical data set is a subset of the second historical data set.
In yet another aspect, after optimizing the algorithm model to be optimized, method 300 further includes: determining the first training set as the new first historical data set.
Fig. 4 shows a flow chart of the algorithm model optimization method according to yet another embodiment of the application. The method shown in Fig. 4 is applied to the second device.
As shown in Fig. 4, method 400 may comprise the following steps: S402, receiving an optimization request for an algorithm model to be optimized sent by a first device.
Method 400 can further comprise: S404, responding to the optimization request by returning second optimal parameters, so that the first device optimizes the algorithm model to be optimized according to the second optimal parameters, wherein the second optimal parameters are obtained by training the algorithm model to be optimized on a second training set formed by inserting real-time data into a second historical data set.
In one aspect, after responding to the optimization request and returning the second optimal parameters, method 400 further includes: determining the second training set as the new second historical data set.
Fig. 5 shows a schematic diagram of an algorithm model optimization device according to one embodiment of the application. The device 500 shown in Fig. 5 can be implemented in software, hardware, or a combination of the two, and can be installed in the first device. The embodiment of the device 500 is substantially similar to the method embodiment, so it is described fairly simply; for related details, refer to the description in the method embodiment.
As shown in Fig. 5, the device 500 may include an insertion module 502, a training module 504, a judgment module 506, an optimization module 508, and a sending module 510. The insertion module 502 is used to insert real-time data into the first historical data set to form the first training set. The training module 504 is used to train the algorithm model to be optimized on the first training set to obtain the first optimal parameters. The judgment module 506 is used to judge whether the precision of the optimized algorithm model meets the preset precision requirement, calling the optimization module 508 if so and the sending module 510 otherwise. The optimization module 508 is used to optimize the algorithm model to be optimized according to the first optimal parameters. The sending module 510 is used to send an optimization request to the second device.
In one aspect, the device 500 can also include a first receiving module for receiving the second optimal parameters returned by the second device in response to the optimization request, wherein the second optimal parameters are obtained by training the algorithm model to be optimized on a second training set formed by inserting the real-time data into a second historical data set. Correspondingly, the optimization module 508 is also used to optimize the algorithm model to be optimized according to the second optimal parameters.
In another aspect, the first historical data set is a subset of the second historical data set.
In yet another aspect, the device 500 further includes a first determining module for determining the first training set as the new first historical data set.
Fig. 6 shows the schematic diagram of the algorithm model optimization device according to one embodiment of the application, dress shown in fig. 6 Setting 600 can use the mode of software, hardware or software and hardware combining to realize.Device 600 may be mounted in the second equipment.Dress The embodiment for setting 600 is substantially similar to the embodiment of method, so describing fairly simple, related place is referring to embodiment of the method Part explanation.
As shown in Fig. 6, the device 600 may include a second receiving module 602 and a return module 604. The second receiving module 602 is configured to receive an optimization request for an algorithm model to be optimized sent by the first equipment. The return module 604 is configured to return second optimal parameters in response to the optimization request, so that the first equipment optimizes the algorithm model to be optimized according to the second optimal parameters, wherein the second optimal parameters are obtained by training the algorithm model to be optimized based on a second training set, and the second training set is formed by inserting real-time data into a second history data set.
In one aspect, the device 600 may further include a second determining module. The second determining module is configured to determine the second training set as a new second history data set.
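For illustration only, the counterpart flow on the second equipment implemented by the device 600 — receive the request, form the second training set, train, return the second optimal parameters, and retain the second training set as the new second history data set — can be sketched as follows. All names here are hypothetical, not from the disclosure.

```python
def handle_optimization_request(model, second_history_set, real_time_data, train):
    """Hypothetical sketch of the second-equipment flow (device 600)."""
    # Second receiving module 602 has delivered the optimization request
    # together with the real-time data; form the second training set.
    second_training_set = second_history_set + real_time_data

    # Train the algorithm model to be optimized on the second training set
    # to obtain the second optimal parameters.
    second_optimal_params = train(model, second_training_set)

    # Second determining module: the second training set becomes the new
    # second history data set.
    new_second_history = second_training_set

    # Return module 604: send the second optimal parameters back so the
    # first equipment can apply them.
    return second_optimal_params, new_second_history
```

Because the second history data set is a superset of the first (claim 3), the second equipment trains on more data than the first equipment could, which is why its parameters are expected to satisfy the precision requirement when local training does not.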
An embodiment of the present application further provides an electronic equipment; refer to Fig. 7, which is a schematic diagram of an embodiment of the electronic equipment of the present application. As shown in Fig. 7, for ease of description, only the parts relevant to the embodiments of the present application are shown; for specific technical details not disclosed, refer to the method part of the embodiments of the present application.
As shown in Fig. 7, the electronic equipment 700 may include a processor 702 and a memory 704, wherein the memory 704 stores executable instructions which, when executed, cause the processor 702 to perform the method shown in any of the embodiments of Fig. 3 or Fig. 4.
As shown in Fig. 7, the electronic equipment 700 may further include a bus 706 connecting different system components (including the processor 702 and the memory 704). The bus 706 represents one or more of several classes of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus structures. For example, these architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus and a Peripheral Component Interconnect (PCI) bus.
The electronic equipment 700 typically comprises a variety of computer-system-readable media. These media may be any usable media accessible by the electronic equipment 700, including volatile and non-volatile media, and removable and non-removable media.
The memory 704 may include computer-system-readable media in the form of volatile memory, such as a random access memory (RAM) 708 and/or a cache memory 710. The electronic equipment 700 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, a storage system 712 may be used to read and write non-removable, non-volatile magnetic media (not shown in Fig. 7, commonly referred to as a "hard disk drive"). Although not shown in Fig. 7, a disk drive for reading and writing a removable non-volatile magnetic disk (such as a "floppy disk") and an optical disc drive for reading and writing a removable non-volatile optical disc (such as a CD-ROM, DVD-ROM or other optical media) may also be provided. In these cases, each drive may be connected to the bus 706 through one or more data media interfaces. The memory 704 may include at least one program product having a set of (for example, at least one) program modules configured to perform the functions of the above embodiments of Fig. 3 or Fig. 4 of the present invention.
A program/utility 714 having a set of (at least one) program modules 716 may be stored, for example, in the memory 704. Such program modules 716 include, but are not limited to, an operating system, one or more application programs, other program modules and program data, and each or some combination of these examples may include an implementation of a network environment. The program modules 716 generally perform the functions and/or methods in the embodiments of Fig. 3 or Fig. 4 described in the present invention.
The electronic equipment 700 may also communicate with one or more external devices 722 (such as a keyboard, a pointing device, a display 724, etc.), with one or more devices that enable a user to interact with the electronic equipment 700, and/or with any device (such as a network card, a modem, etc.) that enables the electronic equipment 700 to communicate with one or more other computing devices. Such communication may be carried out through an input/output (I/O) interface 718. Moreover, the electronic equipment 700 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through a network adapter 720. As shown in Fig. 7, the network adapter 720 communicates with the other modules of the electronic equipment 700 through the bus 706. It should be understood that, although not shown in the drawings, other hardware and/or software modules may be used in conjunction with the electronic equipment 700, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processor 702 executes various functional applications and data processing by running the programs stored in the memory 704, for example, implementing the methods shown in the above embodiments.
An embodiment of the present application further provides a computer storage medium on which a computer program is stored. The computer program includes executable instructions which, when executed by a processor, implement any one of the embodiments of the algorithm model optimization methods of the foregoing individual embodiments.
The computer storage medium of this embodiment may include the random access memory (RAM) 708, and/or the cache memory 710, and/or the storage system 712 in the memory 704 of the above embodiment shown in Fig. 7.
With the development of science and technology, the propagation route of a computer program is no longer limited to tangible media; it may also be downloaded directly from a network or obtained in other ways. Therefore, the computer storage medium in this embodiment may include not only tangible media but also intangible media.
Those skilled in the art will understand that the embodiments of the present invention may be provided as a method, a device or a computer program product. Therefore, the embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the methods, devices and computer program products according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing terminal device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The specific embodiments described above in conjunction with the drawings describe exemplary embodiments, and do not represent all embodiments that may be implemented or that fall within the protection scope of the claims. The term "exemplary" used throughout this specification means "serving as an example, instance or illustration", and does not mean "preferable" or "advantageous" over other embodiments. For the purpose of providing an understanding of the described technology, the specific embodiments include specific details. However, these techniques may be implemented without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described embodiments.
The foregoing description of the present disclosure is provided so that any person of ordinary skill in the art can realize or use the present disclosure. Various modifications to the present disclosure will be apparent to those skilled in the art, and the general principles defined herein may also be applied to other modifications without departing from the protection scope of the present disclosure. Therefore, the present disclosure is not limited to the examples described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An algorithm model optimization method, applied to a first equipment, comprising:
inserting real-time data into a first history data set to form a first training set;
training an algorithm model to be optimized based on the first training set to obtain first optimal parameters; and
judging whether the precision of the optimized algorithm model meets a preset precision requirement; if so, optimizing the algorithm model to be optimized according to the first optimal parameters, otherwise sending an optimization request to a second equipment.
2. The method according to claim 1, further comprising:
receiving second optimal parameters returned by the second equipment in response to the optimization request, wherein the second optimal parameters are obtained by training the algorithm model to be optimized based on a second training set, and the second training set is formed by inserting the real-time data into a second history data set; and
optimizing the algorithm model to be optimized according to the second optimal parameters.
3. The method according to claim 2, wherein the first history data set is a subset of the second history data set.
4. The method according to claim 1 or 2, wherein, after the algorithm model to be optimized is optimized, the method further comprises:
determining the first training set as a new first history data set.
5. An algorithm model optimization method, applied to a second equipment, comprising:
receiving an optimization request, sent by a first equipment, for an algorithm model to be optimized; and
returning second optimal parameters in response to the optimization request, so that the first equipment optimizes the algorithm model to be optimized according to the second optimal parameters, wherein the second optimal parameters are obtained by training the algorithm model to be optimized based on a second training set, and the second training set is formed by inserting real-time data into a second history data set.
6. The method according to claim 5, wherein, after the second optimal parameters are returned in response to the optimization request, the method further comprises:
determining the second training set as a new second history data set.
7. An algorithm model optimization device, applied to a first equipment, comprising:
an insertion module, configured to insert real-time data into a first history data set to form a first training set;
a training module, configured to train an algorithm model to be optimized based on the first training set to obtain first optimal parameters;
a judgment module, configured to judge whether the precision of the optimized algorithm model meets a preset precision requirement; if so, an optimization module is called, otherwise a sending module is called;
the optimization module, configured to optimize the algorithm model to be optimized according to the first optimal parameters; and
the sending module, configured to send an optimization request to a second equipment.
8. An algorithm model optimization device, applied to a second equipment, comprising:
a second receiving module, configured to receive an optimization request, sent by a first equipment, for an algorithm model to be optimized; and
a return module, configured to return second optimal parameters in response to the optimization request, so that the first equipment optimizes the algorithm model to be optimized according to the second optimal parameters, wherein the second optimal parameters are obtained by training the algorithm model to be optimized based on a second training set, and the second training set is formed by inserting real-time data into a second history data set.
9. An electronic equipment, comprising:
a processor; and
a memory on which executable instructions are stored;
wherein the processor is configured to execute the executable instructions to implement the method according to any one of claims 1 to 6.
10. A computer-readable storage medium on which a computer program is stored, the computer program comprising executable instructions which, when executed by a processor, implement the method according to any one of claims 1 to 6.
CN201910182371.9A 2019-03-12 2019-03-12 Algorithm model optimization method, device and equipment Pending CN109948803A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910182371.9A CN109948803A (en) 2019-03-12 2019-03-12 Algorithm model optimization method, device and equipment


Publications (1)

Publication Number Publication Date
CN109948803A true CN109948803A (en) 2019-06-28

Family

ID=67009604


Country Status (1)

Country Link
CN (1) CN109948803A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991195A (en) * 2019-12-13 2020-04-10 北京小米智能科技有限公司 Machine translation model training method, device and storage medium
CN110991195B (en) * 2019-12-13 2023-09-29 北京小米智能科技有限公司 Machine translation model training method, device and storage medium
WO2021143477A1 (en) * 2020-01-16 2021-07-22 支付宝(杭州)信息技术有限公司 Federated learning method and apparatus fusing public domain data and private data, and system
CN113642805A (en) * 2021-08-27 2021-11-12 Oppo广东移动通信有限公司 Algorithm optimization method of Internet of things equipment, electronic equipment and readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination