CN108304265A - Memory management method, device and storage medium - Google Patents

Memory management method, device and storage medium Download PDF

Info

Publication number
CN108304265A
CN108304265A (application CN201810064209.2A)
Authority
CN
China
Prior art keywords
memory
feature unit
branch
size
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810064209.2A
Other languages
Chinese (zh)
Other versions
CN108304265B (en)
Inventor
黄凯宁
朱晓龙
梅利健
黄生辉
王同
王一同
罗镜民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201810064209.2A
Publication of CN108304265A
Application granted
Publication of CN108304265B
Active legal status
Anticipated expiration legal status

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5016 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Abstract

Embodiments of the present invention disclose a memory management method, device and storage medium, belonging to the field of computer technology. The method includes: determining at least one branch of a neural network according to the connection relations of the feature units in the neural network; for each branch, allocating a first memory and a second memory for the branch according to the sizes of the output memories needed by the feature units in the branch, the first memory size being not smaller than the second memory size, and neither the first memory size nor the second memory size being smaller than any other memory size needed by the branch; and using the first memory and the second memory alternately as the input memory and output memory of the feature units in the branch. The embodiments of the present invention allocate only two memories for each branch of the neural network, used alternately as the input memory and output memory of the feature units, which both ensures that computation proceeds normally and achieves memory reuse, saving occupied memory, reducing the memory requirement, and ensuring that the neural network can be implemented normally on a terminal.

Description

Memory management method, device and storage medium
Technical field
Embodiments of the present invention relate to the field of computer technology, and in particular to a memory management method, device and storage medium.
Background technology
In recent years, deep learning has been widely applied in fields such as speech recognition and computer vision. The rapid development of deep learning has driven a continuous stream of new AI (Artificial Intelligence) algorithms, quickly changing the direction of technology and people's lives.
Deep learning is inseparable from the implementation of neural networks, and traditionally neural networks are implemented on the server side. However, with the continuous development of AI algorithms, implementing neural networks on the server side can no longer satisfy people's growing business needs, and there is an urgent need for a way to implement neural networks on terminals.
However, a neural network includes multiple network layers, and each network layer in turn includes multiple feature units. Each feature unit needs to occupy a block of memory for its output data when its computation completes, so implementing a neural network requires a large amount of memory. Compared with a server, a terminal's memory is limited and cannot satisfy the memory requirement of the neural network, which makes implementing neural networks on terminals difficult.
Summary of the invention
Embodiments of the present invention provide a memory management method, device and storage medium, which can solve problems in the related art. The technical solution is as follows:
In a first aspect, a memory management method is provided, applied in a terminal, the method including:
determining at least one branch of a neural network according to the connection relations of the feature units in the neural network, the neural network including multiple network layers arranged in sequence, each network layer including at least one feature unit, and each branch being formed by connecting multiple feature units located in different network layers;
for each branch, allocating a first memory and a second memory for the branch according to the sizes of the output memories needed by the feature units in the branch, the first memory size being not smaller than the second memory size, and the first memory size and the second memory size being not smaller than any other memory size needed by the branch; and
using the first memory and the second memory alternately as the input memory and output memory of the feature units in the branch.
In a second aspect, a memory management device is provided, applied in a terminal, the device including:
a determining module, configured to determine at least one branch of a neural network according to the connection relations of the feature units in the neural network, the neural network including multiple network layers arranged in sequence, each network layer including at least one feature unit, and each branch being formed by connecting multiple feature units located in different network layers;
a distribution module, configured to, for each branch, allocate a first memory and a second memory for the branch according to the sizes of the output memories needed by the feature units in the branch, the first memory size being not smaller than the second memory size, and the first memory size and the second memory size being not smaller than any other memory size needed by the branch;
the distribution module being further configured to use the first memory and the second memory alternately as the input memory and output memory of the feature units in the branch.
In a third aspect, a memory management device is provided, the memory management device including a processor and a memory, the memory storing at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by the processor to implement the operations performed in the memory management method according to the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, the computer-readable storage medium storing at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by a processor to implement the operations performed in the memory management method according to the first aspect.
The advantageous effects brought by the technical solutions provided in the embodiments of the present invention are as follows:
With the method, device and storage medium provided by the embodiments of the present invention, at least one branch of a neural network is determined according to the connection relations of the feature units in the neural network; for each branch, a first memory and a second memory are allocated for the branch according to the sizes of the output memories needed by the feature units in the branch, neither the first memory size nor the second memory size being smaller than any other memory size needed by the branch; and the first memory and the second memory are used alternately as the input memory and output memory of the feature units in the branch. The embodiments of the present invention allocate only two memories for each branch of the neural network, used alternately as the input memory and output memory of the feature units, which both ensures that computation proceeds normally and achieves memory reuse, saving occupied memory and reducing the memory requirement, so that the terminal's memory can satisfy the memory requirement of the neural network and the neural network can be implemented normally on the terminal, extending the functionality of the terminal.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a structural schematic diagram of a neural network provided by an embodiment of the present invention;
Fig. 2 is a flowchart of a memory management method provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of a computation process provided by an embodiment of the present invention;
Fig. 4 is a structural schematic diagram of a memory management device provided by an embodiment of the present invention;
Fig. 5 is a structural schematic diagram of a terminal provided by an embodiment of the present invention.
Detailed description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Before the embodiments of the present invention are described in detail, the neural network involved in the embodiments is first described as follows:
Fig. 1 is a structural schematic diagram of a neural network provided by an embodiment of the present invention. Referring to Fig. 1, the neural network includes multiple network layers, each network layer includes at least one feature unit, and feature units in two adjacent network layers can be connected to each other to form a network. During operation of the neural network, a feature unit in the previous network layer performs its computation and inputs its output data to the connected feature unit in the next network layer, which then performs its computation, and so on, until the feature unit in the last network layer outputs the computation result.
In practice, the neural network may be any of several types, such as a CNN (Convolutional Neural Network), an RNN (Recurrent Neural Network) or a DNN (Deep Neural Network). Different types of neural networks include different types of network layers, and the types of feature units in the network layers also differ.
For example, a CNN includes multiple network layers such as an input layer, convolutional layers, sampling layers and an output layer. Each network layer includes feature maps, and a feature map includes neurons; the feature unit in the embodiments of the present invention may accordingly refer to a feature map or a neuron.
The neural network provided by the embodiments of the present invention can be applied in many scenarios, such as face recognition, spam filtering and information recommendation. For example, in a face recognition scenario, a captured image is input into the neural network for computation to determine whether the image contains a face; in an information recommendation scenario, various information is input into the neural network for computation to filter out information the user may be interested in and recommend it to the user.
Fig. 2 is a flowchart of a memory management method provided by an embodiment of the present invention. The method is executed by a terminal. Referring to Fig. 2, the method includes:
201. The terminal obtains the neural network and determines at least one branch of the neural network according to the connection relations of the feature units in the neural network.
The neural network includes multiple network layers arranged in sequence, each network layer includes at least one feature unit, and feature units in two adjacent network layers can be connected to each other to form a network. A feature unit in a given network layer may be connected to one or more feature units in the previous network layer, and may also be connected to one or more feature units in the next network layer.
Therefore, there are many branches in the neural network. According to the connection relations of the multiple feature units, the neural network can be divided into at least one branch, each branch being formed by connecting feature units located in different network layers.
For example, referring to Fig. 1, the neural network can be divided into branches such as "11-21-31-41", "12-22-32-41" and "13-23-32-41".
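For illustration only (the patent provides no reference code), determining the branches can be sketched as a depth-first walk from each feature unit of the first network layer to the last network layer; the adjacency-list representation, unit ids and function names below are assumptions:

    #include <cstdio>
    #include <map>
    #include <vector>

    // Hypothetical sketch: enumerate the branches of a Fig. 1 style network.
    // Each feature unit is a node; edges follow the connection relations.
    static void collectBranches(int unit,
                                const std::map<int, std::vector<int>>& next,
                                std::vector<int>& path,
                                std::vector<std::vector<int>>& branches) {
        path.push_back(unit);
        auto it = next.find(unit);
        if (it == next.end()) {
            branches.push_back(path);      // reached the last network layer
        } else {
            for (int succ : it->second)    // follow every connection
                collectBranches(succ, next, path, branches);
        }
        path.pop_back();
    }

    int main() {
        // Connections loosely matching the "11-21-31-41" example above.
        std::map<int, std::vector<int>> next = {
            {11, {21}}, {12, {22}}, {13, {23}},
            {21, {31}}, {22, {32}}, {23, {32}}, {31, {41}}, {32, {41}}};
        std::vector<int> inputs = {11, 12, 13}, path;
        std::vector<std::vector<int>> branches;
        for (int input : inputs)
            collectBranches(input, next, path, branches);
        for (const auto& b : branches) {   // prints 11-21-31-41 and so on
            for (size_t i = 0; i < b.size(); ++i)
                std::printf(i ? "-%d" : "%d", b[i]);
            std::printf("\n");
        }
    }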
202. For each branch, the amount of data each feature unit in the branch needs to output is counted, obtaining the size of the output memory each feature unit in the branch needs.
For each branch, memory needs to be occupied while computation based on the branch proceeds. For two adjacent feature units in the branch, the data output by the former feature unit after its computation is stored in a memory; the latter feature unit extracts the former unit's output data from that memory and computes on the extracted data. That memory is the output memory of the former feature unit and the input memory of the latter feature unit.
In the related art, where the neural network is implemented on the server side, an input memory and an output memory are allocated for each feature unit during computation, which occupies a large amount of memory. Since server-side memory is large, the computation of the neural network is not constrained by the memory size, and the server's normal operation is not affected.
In the embodiments of the present invention, by contrast, the memory of a terminal is limited, so implementing a neural network on the terminal easily occupies too much memory, and the operating system may reclaim allocated memory when memory occupation is excessive, which would affect the terminal's normal operation. Therefore, to save memory, before actual computation the neural network is divided into at least one branch, and only two memories are allocated for each branch of the neural network; the two allocated memories are used alternately as the output memory and input memory, instead of allocating an input memory and an output memory for each feature unit.
For this purpose, when allocating memory, the amount of data each feature unit in the branch needs to output is first determined; this amount of data is the size of the output memory the corresponding feature unit needs, and in this way the sizes of the output memories needed by the feature units in the branch are obtained.
203. The first memory and the second memory are allocated for the branch according to the sizes of the output memories needed by the feature units in the branch.
The first memory size is not smaller than the second memory size, and neither the first memory size nor the second memory size is smaller than any other memory size needed by the branch.
To ensure that the allocated memories are large enough to store the data output by every feature unit in the branch, after the sizes of the output memories needed by the feature units are obtained, the sizes can be sorted in descending order; the size ranked first is taken as the first memory size and the size ranked second as the second memory size, and a first memory matching the first memory size and a second memory matching the second memory size are then allocated for the branch.
For example, referring to Fig. 1, if the amounts of data that the 4 feature units on branch "11-21-31-41" need to output are 20M, 30M, 20M and 50M in sequence, that is, the output memory needed by feature unit 11 is 20M, that needed by feature unit 21 is 30M, that needed by feature unit 31 is 20M and that needed by feature unit 41 is 50M, then after sorting, the first memory size is determined to be 50M and the second memory size 30M, so one 50M memory block and one 30M memory block are allocated for the branch.
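A minimal sketch of this size selection, assuming sizes are tracked in megabytes (the patent gives no reference code):

    #include <algorithm>
    #include <cstdio>
    #include <functional>
    #include <vector>

    int main() {
        // Output sizes needed by feature units 11, 21, 31 and 41 above.
        std::vector<long> outMB = {20, 30, 20, 50};
        std::vector<long> sorted = outMB;
        std::sort(sorted.begin(), sorted.end(), std::greater<long>());
        long firstSize = sorted[0];   // 50M: the first memory size
        long secondSize = sorted[1];  // 30M: the second memory size
        std::printf("first=%ldM second=%ldM\n", firstSize, secondSize);
    }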
204. The first memory and the second memory are used alternately as the input memory and output memory of the feature units in the branch, and computation is performed according to the allocated input memories and output memories.
In a first possible implementation, when the first memory size equals the second memory size, at least two feature units on the branch need the maximum output memory size, that maximum equals the first memory size, and the output memory sizes needed by the other feature units do not exceed the first memory size. The memory allocation rule is then: the input memory and output memory of each feature unit are different, and the output memory of the previous feature unit serves as the input memory of the next feature unit.
When allocating according to this rule, it is only necessary to choose the first memory or the second memory as the output memory of one feature unit; from the position relations between the feature units, it can then be determined whether the input memory and output memory of every other feature unit is the first memory or the second memory.
Taking the first feature unit as an example, the first memory is used as the output memory of the first feature unit and as the input memory of the second feature unit, and the second memory is used as the output memory of the second feature unit; here the first feature unit is any feature unit on the branch, and the second feature unit is the feature unit next to the first feature unit on the branch.
In a second possible implementation, when the first memory size is larger than the second memory size, the maximum output memory size needed on the branch differs from the second-largest one, so only one feature unit on the branch needs the maximum output memory size, the maximum being the first memory size and larger than the second memory size. If the second memory were used as that feature unit's output memory, the amount of data the feature unit outputs would exceed the second memory size, and the data could not be stored normally.
To avoid this situation, the memory allocation rule is: the input memory and output memory of each feature unit are different, the output memory of the previous feature unit serves as the input memory of the next feature unit, and the output memory of the feature unit whose needed output memory size is the first memory size must be the first memory.
Therefore, the specified feature unit whose needed output memory size in the branch is the first memory size is first determined, and the first memory is used as the output memory of the specified feature unit. Then, according to the position relation between each feature unit in the branch and the specified feature unit, the first memory and the second memory are used as the input memory or output memory of each feature unit in the branch other than the specified feature unit, so that the first memory and the second memory serve alternately as the input memory and output memory of the feature units in the branch.
For example, if the specified feature unit's position in the branch ordering is even, the first memory is used as the output memory of the even-position feature units in the branch and the second memory as the output memory of the odd-position feature units. If the specified feature unit's position is odd, the first memory is used as the output memory of the odd-position feature units in the branch and the second memory as the output memory of the even-position feature units.
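Both cases reduce to one assignment rule, sketched below under assumed names (this is not the patent's code): the unit needing the largest output always writes into the first memory, and neighbouring units alternate buffers.

    #include <cstdio>
    #include <vector>

    // Returns, for each feature unit on the branch, which buffer its output
    // goes to: 0 = first memory, 1 = second memory.
    std::vector<int> assignOutputs(const std::vector<long>& outBytes) {
        size_t maxAt = 0;  // position of the specified (largest-output) unit
        for (size_t i = 1; i < outBytes.size(); ++i)
            if (outBytes[i] > outBytes[maxAt]) maxAt = i;
        std::vector<int> outBuf(outBytes.size());
        for (size_t i = 0; i < outBytes.size(); ++i)
            // same parity as the specified unit: first memory; else: second
            outBuf[i] = (i % 2 == maxAt % 2) ? 0 : 1;
        return outBuf;
    }

    int main() {
        for (int b : assignOutputs({20, 30, 20, 50}))
            std::printf("%d ", b);    // prints "1 0 1 0"
        std::printf("\n");
    }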
In one possible implementation, during the above memory allocation, the terminal can call the clCreateBuffer function provided by OpenCL (Open Computing Language) to allocate the memories.
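A hedged host-code sketch of that allocation (clCreateBuffer is the standard OpenCL call; the surrounding function and error handling are assumptions, and context creation is omitted):

    #include <CL/cl.h>
    #include <cstddef>

    // Allocate the branch's two reusable buffers. Both are read-write
    // because each one alternates between the input and output roles.
    static cl_int allocBranchBuffers(cl_context ctx, size_t firstSize,
                                     size_t secondSize,
                                     cl_mem* firstMem, cl_mem* secondMem) {
        cl_int err = CL_SUCCESS;
        *firstMem = clCreateBuffer(ctx, CL_MEM_READ_WRITE, firstSize,
                                   nullptr, &err);
        if (err != CL_SUCCESS) return err;
        *secondMem = clCreateBuffer(ctx, CL_MEM_READ_WRITE, secondSize,
                                    nullptr, &err);
        if (err != CL_SUCCESS) clReleaseMemObject(*firstMem);
        return err;
    }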
After the input memory and output memory of each feature unit in each branch of the neural network are determined, computation can be performed based on the neural network according to the allocated input memories and output memories. During computation, taking the first feature unit as an example, the data output by the first feature unit is stored in the output memory of the first feature unit; the second feature unit then extracts the data output by the first feature unit from that output memory, computes on the extracted data, and stores the data it outputs in the output memory of the second feature unit. Here the first feature unit is any feature unit on any branch, the second feature unit is the feature unit next to the first feature unit in that branch, and the output memory of the first feature unit is the same as the input memory of the second feature unit.
Based on the neural network shown in Fig. 1, taking branch "11-21-31-41" as an example, the first memory is used as the input memory of feature units 21 and 41 and as the output memory of feature units 11 and 31, and the second memory is used as the input memory of feature units 11 and 31 and as the output memory of feature units 21 and 41. Referring to Fig. 3, the computation process then includes the following steps:
1. The input data a is placed in the second memory; feature unit 11 extracts data a from the second memory, obtains data b after computation, and stores data b in the first memory.
2. Feature unit 21 extracts data b from the first memory, obtains data c after computation, and stores data c in the second memory; the data in the second memory changes from a to c.
3. Feature unit 31 extracts data c from the second memory, obtains data d after computation, and stores data d in the first memory; the data in the first memory changes from b to d.
4. Feature unit 41 extracts data d from the first memory, obtains data e after computation, and stores it in the second memory; the data in the second memory changes from c to e.
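The alternation above is a ping-pong buffering scheme. A minimal sketch of the per-branch compute loop, under assumed types (each feature unit is modeled as a function from input buffer to output buffer; this is not the patent's code):

    #include <functional>
    #include <utility>
    #include <vector>

    using Buffer = std::vector<float>;
    using Unit = std::function<void(const Buffer&, Buffer&)>;

    void runBranch(const std::vector<Unit>& units, Buffer& first, Buffer& second) {
        Buffer* in = &second;   // the input data a starts in the second memory
        Buffer* out = &first;
        for (const Unit& u : units) {
            u(*in, *out);       // read the input memory, write the output memory
            std::swap(in, out); // output memory becomes the next unit's input
        }
    }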
It should be noted that any two branches in the neural network may be completely separate, or may intersect and share a common feature unit. For a given feature unit in the neural network, then, the feature unit may be located in multiple branches, and different input memories and output memories can be set for it for different branches.
That is, the input end of the feature unit may connect to multiple upper-layer feature units in the previous network layer, forming multiple branches, in which case different input memories are allocated to the feature unit for the different branches. When computing, the feature unit can extract data from each of the multiple input memories and compute on the multiple extracted pieces of data.
The output end of the feature unit may likewise connect to multiple lower-layer feature units in the next network layer, forming multiple branches, in which case different output memories are allocated to the feature unit for the different branches. When computing, the feature unit stores the computed data in each of the multiple allocated output memories, for the multiple lower-layer feature units to extract.
It should also be noted that in practice an operation framework can be deployed on the terminal for the neural network. The framework is developed for the terminal platform, does not place excessive demands on the compilation environment or hardware environment, and is relatively easy to compile; when it is called, the memory allocation and computation steps provided by the embodiments of the present invention can be carried out, so that the neural network is computed on the terminal. Considering that a neural network involves a large amount of parallel computation, that the performance of the CPU (Central Processing Unit) can hardly meet this computation requirement, and that the GPU (Graphics Processing Unit) is better suited to parallel computation because of its architecture, the CPU can be used to perform the memory optimization and the GPU to perform the main computation when implementing the neural network on the terminal. This improves computation efficiency and better matches the development trend of neural network technology.
Accordingly, for the neural network, the terminal can execute the above memory allocation steps through the CPU, determining the input memory and output memory of each feature unit, and issue a computation task that may include the input memory and output memory of each feature unit; the terminal then obtains the task issued by the CPU through the GPU and computes according to the memories allocated in the task.
In one possible implementation, when the CPU issues a computation task to the GPU, it can call the clCreateCommandQueue function provided by OpenCL to create a task queue and call the addToCommandQueue function provided by OpenCL to add the task to the task queue; the GPU obtains the task from the created task queue and performs the computation.
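For orientation: clCreateCommandQueue is a standard OpenCL host call, while addToCommandQueue is the patent's own wording and does not appear in the standard headers; in the standard C API, work is submitted with clEnqueue* calls. A hedged sketch (kernel and context setup assumed) of submitting a branch with the two buffers swapping roles:

    #include <CL/cl.h>
    #include <cstddef>

    // Submit one kernel per feature unit, alternating the two branch
    // buffers between the input and output kernel arguments.
    cl_int enqueueBranch(cl_context ctx, cl_device_id dev,
                         cl_kernel* kernels, size_t numKernels,
                         cl_mem firstMem, cl_mem secondMem, size_t globalSize) {
        cl_int err = CL_SUCCESS;
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);
        if (err != CL_SUCCESS) return err;
        cl_mem in = secondMem, out = firstMem;  // input starts in the second memory
        for (size_t i = 0; i < numKernels && err == CL_SUCCESS; ++i) {
            err = clSetKernelArg(kernels[i], 0, sizeof(cl_mem), &in);
            if (err == CL_SUCCESS)
                err = clSetKernelArg(kernels[i], 1, sizeof(cl_mem), &out);
            if (err == CL_SUCCESS)
                err = clEnqueueNDRangeKernel(q, kernels[i], 1, nullptr, &globalSize,
                                             nullptr, 0, nullptr, nullptr);
            cl_mem tmp = in; in = out; out = tmp;  // ping-pong for the next unit
        }
        clFinish(q);               // wait for the branch to finish computing
        clReleaseCommandQueue(q);
        return err;
    }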
With the method provided by the embodiments of the present invention, at least one branch of the neural network is determined according to the connection relations of the feature units in the neural network; for each branch, a first memory and a second memory are allocated for the branch according to the sizes of the output memories needed by the feature units in the branch, neither memory size being smaller than any other memory size needed by the branch; and the first memory and the second memory are used alternately as the input memory and output memory of the feature units in the branch. Allocating only two memories per branch, used alternately as input memory and output memory, both ensures that computation proceeds normally and achieves memory reuse, saving occupied memory and reducing the memory requirement, so that the terminal's memory can satisfy the memory requirement of the neural network, the neural network can be implemented normally on the terminal, and the functionality of the terminal is extended.
Fig. 4 is a structural schematic diagram of a memory management device provided by an embodiment of the present invention. Referring to Fig. 4, the device is applied in a terminal and includes:
a determining module 401, configured to perform the step in the above embodiment of determining at least one branch of the neural network; and
a distribution module 402, configured to perform the step in the above embodiment of allocating the first memory and the second memory for the branch.
The distribution module 402 is further configured to perform the step in the above embodiment of using the first memory and the second memory alternately as the input memory and output memory of the feature units in the branch.
Optionally, the device further includes:
a statistical module, configured to perform the step in the above embodiment of counting the amount of data each feature unit in the branch needs to output.
Optionally, the distribution module 402 includes:
a sorting unit, configured to perform the step in the above embodiment of sorting the sizes of the output memories needed by the feature units in the branch; and
a first allocation unit, configured to perform the steps in the above embodiment of determining the first memory size and the second memory size and allocating the first memory and the second memory.
Optionally, the distribution module 402 includes:
a second allocation unit, configured to perform the step in the above embodiment of determining the input memory and output memory of each feature unit when the first memory size equals the second memory size.
Optionally, the distribution module 402 includes:
a third allocation unit, configured to perform the step in the above embodiment of determining the input memory and output memory of each feature unit when the first memory size is larger than the second memory size.
Optionally, the device further includes:
a first computing module, configured to perform the step in the above embodiment of storing the data output by the first feature unit in the output memory of the first feature unit; and
a second computing module, configured to perform the step in the above embodiment of extracting data from the output memory of the first feature unit, computing, and storing the result in the output memory of the second feature unit.
It should be noted that when the memory management device provided by the above embodiment manages memory, the division into the above functional modules is only used as an example; in practice, the above functions can be assigned to different functional modules as needed, that is, the internal structure of the terminal can be divided into different functional modules to complete all or part of the functions described above. In addition, the memory management device provided by the above embodiment and the memory management method embodiment belong to the same concept; for the specific implementation process, refer to the method embodiment, which is not repeated here.
Fig. 5 shows a structural block diagram of a terminal 500 provided by an exemplary embodiment of the present invention. The terminal 500 may be a portable mobile terminal, such as a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop or a desktop computer. The terminal 500 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal or other names.
Generally, the terminal 500 includes a processor 501 and a memory 502.
The processor 501 may include one or more processing cores, for example a 4-core processor or an 8-core processor. The processor 501 may be implemented in at least one of the hardware forms DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) and PLA (Programmable Logic Array). The processor 501 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to show. In some embodiments, the processor 501 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 502 may include one or more computer-readable storage media, which may be non-transitory. The memory 502 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 502 is used to store at least one instruction, the at least one instruction being executed by the processor 501 to implement the memory management method provided by the method embodiments of this application.
In some embodiments, the terminal 500 optionally further includes a peripheral device interface 503 and at least one peripheral device. The processor 501, the memory 502 and the peripheral device interface 503 may be connected by buses or signal lines. Each peripheral device may be connected to the peripheral device interface 503 by a bus, a signal line or a circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 504, a touch display screen 505, a camera 506, an audio circuit 507, a positioning component 508 and a power supply 509.
The peripheral device interface 503 can be used to connect at least one I/O (Input/Output) related peripheral device to the processor 501 and the memory 502. In some embodiments, the processor 501, the memory 502 and the peripheral device interface 503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 501, the memory 502 and the peripheral device interface 503 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 504 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 504 communicates with communication networks and other communication devices through electromagnetic signals, converting electrical signals into electromagnetic signals for transmission or converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 504 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card and the like. The radio frequency circuit 504 can communicate with other terminals through at least one wireless communication protocol, including but not limited to metropolitan area networks, mobile communication networks of each generation (2G, 3G, 4G and 5G), wireless local area networks and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 504 may also include circuits related to NFC (Near Field Communication), which is not limited in this application.
The display screen 505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video and any combination thereof. When the display screen 505 is a touch display screen, the display screen 505 also has the ability to acquire touch signals on or above its surface. The touch signal can be input to the processor 501 as a control signal for processing. At this point, the display screen 505 can also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 505, arranged on the front panel of the terminal 500; in other embodiments, there may be at least two display screens 505, arranged on different surfaces of the terminal 500 or in a folding design; in still other embodiments, the display screen 505 may be a flexible display screen arranged on a curved or folding surface of the terminal 500. The display screen 505 may even be arranged in a non-rectangular irregular shape, that is, a shaped screen. The display screen 505 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 506 is used to capture images or video. Optionally, the camera assembly 506 includes a front camera and a rear camera. Generally, the front camera is arranged on the front panel of the terminal and the rear camera on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, or the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 506 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash; a dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation under different color temperatures.
The audio circuit 507 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment and convert them into electrical signals to be input to the processor 501 for processing, or to be input to the radio frequency circuit 504 to realize voice communication. For stereo collection or noise reduction, there may be multiple microphones, arranged at different parts of the terminal 500. The microphone may also be an array microphone or an omnidirectional collection microphone. The speaker is used to convert electrical signals from the processor 501 or the radio frequency circuit 504 into sound waves. The speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 507 may also include a headphone jack.
The positioning component 508 is used to locate the current geographic position of the terminal 500 to realize navigation or LBS (Location Based Service). The positioning component 508 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia or the Galileo system of the European Union.
The power supply 509 is used to supply power to the components in the terminal 500. The power supply 509 may be alternating current, direct current, a disposable battery or a rechargeable battery. When the power supply 509 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast-charging technology.
In some embodiments, the terminal 500 further includes one or more sensors 510, including but not limited to an acceleration sensor 511, a gyro sensor 512, a pressure sensor 513, a fingerprint sensor 514, an optical sensor 515 and a proximity sensor 516.
The acceleration sensor 511 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 500; for example, it can be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 501 can, according to the gravitational acceleration signal collected by the acceleration sensor 511, control the touch display screen 505 to display the user interface in landscape view or portrait view. The acceleration sensor 511 can also be used to collect motion data of games or of the user.
The gyro sensor 512 can detect the body direction and rotation angle of the terminal 500, and can cooperate with the acceleration sensor 511 to collect the user's 3D actions on the terminal 500. From the data collected by the gyro sensor 512, the processor 501 can realize functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control and inertial navigation.
The pressure sensor 513 may be arranged on the side frame of the terminal 500 and/or the lower layer of the touch display screen 505. When the pressure sensor 513 is arranged on the side frame, it can detect the user's grip signal on the terminal 500, and the processor 501 performs left/right hand recognition or quick operations according to the grip signal collected by the pressure sensor 513. When the pressure sensor 513 is arranged on the lower layer of the touch display screen 505, the processor 501 controls operable controls on the UI according to the user's pressure operation on the touch display screen 505. The operable controls include at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 514 is used to collect the user's fingerprint, and the processor 501 identifies the user's identity from the fingerprint collected by the fingerprint sensor 514, or the fingerprint sensor 514 identifies the user's identity from the collected fingerprint. When the user's identity is identified as trusted, the processor 501 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, changing settings and the like. The fingerprint sensor 514 may be arranged on the front, back or side of the terminal 500. When a physical button or manufacturer logo is arranged on the terminal 500, the fingerprint sensor 514 may be integrated with the physical button or manufacturer logo.
The optical sensor 515 is used to collect ambient light intensity. In one embodiment, the processor 501 can control the display brightness of the touch display screen 505 according to the ambient light intensity collected by the optical sensor 515: when the ambient light intensity is high, the display brightness of the touch display screen 505 is increased; when the ambient light intensity is low, the display brightness is decreased. In another embodiment, the processor 501 can also dynamically adjust the shooting parameters of the camera assembly 506 according to the ambient light intensity collected by the optical sensor 515.
The proximity sensor 516, also called a distance sensor, is generally arranged on the front panel of the terminal 500 and is used to collect the distance between the user and the front of the terminal 500. In one embodiment, when the proximity sensor 516 detects that the distance between the user and the front of the terminal 500 gradually decreases, the processor 501 controls the touch display screen 505 to switch from the screen-on state to the screen-off state; when the proximity sensor 516 detects that the distance gradually increases, the processor 501 controls the touch display screen 505 to switch from the screen-off state to the screen-on state.
Those skilled in the art can understand that the structure shown in Fig. 5 does not constitute a limitation on the terminal 500, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
An embodiment of the present invention further provides a memory management device, which includes a processor and a memory. The memory stores at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by the processor to implement the operations performed in the memory management method of the above embodiment.
An embodiment of the present invention further provides a computer-readable storage medium storing at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by a processor to implement the operations performed in the memory management method of the above embodiment.
Those of ordinary skill in the art can understand that all or part of the steps of the above embodiments can be completed by hardware, or by a program instructing relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc or the like.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (14)

1. A memory management method, applied in a terminal, characterized in that the method comprises:
determining at least one branch of a neural network according to connection relations of feature units in the neural network, the neural network comprising multiple network layers arranged in sequence, each network layer comprising at least one feature unit, and each branch being formed by connecting multiple feature units located in different network layers;
for each branch, allocating a first memory and a second memory for the branch according to sizes of output memories needed by the feature units in the branch, a first memory size being not smaller than a second memory size, and the first memory size and the second memory size being not smaller than any other memory size needed by the branch; and
using the first memory and the second memory alternately as input memories and output memories of the feature units in the branch.
2. The method according to claim 1, characterized in that before the allocating a first memory and a second memory for the branch according to the sizes of the output memories needed by the feature units in the branch, the method further comprises:
counting an amount of data each feature unit in the branch needs to output, to obtain the size of the output memory needed by each feature unit in the branch.
3. The method according to claim 1, characterized in that the allocating a first memory and a second memory for the branch according to the sizes of the output memories needed by the feature units in the branch comprises:
sorting the sizes of the output memories needed by the feature units in the branch in descending order;
taking the size ranked first as the first memory size and the size ranked second as the second memory size; and
allocating for the branch a first memory matching the first memory size and a second memory matching the second memory size.
4. The method according to claim 1, characterized in that the using the first memory and the second memory alternately as the input memories and output memories of the feature units in the branch comprises:
when the first memory size equals the second memory size, using the first memory as an output memory of a first feature unit in the branch and as an input memory of a second feature unit, and using the second memory as an output memory of the second feature unit;
wherein the first feature unit is any feature unit in the branch, and the second feature unit is a feature unit next to the first feature unit in the branch.
5. The method according to claim 1, characterized in that the using the first memory and the second memory alternately as the input memories and output memories of the feature units in the branch comprises:
when the first memory size is larger than the second memory size, determining a specified feature unit whose needed output memory size in the branch is the first memory size;
using the first memory as an output memory of the specified feature unit; and
according to position relations between the feature units in the branch and the specified feature unit, using the first memory and the second memory as input memories or output memories of the feature units in the branch other than the specified feature unit, so that the first memory and the second memory serve alternately as the input memories and output memories of the feature units in the branch.
6. The method according to any one of claims 1 to 5, characterized in that after the using the first memory and the second memory alternately as the input memories and output memories of the feature units in the branch, the method further comprises:
during computation based on the branch, storing data output by a first feature unit in an output memory of the first feature unit; and
extracting the data output by the first feature unit from the output memory of the first feature unit, computing on the extracted data through a second feature unit, and storing data output by the second feature unit in an output memory of the second feature unit;
wherein the first feature unit is any feature unit in the branch, the second feature unit is a feature unit next to the first feature unit in the branch, and the output memory of the first feature unit is the same as an input memory of the second feature unit.
7. A memory management device, applied in a terminal, characterized in that the device comprises:
a determining module, configured to determine at least one branch of a neural network according to connection relations of feature units in the neural network, the neural network comprising multiple network layers arranged in sequence, each network layer comprising at least one feature unit, and each branch being formed by connecting multiple feature units located in different network layers; and
a distribution module, configured to, for each branch, allocate a first memory and a second memory for the branch according to sizes of output memories needed by the feature units in the branch, a first memory size being not smaller than a second memory size, and the first memory size and the second memory size being not smaller than any other memory size needed by the branch;
the distribution module being further configured to use the first memory and the second memory alternately as input memories and output memories of the feature units in the branch.
8. The device according to claim 7, characterized in that the device further comprises:
a statistical module, configured to count an amount of data each feature unit in the branch needs to output, to obtain the size of the output memory needed by each feature unit in the branch.
9. The device according to claim 7, characterized in that the distribution module comprises:
a sorting unit, configured to sort the sizes of the output memories needed by the feature units in the branch in descending order; and
a first allocation unit, configured to take the size ranked first as the first memory size and the size ranked second as the second memory size, and allocate for the branch a first memory matching the first memory size and a second memory matching the second memory size.
10. The device according to claim 7, characterized in that the distribution module comprises:
a second allocation unit, configured to, when the first memory size equals the second memory size, use the first memory as an output memory of a first feature unit in the branch and as an input memory of a second feature unit, and use the second memory as an output memory of the second feature unit;
wherein the first feature unit is any feature unit in the branch, and the second feature unit is a feature unit next to the first feature unit in the branch.
11. The device according to claim 7, characterized in that the distribution module comprises:
a third allocation unit, configured to, when the first memory size is larger than the second memory size, determine a specified feature unit whose needed output memory size in the branch is the first memory size, use the first memory as an output memory of the specified feature unit, and, according to position relations between the feature units in the branch and the specified feature unit, use the first memory and the second memory as input memories or output memories of the feature units in the branch other than the specified feature unit, so that the first memory and the second memory serve alternately as the input memories and output memories of the feature units in the branch.
12. The device according to any one of claims 7 to 11, characterized in that the device further comprises:
a first computing module, configured to, during computation based on the branch, store data output by a first feature unit in an output memory of the first feature unit; and
a second computing module, configured to extract the data output by the first feature unit from the output memory of the first feature unit, compute on the extracted data through a second feature unit, and store data output by the second feature unit in an output memory of the second feature unit;
wherein the first feature unit is any feature unit in the branch, the second feature unit is a feature unit next to the first feature unit in the branch, and the output memory of the first feature unit is the same as an input memory of the second feature unit.
13. A memory management device, characterized in that the memory management device comprises a processor and a memory, the memory storing at least one instruction, at least one program, a code set or an instruction set, wherein the instruction, the program, the code set or the instruction set is loaded and executed by the processor to implement the operations performed in the memory management method according to any one of claims 1 to 6.
14. A computer-readable storage medium, characterized in that at least one instruction, at least one program, a code set or an instruction set is stored in the computer-readable storage medium, wherein the instruction, the program, the code set or the instruction set is loaded and executed by a processor to implement the operations performed in the memory management method according to any one of claims 1 to 6.
CN201810064209.2A 2018-01-23 2018-01-23 Memory management method, device and storage medium Active CN108304265B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810064209.2A CN108304265B (en) 2018-01-23 2018-01-23 Memory management method, device and storage medium

Publications (2)

Publication Number Publication Date
CN108304265A (en) 2018-07-20
CN108304265B CN108304265B (en) 2022-02-01

Family

ID=62866254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810064209.2A Active CN108304265B (en) 2018-01-23 2018-01-23 Memory management method, device and storage medium

Country Status (1)

Country Link
CN (1) CN108304265B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3816762B2 (en) * 2001-06-05 2006-08-30 独立行政法人科学技術振興機構 Neural network, neural network system, and neural network processing program
CN106447035A (en) * 2015-10-08 2017-02-22 上海兆芯集成电路有限公司 Processor with variable rate execution unit
US20170228643A1 (en) * 2016-02-05 2017-08-10 Google Inc. Augmenting Neural Networks With Hierarchical External Memory
CN105809248A (en) * 2016-03-01 2016-07-27 中山大学 Method for configuring DANN onto SDN and an interaction method between them

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LINNAN WANG et al.: "SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks", arXiv:1801.04380v1 *
MA1998: "Memory Usage Calculation for Convolutional Neural Networks" (卷积神经网络内存占用计算), CSDN Blog *
MINSOO RHU et al.: "vDNN: Virtualized Deep Neural Networks for Scalable, Memory-Efficient Neural Network Design", IEEE Xplore *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110866589B (en) * 2018-08-10 2023-06-30 阿里巴巴(中国)有限公司 Operation method, device and framework of deep neural network model
CN110866589A (en) * 2018-08-10 2020-03-06 高德软件有限公司 Operation method, device and framework of deep neural network model
CN109491784A (en) * 2018-10-18 2019-03-19 北京旷视科技有限公司 Method and device for reducing memory occupation amount, electronic equipment and readable storage medium
CN109491784B (en) * 2018-10-18 2021-01-22 北京旷视科技有限公司 Method and device for reducing memory occupation amount, electronic equipment and readable storage medium
US10867399B2 (en) * 2018-12-02 2020-12-15 Himax Technologies Limited Image processing circuit for convolutional neural network
TWI694413B (en) * 2018-12-12 2020-05-21 奇景光電股份有限公司 Image processing circuit
CN110058943A (en) * 2019-04-12 2019-07-26 三星(中国)半导体有限公司 Memory optimization method and device for electronic device
CN110058943B (en) * 2019-04-12 2021-09-21 三星(中国)半导体有限公司 Memory optimization method and device for electronic device
WO2020253117A1 (en) * 2019-06-19 2020-12-24 深圳云天励飞技术有限公司 Data processing method and apparatus
CN112783640A (en) * 2019-11-11 2021-05-11 上海肇观电子科技有限公司 Method and apparatus for pre-allocating memory, circuit, electronic device and medium
CN112783640B (en) * 2019-11-11 2023-04-04 上海肇观电子科技有限公司 Method and apparatus for pre-allocating memory, circuit, electronic device and medium
CN112862085A (en) * 2019-11-27 2021-05-28 杭州海康威视数字技术股份有限公司 Storage space optimization method and device
CN112862085B (en) * 2019-11-27 2023-08-22 杭州海康威视数字技术股份有限公司 Storage space optimization method and device
CN110750363A (en) * 2019-12-26 2020-02-04 中科寒武纪科技股份有限公司 Computer storage management method and device, electronic equipment and storage medium
CN112256441A (en) * 2020-12-23 2021-01-22 上海齐感电子信息科技有限公司 Memory allocation method and device for neural network inference
CN112286694A (en) * 2020-12-24 2021-01-29 瀚博半导体(上海)有限公司 Hardware accelerator memory allocation method and system based on deep learning computing network
CN112286694B (en) * 2020-12-24 2021-04-02 瀚博半导体(上海)有限公司 Hardware accelerator memory allocation method and system based on deep learning computing network
CN112346877A (en) * 2021-01-11 2021-02-09 瀚博半导体(上海)有限公司 Memory allocation method and system for effectively accelerating deep learning calculation
CN115293337B (en) * 2022-10-09 2022-12-30 深圳比特微电子科技有限公司 Method and device for constructing neural network, computing equipment and storage medium
CN115293337A (en) * 2022-10-09 2022-11-04 深圳比特微电子科技有限公司 Method and device for constructing neural network, computing equipment and storage medium

Also Published As

Publication number Publication date
CN108304265B (en) 2022-02-01

Similar Documents

Publication Publication Date Title
CN108304265A (en) Memory management method, device and storage medium
CN110210571B (en) Image recognition method and device, computer equipment and computer readable storage medium
CN109299315A (en) Multimedia resource classification method, device, computer equipment and storage medium
CN109800877A (en) Parameter adjustment method, device and equipment of a neural network
CN109284445B (en) Network resource recommendation method and device, server and storage medium
CN108762881B (en) Interface drawing method and device, terminal and storage medium
CN110222789A (en) Image recognition method and storage medium
CN109840584B (en) Image data classification method and device based on convolutional neural network model
CN111569435B (en) Ranking list generation method, system, server and storage medium
CN110163160A (en) Face recognition method, device, equipment and storage medium
CN110673944B (en) Method and device for executing task
CN110147347A (en) Chip for matrix processing, matrix processing method, device and storage medium
CN109522146A (en) Method, device and storage medium for performing abnormality testing on a client
CN110705614A (en) Model training method and device, electronic equipment and storage medium
CN110535890A (en) Method and device for file uploading
CN110490389A (en) Click-through rate prediction method, device, equipment and medium
CN112560435B (en) Text corpus processing method, device, equipment and storage medium
CN110166275A (en) Information processing method, device and storage medium
CN111695981A (en) Resource transfer method, device and storage medium
CN109036463A (en) Method, device and storage medium for obtaining difficulty information of a song
CN112766389B (en) Image classification method, training method, device and equipment of image classification model
CN109345636A (en) Method and device for obtaining a predicted face image
CN111641853B (en) Multimedia resource loading method and device, computer equipment and storage medium
CN115293841A (en) Order scheduling method, device, equipment and storage medium
CN110109813A (en) Information determination method, device, terminal and storage medium for GPU performance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant