CN116611479B - Data processing method, device, storage medium and chip - Google Patents


Info

Publication number
CN116611479B
CN116611479B (application CN202310903073.0A)
Authority
CN
China
Prior art keywords
feature
data
subsets
processing model
data set
Prior art date
Legal status
Active
Application number
CN202310903073.0A
Other languages
Chinese (zh)
Other versions
CN116611479A (en)
Inventor
夏立超
丁维浩
陈健明
赵东宇
张法朝
牟小峰
唐剑
Current Assignee
Midea Robozone Technology Co Ltd
Original Assignee
Midea Robozone Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Midea Robozone Technology Co Ltd
Priority application: CN202310903073.0A
Publication of CN116611479A
Application granted
Publication of CN116611479B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638: Organizing or formatting or addressing of data
    • G06F 3/0644: Management of space entities, e.g. partitions, extents, pools
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a data processing method, a data processing device, a storage medium, and a chip, relating to the technical field of model processing. The data processing method includes: when a feature data set is acquired, storing the feature data set into at least two memory areas, with at least two groups of first feature subsets stored respectively in the at least two memory areas; reading the first feature subsets in the at least two memory areas and sequentially inputting them into a first processing model, so that the first processing model sequentially outputs second feature subsets; and updating the feature data set with the second feature subsets. The technical solution provided by the application can improve the efficiency of data processing.

Description

Data processing method, device, storage medium and chip
Technical Field
The present application relates to the field of model processing technologies, and in particular, to a data processing method, device, storage medium, and chip.
Background
A data processing model typically employs a TCN (Temporal Convolutional Network) architecture.
In the related art, when a data processing model with a TCN structure extracts data from a feature data set and writes model output back, the operations are performed on non-contiguous memory: the model must jump between memory regions to read the data, so data processing efficiency is low.
Disclosure of Invention
The present application aims to solve one of the technical problems existing in the prior art or related technologies.
To this end, a first aspect of the application proposes a data processing method.
A second aspect of the application proposes a data processing apparatus.
A third aspect of the present application proposes a readable storage medium.
A fourth aspect of the application proposes a computer program product.
A fifth aspect of the application proposes a chip.
In view of this, according to a first aspect of the present application, a data processing method is proposed, including: when a feature data set is acquired, storing the feature data set into at least two memory areas, with at least two groups of first feature subsets stored respectively in the at least two memory areas; reading the first feature subsets in the at least two memory areas and sequentially inputting them into a first processing model, so that the first processing model sequentially outputs second feature subsets; and updating the feature data set with the second feature subsets.
In this technical solution, a data processing method is provided that copies data within contiguous memory areas both when selecting the feature data input into the first processing model and when handling the feature data output by the first processing model. This improves the efficiency of selecting the first feature subsets from the feature data set and of updating the feature data set with the second feature subsets, and thereby improves overall data processing efficiency.
In this technical solution, the first processing model is used to perform inference on the first feature subsets of the feature data set and output second feature subsets. After the feature data set is received, it is grouped into multiple groups of first feature subsets, which are stored in different memory areas, where two adjacent memory areas are contiguous. Each group of first feature subsets can be input into the first processing model for processing, and a corresponding second feature subset is obtained. After the first processing model outputs a second feature subset, the acquired feature data set is updated with it. Inputting the first feature subsets into the first processing model and updating the feature data set with the second feature subsets output by the model both require copying and reading data; because the first feature subsets are stored in contiguous memory areas, reading the data and writing it back are guaranteed to happen on contiguous memory, which improves the read and write-back efficiency of the feature data and thus the overall efficiency of data processing.
Specifically, the first processing model is a non-dilated convolution model; when a feature data set is acquired, part of the feature data in the set must be extracted and input into the first processing model. In this technical solution, the received feature data set is grouped in advance, so that each group of first feature subsets is guaranteed to be a set of feature data that can be input directly into the first processing model; when the first feature subsets are input, it suffices to input the subsets from the different memory areas into the first processing model in sequence. Each group of first feature subsets is stored in its own memory area, and two adjacent memory areas are contiguous, so the first feature subsets can be input into the first processing model, and the second feature subsets output after the first processing model finishes its inference on the first feature subsets can be copied sequentially into the corresponding memory areas.
In this technical solution, after the first feature subsets are input into the first processing model, the model processes them and outputs the second feature subsets. The first feature subsets in the feature data set are then replaced by the second feature subsets, and the data processing result can be determined from the updated feature data set, which speeds up the inference of the first processing model while preserving the accuracy of the inference result.
In this technical solution, the feature data set is grouped into multiple groups of first feature subsets that can be input directly into the first processing model, and the groups are stored in different memory areas. As a result, inputting the first feature subsets into the first processing model and updating the feature data set with the second feature subsets output by the model can both be performed on contiguous memory areas, with no need to jump around memory when reading data; this improves the copying efficiency of the feature data and the overall efficiency of data processing.
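The flow above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: `process_feature_set`, `subset_size`, and `model` are hypothetical names, and the model is assumed to map a subset to an equal-sized output subset.

```python
import numpy as np

def process_feature_set(feature_set, subset_size, model):
    """Minimal sketch of the claimed pipeline: the feature set is laid
    out as contiguous first-feature subsets, each subset is run through
    the model in order, and the model's output (the second feature
    subset) overwrites the subset in place."""
    # Split into equal-sized subsets stored back-to-back, so adjacent
    # subsets occupy adjacent, contiguous memory regions.
    n = len(feature_set) // subset_size
    buffer = np.ascontiguousarray(
        feature_set[: n * subset_size].reshape(n, subset_size))
    for i in range(n):                # sequential, contiguous reads
        buffer[i] = model(buffer[i])  # contiguous write-back as well
    return buffer.reshape(-1)
```

For example, with `feature_set = np.arange(8.0)`, `subset_size = 2`, and a toy model that doubles its input, every read and write touches one contiguous row of the buffer, which is the memory-layout property the solution relies on.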
In some embodiments, optionally, in a case that the feature data set is acquired, storing the feature data set in at least two memory areas includes: grouping and processing the feature data sets to obtain at least two first feature subsets; and storing each of the at least two first feature subsets into at least two memory areas, wherein the at least two memory areas are continuous memory areas.
According to the technical scheme, the characteristic data sets are subjected to grouping processing, so that a plurality of groups of grouped first characteristic subsets can be obtained, and each group of first characteristic subsets can be directly input into a first processing model for reasoning. And the plurality of groups of first feature subsets are respectively stored in different memory areas, and the adjacent first feature subsets are stored in adjacent and continuous memory areas, so that the process of selecting the first feature subsets and updating the first feature subsets through the second feature subsets can be carried out on the continuous memory areas.
In the technical scheme, the feature data set comprises a plurality of feature data, and a plurality of groups of first feature subsets can be obtained by grouping the plurality of feature data, wherein the number of the feature data in each group of first feature subsets is equal, and the same feature data is not included among the plurality of groups of first feature subsets.
According to the technical scheme, the plurality of groups of first feature subsets are obtained by grouping the feature data sets, and then the plurality of groups of first feature subsets are stored in the continuous memory area, so that the first feature subsets can be directly input into the first processing model for reasoning, copying can be performed on the continuous memory when the feature data sets are updated through the second feature subsets and the first feature subsets are read, and the data copying efficiency is improved.
In some embodiments, optionally, grouping the feature data set to obtain at least two first feature subsets includes: acquiring operator parameters of a convolution operator in the first processing model; determining, according to the operator parameters, the data quantity in the first feature subset and the selection interval of the first-feature-subset data in the feature data set; and grouping the feature data set according to the data quantity and the selection interval.
In the technical solution of the application, the first processing model includes a convolution operator, and a grouping rule for grouping the feature data set can be determined from the operator parameters of that convolution operator. The feature data set can be grouped by this rule, and each first feature subset obtained by grouping can be input directly into the first processing model.
In this technical solution, the grouping rule includes a selection interval and a data quantity: the selection interval is the number of data elements skipped between two adjacently selected elements within a first feature subset, and the data quantity is the number of feature data elements in each selected first feature subset.
According to the technical scheme, the feature data sets are grouped according to the data quantity in each group of the first feature subsets and the data interval between the feature data in the first feature subsets selected twice, so that a plurality of groups of the first feature subsets are obtained, and the first feature subsets obtained through grouping can be directly input into a first processing model for processing by storing the plurality of groups of the first feature subsets in a continuous memory area.
In some embodiments, optionally, the operator parameters include a dilation factor and a convolution kernel size; determining, according to the operator parameters, the data quantity in the first feature subset and the selection interval of the first-feature-subset data in the feature data set includes: determining the selection interval according to the dilation factor; and determining the data quantity according to the convolution kernel size.
In the technical solution of the application, the operator parameters of the convolution operator in the first processing model include a dilation factor and a convolution kernel size. The selection interval in the grouping rule, i.e., the number of data elements skipped between two adjacently selected feature data elements, is determined by the dilation factor. The data quantity in the grouping rule, i.e., the number of data elements in each first feature subset, is determined by the convolution kernel size.
In this technical solution, the data selection information can be determined by acquiring the dilation factor and convolution kernel size of the first convolution operator. After the second convolution operator in the second processing model is replaced by the first convolution operator, the selection interval between the multiple first feature subsets in the feature data set and the number of data elements in each selected first feature subset are determined according to the data selection information. This improves the accuracy of the first feature subsets input into the optimized first processing model and, while improving inference efficiency, ensures that the inference of the first processing model matches that of the second processing model.
In this technical solution, the selection interval and the data quantity are determined according to the convolution kernel size and the dilation factor of the convolution operator in the first processing model. Before the feature data set is input into the first processing model, the first feature subsets are selected from it according to the data selection information; inputting these subsets into the first processing model improves the accuracy of the input while also improving processing efficiency.
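As a concrete illustration of this grouping rule (a sketch of the stated mapping; `grouping_rule` and `select_first_subset` are hypothetical helper names, not from the patent):

```python
import numpy as np

def grouping_rule(dilation_factor, kernel_size):
    """Map the convolution operator's parameters to the grouping rule:
    the dilation factor gives the selection interval, and the
    convolution kernel size gives the per-subset data quantity."""
    selection_interval = dilation_factor
    data_quantity = kernel_size
    return selection_interval, data_quantity

def select_first_subset(features, start, selection_interval, data_quantity):
    """Select one first-feature subset: `data_quantity` elements
    beginning at `start`, spaced `selection_interval` apart."""
    idx = start + selection_interval * np.arange(data_quantity)
    return features[idx]
```

With a dilation factor of 2 and a kernel size of 3, each subset holds 3 elements taken every 2 positions, e.g. indices 1, 3, 5 of the feature sequence.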
In some embodiments, optionally, storing each of the at least two first feature subsets into at least two memory areas includes: acquiring the reading sequence of at least two memory areas; acquiring a first arrangement sequence of at least two groups of first feature subsets in a feature data set; and storing at least two groups of first feature subsets into at least two memory areas in sequence according to the first arrangement sequence and the reading sequence so as to enable the reading sequence to be matched with the first arrangement sequence.
In the technical scheme of the application, before a plurality of groups of first feature subsets are stored in corresponding memory areas, the plurality of groups of first feature subsets are ordered, the reading sequence among the plurality of memory areas is determined, the plurality of groups of first feature subsets are stored in the plurality of memory areas according to the reading sequence and the first arrangement sequence of the first feature subsets, and the corresponding first feature subsets can be read according to the first arrangement sequence by reading according to the reading sequence of the plurality of continuous memory areas.
In the technical scheme, when the first feature subset is acquired, the first feature subset in the memory area is sequentially read according to the reading sequence, so that the process of reading the first feature subset is kept on a continuous memory. The first feature subsets are sequentially stored in the memory area according to the first arrangement order, so that when the memory area is read according to the reading order, the corresponding first feature subsets can be read according to the first arrangement order.
According to the technical scheme, the first feature subsets are stored in the memory areas according to the first arrangement sequence and the reading sequence of the memory areas, so that the continuity of reading the first feature subsets and the accuracy of inputting the first feature subsets into the first processing model can be ensured.
In some embodiments, optionally, updating the feature data set by the second feature subset includes: acquiring a second arrangement sequence of at least two second feature subsets under the condition that the first processing model outputs the at least two second feature subsets; and sequentially inputting at least two second feature subsets into at least two memory areas according to a second arrangement order to replace the corresponding first feature subsets.
In the technical scheme of the application, after the first processing model outputs a plurality of second feature subsets, a second arrangement sequence of the plurality of second feature subsets is acquired and is input into the memory area according to the second arrangement sequence, so that the first feature subsets stored in the original memory area are replaced, the feature data set is updated, and the updated feature data set is obtained.
In the technical scheme, when the feature data set is updated, the feature data set is updated according to a first-in first-out rule, namely, a first feature subset which is firstly input into a first processing model for processing is preferentially replaced by a corresponding second feature subset. The second ranking order may be an order in which the second feature subset was output in the first processing model.
In the technical solution of the application, the second feature subsets are input into the feature data set in the second arrangement order, so that the feature data set is updated and the second feature subsets are guaranteed to be written back in contiguous memory areas.
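The first-in-first-out replacement described above can be sketched as a small queue over the subset slots (a hypothetical helper; the actual write-back would target the contiguous memory areas directly):

```python
from collections import deque

def fifo_update(first_subsets, second_subsets):
    """Update the feature data set first-in-first-out: the first
    feature subset that entered the model is the first one replaced
    by a second feature subset, so the write-back order matches the
    model's output order."""
    pending = deque(range(len(first_subsets)))  # input order
    for second in second_subsets:               # output order
        slot = pending.popleft()
        first_subsets[slot] = second            # replace in place
    return first_subsets
```

For instance, if three subsets went in and the model has so far emitted two outputs, slots 0 and 1 are replaced and slot 2 still holds its original first feature subset.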
In some embodiments, optionally, in a case where the first processing model outputs at least two second feature subsets, obtaining a second ranking of the at least two second feature subsets includes: acquiring output moments of at least two second feature subsets; and sequencing at least two second feature subsets according to the output time to obtain a second sequence.
In this technical solution, the at least two second feature subsets are ordered according to the times at which the first processing model output them; this yields the second arrangement order and ensures that the second feature subsets update the feature data set in that order.
Specifically, the order of the plurality of first feature subsets input to the first processing model matches the order of the plurality of second feature subsets output by the first processing model.
In the technical scheme of the application, the plurality of second feature subsets are ordered according to the output time of the first processing model, so that a second arrangement sequence is obtained, and the second feature subsets are updated to the feature data set according to the second arrangement sequence, so that the accuracy of updating the feature data set is improved.
In some embodiments, optionally, after the updating process is performed on the feature data set by using the second feature subset, the method further includes: acquiring preset processing times of a first processing model; and stopping updating the feature data set when the updating times of the feature data set reach the preset processing times.
According to the technical scheme, the number of times of updating the feature data set according to the second feature subset is counted, and under the condition that the counted number of times of updating reaches the preset processing number, the feature data set is determined to be updated, the process of updating the feature data set is stopped, and the process of extracting the first feature subset in the feature data set is synchronously stopped.
In the technical scheme, the preset processing times are preset times threshold values, and the updating times of the characteristic data set are reasoning times of the first processing model. The number of times of reasoning of the first processing model can be determined by acquiring the number of times of updating the feature data set, and when the number of times of reasoning reaches the preset number of times of processing, the feature data set is determined to be updated, so that the updated feature data set is obtained.
In this technical solution, the obtained update count of the feature data set is compared with the preset processing count; when the update count reaches the preset processing count, the feature data set is determined to be fully updated, the input of first feature subsets into the first processing model is stopped, and data processing efficiency is further improved.
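The stopping condition can be sketched as a simple counter compared against the preset processing count (hypothetical names; one model inference is counted per update):

```python
def update_until_limit(first_subsets, model, preset_count):
    """Update the feature data set once per model inference, and stop
    as soon as the update count reaches the preset processing count."""
    updates = 0
    for i in range(len(first_subsets)):
        if updates >= preset_count:
            break                     # stop updating the feature set
        first_subsets[i] = model(first_subsets[i])
        updates += 1
    return first_subsets, updates
```

With a preset count of 2 and four subsets, only the first two subsets are updated before the loop stops.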
In some aspects, optionally, the feature data set includes any one of: an audio feature set, an image feature set, and a text feature set.
In the technical scheme of the application, the first processing model can be any one of a voice recognition model, a text processing model and an image processing model, namely, different characteristic data sets can be processed by selecting different first processing models, so that the method is suitable for different application scenes.
In this technical solution, the first processing model may be a speech recognition model. In that case, the feature data set includes an audio feature set. By deploying the speech recognition model in a home device, functions such as voice control and voice wake-up of the device are realized.
In this technical solution, the first processing model may be a text processing model, and the feature data set includes a text feature set when the first processing model is the text processing model. By deploying the text processing model into the home equipment, the functions of machine translation and the like of the home equipment on text contents can be realized.
In this technical solution, the first processing model may be an image processing model, and the feature data set includes an image feature set when the first processing model is the image processing model. By deploying the image processing model into the cooking device, the cooking device can automatically identify food material images.
In the technical solution of the application, the first processing model may be a time-series convolutional network model, i.e., a TCN (Temporal Convolutional Network) model.
Possible application scenarios of the data processing method provided in the technical solution of the application include speech wake-up, machine translation, and other tasks related to time series.
In some embodiments, optionally, in a case where the feature data set includes an audio feature set, the data processing method includes: under the condition that the audio feature set is acquired, storing the audio feature set into at least two memory areas, wherein at least two first feature subsets are respectively stored in the at least two memory areas; reading first feature subsets in at least two memory areas, and sequentially inputting the first feature subsets into a voice recognition model so that the voice recognition model sequentially outputs second feature subsets; updating the audio feature set through the second feature subset to obtain a target feature set; and outputting a voice recognition result according to the target feature set.
In the technical solution of the present application, when the feature data set includes an audio feature set, the first processing model may be a speech recognition model. By deploying the speech recognition model in a home device, functions such as voice control and voice wake-up of the device are realized.
In the technical scheme of the application, after the audio feature set is received, a plurality of groups of first feature subsets are obtained by grouping the audio feature set, the groups of first feature subsets obtained by grouping are respectively stored in different memory areas, and two adjacent memory areas are continuous memory areas. Each group of first feature subsets can be input into a voice recognition model to be processed, and corresponding second feature subsets can be obtained after processing. After the speech recognition model outputs the second feature subset, the acquired audio feature set is updated by the second feature subset. In the process of inputting the first feature subset into the voice recognition model and updating the audio feature set according to the second feature subset output by the voice recognition model, data copying and reading are needed, and as the area stored by the first feature subset is a continuous memory area, the process of reading and returning data can be ensured to be carried out on the continuous memory area, so that the reading and returning efficiency of the audio feature is improved, and the overall efficiency of data processing is further improved.
A temporary array is maintained and updated via the feature subsets input into the first processing model and those output by it; when the number of maintenance updates reaches the preset count, i.e., when the number of inference passes of the first processing model over the feature data set reaches the preset count, the first processing model is determined to have completed the corresponding inference steps.
In some embodiments, optionally, before reading the first feature subset in the at least two memory areas and sequentially inputting the first feature subset into the first processing model, the method further includes: under the condition that a second processing model is acquired, generating a first convolution operator based on model parameters in the second processing model; and replacing the second convolution operator in the second processing model by the first convolution operator to obtain the first processing model.
According to the technical scheme, the second convolution operator in the second processing model is replaced by the first convolution operator, and the first processing model is configured with corresponding data selection information, so that the first processing model with smaller input data size can be obtained. When the first processing model is deployed to the edge equipment with lower computing power, the data processing speed of the first processing model in the edge equipment can be improved.
In some technical solutions, optionally, the second convolution operator is a convolution operator in the second processing model and may be a dilated (atrous) convolution operator, while the first convolution operator is a non-dilated convolution operator. The first processing model is obtained by replacing the second convolution operator with the first convolution operator and adjusting the input data size of the second processing model, which completes the optimization of the second processing model.
In some embodiments, optionally, the second processing model is the neural network model before optimization: its input data size is larger, and it performs selection on the received data set itself. The first processing model is the neural network model obtained by optimizing the second processing model: its input data size is smaller than that of the second processing model, and it does not perform data selection itself. Data selection information for selecting from the feature data set is determined based on the second processing model, and the feature data set is selected from according to this information before the feature data is input into the first processing model. The pre-selected feature subsets are then input into the first processing model for processing; because the input data size of the first processing model is smaller, its inference is faster than that of the second processing model on home devices with limited computing power.
In the technical scheme of the application, the input data size of the second processing model is larger than that of the first processing model, and the input data size of the convolution operator in the second processing model is the same as that of the convolution operator in the first processing model, so that the model reasoning speed is improved, and the equivalent calculation result of the second processing model and the first processing model is ensured.
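The claimed equivalence, namely that a plain convolution over a pre-selected subset reproduces a dilated convolution over the full input, can be checked numerically. This is a sketch using NumPy dot products, not the patent's operators; `dilated_conv_at` and `plain_conv` are hypothetical names:

```python
import numpy as np

def dilated_conv_at(x, weights, dilation, start):
    """One output of a dilated (second) convolution at position `start`:
    the kernel taps are spaced `dilation` apart in the full input."""
    idx = start + dilation * np.arange(len(weights))
    return float(np.dot(x[idx], weights))

def plain_conv(subset, weights):
    """One output of a non-dilated (first) convolution over a subset
    that was pre-selected with the same spacing as the dilation."""
    return float(np.dot(subset, weights))
```

For `x = np.arange(8.0)`, a length-3 all-ones kernel, dilation 2, and start 0, the dilated path sums x[0] + x[2] + x[4] = 6, and the plain convolution over the pre-selected subset x[[0, 2, 4]] gives the same value, illustrating why the two models can compute equivalent results.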
According to the technical scheme, the second processing model is optimized into the first processing model, the size of the input data input to the first processing model is reduced, and the step of selecting the feature subset from the feature data set is executed before the model reasoning step, so that the reasoning efficiency of the first processing model can be improved.

In any of the foregoing solutions, the first processing model includes: a time-series convolution network model.
In the technical solution of the present application, the first processing model may be a time-series convolutional network model, i.e., a TCN (Temporal Convolutional Network) model.
According to a second aspect of the present application there is provided a data processing apparatus comprising: the storage module is used for storing the feature data set into at least two memory areas under the condition that the feature data set is acquired, and at least two groups of first feature subsets are respectively stored in the at least two memory areas; the reading module is used for reading the first feature subsets in at least two memory areas and sequentially inputting the first feature subsets into the first processing model so that the first processing model sequentially outputs the second feature subsets; and the updating module is used for updating the feature data set through the second feature subset.
According to the technical scheme, the data processing device is provided, and can copy data in a continuous memory area in the process of selecting the characteristic data input into the first processing model and outputting the characteristic data by the first processing model, so that the efficiency of selecting a first characteristic subset in the characteristic data set and updating the characteristic data set by a second characteristic subset is improved, and the data processing efficiency is further improved.
According to the technical scheme, the characteristic data sets are grouped to obtain the corresponding multiple groups of first characteristic subsets which can be directly input into the first processing model, and the multiple groups of first characteristic subsets are respectively stored in different memory areas, so that the processes of inputting the first characteristic subsets into the first processing model and updating the characteristic data sets according to the second characteristic subsets output by the first processing model can be performed on continuous memory areas, data reading is not needed to be carried out in a jumping mode, the copying efficiency of the characteristic data is improved, and the overall efficiency of data processing is improved.
According to a third aspect of the present application there is provided a readable storage medium having stored thereon a program or instructions which when executed by a processor performs the steps of the data processing method as in any of the above-mentioned first aspects. Therefore, the method has all the advantages of the method for processing data in any of the above first aspects, and will not be described in detail herein.
According to a fourth aspect of the present application a computer program product is presented which, when executed by a processor, implements the steps of the data processing method as in any of the above-mentioned first aspects. Therefore, the method has all the advantages of the method for processing data in any of the above first aspects, and will not be described in detail herein.
According to a fifth aspect of the present application there is provided a chip comprising a program or instructions for implementing the steps of the data processing method as in any of the above-mentioned first aspects when the chip is running. Therefore, the method has all the advantages of the method for processing data in any of the above first aspects, and will not be described in detail herein.
Additional aspects and advantages of the application will be set forth in part in the description which follows, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
FIG. 1 illustrates one of the schematic flow diagrams of a data processing method provided in some embodiments of the application;
FIG. 2 illustrates a schematic diagram of the connection of operators in a first process model provided in some embodiments of the application;
FIG. 3 illustrates a second schematic flow diagram of a data processing method provided in some embodiments of the application;
FIG. 4 illustrates a third schematic flow diagram of a data processing method provided in some embodiments of the application;
FIG. 5 illustrates one of the schematics of a feature data set provided by some embodiments of the application;
FIG. 6 illustrates a schematic diagram of a first feature subset provided by some embodiments of the application;
FIG. 7 illustrates a fourth schematic flow diagram of a data processing method provided in some embodiments of the application;
FIG. 8 illustrates a second schematic diagram of a feature data set provided by some embodiments of the application;
FIG. 9 illustrates a block diagram of a data processing apparatus according to some embodiments of the application.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application will be rendered by reference to the accompanying drawings and the detailed description below. It should be noted that, without conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, however, the present application may be practiced in other ways than those described herein, and therefore the scope of the present application is not limited to the specific embodiments disclosed below.
Data processing methods, apparatuses, storage media, and chips according to some embodiments of the present application are described below with reference to fig. 1 to 9.
In one embodiment of the present application, as shown in fig. 1, a data processing method is provided, including:
Step 102, storing the feature data set into at least two memory areas under the condition that the feature data set is acquired;
at least two first feature subsets are respectively stored in at least two memory areas;
Step 104, reading the first feature subsets in the at least two memory areas, and sequentially inputting the first feature subsets into a first processing model so that the first processing model sequentially outputs second feature subsets;
Step 106, updating the feature data set through the second feature subset.
The embodiment of the application provides a data processing method, which can copy data in a continuous memory area in the process of selecting the characteristic data input into a first processing model and outputting the characteristic data by the first processing model, thereby improving the efficiency of selecting a first characteristic subset in the characteristic data set and updating the characteristic data set through a second characteristic subset and further improving the efficiency of data processing.
In some embodiments, optionally, the first processing model is used to infer a first feature subset in the feature dataset and output a second feature subset. After the feature data set is received, a plurality of groups of first feature subsets are obtained through grouping the feature data set, the groups of first feature subsets obtained through grouping are respectively stored in different memory areas, and two adjacent memory areas are continuous memory areas. Each group of first feature subsets can be input into a first processing model to be processed, and corresponding second feature subsets can be obtained after processing. After the first processing model outputs the second feature subset, the acquired feature data set is updated by the second feature subset. In the process of inputting the first feature subset into the first processing model and updating the feature data set according to the second feature subset output by the first processing model, data copying and reading are needed, and as the area stored by the first feature subset is a continuous memory area, the process of reading and returning data can be ensured to be carried out on the continuous memory area, so that the reading and returning efficiency of the feature data is improved, and the overall efficiency of data processing is further improved.
Specifically, the first processing model is a non-dilated convolution model, and when a feature data set is acquired, part of the feature data in the feature data set needs to be extracted and input into the first processing model. In the embodiment of the application, the received feature data set is grouped in advance so that each group of first feature subsets is feature data that can be directly input into the first processing model; when inputting into the first processing model, it is therefore only necessary to sequentially input the first feature subsets from the different memory areas. Each group of first feature subsets is stored in a different memory area, and two adjacent memory areas are continuous memory areas, so that the first feature subsets can be input into the first processing model, and the second feature subsets output after the first processing model finishes reasoning on the first feature subsets can be sequentially copied into the corresponding memory areas.
As shown in fig. 2, the first processing model includes a FullyConnected operator, an Add operator, a Concatenation operator, and a DepthwiseConv2D operator. The output channel of the FullyConnected operator is connected to an input channel of the Add operator; one output channel of the Add operator is connected to an input channel of the Concatenation operator, and the other output channel of the Add operator serves as the output channel of the first processing model. The other input channel of the Concatenation operator serves as the input channel of the first processing model, and the output end of the Concatenation operator is connected to the DepthwiseConv2D operator. The FullyConnected operator is a fully connected operator, the Add operator is a superposition operator, the Concatenation operator is a concatenation (tandem) operator, and the DepthwiseConv2D operator in the first processing model is a convolution operator. As shown in fig. 2, the data size of the input data of the first processing model is 1 × 1 × 7 × 256, i.e., the data size of the first feature subset is 1 × 1 × 7 × 256; the data size of the input data of the convolution operator is 1 × 1 × 8 × 256; and the data size of the output data of the first processing model is 1 × 1 × 1 × 256, i.e., the data size of the second feature subset is 1 × 1 × 1 × 256. The data format is N × H × W × C, where N is the batch, H is the height, W is the width, and C is the number of channels.
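As a hedged illustration of the data sizes above, the tensor shapes around the Concatenation and DepthwiseConv2D operators can be sketched as follows; the wiring (a cached 1 × 1 × 1 × 256 output concatenated with the 1 × 1 × 7 × 256 input to form the 1 × 1 × 8 × 256 convolution input) is an assumption inferred from the figure description, not a definitive reading of Fig. 2:

```python
import numpy as np

# Shape sketch in N x H x W x C layout; the graph wiring is a simplified
# assumption based on the description of Fig. 2.
x = np.zeros((1, 1, 7, 256))            # first feature subset (model input)
cached = np.zeros((1, 1, 1, 256))       # fed back from a previous output
conv_in = np.concatenate([cached, x], axis=2)     # 1 x 1 x 8 x 256
assert conv_in.shape == (1, 1, 8, 256)

kernel = np.zeros((1, 8, 256))          # depthwise kernel over H and W
# Depthwise convolution: each channel is convolved independently, and the
# W dimension collapses from 8 to 1.
conv_out = np.einsum('nhwc,hwc->nc', conv_in, kernel).reshape(1, 1, 1, 256)
assert conv_out.shape == (1, 1, 1, 256) # second feature subset size
```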
In some embodiments, optionally, the second feature subset can be output after the first feature subset is input into the first processing model, which processes the first feature subset. And the first feature subset in the feature data set is replaced and updated through the second feature subset to obtain an updated feature data set, and a data processing result (such as a voice recognition result) can be determined through the updated feature data set, so that the reasoning speed of the first processing model is improved, and meanwhile, the accuracy of the reasoning result is ensured.
In the embodiment of the application, the process of inputting the first feature subset into the first processing model and updating the feature data set according to the second feature subset output by the first processing model can be performed on a continuous memory area by grouping the feature data sets to obtain a plurality of corresponding groups of first feature subsets which can be directly input into the first processing model and respectively storing the plurality of groups of first feature subsets in different memory areas, so that the copying efficiency of the feature data is improved, and the overall efficiency of data processing is improved.
As shown in fig. 3, in some embodiments, optionally, in a case where the feature data set is acquired, storing the feature data set to at least two memory areas includes:
Step 302, grouping the feature data set to obtain at least two first feature subsets;

Step 304, storing each of the at least two first feature subsets into the at least two memory areas.
Wherein, at least two memory areas are continuous memory areas.
In the embodiment of the application, the grouped multiple groups of first feature subsets can be obtained by grouping the feature data sets, and each group of first feature subsets can be directly input into a first processing model for reasoning. And the plurality of groups of first feature subsets are respectively stored in different memory areas, and the adjacent first feature subsets are stored in adjacent and continuous memory areas, so that the process of selecting the first feature subsets and updating the first feature subsets through the second feature subsets can be carried out on the continuous memory areas.
In some embodiments, optionally, the feature data set includes a plurality of feature data, and the plurality of first feature subsets can be obtained by grouping the plurality of feature data, where the number of feature data in each of the plurality of first feature subsets is equal, and the plurality of first feature subsets do not include the same feature data.
In the embodiment of the application, a plurality of groups of first feature subsets are obtained by grouping the feature data sets, and then the plurality of groups of first feature subsets are stored in the continuous memory area, so that the first feature subsets can be directly input into the first processing model for reasoning, the first feature subsets can be read and the feature data sets can be updated through the second feature subsets, the copying can be carried out on the continuous memory, and the data copying efficiency is improved.
As shown in fig. 4, in some embodiments, optionally, the feature data sets are grouped to obtain at least two first feature subsets, including:
step 402, obtaining operator parameters of a convolution operator in a first processing model;
step 404, determining the data quantity in the first feature subset and the selection interval of the data of the first feature subset in the feature data set according to the operator parameters;
and step 406, grouping the characteristic data sets according to the data quantity and the selection interval.
In the embodiment of the application, the first processing model includes a convolution operator, and a grouping rule for grouping the feature data set can be determined according to the operator parameters corresponding to the convolution operator. The feature data set can be grouped through the grouping rule, and the first feature subsets obtained through grouping can be directly input into the first processing model.
In some embodiments, optionally, the grouping rule includes a selection interval and a data amount, where the selection interval is an amount of data interval between two adjacent selections of the first feature subset, and the data amount is an amount of feature data in each selection of the first feature subset.
In the embodiment of the application, the feature data sets are grouped according to the data quantity in each group of the first feature subsets and the data interval between the feature data in the first feature subsets selected twice adjacently to obtain a plurality of groups of the first feature subsets, and the first feature subsets obtained by grouping can be directly input into the first processing model for processing by storing the plurality of groups of the first feature subsets in the continuous memory area.
In some embodiments, optionally, the operator parameters include a dilation factor and a convolution kernel size; determining the data quantity in the first feature subset and the selection interval of the data of the first feature subset in the feature data set according to the operator parameters, wherein the method comprises the following steps: determining a selection interval according to the expansion factor; and determining the data quantity according to the convolution kernel size.
In an embodiment of the present application, the operator parameters of the convolution operator in the first processing model include a dilation factor and a convolution kernel size. The selection interval in the grouping rule, that is, the number of data items spaced between two adjacently selected feature data items, can be determined from the dilation factor. The data quantity in the grouping rule, which is the number of data items in each first feature subset, can be determined from the convolution kernel size.
In the embodiment of the application, the data selection information can be determined by acquiring the dilation factor and the convolution kernel size of the second convolution operator. After the second convolution operator in the second processing model is replaced by the first convolution operator, the selection interval between the multiple first feature subsets in the feature data set and the number of data items in each selected first feature subset are determined according to the data selection information, so that the accuracy of the first feature subsets input into the optimized first processing model is improved, and the matching degree between the reasoning of the second processing model and that of the first processing model is ensured while the reasoning efficiency is improved.
Illustratively, the input data and the convolution kernel of the convolution operator in the first processing model both have a size of 1 in the H (height) dimension, so no dilation is applied in the H dimension. In the W (width) dimension, the convolution kernel size is 7 and the dilation factor is 8, so the convolution operator selects one data item after every 7 data items in the feature data set input to the first processing model, and multiplies the convolution kernel with the 7 data items selected in total.
Fig. 5 illustrates one of the schematic diagrams of the feature data set provided in some embodiments of the present application, where 1 to 56 represent the sequence numbers of the data items, each square represents one data item, and each data item includes 1 × 256 data. The selection interval is 7, and the data quantity is 7.
Fig. 6 is a schematic diagram of a first feature subset according to some embodiments of the present application, as shown in fig. 6, where 8 groups of 7×256 first feature subsets are screened out from the feature data set shown in fig. 5, where each column is a group of first feature subsets.
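The grouping of Figs. 5 and 6 can be sketched as follows; this is a hypothetical illustration in which the 56 × 256 size, the stride of 8, and the 7-item groups come from the example above:

```python
import numpy as np

# Group 56 frames into 8 first feature subsets of 7 frames each (stride 8)
# and lay them out back-to-back so each subset occupies contiguous memory.
num_frames, channels = 56, 256
dilation, kernel_size = 8, 7
features = np.arange(num_frames * channels, dtype=np.float32).reshape(num_frames, channels)

subsets = [features[g::dilation][:kernel_size] for g in range(dilation)]
buffer = np.concatenate(subsets, axis=0)    # subsets stored in adjacent areas
assert buffer.flags['C_CONTIGUOUS'] and buffer.shape == (56, 256)
# Reading group 0 (frames 1, 9, 17, ..., 49 in 1-based numbering) is a
# single contiguous block read — no jumping through memory.
assert np.array_equal(buffer[0:7], features[0::8])
```

Because each column of Fig. 6 becomes one contiguous block, both feeding a subset to the model and writing its output back touch only sequential addresses.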
In an embodiment of the application, the selection interval and the data quantity are determined according to the convolution kernel size and the dilation factor of the convolution operator in the first processing model. Before the feature data set is input into the first processing model, the feature data set is selected from through the data selection information to obtain the first feature subsets; by inputting the first feature subsets into the first processing model, the accuracy of the input to the first processing model is improved while the processing efficiency is improved.
As shown in fig. 7, in some embodiments, optionally, storing each of the at least two first feature subsets to at least two memory regions separately includes:
step 702, obtaining a reading sequence of at least two memory areas;
step 704, obtaining a first arrangement sequence of at least two groups of first feature subsets in a feature data set;
step 706, sequentially storing at least two first feature subsets in at least two memory areas according to the first arrangement order and the reading order, so as to match the reading order with the first arrangement order.
In the embodiment of the present application, before storing the plurality of sets of first feature subsets in the corresponding memory areas, it is necessary to sort the plurality of sets of first feature subsets, determine the reading order among the plurality of memory areas, and store the plurality of sets of first feature subsets in the plurality of memory areas according to the reading order and the first arrangement order of the first feature subsets, and by reading according to the reading order of the plurality of continuous memory areas, the corresponding first feature subsets can be read according to the first arrangement order.
In some embodiments, optionally, the first feature subset in the memory area is sequentially read in the reading order while the first feature subset is acquired, so that the process of reading the first feature subset is performed on a continuous memory. The first feature subsets are sequentially stored in the memory area according to the first arrangement order, so that when the memory area is read according to the reading order, the corresponding first feature subsets can be read according to the first arrangement order.
In the embodiment of the application, the first feature subsets are stored in the memory areas according to the first arrangement sequence and the reading sequence of the memory areas, so that the continuity of reading the first feature subsets and the accuracy of inputting the first feature subsets into the first processing model can be ensured.
In some embodiments, optionally, updating the feature data set by the second feature subset includes: acquiring a second arrangement sequence of at least two second feature subsets under the condition that the first processing model outputs the at least two second feature subsets; and sequentially inputting at least two second feature subsets into at least two memory areas according to a second arrangement order to replace the corresponding first feature subsets.
In the embodiment of the application, after the first processing model outputs a plurality of second feature subsets, a second arrangement sequence of the plurality of second feature subsets is acquired and is input into the memory area according to the second arrangement sequence, so that the first feature subsets stored in the original memory area are replaced, the update of the feature data set is completed, and the updated feature data set is obtained.
In some embodiments, optionally, when updating the feature data set, the feature data set is updated according to a rule of first in first out, that is, a first feature subset that is input first to the first processing model for processing is preferentially replaced by a corresponding second feature subset. The second ranking order may be an order in which the second feature subset was output in the first processing model.
Fig. 8 shows a second schematic view of a feature data set provided by some embodiments of the application, as shown in fig. 8, the first set of first feature subsets is 1, 9, 17, 25, 33, 41, 49, the second set of first feature subsets is 2, 10, 18, 26, 34, 42, 50, and the third set of first feature subsets 3, 11, 19, 27, 35, 43, 51. In case the first processing model outputs three second feature subsets 57, 58, 59 and the second order of arrangement of the three second feature subsets is 57, 58, 59, then the second feature subset 57 is substituted for the first feature subset 1 and the second feature subset 58 is substituted for the first feature subset 2 and then the second feature subset 59 is substituted for the first feature subset 3.
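The replacement order of Fig. 8 can be sketched as follows, with frame numbers standing in for 1 × 256 feature rows; this is an illustration of the first-in-first-out rule, not the application's code:

```python
# FIFO update: outputs 57, 58, 59 replace frames 1, 2, 3 — the head
# (first-in) frames of the first three first feature subsets.
feature_set = list(range(1, 57))        # frames 1..56 of the feature data set
second_subsets = [57, 58, 59]           # outputs in their second arrangement order
for i, out in enumerate(second_subsets):
    feature_set[i] = out                # frame i+1 is replaced by output i
assert feature_set[:4] == [57, 58, 59, 4]
```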
In the embodiment of the application, the second feature subsets are input into the feature data set in the second arrangement order, so that the feature data set is updated and the second feature subsets are guaranteed to be written back in a continuous memory area.
In some embodiments, optionally, where the first processing model outputs at least two second feature subsets, obtaining a second ranking of the at least two second feature subsets includes: acquiring output moments of at least two second feature subsets; and sequencing at least two second feature subsets according to the output time to obtain a second sequence.
In the embodiment of the application, the at least two second feature subsets are ordered according to the moments at which the first processing model outputs them, so that the second arrangement order can be obtained, ensuring that the second feature subsets update the feature data set according to the second arrangement order.
Specifically, the order of the plurality of first feature subsets input to the first processing model matches the order of the plurality of second feature subsets output by the first processing model.
In the embodiment of the application, the plurality of second feature subsets are ordered according to the output time of the first processing model, so that a second arrangement sequence is obtained, and the second feature subsets are updated to the feature data set according to the second arrangement sequence, so that the accuracy of updating the feature data set is improved.
In some embodiments, optionally, after the updating of the feature data set by the second feature subset, the method further includes: acquiring preset processing times of a first processing model; and stopping updating the feature data set when the updating times of the feature data set reach the preset processing times.
In the embodiment of the application, the number of times the feature data set is updated according to the second feature subset is counted; when the counted number of updates reaches the preset processing times, it is determined that the update of the feature data set is complete, the process of updating the feature data set is stopped, and the process of extracting the first feature subsets from the feature data set is stopped synchronously.
In some embodiments, optionally, the preset processing times is a pre-set threshold on the number of updates, and the update count of the feature data set equals the inference count of the first processing model. The number of inferences of the first processing model can be determined by acquiring the number of updates of the feature data set; when the inference count reaches the preset processing times, the update of the feature data set is determined to be complete, so that the updated feature data set is obtained.
Illustratively, a counter is set in advance for the first processing model, the number of times of updating the feature data set is counted by the counter, and after the number of times of updating reaches the preset number of times of processing, the current reasoning process is completed.
In the embodiment of the application, the update times of the acquired feature data set are compared with the preset processing times set in advance, when the update times reach the preset processing times, the feature data set is determined to be updated to be the updated feature data set, and the first feature subset is stopped from being continuously input to the first processing model, so that the data processing efficiency is further improved.
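A minimal sketch of the counting logic described above; the model call and the iteration over subsets are placeholders:

```python
# Stop updating the feature data set once the update count reaches the
# preset processing times; the inference count equals the update count.
def run_updates(subsets, model, preset_times):
    updates = 0
    for subset in subsets:
        if updates >= preset_times:
            break                       # stop inputting first feature subsets
        _second_subset = model(subset)  # would be written back FIFO-style
        updates += 1
    return updates

assert run_updates(range(8), lambda s: s, 3) == 3   # stops after 3 inferences
assert run_updates(range(2), lambda s: s, 3) == 2   # or when data runs out
```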
In some embodiments, optionally, the feature data set includes any one of:
an audio feature set, an image feature set, and a text feature set.
In the embodiment of the application, the first processing model can be any one of a voice recognition model, a text processing model and an image processing model, namely, different characteristic data sets can be processed by selecting different first processing models so as to adapt to different application scenes.
In this embodiment, the first processing model may be a speech recognition model. In the case where the first processing model is a speech recognition model, the feature data set comprises an audio feature set. The speech recognition model is deployed in the home device, so that functions such as voice control and voice wake-up of the home device are realized.
In this embodiment, the first processing model may be a text processing model, and the feature data set comprises a text feature set when the first processing model is a text processing model. By deploying the text processing model into the home equipment, the functions of machine translation and the like of the home equipment on text contents can be realized.
In this embodiment, the first processing model may be an image processing model, and the feature data set comprises an image feature set when the first processing model is an image processing model. By deploying the image processing model into the cooking device, the cooking device can automatically identify food material images.
In the embodiment of the present application, the first processing model may be a time-series convolutional network model, i.e., a TCN (Temporal Convolutional Network) model.
The possible application scenarios of the data processing method provided by the embodiment of the application include tasks related to time series, such as voice wake-up and machine translation.
In some embodiments, optionally, where the feature data set comprises an audio feature set, the data processing method comprises: under the condition that the audio feature set is acquired, storing the audio feature set into at least two memory areas, wherein at least two first feature subsets are respectively stored in the at least two memory areas; reading first feature subsets in at least two memory areas, and sequentially inputting the first feature subsets into a voice recognition model so that the voice recognition model sequentially outputs second feature subsets; updating the audio feature set through the second feature subset to obtain a target feature set; and outputting a voice recognition result according to the target feature set.
In an embodiment of the application, when the feature data set comprises an audio feature set, the first processing model may be a speech recognition model. The speech recognition model is deployed in the home device, so that functions such as voice control and voice wake-up of the home device are realized.
In the embodiment of the application, after the audio feature set is received, a plurality of groups of first feature subsets are obtained by grouping the audio feature set, the groups of first feature subsets obtained by grouping are respectively stored in different memory areas, and two adjacent memory areas are continuous memory areas. Each group of first feature subsets can be input into a voice recognition model to be processed, and corresponding second feature subsets can be obtained after processing. After the speech recognition model outputs the second feature subset, the acquired audio feature set is updated by the second feature subset. In the process of inputting the first feature subset into the voice recognition model and updating the audio feature set according to the second feature subset output by the voice recognition model, data copying and reading are needed, and as the area stored by the first feature subset is a continuous memory area, the process of reading and returning data can be ensured to be carried out on the continuous memory area, so that the reading and returning efficiency of the feature data is improved, and the overall efficiency of data processing is further improved.
Illustratively, the data processing method may be applied in any of a voice wake-up scenario, a text translation scenario, or an image recognition scenario.
Possible application scenarios of the data processing method provided by the embodiments of the application include time-series-related tasks such as voice wake-up and machine translation.
In some embodiments, optionally, the feature data in the feature data set input into the first processing model may be MFCC (Mel-Frequency Cepstral Coefficients) features or Fbank (Filter bank) features of the speech data. The output of the first processing model is a classification of phonemes. In a voice wake-up task, the number of inference passes over the current voice data needs to be recorded, and when that number reaches a preset count, a decoding module determines from the output data whether to wake the home device. Other application scenarios correspond to other data types, for example: in a machine translation task, the feature data set consists of text features of the text data.
Illustratively, the voice wake task implemented by the home device is described as follows:
after receiving the MFCC feature set of the voice command, a feature subset is extracted from the MFCC feature set according to the data selection information obtained by the model optimization method. At this point, a temporary array is created for the feature data set, for example a temporary array of size 1 × 56 × 256; the temporary array is the feature data set. Each inference selects data of size 1 × 7 × 256 as the model input according to the dilation rule, and the output of the previous inference of the first processing model, of size 1 × 256, is written back into the temporary array according to the first-in-first-out rule.
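The first-in-first-out update of the temporary array can be sketched as follows (NumPy; the dimensions are scaled-down stand-ins for the 56 × 256 array in the example, and the function name is hypothetical):

```python
import numpy as np

FRAMES, DIM = 8, 3  # scaled-down stand-ins for the 56 x 256 temporary array

def fifo_update(buffer: np.ndarray, new_row: np.ndarray) -> None:
    """Discard the oldest row and append the newest one in place (FIFO).

    Both the shift and the write touch a single contiguous allocation,
    which is the property the embodiment relies on for fast copying.
    """
    buffer[:-1] = buffer[1:]
    buffer[-1] = new_row

buffer = np.zeros((FRAMES, DIM), dtype=np.float32)
fifo_update(buffer, np.ones(DIM, dtype=np.float32))
```

Each inference output overwrites the oldest row, so the temporary array always holds the most recent FRAMES rows of feature data.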
The temporary array is maintained and updated through the feature subset input to the first processing model and the feature subset output by the first processing model, and when the number of maintenance and update times reaches the preset number, that is, the number of reasoning times of the first processing model on the feature data set reaches the preset number, the first processing model is determined to complete the corresponding reasoning steps.
In some embodiments, optionally, before reading the first feature subset in the at least two memory areas and sequentially inputting the first feature subset into the first processing model, the method further includes: under the condition that a second processing model is acquired, generating a first convolution operator based on model parameters in the second processing model; and replacing the second convolution operator in the second processing model by the first convolution operator to obtain the first processing model.
In the embodiment of the application, the second convolution operator in the second processing model is replaced by the first convolution operator, and the first processing model is configured with corresponding data selection information, so that the first processing model with smaller input data size can be obtained. When the first processing model is deployed to the edge equipment with lower computing power, the data processing speed of the first processing model in the edge equipment can be improved.
In some embodiments, optionally, the second convolution operator is a convolution operator in the second processing model, where that operator may be a hole (dilated) convolution operator and the first convolution operator is a non-hole convolution operator. The optimization of the second processing model is completed by replacing the second convolution operator with the first convolution operator and adjusting the input data size of the second processing model, yielding the first processing model.
In some embodiments, optionally, the second processing model is a neural network model before optimization, the input data size of the second processing model is larger, and the second processing model has a function of selecting the received data set. The first processing model is a neural network model obtained by optimizing the second processing model, the size of input data in the first processing model is smaller than that of input data of the second processing model, and the first processing model does not have a function of selecting a data set. Data selection information for selecting the feature data set is determined based on the second processing model, and the feature data set is data-selected based on the data selection information before the feature data is input to the first processing model. And inputting the feature subset obtained by the advance selection into a first processing model for processing, wherein the reasoning speed of the first processing model is faster than that of the second processing model in household equipment with smaller calculation power because the input data size of the first processing model is smaller than that of the second processing model.
The second processing model may be a hole (dilated) convolution model, in which holes are injected into a standard convolution kernel so that the model's receptive field grows more quickly. However, because the dilated kernel accesses memory non-contiguously, when the operator runs on resource-limited home equipment with a high dilation rate set, the cache hit rate is low, so the hole convolution operator executes inefficiently and inference is slow. The first processing model may be a standard convolution model; replacing the second processing model with the first processing model improves operator execution efficiency and speeds up model inference.
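The equivalence that makes such a replacement possible can be illustrated with a naive 1-D convolution (NumPy; a sketch under illustrative values, not the patent's implementation): applying a standard, dense kernel to every d-th sample gives the same result as a dilated kernel over the original samples.

```python
import numpy as np

def dilated_conv1d(x: np.ndarray, w: np.ndarray, d: int) -> np.ndarray:
    """Naive 1-D dilated ('hole') convolution, 'valid' padding, dilation d."""
    span = (len(w) - 1) * d + 1  # receptive field of the dilated kernel
    return np.array([np.dot(w, x[i : i + span : d])
                     for i in range(len(x) - span + 1)])

def standard_conv_on_subset(x: np.ndarray, w: np.ndarray, d: int, i: int) -> float:
    """Pre-select every d-th sample starting at i (the 'first feature
    subset'), then apply a standard, dense kernel to the selection."""
    subset = x[i : i + (len(w) - 1) * d + 1 : d]
    return float(np.dot(subset, w))

x = np.arange(10, dtype=np.float64)
w = np.array([1.0, 2.0, 3.0])
out = dilated_conv1d(x, w, 2)
```

Because the subset is copied into a dense buffer before the dot product, the kernel itself only ever touches contiguous memory, which is exactly the cache-friendly behavior the replacement is meant to achieve.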
In the embodiment of the application, the input data size of the second processing model is larger than that of the first processing model, and the input data size of the convolution operator in the second processing model is the same as that of the convolution operator in the first processing model, so that the model reasoning speed is improved, and the equivalent calculation result of the second processing model and the first processing model is ensured.
In the embodiment of the application, the second processing model is optimized as the first processing model, so that the size of input data input to the first processing model is reduced, and the step of selecting the feature subset in the feature data set is executed before the model reasoning step, thereby improving the reasoning efficiency of the first processing model.
In some embodiments, optionally, the first process model comprises: a time-series convolution network model.
In an embodiment of the present application, the first processing model may be a time-series convolutional network model, i.e., a TCN (Temporal Convolutional Network) model.

In one embodiment according to the present application, as shown in fig. 9, a data processing apparatus 900 is provided, including:
a storage module 902, configured to store the feature data set to at least two memory areas, where at least two first feature subsets are stored in the at least two memory areas, respectively, when the feature data set is acquired;
the reading module 904 is configured to read the first feature subsets in the at least two memory areas, and sequentially input the first feature subsets into the first processing model, so that the first processing model sequentially outputs the second feature subsets;
an updating module 906, configured to update the feature data set through the second feature subset.
The embodiment of the application provides a data processing device, which can copy data in a continuous memory area in the process of selecting the characteristic data input into a first processing model and outputting the characteristic data by the first processing model, thereby improving the efficiency of selecting a first characteristic subset in the characteristic data set and updating the characteristic data set by a second characteristic subset and further improving the efficiency of data processing.
In the embodiment of the application, the process of inputting the first feature subset into the first processing model and updating the feature data set according to the second feature subset output by the first processing model can be performed on a continuous memory area by grouping the feature data sets to obtain a plurality of corresponding groups of first feature subsets which can be directly input into the first processing model and respectively storing the plurality of groups of first feature subsets in different memory areas, so that the data reading is not needed to jump memory, the copying efficiency of the feature data is improved, and the overall efficiency of data processing is improved.
In some embodiments, optionally, the data processing apparatus 900 comprises:
the grouping module is used for grouping and processing the feature data sets to obtain at least two first feature subsets;
the storage module 902 is configured to store each of the at least two sets of first feature subsets to at least two memory regions, where the at least two memory regions are consecutive memory regions.
In the embodiment of the application, the grouped multiple groups of first feature subsets can be obtained by grouping the feature data sets, and each group of first feature subsets can be directly input into a first processing model for reasoning. And the plurality of groups of first feature subsets are respectively stored in different memory areas, and the adjacent first feature subsets are stored in adjacent and continuous memory areas, so that the process of selecting the first feature subsets and updating the first feature subsets through the second feature subsets can be carried out on the continuous memory areas.
In this embodiment, the feature data set includes a plurality of feature data, and the plurality of sets of first feature subsets can be obtained by grouping the plurality of feature data, wherein the number of feature data in each set of first feature subsets is equal, and the plurality of sets of first feature subsets do not include the same feature data.
In the embodiment of the application, a plurality of groups of first feature subsets are obtained by grouping the feature data sets, and then the plurality of groups of first feature subsets are stored in the continuous memory area, so that the first feature subsets can be directly input into the first processing model for reasoning, the first feature subsets can be read and the feature data sets can be updated through the second feature subsets, the copying can be carried out on the continuous memory, and the data copying efficiency is improved.
In some embodiments, optionally, the data processing apparatus 900 comprises:
the acquisition module is used for acquiring operator parameters of the convolution operator in the first processing model;
the determining module is used for determining the data quantity in the first feature subset and the selection interval of the data of the first feature subset in the feature data set according to the operator parameters;
and the grouping module is used for grouping the characteristic data sets according to the data quantity and the selection interval.
In the embodiment of the application, the first processing model comprises a convolution operator, and a grouping rule for grouping the feature data set can be determined according to the operator parameters corresponding to the convolution operator. The feature data set can be grouped through this grouping rule, and the first feature subsets obtained through grouping can be directly input into the first processing model.
In this embodiment, the grouping rule includes a selection interval and a data amount, where the selection interval is an amount of data interval between two adjacent selections of the first feature subset, and the data amount is an amount of feature data in each selection of the first feature subset.
In the embodiment of the application, the feature data sets are grouped according to the data quantity in each group of the first feature subsets and the data interval between the feature data in the first feature subsets selected twice adjacently to obtain a plurality of groups of the first feature subsets, and the first feature subsets obtained by grouping can be directly input into the first processing model for processing by storing the plurality of groups of the first feature subsets in the continuous memory area.
In some embodiments, optionally, the operator parameters include a dilation factor and a convolution kernel size; a data processing apparatus 900 comprising:
The determining module is used for determining a selection interval according to the expansion factors;
and the determining module is used for determining the data quantity according to the convolution kernel size.
In an embodiment of the present application, the operator parameters of the convolution operator in the first processing model include a dilation factor, and a convolution kernel size. The selection interval in the grouping rule, that is, the number of data spaced between two adjacent selection feature data, can be determined by the expansion factor. The number of data in the grouping rule, which is the number of data in each first feature subset, can be determined by the convolution kernel size.
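Under the stated rule — selection interval determined by the dilation factor, data amount determined by the convolution kernel size — the frame indices of one first feature subset can be derived as follows (a sketch; the function name and example values are illustrative assumptions):

```python
def subset_indices(start, kernel_size, dilation):
    """Frame indices forming one first feature subset.

    The data amount equals the kernel size; consecutive selected frames
    are `dilation` frames apart, matching the convolution operator's
    receptive field.
    """
    return [start + j * dilation for j in range(kernel_size)]

# With kernel size 7 and dilation 2 (hypothetical values), one subset
# covers 7 frames, each two frames apart:
indices = subset_indices(0, 7, 2)
```

Advancing `start` selects the next group, so the whole feature data set is partitioned into subsets that are each directly consumable by the convolution operator.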
In the embodiment of the application, the data selection information can be determined by acquiring the dilation factor and the convolution kernel size of the convolution operator. After the second convolution operator in the second processing model is replaced by the first convolution operator, the selection interval between the multiple first feature subsets in the feature data set and the number of data in each selected first feature subset are determined according to the data selection information. This improves the accuracy of the first feature subsets input into the optimized first processing model, and ensures that the inference of the first processing model matches that of the second processing model while improving inference efficiency.
In an embodiment of the application, the selection interval and the data quantity are determined according to the convolution kernel size and the dilation factor of the convolution operator in the first processing model. Before the data is input into the first processing model, first feature subsets are selected from the feature data set through the data selection information; inputting these first feature subsets into the first processing model improves the accuracy of the model input while improving processing efficiency.
In some embodiments, optionally, the data processing apparatus 900 comprises:
the acquisition module is used for acquiring the reading sequence of at least two memory areas;
the acquisition module is used for acquiring a first arrangement sequence of at least two groups of first feature subsets in the feature data set;
the storage module 902 is configured to store at least two sets of first feature subsets in at least two memory areas in sequence according to the first arrangement order and the reading order, so that the reading order matches with the first arrangement order.
In the embodiment of the present application, before storing the plurality of sets of first feature subsets in the corresponding memory areas, it is necessary to sort the plurality of sets of first feature subsets, determine the reading order among the plurality of memory areas, and store the plurality of sets of first feature subsets in the plurality of memory areas according to the reading order and the first arrangement order of the first feature subsets, and by reading according to the reading order of the plurality of continuous memory areas, the corresponding first feature subsets can be read according to the first arrangement order.
In this embodiment, when the first feature subset is acquired, the first feature subset in the memory area is sequentially read in the reading order, so that the process of reading the first feature subset is performed on the continuous memory. The first feature subsets are sequentially stored in the memory area according to the first arrangement order, so that when the memory area is read according to the reading order, the corresponding first feature subsets can be read according to the first arrangement order.
In the embodiment of the application, the first feature subsets are stored in the memory areas according to the first arrangement sequence and the reading sequence of the memory areas, so that the continuity of reading the first feature subsets and the accuracy of inputting the first feature subsets into the first processing model can be ensured.
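Matching the storage order to the read order can be sketched by packing the ordered subsets back-to-back into one contiguous buffer, so that a sequential scan returns them in the first arrangement order (NumPy; sizes and names are illustrative):

```python
import numpy as np

def pack_in_read_order(subsets):
    """Store the subsets back-to-back in one contiguous buffer, so that a
    sequential scan reads them out in their first arrangement order."""
    return np.concatenate(subsets, axis=0)

# Four subsets of 2 frames x 3 features, filled with their own index so the
# read-back order is easy to verify.
subsets = [np.full((2, 3), i, dtype=np.float32) for i in range(4)]
buffer = pack_in_read_order(subsets)

# Sequential reads of equal-sized slices recover subsets 0, 1, 2, 3 in order.
reread = [buffer[i * 2 : (i + 1) * 2] for i in range(4)]
```

Because `np.concatenate` produces one C-contiguous array, every read of a subset stays inside a single contiguous memory region.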
In some embodiments, optionally, the data processing apparatus 900 comprises:
the acquisition module is used for acquiring a second arrangement sequence of at least two second feature subsets under the condition that the first processing model outputs the at least two second feature subsets;
and the replacing module is used for sequentially inputting at least two second feature subsets into at least two memory areas according to the second arrangement sequence so as to replace the corresponding first feature subsets.
In the embodiment of the application, after the first processing model outputs a plurality of second feature subsets, a second arrangement sequence of the plurality of second feature subsets is acquired and is input into the memory area according to the second arrangement sequence, so that the first feature subsets stored in the original memory area are replaced, the update of the feature data set is completed, and the updated feature data set is obtained.
In this embodiment, when updating the feature data set, the feature data set is updated according to a first-in first-out rule, i.e. the first feature subset that is input first to the first processing model for processing is preferentially replaced by the corresponding second feature subset. The second ranking order may be an order in which the second feature subset was output in the first processing model.
In the embodiment of the application, the second feature subsets are written back into the feature data set in the second arrangement order, so that the feature data set is updated and the second feature subsets are guaranteed to be transmitted back within a continuous memory area.
In some embodiments, optionally, the data processing apparatus 900 comprises:
the acquisition module is used for acquiring output moments of at least two second feature subsets;
and the ordering module is used for ordering the at least two second feature subsets according to the output time to obtain a second arrangement sequence.
In the embodiment of the application, the at least two second feature subsets are sorted according to the times at which the first processing model outputs them, so that the second arrangement order can be obtained, ensuring that the second feature subsets update the feature data set in the second arrangement order.
Specifically, the order of the plurality of first feature subsets input to the first processing model matches the order of the plurality of second feature subsets output by the first processing model.
In the embodiment of the application, the plurality of second feature subsets are ordered according to the output time of the first processing model, so that a second arrangement sequence is obtained, and the second feature subsets are updated to the feature data set according to the second arrangement sequence, so that the accuracy of updating the feature data set is improved.
In some embodiments, optionally, the data processing apparatus 900 comprises:
the acquisition module is used for acquiring the preset processing times of the first processing model;
and the determining module is used for stopping updating the characteristic data set when the updating times of the characteristic data set reach the preset processing times.
In the embodiment of the application, the number of times the feature data set is updated by the second feature subsets is counted; when the counted number of updates reaches the preset processing count, the feature data set is determined to be fully updated, the update process is stopped, and the extraction of first feature subsets from the feature data set is stopped synchronously.
In this embodiment, the preset processing frequency is a frequency threshold set in advance, and the update frequency of the feature data set is the reasoning frequency of the first processing model. The number of times of reasoning of the first processing model can be determined by acquiring the number of times of updating the feature data set, and when the number of times of reasoning reaches the preset number of times of processing, the feature data set is determined to be updated, so that the updated feature data set is obtained.
In the embodiment of the application, the update times of the acquired feature data set are compared with the preset processing times set in advance, when the update times reach the preset processing times, the feature data set is determined to be updated to be the updated feature data set, and the first feature subset is stopped from being continuously input to the first processing model, so that the data processing efficiency is further improved.
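The stop condition can be sketched as a loop that FIFO-updates the feature set until the update count reaches the preset processing count (a sketch; `model` here is a hypothetical stand-in callable returning one output row per inference):

```python
import numpy as np

def run_until_preset(feature_set, model, preset_count):
    """Apply the model and FIFO-update the feature set until the update
    count reaches the preset processing count, then stop."""
    updates = 0
    while updates < preset_count:
        output_row = model(feature_set)     # second feature subset (one row here)
        feature_set[:-1] = feature_set[1:]  # first-in-first-out update
        feature_set[-1] = output_row
        updates += 1
    return updates

fs = np.zeros((4, 2), dtype=np.float32)
done = run_until_preset(fs, lambda f: np.ones(2, dtype=np.float32), 3)
```

The returned count equals the number of inference passes, matching the statement that the update count of the feature data set is the inference count of the first processing model.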
In some embodiments, optionally, the feature data set includes any one of:
an audio feature set, an image feature set, and a text feature set.
In the embodiment of the application, the first processing model can be any one of a voice recognition model, a text processing model and an image processing model, namely, different characteristic data sets can be processed by selecting different first processing models so as to adapt to different application scenes.
In this embodiment, the first process model may be a speech recognition model. In the case where the first processing model is a speech recognition model, the feature data set comprises an audio feature set. The voice recognition model is deployed in the household equipment, so that the functions of voice control, voice awakening and the like of the household are realized.
In this embodiment, the first processing model may be a text processing model, and the feature data set comprises a text feature set when the first processing model is a text processing model. By deploying the text processing model into the home equipment, the functions of machine translation and the like of the home equipment on text contents can be realized.
In this embodiment, the first processing model may be an image processing model, and the feature data set comprises an image feature set when the first processing model is an image processing model. By deploying the image processing model into the cooking device, the cooking device can automatically identify food material images.
In the embodiment of the present application, the first processing model may be a time-series convolutional network model, i.e., a TCN (Temporal Convolutional Network, time-series convolutional network) model.
Possible application scenarios of the data processing method provided by the embodiments of the application include time-series-related tasks such as voice wake-up and machine translation.
In some embodiments, optionally, where the feature data set comprises an audio feature set, the data processing apparatus 900 comprises:
the storage module 902 is configured to store the audio feature set to at least two memory areas, where at least two first feature subsets are respectively stored in the at least two memory areas, when the audio feature set is acquired;
the reading module 904 is configured to read the first feature subsets in the at least two memory areas, and sequentially input the first feature subsets to the speech recognition model, so that the speech recognition model sequentially outputs the second feature subsets;
an updating module 906, configured to update the audio feature set through the second feature subset to obtain a target feature set; and outputting a voice recognition result according to the target feature set.
In an embodiment of the application, the first processing model may be a speech recognition model when the feature data set comprises an audio feature set. In the case where the first processing model is a speech recognition model, the feature data set comprises an audio feature set. The voice recognition model is deployed in the household equipment, so that the functions of voice control, voice awakening and the like of the household are realized.
In the embodiment of the application, after the audio feature set is received, a plurality of groups of first feature subsets are obtained by grouping the audio feature set, the groups of first feature subsets obtained by grouping are respectively stored in different memory areas, and two adjacent memory areas are continuous memory areas. Each group of first feature subsets can be input into a voice recognition model to be processed, and corresponding second feature subsets can be obtained after processing. After the speech recognition model outputs the second feature subset, the acquired audio feature set is updated by the second feature subset. In the process of inputting the first feature subset into the voice recognition model and updating the audio feature set according to the second feature subset output by the voice recognition model, data copying and reading are needed, and as the area stored by the first feature subset is a continuous memory area, the process of reading and returning data can be ensured to be carried out on the continuous memory area, so that the reading and returning efficiency of the feature data is improved, and the overall efficiency of data processing is further improved.
In some embodiments, optionally, a generating module is configured to generate, in a case where the second processing model is acquired, a first convolution operator based on model parameters in the second processing model;
and the replacing module is used for replacing the second convolution operator in the second processing model by the first convolution operator to obtain the first processing model.
In the embodiment of the application, the second convolution operator in the second processing model is replaced by the first convolution operator, and the first processing model is configured with corresponding data selection information, so that the first processing model with smaller input data size can be obtained. When the first processing model is deployed to the edge equipment with lower computing power, the data processing speed of the first processing model in the edge equipment can be improved.
In some embodiments, optionally, the second convolution operator is a convolution operator in the second processing model, where that operator may be a hole (dilated) convolution operator and the first convolution operator is a non-hole convolution operator. The optimization of the second processing model is completed by replacing the second convolution operator with the first convolution operator and adjusting the input data size of the second processing model, yielding the first processing model.
In some embodiments, optionally, the second processing model is a neural network model before optimization, the input data size of the second processing model is larger, and the second processing model has a function of selecting the received data set. The first processing model is a neural network model obtained by optimizing the second processing model, the size of input data in the first processing model is smaller than that of input data of the second processing model, and the first processing model does not have a function of selecting a data set. Data selection information for selecting the feature data set is determined based on the second processing model, and the feature data set is data-selected based on the data selection information before the feature data is input to the first processing model. The feature subsets obtained by this advance selection are input into the first processing model for processing; because the input data size of the first processing model is smaller than that of the second processing model, the inference speed of the first processing model is faster than that of the second processing model on home equipment with smaller computing power.

In some embodiments, optionally, the first processing model comprises: a time-series convolution network model.
In an embodiment of the present application, the first processing model may be a time-series convolutional network model, i.e., a TCN (Temporal Convolutional Network, time-series convolutional network) model.
In an embodiment according to the present application, a readable storage medium is presented, on which a program or instructions is stored which, when executed by a processor, implement the steps of the data processing method as in any of the embodiments described above. Therefore, the method has all the advantages of the data processing method in any of the above embodiments, and will not be described in detail herein.
In an embodiment according to the present application, a computer program product is provided, which when executed by a processor, implements the steps of the data processing method in any of the foregoing embodiments, so that all the beneficial technical effects of the data processing method in any of the foregoing embodiments are provided, and will not be described in detail herein.
In an embodiment according to the application, a chip is presented, the chip comprising a program or instructions for implementing the steps of the data processing method as in any of the embodiments of the first aspect described above, when the chip is running. Therefore, the method has all the advantages of the method for processing data in any embodiment of the first aspect, and will not be described in detail herein.
The application provides a hole (dilated) convolution memory optimization method based on data grouping. By grouping the intermediate arrays maintained during data write-back and ensuring that the memory required for each data copy is contiguous, the method improves data write-back efficiency and reduces time consumption on resource-limited terminal devices.
The technical scheme provided by the application can be applied to different edge-side systems such as Linux/RTOS/Android/iOS, and provides instruction-level acceleration for different edge platforms such as armv7/v8 and DSP. The technical scheme of the application has the characteristics of lightweight deployment, strong universality, strong usability and high-performance inference; it comprehensively addresses the low-resource bottleneck of intelligent equipment, greatly shortens the AI model deployment cycle, and reaches the industry-leading level in the field of edge-side AI deployment. The technical scheme provided by the application can be applied to self-developed chips, for example the FL119, the industry's first three-in-one chip supporting voice, connectivity and display. The related achievements have fully enabled the mass production and deployment of smart home appliances such as voice-controlled refrigerators, air conditioners and robots, making them more intelligent and efficient.
It is to be understood that in the claims, specification and drawings of the present application, the term "plurality" means two or more, and unless otherwise explicitly defined, the orientation or positional relationship indicated by the terms "upper", "lower", etc. are based on the orientation or positional relationship shown in the drawings, only for the convenience of describing the present application and making the description process easier, and not for the purpose of indicating or implying that the apparatus or element in question must have the particular orientation described, be constructed and operated in the particular orientation, so that these descriptions should not be construed as limiting the present application; the terms "connected," "mounted," "secured," and the like are to be construed broadly, and may be, for example, a fixed connection between a plurality of objects, a removable connection between a plurality of objects, or an integral connection; the objects may be directly connected to each other or indirectly connected to each other through an intermediate medium. The specific meaning of the terms in the present application can be understood in detail from the above data by those of ordinary skill in the art.
In the claims, specification and drawings of the present application, descriptions with reference to the terms "one embodiment", "some embodiments", "particular embodiments" and the like mean that a particular feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In the claims, specification and drawings of the present application, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above descriptions are only preferred embodiments of the present application and are not intended to limit the present application; various modifications and variations of the present application will occur to those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (12)

1. A data processing method, comprising:
in a case where a feature data set is acquired, storing the feature data set into at least two memory areas, wherein at least two groups of first feature subsets are respectively stored in the at least two memory areas;
in a case where a second processing model is acquired, generating a first convolution operator based on model parameters in the second processing model;
replacing a second convolution operator in the second processing model with the first convolution operator to obtain a first processing model, wherein the second convolution operator is a dilated convolution operator;
reading the first feature subsets in the at least two memory areas, and sequentially inputting the first feature subsets into the first processing model, so that the first processing model sequentially outputs second feature subsets; and
updating the feature data set through the second feature subsets;
wherein the storing the feature data set into at least two memory areas in the case where the feature data set is acquired comprises:
grouping the feature data set to obtain the at least two first feature subsets, which specifically comprises:
acquiring operator parameters of a convolution operator in the first processing model;
determining, according to the operator parameters, a data amount in each first feature subset and a selection interval, in the feature data set, of the data of each first feature subset; and
grouping the feature data set according to the data amount and the selection interval.
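The grouping in claim 1 relies on a standard identity: a dilated (atrous) convolution over a feature sequence equals an ordinary convolution applied to subsets whose elements are sampled at an interval equal to the dilation factor, with each subset holding kernel-size elements. A minimal Python sketch of that equivalence follows; it is illustrative only, and the function names are ours, not part of the patent.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    # Reference dilated (atrous) 1-D convolution, no padding.
    k = len(kernel)
    span = (k - 1) * dilation + 1
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

def group_subsets(x, kernel_size, dilation):
    # Claim 1's grouping: the selection interval of a subset's data in
    # the feature set equals the dilation factor, and the data amount
    # per subset equals the convolution kernel size.
    span = (kernel_size - 1) * dilation + 1
    return [x[i : i + span : dilation] for i in range(len(x) - span + 1)]

def plain_conv_over_subsets(subsets, kernel):
    # Each regrouped subset is contiguous, so an ordinary (dense)
    # convolution kernel applies directly, without strided access.
    return np.array([float(np.dot(s, kernel)) for s in subsets])

x = np.arange(10, dtype=float)
kernel = np.array([1.0, -2.0, 1.0])
ref = dilated_conv1d(x, kernel, dilation=2)
out = plain_conv_over_subsets(group_subsets(x, 3, 2), kernel)
assert np.allclose(ref, out)  # regrouping + plain conv == dilated conv
```

Because each regrouped subset is contiguous, a plain convolution kernel can run over it without strided memory access, which is the motivation for storing the subsets in separate memory areas.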
2. The data processing method according to claim 1, wherein the storing the feature data set into at least two memory areas in the case where the feature data set is acquired further comprises:
storing each of the at least two first feature subsets into the at least two memory areas, wherein the at least two memory areas are contiguous memory areas.
3. The data processing method according to claim 2, wherein the operator parameters comprise a dilation factor and a convolution kernel size;
the determining, according to the operator parameters, the data amount in the first feature subset and the selection interval of the data of the first feature subset in the feature data set comprises:
determining the selection interval according to the dilation factor; and
determining the data amount according to the convolution kernel size.
4. The data processing method according to claim 2, wherein the storing each of the at least two first feature subsets into the at least two memory areas comprises:
acquiring a reading order of the at least two memory areas;
acquiring a first arrangement order of the at least two first feature subsets in the feature data set; and
sequentially storing the at least two groups of first feature subsets into the at least two memory areas according to the first arrangement order and the reading order, so that the reading order matches the first arrangement order.
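Claim 4's matching of read order and arrangement order can be sketched as follows. This is illustrative Python only; the convention that `read_order[r]` names the memory area that is read r-th is our assumption, not stated in the patent.

```python
def store_matching_read_order(subsets, read_order):
    # Store the r-th subset (in first arrangement order) into the
    # memory area that is read r-th, so that reading the areas in
    # read order reproduces the first arrangement order.
    areas = [None] * len(read_order)
    for rank, area_idx in enumerate(read_order):
        areas[area_idx] = subsets[rank]
    return areas

subsets = ["s0", "s1", "s2"]   # first arrangement order in the feature set
read_order = [2, 0, 1]         # area 2 is read first, then area 0, then 1
areas = store_matching_read_order(subsets, read_order)
# Reading the areas in read order yields the subsets in arrangement order.
assert [areas[i] for i in read_order] == subsets
```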
5. The data processing method according to any one of claims 1 to 4, wherein the updating the feature data set through the second feature subsets comprises:
in a case where the first processing model outputs at least two second feature subsets, acquiring a second arrangement order of the at least two second feature subsets; and
sequentially storing the at least two second feature subsets into the at least two memory areas according to the second arrangement order, so as to replace the corresponding first feature subsets.
6. The data processing method according to claim 5, wherein the acquiring, in the case where the first processing model outputs at least two second feature subsets, the second arrangement order of the at least two second feature subsets comprises:
acquiring output times of the at least two second feature subsets; and
sorting the at least two second feature subsets according to the output times to obtain the second arrangement order.
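The ordering step of claim 6 reduces to a sort keyed on output time. A one-function Python sketch (illustrative names; the `(output_time, subset)` pairing is our assumed representation):

```python
def second_arrangement_order(timed_outputs):
    # Claim 6: order the second feature subsets by the time at which
    # the first processing model output them.
    # `timed_outputs` is a list of (output_time, subset) pairs.
    return [subset for _, subset in sorted(timed_outputs, key=lambda p: p[0])]

ordered = second_arrangement_order([(0.30, "late"), (0.10, "early"), (0.20, "mid")])
assert ordered == ["early", "mid", "late"]
```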
7. The data processing method according to any one of claims 1 to 4, further comprising, after the updating the feature data set through the second feature subsets:
acquiring a preset number of processing times of the first processing model; and
stopping updating the feature data set when the number of updates of the feature data set reaches the preset number of processing times.
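The termination condition of claim 7 amounts to a bounded update loop. A minimal sketch, assuming the first processing model can be treated as a function from a feature set to its updated version (names are illustrative):

```python
def update_until_limit(feature_set, first_model, preset_times):
    # Claim 7: keep replacing the feature set with the model's output
    # and stop once the update count reaches the preset number of
    # processing passes.
    updates = 0
    while updates < preset_times:
        feature_set = first_model(feature_set)
        updates += 1
    return feature_set, updates

# Toy model that doubles the feature set each pass: 1 -> 2 -> 4 -> 8.
result, count = update_until_limit(1, lambda v: v * 2, preset_times=3)
assert (result, count) == (8, 3)
```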
8. The data processing method according to any one of claims 1 to 4, wherein the feature data set comprises any one of:
an audio feature set, an image feature set, and a text feature set.
9. The data processing method according to claim 8, wherein, in a case where the feature data set comprises an audio feature set, the data processing method comprises:
in a case where the audio feature set is acquired, storing the audio feature set into at least two memory areas, wherein at least two groups of first feature subsets are respectively stored in the at least two memory areas;
reading the first feature subsets in the at least two memory areas, and sequentially inputting the first feature subsets into a speech recognition model, so that the speech recognition model sequentially outputs second feature subsets;
updating the audio feature set through the second feature subsets to obtain a target feature set; and
outputting a speech recognition result according to the target feature set.
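The audio pipeline of claim 9 can be condensed into a read-process-replace loop followed by decoding. A toy Python sketch, in which `speech_model` and `decode` are hypothetical stand-ins for the recognition model and result decoder (neither is specified by the patent):

```python
def recognize(audio_subsets, speech_model, decode):
    # Read each first feature subset in order, run the speech
    # recognition model, and replace the subset with the model's
    # second feature subset; the updated set is the target feature
    # set, from which the recognition result is decoded.
    target = list(audio_subsets)
    for i, first in enumerate(target):
        target[i] = speech_model(first)
    return decode(target)

# Toy stand-ins: the "model" uppercases a frame label, the decoder joins them.
result = recognize(["he", "llo"], str.upper, "".join)
assert result == "HELLO"
```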
10. A data processing apparatus, comprising:
a storage module, configured to store a feature data set into at least two memory areas in a case where the feature data set is acquired, wherein at least two groups of first feature subsets are respectively stored in the at least two memory areas;
a reading module, configured to generate a first convolution operator based on model parameters in a second processing model in a case where the second processing model is acquired, and to replace a second convolution operator in the second processing model with the first convolution operator to obtain a first processing model, wherein the second convolution operator is a dilated convolution operator;
the reading module being further configured to read the first feature subsets in the at least two memory areas and sequentially input the first feature subsets into the first processing model, so that the first processing model sequentially outputs second feature subsets; and
an updating module, configured to update the feature data set through the second feature subsets;
wherein the storing, by the storage module, the feature data set into at least two memory areas in the case where the feature data set is acquired specifically comprises:
grouping the feature data set to obtain the at least two first feature subsets, which specifically comprises:
acquiring operator parameters of a convolution operator in the first processing model;
determining, according to the operator parameters, a data amount in each first feature subset and a selection interval, in the feature data set, of the data of each first feature subset; and
grouping the feature data set according to the data amount and the selection interval.
11. A readable storage medium having a program or instructions stored thereon, wherein the program or instructions, when executed by a processor, implement the steps of the data processing method according to any one of claims 1 to 9.
12. A chip comprising a program or instructions, wherein the chip, when running, implements the steps of the data processing method according to any one of claims 1 to 9.
CN202310903073.0A 2023-07-21 2023-07-21 Data processing method, device, storage medium and chip Active CN116611479B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310903073.0A CN116611479B (en) 2023-07-21 2023-07-21 Data processing method, device, storage medium and chip

Publications (2)

Publication Number Publication Date
CN116611479A (en) 2023-08-18
CN116611479B (en) 2023-10-03

Family

ID=87685786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310903073.0A Active CN116611479B (en) 2023-07-21 2023-07-21 Data processing method, device, storage medium and chip

Country Status (1)

Country Link
CN (1) CN116611479B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114037054A (en) * 2021-11-01 2022-02-11 青岛信芯微电子科技股份有限公司 Data processing method, device, chip, equipment and medium
CN114662794A (en) * 2022-04-25 2022-06-24 未鲲(上海)科技服务有限公司 Enterprise default risk prediction method, device, equipment and storage medium
WO2023284745A1 (en) * 2021-07-14 2023-01-19 华为技术有限公司 Data processing method, system and related device
CN116258178A (en) * 2023-03-24 2023-06-13 美的集团(上海)有限公司 Model conversion method, device, electronic equipment and readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2603895B (en) * 2021-02-11 2023-02-22 Advanced Risc Mach Ltd Data transfers in neural processing



Similar Documents

Publication Publication Date Title
CN104239137B (en) Multi-model Method of Scheduling Parallel and device based on DAG node optimal paths
CN107368891A (en) A kind of compression method and device of deep learning model
CN106156355B (en) Log processing method and device
CN107123415A (en) A kind of automatic music method and system
US20210132990A1 (en) Operator Operation Scheduling Method and Apparatus
CN109272109A (en) The instruction dispatching method and device of neural network model
CN103745225A (en) Method and system for training distributed CTR (Click To Rate) prediction model
CN110047512A (en) A kind of ambient sound classification method, system and relevant apparatus
CN105184367A (en) Model parameter training method and system for depth neural network
CN109637531B (en) Voice control method and device, storage medium and air conditioner
CN110928576A (en) Convolution processing method and device of convolutional neural network and storage medium
CN112288175A (en) Production line real-time optimization method and device
CN105868222A (en) Task scheduling method and device
CN108241531A (en) A kind of method and apparatus for distributing resource for virtual machine in the cluster
CN111680085A (en) Data processing task analysis method and device, electronic equipment and readable storage medium
JP2022518508A (en) Image analysis system and how to use the image analysis system
CN116109139A (en) Wind control strategy generation method, decision method, server and storage medium
CN116611479B (en) Data processing method, device, storage medium and chip
Bensmaine et al. Simulation-based NSGA-II approach for multi-unit process plans generation in reconfigurable manufacturing system
CN108805333A (en) The batch operation operation of core banking system takes prediction technique and device
CN117215789A (en) Resource allocation method and device for data processing task and computer equipment
CN112416301A (en) Deep learning model development method and device and computer readable storage medium
CN115238559B (en) Method and system for automatically extracting boundary components in three-dimensional rolled piece stretching modeling process
CN116629339B (en) Model optimization method, data processing device, storage medium and chip
CN112633516B (en) Performance prediction and machine learning compiling optimization method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant