CN114416910A - Data processing method and device based on machine learning

Data processing method and device based on machine learning

Info

Publication number
CN114416910A
CN114416910A
Authority
CN
China
Prior art keywords
sequence
machine learning
index
processing method
data processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210068252.2A
Other languages
Chinese (zh)
Inventor
那彦波
段然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202210068252.2A
Publication of CN114416910A
Legal status: Pending

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
            • G06F16/30 Information retrieval of unstructured textual data
              • G06F16/31 Indexing; Data structures therefor; Storage structures
                • G06F16/316 Indexing structures
              • G06F16/33 Querying
                • G06F16/3331 Query processing
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N20/00 Machine learning
          • G06N3/00 Computing arrangements based on biological models
            • G06N3/02 Neural networks
              • G06N3/04 Architecture, e.g. interconnection topology
                • G06N3/045 Combinations of networks


Abstract

The embodiment of the invention provides a data processing method and device based on machine learning, which solve the problem that it is difficult for a SoftMax module to implement a self-attention mechanism. An embodiment of the present invention provides a data processing method based on machine learning, including: receiving a number sequence, and obtaining an index sequence based on the number of objects in the number sequence; performing a weight query on a real look-up table based on the index sequence to obtain a probability sequence; and configuring a machine learning system based on the probability sequence.

Description

Data processing method and device based on machine learning
Technical Field
The invention relates to the technical field of machine learning, in particular to a data processing method and device based on machine learning.
Background
In recent years, attention models have been widely used in various deep learning tasks such as natural language processing, image recognition, and speech recognition, and are among the core technologies of deep learning.
Normalization in the SoftMax module is essential: because the SoftMax module of an attention model weights certain features more heavily than others, it focuses the entire system on a particular area of the image. If the normalization of the SoftMax module is removed, SoftMax no longer constitutes an attention mechanism but merely represents a dot product of the features. In the conventional SoftMax module, the self-attention mechanism relies on pre-computed division, which increases computational difficulty and time when the module is used intensively, and makes the Transformer module and the self-attention mechanism difficult to realize on an FPGA.
Disclosure of Invention
In view of this, embodiments of the present invention provide a data processing method and apparatus based on machine learning, which solve the problem that it is difficult for a SoftMax module to implement a self-attention mechanism.
An embodiment of the present invention provides a data processing method based on machine learning, including: receiving a number sequence, and obtaining an index sequence based on the number of objects in the number sequence; performing a weight query on a real look-up table based on the index sequence to obtain a probability sequence; and configuring a machine learning system based on the probability sequence.
In one embodiment, the number sequence includes a preset number of numbers, and the step of obtaining an index sequence based on the number of objects in the number sequence includes: obtaining the preset number of integers based on the number sequence; and arranging the preset number of integers according to a first preset rule to obtain the index sequence.
In one embodiment, the step of performing a weight query on the real look-up table based on the index sequence to obtain a probability sequence includes: obtaining the weight of each object in the index sequence from the real look-up table; and ordering the weights according to a second preset rule to obtain the probability sequence.
In one embodiment, the data in the real look-up table are floating-point values, and the sum of all data in the real look-up table equals 1.
In one embodiment, the data in the real look-up table are integer-format values, and all data in the real look-up table are powers of 2.
In one embodiment, the real look-up table includes fixed values and predetermined values.
A configuration apparatus of a machine learning system, comprising: a receiving module for receiving a number sequence; a processing module for obtaining an index sequence based on the number of objects in the number sequence and performing a weight query on a real look-up table based on the index sequence to obtain a probability sequence; and a configuration module for configuring the machine learning system based on the probability sequence.
In one embodiment, the number sequence includes a preset number of numbers, and the processing module is further configured to: obtain the preset number of integers based on the number sequence; arrange the preset number of integers according to a first preset rule to obtain the index sequence; obtain the weight of each object in the index sequence from the real look-up table; and order the weights according to a second preset rule to obtain the probability sequence.
An electronic device comprising a memory and a processor, the memory for storing one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the machine learning-based data processing method described above.
A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, is adapted to implement the machine learning-based data processing method described above.
According to the data processing method and device based on machine learning provided by the embodiments of the present invention, a number sequence is received, an index sequence is obtained based on the number of objects in the number sequence, a weight query is performed on a real look-up table based on the index sequence to obtain a probability sequence, and a machine learning system is configured based on the probability sequence. This method eliminates the division operation required for normalization when a SoftMax module is applied in a self-attention mechanism or a Transformer module, allowing the self-attention mechanism and the Transformer module to be applied without division, and thereby supporting very high data throughput in applications such as image and video processing.
Drawings
Fig. 1 is a schematic structural diagram of a convolutional neural network in the prior art.
Fig. 2 is a schematic diagram illustrating input and output of a SoftMax module in the prior art.
Fig. 3 is a flowchart illustrating a data processing method based on machine learning according to an embodiment of the present invention.
Fig. 4 is a flowchart illustrating a method for obtaining an index sequence according to an embodiment of the present invention.
Fig. 5 is a flowchart illustrating a method for obtaining a probability sequence according to an embodiment of the present invention.
Fig. 6 is a schematic diagram illustrating input and output of a hierarchical SoftMax module according to an embodiment of the present invention.
Fig. 7 is a schematic structural diagram of a machine learning apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Convolutional neural networks, or convolutional networks for short, are a neural network architecture that uses images as input/output and replaces scalar weights with filters (convolutions). As an example, fig. 1 shows a simple structure comprising 3 layers: 4 input images on the left, 3 units in the middle (hidden) layer, and 2 units in the output layer, generating 2 output images. Each weight $w_{ij}^k$ corresponds to a filter (e.g., a 3x3 or 5x5 kernel), where k is a label representing the layer and i and j are labels representing the input and output units, respectively. Each bias $b_i^k$ is a scalar added to the convolution output. The result of adding multiple convolutions and a bias is then passed through an activation function, often a rectified linear unit (ReLU), a sigmoid function, or a hyperbolic tangent. The filters and biases are fixed during system operation; they are obtained through a training process using a set of input/output sample images and are adjusted according to the application to meet certain optimization criteria. Typical configurations include tens or hundreds of filters per layer. Networks with 3 layers are generally considered shallow, while networks with more than 5 or 10 layers are generally considered deep.
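As an illustrative sketch of the layer just described (the function name, argument layout, and use of scipy are our own assumptions, not part of the patent):

```python
import numpy as np
from scipy.signal import correlate2d

def conv_layer(inputs, weights, biases):
    """One layer of the network in fig. 1 (illustrative sketch): output unit i
    sums the convolution of every input image k with filter weights[k][i],
    adds the scalar bias biases[i], and applies a ReLU activation."""
    outputs = []
    for i, b in enumerate(biases):
        acc = sum(correlate2d(x, weights[k][i], mode="same")
                  for k, x in enumerate(inputs))
        outputs.append(np.maximum(acc + b, 0.0))  # ReLU activation
    return outputs
```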
The self-attention mechanism may be part of a Transformer module. The basic idea of the Transformer module is to use 3 inputs: query (Q), key (K), and value (V). The attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors; the output is computed as a weighted sum of the values, and the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.
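A minimal sketch of this attention function (using the dot product as the compatibility function; the 1/sqrt(d) scaling follows the common Transformer formulation and is our assumption here):

```python
import numpy as np

def attention(Q, K, V):
    """Dot-product attention sketch: the compatibility of each query with
    each key yields the weights; the output is the weighted sum of values.
    Q: (n, d) queries, K: (m, d) keys, V: (m, dv) values."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # compatibility function
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)      # SoftMax over each row
    return weights @ V                               # weighted sum of values
```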
One basic module of the self-attention mechanism in the Transformer module is the SoftMax module which, as shown in fig. 2, calculates an exponential function of each input feature and normalizes the outputs so that the sum over all features equals 1. The input comprises a sequence of N numbers x_1 ~ x_N, and the output probability sequence comprises N objects p_1 ~ p_N, where

$$p_i = \frac{e^{x_i}}{\sum_{j=1}^{N} e^{x_j}}$$
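A direct sketch of this conventional SoftMax, showing the per-output division that the invention later removes (the max-subtraction is a standard numerical-stability step, our addition):

```python
import numpy as np

def softmax(x):
    """Conventional SoftMax of fig. 2: p_i = exp(x_i) / sum_j exp(x_j)."""
    e = np.exp(np.asarray(x) - np.max(x))  # subtract max for stability
    return e / e.sum()                     # the normalization (division) step

print(softmax([0.2, 1.5, -0.3, 0.9]))  # outputs sum to 1
```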
Some applications of attention models in computer vision, such as image classification, reduce the resolution of the image by a large factor before the division operation is performed, so relatively few divisions are needed. Other applications, such as image restoration and enhancement, require one division operation for each output pixel of a large image. In such applications, the computational resources required by the SoftMax module are high.
The invention replaces the traditional SoftMax module with a hierarchical SoftMax module: it receives a sequence of N numbers, applies a ranking module, obtains the corresponding weights from a real look-up table according to the rank indices, and outputs a probability sequence of the same N numbers in the original order. This eliminates the division operation in the SoftMax module and allows the self-attention mechanism and the Transformer module to be configured without division, thereby supporting very high data throughput in applications such as image and video processing. Specific embodiments are described in the following examples.
The present embodiment provides a data processing method based on machine learning. As shown in fig. 3, the method includes:
Step 01: receiving the number sequence and obtaining an index sequence based on the number of objects in the number sequence.
The number sequence includes a preset number of numbers, namely N integers, where N ≥ 0.
The step of obtaining the index sequence based on the number sequence includes: obtaining the preset number of integers based on the number sequence; and arranging the preset number of integers according to a first preset rule to obtain the index sequence. For example, a sequence of N numbers is received and N integers are output, arranged in descending order of value, to form the index sequence. As shown in fig. 4, a number sequence of N numbers {x_1, x_2, ..., x_N} is input to the RANK module, which outputs the index sequence {r_1, r_2, ..., r_N}, where r_i ∈ {1, ..., N}, r_i ≠ r_j if i ≠ j, and

$$x_i \ge x_j \quad \text{whenever } r_i < r_j$$
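A sketch of such a RANK module (the descending-order convention follows the example above; the implementation details are our assumption):

```python
import numpy as np

def rank_module(x):
    """RANK module sketch: r_i is the rank of x_i in descending order
    (r_i = 1 for the largest value), so r_i is in {1, ..., N} and the
    ranks are all distinct."""
    order = np.argsort(-np.asarray(x), kind="stable")  # positions, largest first
    ranks = np.empty(len(x), dtype=int)
    ranks[order] = np.arange(1, len(x) + 1)
    return ranks

print(rank_module([0.2, 1.5, -0.3, 0.9]))  # [3 1 4 2]
```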
Step 02: performing a weight query on the real look-up table based on the index sequence to obtain a probability sequence.
This step includes: obtaining the weight of each object in the index sequence from the real look-up table; and ordering the weights according to a second preset rule to obtain the probability sequence.
Optionally, the real look-up table contains fixed values and predetermined values: the fixed values are input values, the predetermined values are output values, and the predetermined value corresponding to a given input value is queried in the table and output.
The data in the real look-up table may be floating-point values; for the floating-point format, the sum of all values in the table equals 1.
Alternatively, the data in the real look-up table may be integer values; for the integer format, the sum of all values in the table should be a constant, typically a power of 2.
As shown in fig. 5, the LUT module is the real look-up table, from which the weight of each object in the index sequence is obtained as LUT[1] ~ LUT[N], where

$$C = \sum_{i=1}^{N} \mathrm{LUT}[i]$$

i.e., C is the sum of LUT[1] ~ LUT[N]. LUT[1] ~ LUT[N] and the index sequence {r_1, r_2, ..., r_N} output by the RANK module are input to the reorder module, which outputs the probability sequence {p_1, p_2, ..., p_N}, where p_i = LUT[r_i].
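A sketch of the LUT query and reorder step (the 4-entry table below is an invented example whose floating-point entries sum to 1):

```python
def reorder_module(ranks, lut):
    """Reorder sketch: p_i = LUT[r_i]. Python lists are 0-based, so the
    weight of rank r is stored at lut[r - 1]."""
    return [lut[r - 1] for r in ranks]

lut = [0.5, 0.25, 0.125, 0.125]           # example real look-up table, sum = 1
print(reorder_module([3, 1, 4, 2], lut))  # [0.125, 0.5, 0.125, 0.25]
```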
If one feature input of the SoftMax module is much larger than the inputs of the remaining features, the probability of that feature will equal 1 and the probabilities of the remaining features will equal 0; in this case, the output of SoftMax selects the maximum of all inputs. Typically, the SoftMax module favors the maximum value but acts as a "soft" version of the maximum function. In the present invention, the weights in the LUT (real look-up table) represent a fixed configuration of this softness level, with the weights giving priority to the maximum values. The weights in the LUT may express different degrees of softness; in particular, when the LUT contains a highest weight equal to 1 and remaining weights equal to 0, the weights in the LUT express the exact maximum function.
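For instance, two invented 4-entry configurations of the real look-up table illustrate the two extremes described above:

```python
soft_lut = [0.5, 0.25, 0.125, 0.125]  # smooth preference for higher ranks
hard_lut = [1.0, 0.0, 0.0, 0.0]       # highest weight 1, rest 0: a hard max
```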
By querying the real look-up table, division by the constant can be realized through a simple bit-shift operation; compared with the prior art, the pre-computed division in the SoftMax module is eliminated, reducing computational difficulty and time.
Step 03: configuring the machine learning system based on the probability sequence.
As shown in fig. 6, the RANK module, the LUT and the reorder module are combined into a hierarchical SoftMax module. When a number sequence of N integers {x_1, x_2, ..., x_N} is input to the hierarchical SoftMax module, the probability sequence {p_1, p_2, ..., p_N} is output directly, where p_i = LUT[r_i]. Each p_i in the probability sequence represents a probability used to multiply another sequence of N features in the self-attention mechanism.
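Combining the two sketches above gives a division-free pipeline (again an illustration, reusing rank_module and reorder_module from the earlier sketches):

```python
def hierarchical_softmax(x, lut):
    """Hierarchical SoftMax sketch: RANK, then LUT query and reorder.
    No exponential and no division appear anywhere in the pipeline."""
    return reorder_module(rank_module(x), lut)

print(hierarchical_softmax([0.2, 1.5, -0.3, 0.9], [0.5, 0.25, 0.125, 0.125]))
# -> [0.125, 0.5, 0.125, 0.25]
```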
When the floating-point version is used, the multiplication in the hierarchical SoftMax module is the same as the multiplication in the existing SoftMax module. When the integer version is used, the constant C (the sum of the values in the real look-up table) should be set to a power of 2; if C = 2^L, each multiplication result should be shifted right by L bits. For example, if the floating-point version multiplies a value a by p_i, the integer version computes (a · p_i) >> L.
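A sketch of this integer version under the assumption C = 2**L (the integer LUT values below are invented; dividing by C = 2**L is the right shift by L bits):

```python
def apply_integer_weight(a, p_i, L):
    """Integer-version sketch: the LUT entries sum to C = 2**L, so the
    product a * p_i must be divided by C, i.e. shifted right by L bits."""
    return (a * p_i) >> L

int_lut = [128, 64, 32, 32]                       # sum = 256 = 2**8, so L = 8
print(apply_integer_weight(1000, int_lut[0], 8))  # 1000 * 128 / 256 = 500
```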
The present embodiment provides a configuration apparatus 100 of a machine learning system. As shown in fig. 7, the configuration apparatus 100 includes a receiving module 10, a processing module 20 and a configuration module 30, wherein:
the receiving module 10 is configured to receive a number sequence and obtain an index sequence based on the number of objects in the number sequence;
the processing module 20 is configured to perform a weight query on the real look-up table based on the index sequence to obtain a probability sequence;
the configuration module 30 is configured to configure the machine learning system based on the probability sequence.
In addition, the number sequence includes a preset number of numbers, and the processing module 20 is further configured to:
obtain the preset number of integers based on the number sequence;
arrange the preset number of integers according to a first preset rule to obtain the index sequence;
obtain the weight of each object in the index sequence from the real look-up table;
and order the weights according to a second preset rule to obtain the probability sequence.
The present embodiment provides an electronic device, which may include a memory and a processor, wherein the memory stores a computer program, and the computer program is executed by the processor to implement the data processing method based on machine learning as described in the above embodiments. It is to be appreciated that the electronic device can also include input/output (I/O) interfaces, as well as communication components.
Wherein the processor is configured to perform all or part of the steps of the data processing method based on machine learning as in the embodiments. The memory is used to store various types of data, which may include, for example, instructions for any application or method in the electronic device, as well as application-related data.
The Processor may be an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and is configured to execute the data processing method based on machine learning in the foregoing embodiments.
The Memory may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk or optical disk.
The present embodiments also provide a computer-readable storage medium. Each functional unit in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium.
Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention.
The aforementioned storage media include: flash memory, hard disks, multimedia cards, card-type memory (e.g., SD or DX memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), magnetic memory, magnetic disks, optical disks, servers, APP application stores, and other media that can store program code. The computer programs stored thereon, when executed by a processor, can implement the following method steps:
Step 01: receiving a number sequence, and obtaining an index sequence based on the number of objects in the number sequence;
Step 02: performing a weight query on a real look-up table based on the index sequence to obtain a probability sequence;
Step 03: configuring the machine learning system based on the probability sequence. For the specific implementation and the resulting effects, reference may be made to the above embodiments; details are not repeated here.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, systems referred to in this application are only given as illustrative examples and are not intended to require or imply that the connections, arrangements, configurations, etc. must be made in the manner shown in the block diagrams. These devices, apparatuses, devices, systems may be connected, arranged, configured in any manner, as will be appreciated by those skilled in the art.
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly and specifically limited otherwise. All directional indicators in the embodiments of the present application (such as upper, lower, left, right, front, rear, top, bottom, etc.) are only used to explain the relative positional relationship, movement, etc. between components in a particular posture (as shown in the drawings); if the particular posture changes, the directional indicator changes accordingly. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Furthermore, reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims. The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and the like that are within the spirit and principle of the present invention are included in the present invention.

Claims (10)

1. A method for processing data based on machine learning, comprising:
receiving a number sequence, and obtaining an index sequence based on the number of objects in the number sequence;
performing a weight query on a real look-up table based on the index sequence to obtain a probability sequence;
and configuring a machine learning system based on the probability sequence.
2. The machine-learning based data processing method of claim 1, wherein the number sequence comprises a preset number of numbers, and the step of obtaining an index sequence based on the number of objects in the number sequence comprises:
obtaining the preset number of integers based on the number sequence;
and arranging the preset number of integers according to a first preset rule to obtain the index sequence.
3. The machine learning-based data processing method according to claim 1, wherein the step of performing a weight query on a real look-up table based on the index sequence to obtain a probability sequence comprises:
obtaining the weight of each object in the index sequence from the real look-up table;
and ordering the weights according to a second preset rule to obtain the probability sequence.
4. The machine learning-based data processing method according to claim 1, wherein the data in the real look-up table are floating-point values, and the sum of all data in the real look-up table equals 1.
5. The machine learning-based data processing method according to claim 1, wherein the data in the real look-up table are integer-format values, and all data in the real look-up table are powers of 2.
6. The machine-learning based data processing method of claim 1, wherein the real look-up table comprises fixed values and predetermined values.
7. An apparatus for configuring a machine learning system, comprising:
a receiving module for receiving a number sequence;
a processing module for obtaining an index sequence based on the number of objects in the number sequence, and performing a weight query on a real look-up table based on the index sequence to obtain a probability sequence;
and a configuration module for configuring the machine learning system based on the probability sequence.
8. The apparatus for configuring a machine learning system according to claim 7, wherein the number sequence comprises a preset number of numbers, and the processing module is further configured to:
obtain the preset number of integers based on the number sequence;
arrange the preset number of integers according to a first preset rule to obtain the index sequence;
obtain the weight of each object in the index sequence from the real look-up table;
and order the weights according to a second preset rule to obtain the probability sequence.
9. An electronic device comprising a memory and a processor, the memory configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the machine learning-based data processing method of any one of claims 1-6.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, is configured to implement the machine learning-based data processing method according to any one of claims 1 to 6.
CN202210068252.2A 2022-01-20 2022-01-20 Data processing method and device based on machine learning Pending CN114416910A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210068252.2A CN114416910A (en) 2022-01-20 2022-01-20 Data processing method and device based on machine learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210068252.2A CN114416910A (en) 2022-01-20 2022-01-20 Data processing method and device based on machine learning

Publications (1)

Publication Number Publication Date
CN114416910A 2022-04-29

Family

ID=81274920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210068252.2A Pending CN114416910A (en) 2022-01-20 2022-01-20 Data processing method and device based on machine learning

Country Status (1)

Country Link
CN (1) CN114416910A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114676169A (en) * 2022-05-27 2022-06-28 富算科技(上海)有限公司 Data query method and device
CN114676169B (en) * 2022-05-27 2022-08-26 富算科技(上海)有限公司 Data query method and device

Similar Documents

Publication Publication Date Title
CN109492666B (en) Image recognition model training method and device and storage medium
CN112613581B (en) Image recognition method, system, computer equipment and storage medium
AU2016225947B2 (en) System and method for multimedia document summarization
CN110362723B (en) Topic feature representation method, device and storage medium
CN112396115A (en) Target detection method and device based on attention mechanism and computer equipment
CN110046249A (en) Training method, classification method, system, equipment and the storage medium of capsule network
Kumar et al. Faster algorithms for binary matrix factorization
CN111340820B (en) Image segmentation method and device, electronic equipment and storage medium
CN112434131A (en) Text error detection method and device based on artificial intelligence, and computer equipment
CN114780768A (en) Visual question-answering task processing method and system, electronic equipment and storage medium
CN110717405B (en) Face feature point positioning method, device, medium and electronic equipment
CN114416910A (en) Data processing method and device based on machine learning
CN118097293A (en) Small sample data classification method and system based on residual graph convolution network and self-attention
CN114493674A (en) Advertisement click rate prediction model and method
CN113516697A (en) Image registration method and device, electronic equipment and computer-readable storage medium
CN114187598A (en) Handwritten digit recognition method, system, device and computer readable storage medium
CN111400715B (en) Classification engine diagnosis method, classification engine diagnosis device and computer-readable storage medium
CN116758601A (en) Training method and device of face recognition model, electronic equipment and storage medium
CN113505838A (en) Image clustering method and device, electronic equipment and storage medium
CN112464958A (en) Multi-modal neural network information processing method and device, electronic equipment and medium
CN116501993B (en) House source data recommendation method and device
CN117312533B (en) Text generation method, device, equipment and medium based on artificial intelligent model
CN115688012A (en) Graph data classification method, device, equipment and storage medium
CN116246657A (en) Voice processing method, device, terminal and storage medium
CN114140807A (en) Copy image identification method and device, terminal device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination