CN110414678A - Data processing method and device and storage medium and electronic device - Google Patents

Data processing method and device and storage medium and electronic device

Info

Publication number
CN110414678A
CN110414678A
Authority
CN
China
Prior art keywords
vector
network model
training
dimension
one-hot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910534815.0A
Other languages
Chinese (zh)
Other versions
CN110414678B (en)
Inventor
郑立颖
徐亮
阮晓雯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910534815.0A priority Critical patent/CN110414678B/en
Publication of CN110414678A publication Critical patent/CN110414678A/en
Priority to PCT/CN2019/117724 priority patent/WO2020253049A1/en
Application granted granted Critical
Publication of CN110414678B publication Critical patent/CN110414678B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a data processing method and device, a storage medium, and an electronic device, wherein the method comprises: obtaining multiple first-class training samples, each first-class training sample comprising an n-dimensional one-hot vector and a corresponding first-class training label; training a first neural network model with the multiple first-class training samples to obtain a second neural network model, wherein the first layer of the first neural network model is an embedding layer that outputs an m-dimensional vector for an n-dimensional input vector, with n > m; inputting each n-dimensional one-hot vector into the second neural network model and extracting the embedding layer's output vector for each n-dimensional one-hot vector; and establishing and storing the correspondence between each n-dimensional one-hot vector and its corresponding m-dimensional vector. The present invention thereby solves the technical problem in the related art that identifying different objects by one-hot vectors makes the training process of machine learning consume excessive memory and slows training.

Description

Data processing method and device and storage medium and electronic device
Technical field
The present invention relates to the field of data processing, and in particular to a data processing method and device, a storage medium, and an electronic device.
Background technique
In machine learning problems, each object needs to be assigned a unique corresponding ID, usually by identifying different objects with different vectors. There are currently two main methods of assigning such IDs: the label encoder and the one-hot encoder. The label encoder assigns a numeric value to each object; for example, the three colors red, yellow, and green may be assigned the values 1, 2, and 3 respectively. However, the label encoder introduces spurious magnitude information: the average of 1 and 3 is 2, yet the mixture of red and green is not yellow. The label encoder is therefore not suitable for general scenarios. The one-hot encoder identifies n objects with n mutually distinct n-dimensional one-hot vectors, in which each element is 0 or 1; for a given object, only the element at the corresponding position of its one-hot vector is 1, and all other elements are 0. For example, red, yellow, and green may be represented by the vectors [0,0,1], [0,1,0], and [1,0,0] respectively. However, the one-hot encoder increases the vector dimension: if there are hundreds or thousands of objects, vectors of hundreds or thousands of dimensions are needed to identify them, which consumes a large amount of memory during computation and slows it down.
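As a minimal illustration of the two encoders contrasted above (the index-to-color assignment here is arbitrary and chosen only for illustration, not taken from the patent), the following sketch shows how label encoding introduces a false ordering while one-hot encoding grows linearly in dimension:

```python
import numpy as np

colors = ["red", "yellow", "green"]

# Label encoder: one integer per object -- compact, but introduces a false
# ordering: (1 + 3) / 2 == 2, yet mixing red and green does not give yellow.
label_codes = {color: i + 1 for i, color in enumerate(colors)}

# One-hot encoder: an n-dimensional 0/1 vector per object -- no false ordering,
# but the dimension equals the number of objects.
def one_hot(index: int, n: int) -> np.ndarray:
    vec = np.zeros(n)
    vec[index] = 1.0
    return vec

one_hot_codes = {c: one_hot(i, len(colors)) for i, c in enumerate(colors)}
print(label_codes["yellow"])    # 2
print(one_hot_codes["yellow"])  # [0. 1. 0.]
```

With thousands of objects, `one_hot` produces thousands-dimensional vectors, which is the memory problem the patent's embedding-based dimension reduction is designed to address.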
No effective solution to the above problems in the related art has yet been found.
Summary of the invention
Embodiments of the present invention provide a data processing method and device, a storage medium, and an electronic device, so as to at least solve the technical problem in the prior art that identifying different objects by one-hot vectors makes the training process of machine learning consume excessive memory and slows training.
According to one embodiment of the present invention, a data processing method is provided, comprising: obtaining multiple first-class training samples, wherein each first-class training sample comprises an n-dimensional one-hot vector and a corresponding first-class training label; training a first neural network model with the multiple first-class training samples to obtain a second neural network model, wherein the first layer of the first neural network model is an embedding layer that outputs an m-dimensional vector for an n-dimensional input vector, with n > m; inputting each n-dimensional one-hot vector into the second neural network model and extracting the embedding layer's output vector for each n-dimensional one-hot vector, to obtain the m-dimensional vector corresponding to each n-dimensional one-hot vector; and establishing and storing the correspondence between each n-dimensional one-hot vector and its corresponding m-dimensional vector.
Further, obtaining the multiple first-class training samples comprises: obtaining n mutually distinct n-dimensional one-hot vectors; obtaining the first-class training label corresponding to each n-dimensional one-hot vector; and combining each n-dimensional one-hot vector with its corresponding first-class training label to obtain n first-class training samples.
Further, obtaining n mutually distinct n-dimensional one-hot vectors comprises: obtaining the identifiers of n objects; and representing the identifiers of the n objects one-to-one by the n mutually distinct n-dimensional one-hot vectors.
Further, after establishing and storing the correspondence between each n-dimensional one-hot vector and its corresponding m-dimensional vector, the method further comprises: obtaining an input second-class training label corresponding to each m-dimensional vector; generating multiple second-class training samples based on each m-dimensional vector and its corresponding second-class training label; and training a third neural network model with the multiple second-class training samples to obtain a fourth neural network model.
Further, before training the first neural network model with the multiple first-class training samples, the method further comprises: obtaining an input parameter for configuring the input vector dimension of the third neural network model; and configuring the output dimension of the embedding layer of the first neural network model according to the parameter.
Further, before training the first neural network model with the multiple first-class training samples, the method further comprises: obtaining an input parameter for configuring the output dimension of the embedding layer of the first neural network model; and configuring the output dimension of the embedding layer of the first neural network model according to the parameter.
Further, the second layer of the first neural network model is a fully connected layer, and the output layer is a normalization layer.
According to another embodiment of the present invention, a data processing device is provided, comprising: a first obtaining module, configured to obtain multiple first-class training samples, wherein each first-class training sample comprises an n-dimensional one-hot vector and a corresponding first-class training label; a first training module, configured to train a first neural network model with the multiple first-class training samples to obtain a second neural network model, wherein the first layer of the first neural network model is an embedding layer that outputs an m-dimensional vector for an n-dimensional input vector, with n > m; a first execution module, configured to input each n-dimensional one-hot vector into the second neural network model and extract the embedding layer's output vector for each n-dimensional one-hot vector, obtaining the m-dimensional vector corresponding to each n-dimensional one-hot vector; and a second execution module, configured to establish and store the correspondence between each n-dimensional one-hot vector and its corresponding m-dimensional vector.
Further, the obtaining module comprises: a first obtaining unit, configured to obtain n mutually distinct n-dimensional one-hot vectors; a second obtaining unit, configured to obtain the first-class training label corresponding to each n-dimensional one-hot vector; and a combining unit, configured to combine each n-dimensional one-hot vector with its corresponding first-class training label to obtain n first-class training samples.
Further, the first obtaining unit comprises: a third obtaining unit, configured to obtain the identifiers of n objects; and an identifying unit, configured to represent the identifiers of the n objects one-to-one by the n mutually distinct n-dimensional one-hot vectors.
Further, the device further comprises: a second obtaining module, configured to obtain, after the correspondence between each n-dimensional one-hot vector and its corresponding m-dimensional vector is established and stored, an input second-class training label corresponding to each m-dimensional vector; a generating module, configured to generate multiple second-class training samples based on each m-dimensional vector and its corresponding second-class training label; and a second training module, configured to train a third neural network model with the multiple second-class training samples to obtain a fourth neural network model.
Further, the device further comprises: a third obtaining module, configured to obtain, before the first neural network model is trained with the multiple first-class training samples, an input parameter for configuring the input vector dimension of the third neural network model; and a first configuration module, configured to configure the output dimension of the embedding layer of the first neural network model according to the parameter.
Further, the device further comprises: a fourth obtaining module, configured to obtain, before the first neural network model is trained with the multiple first-class training samples, an input parameter for configuring the output dimension of the embedding layer of the first neural network model; and a second configuration module, configured to configure the output dimension of the embedding layer of the first neural network model according to the parameter.
Further, the second layer of the first neural network model is a fully connected layer, and the output layer is a normalization layer.
According to still another embodiment of the present invention, a storage medium is further provided, in which a computer program is stored, wherein the computer program is arranged to perform, when run, the steps in any of the above method embodiments.
According to still another embodiment of the present invention, an electronic device is further provided, comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor is arranged to run the computer program to perform the steps in any of the above method embodiments.
Through the present invention, whenever a neural network model needs to be trained in which different objects are identified by distinct n-dimensional one-hot vectors, the first layer of the neural network model is designed as an embedding layer that can map an n-dimensional input vector to an m-dimensional output vector, and the model is trained with labeled n-dimensional one-hot vectors, so that the embedding layer of the neural network model learns to express the information of each n-dimensional one-hot vector more accurately with an m-dimensional vector. The trained neural network model can then reduce any n-dimensional one-hot vector to m dimensions, which reduces the memory occupied during training, increases the training speed of the neural network model, and shortens its training time, thereby solving the technical problem in the related art that identifying different objects by one-hot vectors makes the training process of machine learning consume excessive memory and slows training.
Detailed description of the invention
The drawings described herein are provided for a further understanding of the present invention and constitute a part of this application. The illustrative embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
Fig. 1 is a flowchart of a data processing method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a data processing device according to an embodiment of the present invention;
Fig. 3 is a hardware block diagram of an electronic device according to an embodiment of the present invention.
Specific embodiment
To enable those skilled in the art to better understand the solution of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part rather than all of the embodiments of the present application. Where no conflict arises, the embodiments in the present application and the features in the embodiments may be combined with each other. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in the present application without creative labor shall fall within the protection scope of the present application.
It should be noted that the terms "first", "second", etc. in the description and claims of the present application and in the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present application described herein can be implemented in orders other than those illustrated or described herein. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such process, method, product, or device.
Embodiment 1
This embodiment provides a data processing method that can run on a mobile terminal, a handheld terminal, or a similar computing device. Running on different computing devices is merely a difference in the executing subject of the solution; those skilled in the art will appreciate that running on different computing devices produces the same technical effect.
In the data processing method provided in this embodiment, whenever a neural network model needs to be trained in which different objects are identified by distinct n-dimensional one-hot vectors, the first layer of the neural network model is designed as an embedding layer that can map an n-dimensional vector to an m-dimensional vector, and the model is trained with labeled n-dimensional one-hot vectors, so that the embedding layer of the neural network model learns to express the information of each n-dimensional one-hot vector more accurately with an m-dimensional vector. The trained neural network model can then reduce any n-dimensional one-hot vector to m dimensions, which reduces the memory occupied during training, increases the training speed of the neural network model, and shortens its training time, thereby solving the technical problem in the related art that identifying different objects by one-hot vectors makes the training process of machine learning consume excessive memory and slows training.
As shown in Fig. 1, the data processing method provided in this embodiment comprises the following steps:
Step 101: obtain multiple first-class training samples, wherein each first-class training sample comprises an n-dimensional one-hot vector and a corresponding first-class training label.
Each first-class training sample corresponds to one object. The n-dimensional one-hot vector represents the object so that a machine can identify and process it, and the corresponding first-class training label is also a vector, identifying a feature or attribute of the object. This feature or attribute is the training objective of the neural network model; that is, after training, the neural network model outputs, for each object represented by an n-dimensional one-hot vector, the corresponding first-class training label.
For example, the i-th first-class training sample A_i is [x_i1, x_i2, ..., x_in, y_i], where x_i1 to x_in are the n elements of the n-dimensional one-hot vector N_i; among x_i1 to x_in, only one element is 1 and the rest are 0; and y_i is the first-class training label corresponding to the n-dimensional one-hot vector N_i, indicating the class of the object represented by N_i.
Optionally, when obtaining the multiple first-class training samples: first, obtain n mutually distinct n-dimensional one-hot vectors, each of which is the encoding of one object; for example, Beijing may be represented by the vector [..., 0, 1, 0, ...]. Second, obtain the first-class training label corresponding to each n-dimensional one-hot vector; the first-class training label is used to train the neural network model toward a classification target. For example, the first-class training label may be the scale of a city: the first-class training label corresponding to Beijing is 1, indicating that Beijing is a first-tier city. Finally, combine each n-dimensional one-hot vector with its corresponding first-class training label to obtain n first-class training samples. After the model is trained with the first-class training samples, the resulting model classifies an input city and determines its scale.
Each n-dimensional one-hot vector is used to represent one object. Optionally, the identifiers of n objects may first be obtained and represented one-to-one by n mutually distinct n-dimensional one-hot vectors, so as to obtain and store the correspondence between each n-dimensional one-hot vector and its object identifier. For example, red is represented by the vector [0, 0, 1], and the correspondence between "red" and [0, 0, 1] is stored.
The first neural network model can be trained into the second neural network model with the multiple first-class training samples, so that the second neural network model has the classification capability corresponding to the first-class training samples.
For example, to obtain a neural network model that, given a city name, outputs the city's scale (first-tier, second-tier, third-tier, etc.), n first-class training samples are obtained. Taking one city C as an example, the first-class training label corresponding to city C is 2, indicating that city C is a second-tier city, and the n-dimensional one-hot vector corresponding to the name of city C is [x_i1, x_i2, ..., 0, ..., x_in]. Combining city C's n-dimensional one-hot vector with its first-class training label yields the first-class training sample corresponding to city C: [x_i1, x_i2, ..., 0, ..., x_in, 2].
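The construction of first-class training samples described above can be sketched as follows; the city names and tier labels are hypothetical stand-ins for illustration, not data from the patent:

```python
import numpy as np

cities = ["Beijing", "Shanghai", "CityC"]   # illustrative object identifiers
n = len(cities)
# Hypothetical first-class labels: the tier (scale) of each city.
tiers = {"Beijing": 1, "Shanghai": 1, "CityC": 2}

# One mutually distinct n-dimensional one-hot vector per city (step: identify
# each object's identifier with a one-hot vector, one-to-one).
one_hot = {city: np.eye(n)[i] for i, city in enumerate(cities)}

# A first-class training sample combines the one-hot vector with its label.
samples = [np.append(one_hot[city], tiers[city]) for city in cities]
print(samples[2])   # [0. 0. 1. 2.] -- CityC's one-hot vector plus label 2
```

With real data, n would be the full number of cities and the label would come from whatever classification target the model is being trained toward.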
Step 102: train the first neural network model with the multiple first-class training samples to obtain the second neural network model, wherein the first layer of the first neural network model is an embedding layer that outputs an m-dimensional vector for an n-dimensional input vector, with n > m.
After the multiple first-class training samples are obtained, the first neural network model is trained with them; the training objective is to enable the first neural network model to produce the corresponding first-class training label for each input n-dimensional one-hot vector.
It should be noted that, to reduce the memory occupied during training, increase training speed, and shorten training time, the present application designs the input layer (first layer) of the first neural network model as an embedding layer whose input vector dimension is n and whose output vector dimension is m, with m < n. The dimension of the embedding layer's input vector (that is, the input vector of the first neural network model) is preconfigured as n, and the embedding layer's output vector dimension is configured as m; the embedding layer thus receives an input n-dimensional one-hot vector and outputs the corresponding m-dimensional vector. Accordingly, the layers after the embedding layer can be designed in the manner of a conventional neural network model; for example, the second layer of the first neural network model is a fully connected layer and the output layer is a normalization layer used to output the classification target, where the input vector dimension of the layer connected to the embedding layer is m, and the output vector dimension of the output layer of the first neural network model is the dimension of the first-class training label.
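The architecture just described — an n-to-m embedding layer, a fully connected layer, and a normalization (softmax) output layer — can be sketched in plain NumPy. The sizes, initialization, learning rate, and single-sample training loop below are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, num_classes = 8, 3, 2   # illustrative: n-dim one-hot -> m-dim embedding

# Embedding layer: for a one-hot input, multiplying by E just selects a row.
E = rng.normal(scale=0.1, size=(n, m))            # embedding weights (n -> m)
W = rng.normal(scale=0.1, size=(m, num_classes))  # fully connected (m -> classes)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def forward(x_onehot):
    h = x_onehot @ E          # embedding layer output: the m-dimensional vector
    return h, softmax(h @ W)  # normalization (softmax) output layer

def train_step(x_onehot, label, lr=0.1):
    """One SGD step on a single sample with cross-entropy loss."""
    global E, W
    h, p = forward(x_onehot)
    grad_logits = p.copy()
    grad_logits[label] -= 1.0          # d(cross-entropy)/d(logits)
    grad_h = grad_logits @ W.T         # backprop through W (pre-update)
    W -= lr * np.outer(h, grad_logits)
    E -= lr * np.outer(x_onehot, grad_h)

x = np.zeros(n); x[2] = 1.0            # the one-hot vector of some object
for _ in range(100):
    train_step(x, label=1)             # label 1 plays the first-class label
h, p = forward(x)
print(h.shape, p.argmax())             # (3,) 1
```

Training pushes the embedding row for object 2 toward values that predict its label, which is exactly why the embedding output ends up carrying label-related information.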
Step 103: input each n-dimensional one-hot vector into the second neural network model, and extract the embedding layer's output vector for each n-dimensional one-hot vector, obtaining the m-dimensional vector corresponding to each n-dimensional one-hot vector.
After the first neural network model is trained with the multiple first-class training samples, the resulting model is the second neural network model, that is, the trained neural network model. The second neural network model is then used to reduce the dimension of each n-dimensional one-hot vector: each n-dimensional one-hot vector is input into the second neural network model, and the m-dimensional vector output by the embedding layer of the second neural network model is the corresponding reduced vector, where m is a preconfigured parameter.
The reason the dimension reduction of the n-dimensional one-hot vectors is performed with the trained neural network model is that the first neural network model has been trained on the first-class training samples, so that the m-dimensional vector output by its embedding layer expresses the object identified by each n-dimensional one-hot vector more accurately.
For example, in one application scenario of this step, suppose there are 100 cities to which label vectors are to be assigned. With one-hot vectors alone, 100-dimensional one-hot vectors would be needed. With the method provided by this embodiment of the present invention, the dimension can be reduced to a specified dimension m, for example, identifying each city with a 10-dimensional vector. Moreover, the reduced vector can carry additional features; for example, if the first-class training label indicates the scale of a city, the reduced vector carries feature information about city scale.
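Steps 103 and 104 amount to running every one-hot vector through the trained embedding layer and storing the resulting correspondence. In the sketch below, random weights stand in for the trained embedding of the second neural network model (an assumption made only so the example is self-contained):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 100, 10                 # 100 objects, reduced to 10 dimensions

# Stand-in for the trained model's embedding weights; in practice these come
# from the second neural network model after training.
E = rng.normal(size=(n, m))

def embed(one_hot_vec: np.ndarray) -> np.ndarray:
    """Embedding-layer output for an n-dimensional one-hot vector."""
    return one_hot_vec @ E     # equivalent to selecting row argmax(one_hot_vec)

# Extract the m-dimensional vector for every one-hot vector and store the
# one-hot -> m-dimensional correspondence.
mapping = {}
for i in range(n):
    one_hot_vec = np.zeros(n)
    one_hot_vec[i] = 1.0
    mapping[tuple(one_hot_vec)] = embed(one_hot_vec)

key = tuple(np.eye(n)[7])      # the one-hot vector of object 7
print(mapping[key].shape)      # (10,)
```

The stored `mapping` is the correspondence of step 104, ready to serve as the encoding when training other models.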
Step 104: establish and store the correspondence between each n-dimensional one-hot vector and its corresponding m-dimensional vector.
After the correspondence between each n-dimensional one-hot vector and its corresponding m-dimensional vector is obtained, it is stored so that it can be used as an encoding when training other models.
Optionally, after the correspondence between each n-dimensional one-hot vector and its corresponding m-dimensional vector is established and stored, if another classification model needs to be trained, the method further comprises the following steps:
Step 201: obtain an input second-class training label corresponding to each m-dimensional vector.
Step 202: generate multiple second-class training samples based on each m-dimensional vector and its corresponding second-class training label.
Step 203: train a third neural network model with the multiple second-class training samples to obtain a fourth neural network model, that is, the other classification model.
The second-class training label is the training objective of the other classification model to be trained, and is represented by a vector of a specified dimension. The fourth neural network model is the required other classification model; that is, the fourth neural network model outputs a second-class training label of the specified dimension for each input m-dimensional vector.
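Steps 201 and 202 can be sketched by pairing the stored m-dimensional vectors with second-class labels; both the vectors and the labels below are arbitrary stand-ins, since the real ones would come from the stored correspondence and the new training target:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 100, 10

# Stand-in for the stored correspondence of step 104: object index -> m-dim vector.
m_vectors = {i: rng.normal(size=m) for i in range(n)}

# Hypothetical second-class labels for the new classification target.
second_labels = {i: i % 3 for i in range(n)}

# Step 202: each second-class training sample pairs an m-dimensional vector
# with its second-class training label.
second_samples = [(m_vectors[i], second_labels[i]) for i in range(n)]

X = np.stack([vec for vec, _ in second_samples])   # inputs for the third model
y = np.array([lab for _, lab in second_samples])   # its training targets
print(X.shape, y.shape)                            # (100, 10) (100,)
```

Because the inputs are m-dimensional rather than n-dimensional, the third neural network model trains on much smaller vectors, which is the memory and speed benefit the patent claims.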
Optionally, the parameter m of the first neural network model may be determined by the input vector of the third neural network model. Before the first neural network model is trained with the multiple first-class training samples, an input parameter for configuring the input vector dimension of the third neural network model is obtained, and the output dimension of the embedding layer of the first neural network model is configured according to the parameter.
It should be noted that the steps shown in the flowchart of the drawings may be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from that herein.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiment can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the method described in each embodiment of the present invention.
Embodiment 2
This embodiment further provides a data processing device for implementing the above Embodiment 1 and its preferred implementations. For terms or implementations not described in detail in this embodiment, reference may be made to the relevant description in Embodiment 1; what has already been explained is not repeated here.
Term " module " as used below, can be achieved on the combination of the software and/or hardware of predetermined function.Although Device described in following embodiment is preferably realized with software, but the combined realization of hardware or software and hardware And can be contemplated.
Fig. 2 is a schematic diagram of a data processing device according to an embodiment of the present invention. As shown in Fig. 2, the device comprises: a first obtaining module 10, a first training module 20, a first execution module 30, and a second execution module 40.
The first obtaining module is configured to obtain multiple first-class training samples, wherein each first-class training sample comprises an n-dimensional one-hot vector and a corresponding first-class training label. The first training module is configured to train a first neural network model with the multiple first-class training samples to obtain a second neural network model, wherein the first layer of the first neural network model is an embedding layer that outputs an m-dimensional vector for an n-dimensional input vector, with n > m. The first execution module is configured to input each n-dimensional one-hot vector into the second neural network model and extract the embedding layer's output vector for each n-dimensional one-hot vector, obtaining the m-dimensional vector corresponding to each n-dimensional one-hot vector. The second execution module is configured to establish and store the correspondence between each n-dimensional one-hot vector and its corresponding m-dimensional vector.
Optionally, the obtaining module comprises: a first obtaining unit, configured to obtain n mutually distinct n-dimensional one-hot vectors; a second obtaining unit, configured to obtain the first-class training label corresponding to each n-dimensional one-hot vector; and a combining unit, configured to combine each n-dimensional one-hot vector with its corresponding first-class training label to obtain n first-class training samples.
Optionally, the first obtaining unit comprises: a third obtaining unit, configured to obtain the identifiers of n objects; and an identifying unit, configured to represent the identifiers of the n objects one-to-one by the n mutually distinct n-dimensional one-hot vectors.
Optionally, the device further comprises: a second obtaining module, configured to obtain, after the correspondence between each n-dimensional one-hot vector and its corresponding m-dimensional vector is established and stored, an input second-class training label corresponding to each m-dimensional vector; a generating module, configured to generate multiple second-class training samples based on each m-dimensional vector and its corresponding second-class training label; and a second training module, configured to train a third neural network model with the multiple second-class training samples to obtain a fourth neural network model.
Optionally, the device further comprises: a third obtaining module, configured to obtain, before the first neural network model is trained with the multiple first-class training samples, an input parameter for configuring the input vector dimension of the third neural network model; and a first configuration module, configured to configure the output dimension of the embedding layer of the first neural network model according to the parameter.
Optionally, the device further comprises: a fourth acquisition module, configured to obtain, before the first neural network model is trained with the multiple first-type training samples, an input parameter for configuring the output dimension of the embedding layer of the first neural network model; and a second configuration module, configured to configure the output dimension of the embedding layer of the first neural network model according to the parameter.
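The intent of these configuration steps — the embedding output dimension m of the first network is set to match the input dimension of the third network, so the extracted m-dimensional vectors can be fed to it directly — can be sketched as follows. All parameter names are illustrative.

```python
# Hypothetical configuration: the third network's input dimension determines
# the embedding output dimension of the first network (m), while n stays the
# dimension of the one-hot inputs.
third_net_input_dim = 16

first_net_config = {
    "embedding_input_dim": 10000,               # n: size of the one-hot vocabulary
    "embedding_output_dim": third_net_input_dim  # m: matched to the third network
}
```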
Optionally, the second layer of the first neural network model is a fully connected layer, and the output layer is a normalization layer.
It should be noted that the above modules may be implemented in software or hardware; in the latter case, this may be achieved in the following ways, without being limited thereto: the above modules are all located in the same processor, or the modules are located, in any combination, in different processors.
Obviously, those skilled in the art will understand that the modules or steps of the invention described above may be implemented with a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they may be implemented as program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps may be performed in an order different from that shown or described herein, or the modules or steps may each be fabricated as individual integrated-circuit modules, or multiple of them may be fabricated as a single integrated-circuit module. The invention is thus not limited to any particular combination of hardware and software.
Embodiment 3
An embodiment of the present invention also provides a storage medium in which a computer program is stored, wherein the computer program is configured to perform, when run, the steps of any of the above method embodiments.
Optionally, in this embodiment, the storage medium may include, but is not limited to, a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, or any other medium capable of storing a computer program.
Embodiment 4
An embodiment of the present invention also provides an electronic device comprising a memory and a processor, the memory storing a computer program and the processor being configured to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the above electronic device may further include a transmission device and an input/output device, both connected to the above processor. Fig. 3 is a hardware block diagram of an electronic device according to an embodiment of the invention. As shown in Fig. 3, the electronic device may include one or more processors 302 (only one is shown in Fig. 3; processor 302 may include, but is not limited to, a processing unit such as a microcontroller (MCU) or a programmable logic device such as an FPGA) and a memory 304 for storing data; optionally, the electronic device may further include a transmission device 306 for communication functions and an input/output device 308. Those skilled in the art will understand that the structure shown in Fig. 3 is merely illustrative and does not limit the structure of the above electronic device; for example, the electronic device may include more or fewer components than shown in Fig. 3, or have a configuration different from that shown in Fig. 3.
The memory 304 may be used to store computer programs, for example, the software programs and modules of application software, such as the computer program corresponding to the data processing method in the embodiments of the invention; by running the computer programs stored in the memory 304, the processor 302 performs various functional applications and data processing, thereby implementing the above method. The memory 304 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 304 may further include memory located remotely from the processor 302, and such remote memory may be connected to the electronic device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The transmission device 306 is used to receive or send data via a network. Specific examples of the above network may include a wireless network provided by the communication provider of the electronic device. In one example, the transmission device 306 includes a network interface controller (NIC) that can connect to other network devices through a base station so as to communicate with the Internet. In one example, the transmission device 306 may be a radio-frequency (RF) module used to communicate with the Internet wirelessly.
The foregoing describes only preferred embodiments of the present invention and is not intended to limit the invention; to those skilled in the art, the invention is open to various modifications and variations. Any modification, equivalent replacement, improvement, or the like made within the principle of the invention shall be included within the protection scope of the invention.

Claims (10)

1. A data processing method, characterized in that the method comprises:
obtaining multiple first-type training samples, wherein each first-type training sample comprises an n-dimensional one-hot vector and a corresponding first-type training label;
training a first neural network model with the multiple first-type training samples to obtain a second neural network model, wherein the first layer of the first neural network model is an embedding layer, the embedding layer being configured to output an m-dimensional vector for an n-dimensional vector, n > m;
inputting each n-dimensional one-hot vector into the second neural network model, and extracting the output vector of the embedding layer for each n-dimensional one-hot vector to obtain the m-dimensional vector corresponding to each n-dimensional one-hot vector;
establishing and storing a correspondence between each n-dimensional one-hot vector and its corresponding m-dimensional vector.
2. The method according to claim 1, characterized in that obtaining the multiple first-type training samples comprises:
obtaining n mutually distinct n-dimensional one-hot vectors;
obtaining the first-type training label corresponding to each n-dimensional one-hot vector;
combining each n-dimensional one-hot vector with its corresponding first-type training label to obtain n first-type training samples.
3. The method according to claim 2, characterized in that obtaining the n mutually distinct n-dimensional one-hot vectors comprises:
obtaining identifiers of n objects;
representing the identifiers of the n objects one-to-one by the n mutually distinct n-dimensional one-hot vectors.
4. The method according to claim 1, characterized in that, after establishing and storing the correspondence between each n-dimensional one-hot vector and its corresponding m-dimensional vector, the method further comprises:
obtaining a second-type training label corresponding to each input m-dimensional vector;
generating multiple second-type training samples based on each m-dimensional vector and its corresponding second-type training label;
training a third neural network model with the multiple second-type training samples to obtain a fourth neural network model.
5. The method according to claim 4, characterized in that, before training the first neural network model with the multiple first-type training samples, the method further comprises:
obtaining an input parameter for configuring the input vector dimension of the third neural network model;
configuring the output dimension of the embedding layer of the first neural network model according to the parameter.
6. The method according to claim 1, characterized in that, before training the first neural network model with the multiple first-type training samples, the method further comprises:
obtaining an input parameter for configuring the output dimension of the embedding layer of the first neural network model;
configuring the output dimension of the embedding layer of the first neural network model according to the parameter.
7. The method according to claim 1, characterized in that the second layer of the first neural network model is a fully connected layer, and the output layer is a normalization layer.
8. An information processing apparatus, characterized in that the apparatus comprises:
a first acquisition module, configured to obtain multiple first-type training samples, wherein each first-type training sample comprises an n-dimensional one-hot vector and a corresponding first-type training label;
a first training module, configured to train a first neural network model with the multiple first-type training samples to obtain a second neural network model, wherein the first layer of the first neural network model is an embedding layer, the embedding layer being configured to output an m-dimensional vector for an n-dimensional vector, n > m;
a first execution module, configured to input each n-dimensional one-hot vector into the second neural network model and to extract the output vector of the embedding layer for each n-dimensional one-hot vector, obtaining the m-dimensional vector corresponding to each n-dimensional one-hot vector;
a second execution module, configured to establish and store a correspondence between each n-dimensional one-hot vector and its corresponding m-dimensional vector.
9. A storage medium, characterized in that a computer program is stored in the storage medium, wherein the computer program is configured to perform, when run, the method according to any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, characterized in that a computer program is stored in the memory, and the processor is configured to run the computer program to perform the method according to any one of claims 1 to 7.
CN201910534815.0A 2019-06-20 2019-06-20 Data processing method and device, storage medium and electronic device Active CN110414678B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910534815.0A CN110414678B (en) 2019-06-20 2019-06-20 Data processing method and device, storage medium and electronic device
PCT/CN2019/117724 WO2020253049A1 (en) 2019-06-20 2019-11-13 Data processing method and device, storage medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910534815.0A CN110414678B (en) 2019-06-20 2019-06-20 Data processing method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN110414678A true CN110414678A (en) 2019-11-05
CN110414678B CN110414678B (en) 2024-07-09

Family

ID=68359398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910534815.0A Active CN110414678B (en) 2019-06-20 2019-06-20 Data processing method and device, storage medium and electronic device

Country Status (2)

Country Link
CN (1) CN110414678B (en)
WO (1) WO2020253049A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020253049A1 (en) * 2019-06-20 2020-12-24 平安科技(深圳)有限公司 Data processing method and device, storage medium, and electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170372696A1 (en) * 2016-06-28 2017-12-28 Samsung Electronics Co., Ltd. Language processing method and apparatus
CN108665064A (en) * 2017-03-31 2018-10-16 阿里巴巴集团控股有限公司 Neural network model training, object recommendation method and device
CN109189889A (en) * 2018-09-10 2019-01-11 武汉斗鱼网络科技有限公司 A kind of barrage identification model method for building up, device, server and medium
CN109460821A (en) * 2018-10-29 2019-03-12 重庆中科云丛科技有限公司 A kind of neural network compression method, device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201614958D0 (en) * 2016-09-02 2016-10-19 Digital Genius Ltd Message text labelling
CN110414678B (en) * 2019-06-20 2024-07-09 平安科技(深圳)有限公司 Data processing method and device, storage medium and electronic device


Also Published As

Publication number Publication date
WO2020253049A1 (en) 2020-12-24
CN110414678B (en) 2024-07-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant