CN109360553A - A novel time-delay recurrent neural network for speech recognition - Google Patents

A novel time-delay recurrent neural network for speech recognition

Info

Publication number
CN109360553A
Authority
CN
China
Prior art keywords
neural network
time
delay
layer
recurrent neural
Prior art date
Legal status
Granted
Application number
CN201811380751.5A
Other languages
Chinese (zh)
Other versions
CN109360553B (en)
Inventor
刘柏基
张伟彬
徐向民
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT
Priority to CN201811380751.5A
Publication of CN109360553A
Application granted
Publication of CN109360553B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/16 Speech classification or search using artificial neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The invention discloses a novel time-delay recurrent neural network for speech recognition, comprising a linear discriminant analysis layer, time-delay neural network layers, and a deep time-delay recurrent neural network layer. The linear discriminant analysis layer is connected to the lowest time-delay neural network layer. The deep time-delay recurrent neural network layer is arranged between two time-delay neural network layers and comprises a deep neural network layer and a time-delay recurrent neural network layer; the time-delay recurrent neural network layer is connected to the time-delay neural network layers above and below it, and the plain neural network structures in the deep neural network layer are correspondingly connected to the time-delay recurrent neural network structures in the time-delay recurrent neural network layer. The novel time-delay recurrent neural network for speech recognition of the invention achieves an effect similar to that of long short-term memory units while keeping the network structure simple, thereby improving training efficiency, reducing computational cost, and shrinking the model.

Description

A novel time-delay recurrent neural network for speech recognition
Technical field
The present invention relates to the field of acoustic models for speech recognition, and in particular to a novel time-delay recurrent neural network for speech recognition.
Background technique
With the continuing development of intelligent speech technology, intelligent assistants such as Siri, Alexa and Cortana have entered countless households and greatly ease everyday life. Speech recognition is a key link in intelligent speech technology: it converts speech data into text data for subsequent processing. In general, a speech recognition system consists of an acoustic model and a language model. Today, acoustic models built on neural networks significantly outperform the earlier acoustic models based on Gaussian mixture models, and they are widely used in well-known speech recognition systems.
In speech recognition, how to effectively organize, extract and process the contextual information of acoustic feature frames is a research focus. So far, the neural networks with the best acoustic-modeling performance are the time-delay neural network based on sub-sampling and the long short-term memory unit based on the recurrent neural network. The sub-sampled time-delay neural network has no recurrent structure, so it converges quickly and has few model parameters; the long short-term memory unit, thanks to its long-term memory, models context better, but its training is cumbersome and time-consuming, its structure is complex, and its model is large. In practice, the two networks are often combined so that they complement each other.
Summary of the invention
In view of this, to solve the above problems in the prior art, the present invention provides a novel time-delay recurrent neural network for speech recognition, which has the advantages of improving training efficiency and reducing model size.
To achieve the above object, the technical scheme of the present invention is as follows.
A novel time-delay recurrent neural network for speech recognition comprises a linear discriminant analysis layer, time-delay neural network layers, and a deep time-delay recurrent neural network layer. The linear discriminant analysis layer is connected to the lowest time-delay neural network layer. The deep time-delay recurrent neural network layer is arranged between two time-delay neural network layers and comprises a deep neural network layer and a time-delay recurrent neural network layer; the time-delay recurrent neural network layer is connected to the time-delay neural network layers above and below it, and the plain neural network structures in the deep neural network layer are correspondingly connected to the time-delay recurrent neural network structures in the time-delay recurrent neural network layer. The deep neural network layer is used to deepen the recursion path and strengthen the expressive power of the recurrent information.
Further, the time-delay recurrent neural network structure comprises a time-delay neural network structure and a recurrent neural network structure. The context input of the time-delay neural network structure is fed directly into the recurrent neural network structure, combining the two; the time-delay recurrent neural network structure reduces the number of network layers.
Further, in the time-delay neural network structure, the output is computed as:

Y_t = f(W C_t + b);
C_t = {X_{t-n}, X_{t+n}}

where X_t and Y_t are the input and output at time t, f is a nonlinear function, W C_t + b is an affine operation with two-dimensional weight matrix W and bias vector b, C_t is the spliced contextual information, and n is the number of context frames taken from the lower-layer network, with n greater than or equal to 1;
In the recurrent neural network structure, the output is computed as:

Y_t = f(W X_t + W Y_{t-1} + b);

In the time-delay recurrent neural network structure, the two formulas above are merged, and the output is computed as:

Y_t = f(W C_t + W Y_{t-1} + b);
C_t = {X_{t-n}, X_{t+n}}.
Further, after the plain neural network structure is connected to the time-delay recurrent neural network structure, the output passes through a nonlinear transformation and is computed as:

Y_t = f(W C_t + W D_{t-1} + b);
C_t = {X_{t-n}, X_{t+n}};
D_{t-1} = f(W Y_{t-1} + b).
Further, the novel time-delay recurrent neural network has two tunable hyperparameters: one is the number of time-delay recurrent neural network layers, with a tuning range of 1 to 3 layers; the other is the length of the recursion path, i.e. the number of deep neural network layers, with a tuning range of 1 to 2 layers.
Further, the context input length of the time-delay neural network structure is usually 8 to 20 speech sample frames.
Further, the novel time-delay recurrent neural network is trained with data parallelism. In the gradient-update step of data-parallel training, the concept of momentum is introduced to smooth the parameters: once a parameter update has been computed, the new parameters are smoothed as

value = α * value + (1 - α) * update

where value is a model parameter, α is the parameter retention factor, and update is the pending gradient computed in the gradient-update step of data parallelism.
Compared with the prior art, the novel time-delay recurrent neural network for speech recognition of the invention has the following advantages and beneficial effects:
In neural-network acoustic models, the long short-term memory unit models context well, but its training consumes excessive resources. In research on mixing time-delay neural networks with long short-term memory units, we found that in a common 6-layer sub-sampled time-delay neural network, adding one extra long short-term memory layer roughly doubles the training time, while the better-performing mixed network with three long short-term memory layers takes about four times as long to train as the original network. The growth in parameter count is also considerable. Based on this problem, we believe a certain structural redundancy exists in the mixed network. To reduce this redundancy, we propose a novel method of constructing a network that combines the time-delay neural network and the recurrent neural network, called the time-delay recurrent neural network. With this network, modeling performance stays similar to that of the original mixed time-delay plus long short-term memory network, while training efficiency improves and model size shrinks.
Brief description of the drawings
Fig. 1 is a schematic diagram of a typical sub-sampled time-delay neural network structure.
Fig. 2 is a schematic diagram of a recurrent neural network layer inserted into the network of Fig. 1.
Fig. 3 is a schematic diagram of the time-delay neural network structure and the recurrent neural network structure of Fig. 2 merged into a time-delay recurrent neural network structure.
Fig. 4 is a schematic diagram of the novel time-delay recurrent neural network for speech recognition of the invention.
Specific embodiments
The specific implementation of the invention is further described below with reference to the drawings and specific embodiments. Note that the described embodiments are only some of the embodiments of the invention, not all of them; all other embodiments obtained by those of ordinary skill in the art without creative effort on the basis of the embodiments of the invention fall within the scope of protection of the invention.
As shown in Fig. 4, the novel time-delay recurrent neural network for speech recognition of the invention comprises a linear discriminant analysis layer, time-delay neural network layers, and a deep time-delay recurrent neural network layer. The linear discriminant analysis layer is connected to the lowest time-delay neural network layer. The deep time-delay recurrent neural network layer is arranged between two time-delay neural network layers and comprises a deep neural network layer and a time-delay recurrent neural network layer; the time-delay recurrent neural network layer is connected to the time-delay neural network layers above and below it, and the plain neural network structures in the deep neural network layer are correspondingly connected to the time-delay recurrent neural network structures in the time-delay recurrent neural network layer. The deep neural network layer deepens the recursion path and strengthens the expressive power of the recurrent information.
The time-delay recurrent neural network structure comprises a time-delay neural network structure and a recurrent neural network structure: the context input of the time-delay neural network structure is fed directly into the recurrent neural network structure, combining the two, and the time-delay recurrent neural network structure reduces the number of network layers.
Embodiment 1
As shown in Fig. 1, a typical sub-sampled time-delay neural network structure is a pyramid; the sequence numbers of the sub-sampled context data frames of the input part are listed in Table 1.
Table 1: A typical sub-sampled time-delay neural network
Here the context information indicates how a network layer is connected to its input; for example, {-3, 3} means that the lower layer's outputs from the third frame in the past and the third frame in the future are spliced together as this layer's input.
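As an illustration of this splicing, a minimal sketch in Python follows, assuming the lower layer's outputs arrive as a (T, dim) array; the edge clamping and the helper name splice are assumptions, since the patent does not specify boundary handling.

```python
import numpy as np

def splice(frames, context=(-3, 3)):
    """For each time t, concatenate the lower layer's frames at t + c for each
    offset c in the context, e.g. {-3, 3}; indices are clamped at the edges."""
    T = len(frames)
    out = []
    for t in range(T):
        parts = [frames[min(max(t + c, 0), T - 1)] for c in context]
        out.append(np.concatenate(parts))
    return np.stack(out)  # shape (T, dim * len(context))
```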
As shown in Fig. 2, inserting an arbitrary recurrent neural network layer between the second and third layers of the typical sub-sampled time-delay neural network structure of Fig. 1 deepens the network and makes it harder to train. The adjacent time-delay neural network structure and recurrent neural network structure are therefore merged directly into a time-delay recurrent neural network structure; that is, the context input of the time-delay neural network structure is fed directly into the recurrent neural network structure, as shown in Fig. 3.
In the time-delay neural network structure, the output is computed as:

Y_t = f(W C_t + b);
C_t = {X_{t-n}, X_{t+n}}

where X_t and Y_t are the input and output at time t, f is a nonlinear function, W C_t + b is an affine operation with two-dimensional weight matrix W and bias vector b, C_t is the spliced contextual information, and n is the number of context frames taken from the lower-layer network, with n greater than or equal to 1;
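For concreteness, a minimal sketch of this layer follows, reusing the splice helper sketched above; tanh stands in for the unspecified nonlinearity f, and the parameter shapes are assumptions.

```python
def tdnn_layer(frames, W, b, context=(-3, 3)):
    """Time-delay layer: Y_t = f(W C_t + b) over the spliced context C_t."""
    C = splice(frames, context)   # C_t = {X_{t-n}, X_{t+n}}
    return np.tanh(C @ W.T + b)   # affine operation followed by the nonlinearity
```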
In the recurrent neural network structure, the output is computed as:

Y_t = f(W X_t + W Y_{t-1} + b);

In the time-delay recurrent neural network structure, the two formulas above are merged, and the output is computed as:

Y_t = f(W C_t + W Y_{t-1} + b);
C_t = {X_{t-n}, X_{t+n}}
With this merging method, the original three rounds of computation are reduced to two, which improves efficiency and reduces the parameter count while keeping the modeling performance similar to that before merging.
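A minimal sketch of the merged structure follows; the separate matrices W_c and W_r for the context and recurrent terms, the zero initial state, and tanh are illustrative assumptions, not fixed by the patent.

```python
def tdrnn_layer(frames, W_c, W_r, b, context=(-3, 3)):
    """Merged time-delay recurrent layer: Y_t = f(W_c C_t + W_r Y_{t-1} + b)."""
    C = splice(frames, context)
    Y = np.zeros((len(frames), b.shape[0]))
    y_prev = np.zeros(b.shape[0])     # Y_{t-1}, taken as zero at t = 0
    for t in range(len(frames)):
        y_prev = np.tanh(W_c @ C[t] + W_r @ y_prev + b)
        Y[t] = y_prev
    return Y
```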
Observation of the long short-term memory unit shows that the output of the previous frame influences the output of the current frame only indirectly, after complex nonlinear processing. The present invention simplifies this nonlinear operation: in the deep time-delay recurrent neural network structure, the output of the previous frame undergoes the nonlinear transformation of one or more layers of plain neural network structure before being fed to the current frame. The output is computed as:

Y_t = f(W C_t + W D_{t-1} + b);
C_t = {X_{t-n}, X_{t+n}};
D_{t-1} = f(W Y_{t-1} + b)
This method greatly simplifies the computation while still letting the recurrent information propagate along a more complex path; that is, it preserves the depth of the recursion path. Experiments show that a deep time-delay recurrent neural network structure built this way matches the performance of the existing mixed time-delay plus long short-term memory network structure with a shorter training time and fewer parameters; in our experiments, a network with three deep time-delay recurrent layers trained in roughly half the time of a network with three long short-term memory layers.
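A minimal sketch of this deepened recursion path follows, with a single deepening layer; the patent allows one or two such layers, and the parameter names and tanh are assumptions.

```python
def deep_tdrnn_layer(frames, W_c, W_r, b, W_d, b_d, context=(-3, 3)):
    """Deep time-delay recurrent layer: Y_t = f(W_c C_t + W_r D_{t-1} + b),
    where D_{t-1} = f(W_d Y_{t-1} + b_d) deepens the recursion path."""
    C = splice(frames, context)
    Y = np.zeros((len(frames), b.shape[0]))
    d_prev = np.zeros(b.shape[0])     # D_{t-1}, taken as zero at t = 0
    for t in range(len(frames)):
        Y[t] = np.tanh(W_c @ C[t] + W_r @ d_prev + b)
        d_prev = np.tanh(W_d @ Y[t] + b_d)
    return Y
```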
In the present invention, the number of time-delay recurrent neural network layers is a tunable hyperparameter. In our study, one to three layers performed similarly, so for efficiency a single time-delay recurrent layer can be used; this does not preclude application scenarios in which a multi-layer network performs better.
The other tunable hyperparameter is the number of deep neural network layers, i.e. the length of the recursion path. An overly complex path increases training difficulty, so we suggest using one to two layers of plain neural network structure.
The context input length of the input feature frames directly affects the training efficiency and performance of the model. Owing to the nature of recurrent networks, recurrent information is hard to carry over long distances, and the recurrent operations cannot be parallelized. Long short-term memory networks therefore usually adopt long time spans, such as a context input length of 50 speech sample frames, to guarantee modeling quality, whereas the context input length of a time-delay neural network structure is usually 8 to 20 speech sample frames. This large gap is also a main cause of the difference in computation time. The novel time-delay recurrent neural network for speech recognition of the invention achieves performance similar to a long short-term memory network with the shorter context length of 16 frames.
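Putting the pieces together, a minimal sketch of the overall stack follows, with the linear discriminant analysis layer feeding the lowest time-delay layer and the deep time-delay recurrent block between two time-delay layers; all dimensions, contexts and parameter groupings here are illustrative assumptions.

```python
def forward(features, lda_W, tdnn1, tdrnn, tdnn2):
    """features: (T, feat_dim); tdnn1/tdnn2: (W, b) tuples for the time-delay
    layers; tdrnn: (W_c, W_r, b, W_d, b_d) for the deep recurrent block."""
    x = features @ lda_W.T            # linear discriminant analysis layer
    x = tdnn_layer(x, *tdnn1)         # lower time-delay neural network layer
    x = deep_tdrnn_layer(x, *tdrnn)   # deep time-delay recurrent block
    x = tdnn_layer(x, *tdnn2)         # upper time-delay neural network layer
    return x
```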
The novel time-delay recurrent neural network of the invention is trained with data parallelism. In the gradient-update step of data-parallel training, the concept of momentum is introduced to smooth the parameters: once a parameter update has been computed, the new parameters are smoothed as

value = α * value + (1 - α) * update

where value is a model parameter, α is the parameter retention factor, and update is the pending gradient computed in the gradient-update step of data parallelism.
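A minimal sketch of this smoothing step follows; the retention factor value of 0.9 is an illustrative assumption.

```python
def smooth_parameters(value, update, alpha=0.9):
    """Momentum-style smoothing after a data-parallel gradient step:
    value = alpha * value + (1 - alpha) * update."""
    return alpha * value + (1.0 - alpha) * update
```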
In conclusion a kind of novel Time-Delay Recurrent neural network for speech recognition of the invention pass through by time delay nerve The net structure method that network structure and Recursive Neural Network Structure combine, keep modeling effect with it is originally mashed up when sprawl Through network with it is long memory unit network is similar in short-term while, improve training effectiveness, reduce model volume.

Claims (7)

1. A novel time-delay recurrent neural network for speech recognition, characterized in that: it comprises a linear discriminant analysis layer, time-delay neural network layers, and a deep time-delay recurrent neural network layer; the linear discriminant analysis layer is connected to the lowest time-delay neural network layer; the deep time-delay recurrent neural network layer is arranged between two time-delay neural network layers and comprises a deep neural network layer and a time-delay recurrent neural network layer; the time-delay recurrent neural network layer is connected to the time-delay neural network layers above and below it; the plain neural network structures in the deep neural network layer are correspondingly connected to the time-delay recurrent neural network structures in the time-delay recurrent neural network layer; and the deep neural network layer is used to deepen the recursion path and strengthen the expressive power of the recurrent information.
2. The novel time-delay recurrent neural network for speech recognition according to claim 1, characterized in that: the time-delay recurrent neural network structure comprises a time-delay neural network structure and a recurrent neural network structure; the context input of the time-delay neural network structure is fed directly into the recurrent neural network structure, combining the two; and the time-delay recurrent neural network structure reduces the number of network layers.
3. The novel time-delay recurrent neural network for speech recognition according to claim 1, characterized in that, in the time-delay neural network structure, the output is computed as:

Y_t = f(W C_t + b);
C_t = {X_{t-n}, X_{t+n}}

where X_t and Y_t are the input and output at time t, f is a nonlinear function, W C_t + b is an affine operation with two-dimensional weight matrix W and bias vector b, C_t is the spliced contextual information, and n is the number of context frames taken from the lower-layer network, with n greater than or equal to 1;

in the recurrent neural network structure, the output is computed as:

Y_t = f(W X_t + W Y_{t-1} + b);

in the time-delay recurrent neural network structure, the two formulas above are merged, and the output is computed as:

Y_t = f(W C_t + W Y_{t-1} + b);
C_t = {X_{t-n}, X_{t+n}}.
4. The novel time-delay recurrent neural network for speech recognition according to claim 1, characterized in that, after the plain neural network structure is connected to the time-delay recurrent neural network structure, the output passes through a nonlinear transformation and is computed as:

Y_t = f(W C_t + W D_{t-1} + b);
C_t = {X_{t-n}, X_{t+n}};
D_{t-1} = f(W Y_{t-1} + b).
5. The novel time-delay recurrent neural network for speech recognition according to claim 1, characterized in that: the novel time-delay recurrent neural network has two hyperparameters, one being the number of time-delay recurrent neural network layers, with a tuning range of 1 to 3 layers, and the other being the number of deep neural network layers, i.e. the length of the recursion path, with a tuning range of 1 to 2 layers.
6. The novel time-delay recurrent neural network for speech recognition according to claim 1, characterized in that: the context input length of the time-delay neural network structure is usually 8 to 20 speech sample frames.
7. The novel time-delay recurrent neural network for speech recognition according to claim 1, characterized in that: the novel time-delay recurrent neural network is trained with data parallelism.
CN201811380751.5A 2018-11-20 2018-11-20 Delay recurrent neural network for speech recognition Active CN109360553B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811380751.5A CN109360553B (en) 2018-11-20 2018-11-20 Delay recurrent neural network for speech recognition


Publications (2)

Publication Number Publication Date
CN109360553A 2019-02-19
CN109360553B (en) 2023-06-20

Family

ID=65332297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811380751.5A Active CN109360553B (en) 2018-11-20 2018-11-20 Delay recurrent neural network for speech recognition

Country Status (1)

Country Link
CN (1) CN109360553B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090210218A1 (en) * 2008-02-07 2009-08-20 Nec Laboratories America, Inc. Deep Neural Networks and Methods for Using Same
CN106328122A (en) * 2016-08-19 2017-01-11 深圳市唯特视科技有限公司 Voice identification method using long-short term memory model recurrent neural network
CN108510985A (en) * 2017-02-24 2018-09-07 百度(美国)有限责任公司 System and method for reducing the principle sexual deviation in production speech model
US20180308487A1 (en) * 2017-04-21 2018-10-25 Go-Vivace Inc. Dialogue System Incorporating Unique Speech to Text Conversion Method for Meaningful Dialogue Response
CN108447490A (en) * 2018-02-12 2018-08-24 阿里巴巴集团控股有限公司 The method and device of Application on Voiceprint Recognition based on Memorability bottleneck characteristic

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
VIJAYADITYA PEDDINTI et al.: "Low Latency Acoustic Modeling Using Temporal Convolution and LSTMs", IEEE Signal Processing Letters *
WANG Yonghe et al.: "Research on Mongolian Speech Recognition Based on TDNN-FSMN", Journal of Chinese Information Processing *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110211588A (en) * 2019-06-03 2019-09-06 北京达佳互联信息技术有限公司 Audio recognition method, device and electronic equipment
US11482208B2 (en) 2019-06-03 2022-10-25 Beijing Dajia Internet Information Technology Co., Ltd. Method, device and storage medium for speech recognition
CN113113022A (en) * 2021-04-15 2021-07-13 吉林大学 Method for automatically identifying identity based on voiceprint information of speaker
CN116825114A (en) * 2023-08-31 2023-09-29 深圳市声扬科技有限公司 Voiceprint recognition method, voiceprint recognition device, electronic equipment and computer readable storage medium
CN116825114B (en) * 2023-08-31 2023-11-10 深圳市声扬科技有限公司 Voiceprint recognition method, voiceprint recognition device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN109360553B (en) 2023-06-20

Similar Documents

Publication Publication Date Title
JP6466590B2 (en) Big data processing method based on deep learning model satisfying K-order sparse constraint
CN109360553A (en) A kind of novel Time-Delay Recurrent neural network for speech recognition
CN106650789A (en) Image description generation method based on depth LSTM network
CN107016406A (en) The pest and disease damage image generating method of network is resisted based on production
CN108073711A (en) A kind of Relation extraction method and system of knowledge based collection of illustrative plates
CN111161277A (en) Natural image matting method based on deep learning
CN109801230A (en) A kind of image repair method based on new encoder structure
CN103745482B (en) A kind of Dual-threshold image segmentation method based on bat algorithm optimization fuzzy entropy
CN109522416A (en) A kind of construction method of Financial Risk Control knowledge mapping
CN110008961A (en) Text real-time identification method, device, computer equipment and storage medium
CN109902805A (en) The depth measure study of adaptive sample synthesis and device
CN109920476A (en) The disease associated prediction technique of miRNA- based on chaos game playing algorithm
CN111626296B (en) Medical image segmentation system and method based on deep neural network and terminal
CN110246085A (en) A kind of single-image super-resolution method
CN109614896A (en) A method of the video content semantic understanding based on recursive convolution neural network
CN108961270B (en) Bridge crack image segmentation model based on semantic segmentation
CN109947948A (en) A kind of knowledge mapping expression learning method and system based on tensor
CN102129700A (en) Infrared simulated image platform under ocean background and image generation method thereof
CN104615679A (en) Multi-agent data mining method based on artificial immunity network
CN110472668A (en) A kind of image classification method
CN116152199A (en) Hand gesture and shape estimation method based on segmentation map guidance and regular constraint
CN116542991A (en) Network architecture for fracture image segmentation, training method and segmentation method thereof
CN107939371B (en) A kind of method and device of determining well pattern thickening feasibility
CN116363149A (en) Medical image segmentation method based on U-Net improvement
CN113468865B (en) Deep learning-based method for extracting relationship between entities in subway design field specification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant