CN113657374A - English address recognition and analysis method for international mail list - Google Patents

English address recognition and analysis method for international mail list

Info

Publication number
CN113657374A
CN113657374A (application CN202110725042.1A)
Authority
CN
China
Prior art keywords
image
address
english
mail list
international mail
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110725042.1A
Other languages
Chinese (zh)
Inventor
欧阳皓晗
宁重阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University of Forestry and Technology
Original Assignee
Central South University of Forestry and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University of Forestry and Technology filed Critical Central South University of Forestry and Technology
Priority to CN202110725042.1A
Publication of CN113657374A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 - Handling natural language data
    • G06F 40/20 - Natural language analysis
    • G06F 40/279 - Recognition of textual entities
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Character Discrimination (AREA)

Abstract

An English address recognition and analysis method for an international mail list comprises the following steps. Step one: obtain an original image of the international mail list, crop the address area to obtain an original address image, and preprocess the original address image. Step two: perform English text recognition on the grayscale original address image with a neural network to obtain the specific English text information it contains. Step three: perform word segmentation and sentence processing on the English text information to generate the corresponding English sentences. Step four: translate the English sentences generated in step three with a Chinese-English translation module to obtain the corresponding Chinese addresses. The invention thereby overcomes the defects of the prior art, realizes English address recognition for postal international mail lists, and improves recognition speed.

Description

English address recognition and analysis method for international mail list
Technical Field
The invention relates to the technical field of mail list processing, and in particular to an English address recognition and analysis method for an international mail list.
Background
As the pace of life accelerates and demands on working efficiency grow, a key problem in information processing is how to input Chinese characters into a computer quickly and efficiently. In the past, people recorded information on paper; today, computers are used to record, edit and organize information. In this age of information explosion, manual character input can no longer meet the demand, and if computers could recognize characters automatically, they could replace much of this manual work.
At present, character input methods mainly include keyboard input, handwriting recognition, voice input, and automatic machine recognition. Manual keyboard entry can be mastered only after a period of learning and training; handwriting recognition and voice input are simple and convenient but slow; and for the large volume of existing documents, these methods consume substantial manpower, material and financial resources. Only automatic computer recognition, i.e. optical character recognition (OCR), can achieve high-speed, automatic input of character information.
As is well known, a postal company is a comprehensive service enterprise whose business covers several major categories such as finance, mail, newspapers and periodicals, securities and philately, among which the mail business has developed rapidly and occupies most of the domestic and foreign market. International mail, sent from one country to another, is one of these international services and is further divided into international ordinary letters and international registered letters. In 2020, because of the epidemic, many people were separated from relatives and friends and living alone abroad, and longing for family and homeland could only be expressed through the mail, so international mail consignments increased steadily. The conventional mode of manual sorting and manual address identification keeps the sorting rate low, and quickly recognizing the address on the mail list with deep learning has become an urgent need.
The mail sorting work of postal companies has long been done manually; the labor intensity is high, work efficiency suffers, and misdelivery sometimes occurs. If international mail addresses could be automatically detected, translated, classified and dispatched, processing time would be greatly reduced and delivery would be faster. Replacing manual sorting with computers saves the postal company a large amount of manpower and material resources, effectively improves mail sorting efficiency, and has broad application prospects.
Therefore, in view of the above drawbacks, the present inventors, drawing on many years of experience and achievements in related industries, have conducted extensive research and design and developed an English address recognition and analysis method for the international mail list to overcome these drawbacks.
Disclosure of Invention
The invention aims to provide an English address recognition and analysis method for an international mail list which effectively overcomes the defects of the prior art, realizes English address recognition for postal international mail lists, and improves recognition speed.
In order to achieve this aim, the invention discloses an English address recognition and analysis method for an international mail list, characterized by comprising the following steps:
step one: obtaining an original image of the international mail list, cropping the address area to obtain an original address image, and preprocessing the original address image;
step two: performing English text recognition on the grayscale original address image through a neural network to obtain the specific English text information in the image;
step three: performing word segmentation and sentence processing on the English text information to generate the corresponding English sentences;
step four: translating the English sentences generated in step three through a Chinese-English translation module to obtain the corresponding Chinese addresses.
Wherein: in step one, image acquisition is carried out on the international mail list through image acquisition equipment such as a mobile phone, a scanner or a camera to obtain an original image of the international mail list, and the address position of the original image is cropped by a cropping module to obtain the original address image.
Wherein: the preprocessing of the original address image comprises the following sub-steps:
step 1.1: denoising the original address image, and performing grayscale conversion after denoising to obtain a grayscale original address image;
step 1.2: performing binarization processing on the denoised grayscale image according to the maximum inter-class variance method to generate a binary image.
Wherein: a threshold value T of the binarization processing is determined based on the maximum inter-class variance method; determining the threshold value T further includes classifying pixels having a gray value of T or less and pixels having a gray value larger than T into two classes, class 1 and class 2, the number of pixels in class 1 being $W_1(T)$ with mean gray value $M_1(T)$ and variance $\sigma_1^2(T)$, the number of pixels in class 2 being $W_2(T)$ with mean gray value $M_2(T)$ and variance $\sigma_2^2(T)$, and the mean gray value of all pixels being $M_T$. The intra-class variance is calculated as

$$\sigma_w^2 = W_1(T)\,\sigma_1^2(T) + W_2(T)\,\sigma_2^2(T)$$

and the inter-class variance is calculated as

$$\sigma_b^2 = W_1(T)\,(M_1(T)-M_T)^2 + W_2(T)\,(M_2(T)-M_T)^2 = W_1(T)\,W_2(T)\,(M_1(T)-M_2(T))^2$$

The gray value T that makes $\sigma_b^2/\sigma_w^2$ maximum, i.e. that makes $\sigma_b^2$ maximum, is taken as the threshold value.
Wherein: in step two, the grayscale original address image is input into a pre-trained convolutional neural network model to obtain spatial feature vectors, and the spatial feature vectors are input into a pre-trained encoder to obtain encoding information; the encoder uses the spatial feature vectors extracted by the convolutional neural network as the input sequence of a recurrent neural network, updates its hidden state, and propagates the spatial feature vectors bidirectionally over the time steps in order.
Wherein: after the two hidden layers perform the same operations, the bidirectional hidden states of the last hidden layer are concatenated as the input of the output layer to obtain the encoding information of the encoder, and the encoding information is input into a pre-trained decoder to obtain the recognition result.
Wherein: before step two, a convolutional neural network model, an encoder and a decoder are constructed and the models are trained; the convolutional neural network takes as input the text image data transformed in the image preprocessing stage and outputs the spatial feature vectors of the image.
Wherein: in step four, sentences irrelevant to the address information are filtered out according to keywords that denote regional levels, and the sentences belonging to the address information are passed into the constructed Chinese address database for matching and checking to obtain the corresponding Chinese address.
As can be seen from the above, the English address recognition and analysis method for the international mail list according to the present invention has the following effects:
1. It realizes English address recognition for postal international mail lists, improves recognition speed, and changes the traditional mail sorting mode.
2. Recognition efficiency is high, the Chinese address is obtained directly, and no manual intervention or operation is needed.
The details of the present invention can be obtained from the following description and the attached drawings.
Drawings
Fig. 1 shows a schematic diagram of the English address recognition and analysis method for an international mail list according to the present invention.
Detailed Description
Referring to Fig. 1, the English address recognition and analysis method for the international mail list of the present invention is shown.
The English address recognition and analysis method for the international mail list comprises the following steps:
Step one: obtaining an original image of the international mail list, cropping the address area to obtain an original address image, and preprocessing the original address image.
the method comprises the following steps that image acquisition equipment such as a mobile phone, a scanner and a camera can be used for acquiring an image of an international mail order to obtain an original image of the international mail order, and a cutting module is used for cutting the address position of the original image to obtain an original address image, wherein the preprocessing of the original address image comprises the following substeps:
step 1.1: the method comprises the steps of denoising an original address image, denoising the gray image according to a median filtering method, carrying out gray value conversion processing after denoising to obtain a gray original address image, denoising to enable boundaries between characters in the image to be clearer, and then carrying out gray value conversion, wherein the converted gray image is more beneficial to recognition and can improve the recognition precision of the gray image.
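As a minimal illustration of step 1.1, the denoising and grayscale conversion could look like the sketch below. The patent does not name a library; the use of OpenCV and the file names here are assumptions.

```python
# Minimal sketch of step 1.1 under the assumption that OpenCV is used;
# file names are placeholders.
import cv2

addr = cv2.imread("address_crop.jpg")              # cropped address region from step one
denoised = cv2.medianBlur(addr, 3)                 # median filtering with a 3x3 window
gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)  # grayscale "original address image"
cv2.imwrite("address_gray.png", gray)
```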
Step 1.2: binarization processing is performed on the denoised grayscale image according to the maximum inter-class variance method to generate a binary image. The threshold value T of the binarization is determined by that method as follows: pixels with gray value not greater than T and pixels with gray value greater than T are divided into two classes, class 1 and class 2, where the number of pixels in class 1 is $W_1(T)$ with mean gray value $M_1(T)$ and variance $\sigma_1^2(T)$, the number of pixels in class 2 is $W_2(T)$ with mean gray value $M_2(T)$ and variance $\sigma_2^2(T)$, and the mean gray value of all pixels is $M_T$. The intra-class variance is calculated as

$$\sigma_w^2 = W_1(T)\,\sigma_1^2(T) + W_2(T)\,\sigma_2^2(T)$$

and the inter-class variance is calculated as

$$\sigma_b^2 = W_1(T)\,(M_1(T)-M_T)^2 + W_2(T)\,(M_2(T)-M_T)^2 = W_1(T)\,W_2(T)\,(M_1(T)-M_2(T))^2$$

The gray value T that makes $\sigma_b^2/\sigma_w^2$ maximum, i.e. that makes $\sigma_b^2$ maximum, is taken as the threshold value.
Assume the given image has L gray levels and let the threshold be T: pixels with gray value greater than the threshold are set to 1 and pixels with gray value not greater than the threshold are set to 0, i.e. 1 for the foreground and 0 for the background.

Binarization processing is performed on the denoised grayscale image according to the threshold value T to generate a binary image whose pixel values are

$$B(i,j)=\begin{cases}1, & P(i,j)>T\\ 0, & P(i,j)\le T\end{cases}$$

where $P(i,j)$ is the pixel value of the denoised grayscale image, with value range $[0, m]$; $B(i,j)$ is the corresponding pixel value of the binary image; and T is the binarization threshold, with $0 < T < m$.
Step two: perform English text recognition on the grayscale original address image through a neural network to obtain the specific English text information it contains.
The grayscale original address image is input into a pre-trained convolutional neural network model to obtain spatial feature vectors, and the spatial feature vectors are input into a pre-trained encoder to obtain encoding information. The encoder uses the spatial feature vectors extracted by the convolutional neural network as the input sequence of a recurrent neural network, updates its own hidden state, and propagates the spatial feature vectors bidirectionally over the time steps in order. After the two hidden layers perform the same operations, the bidirectional hidden states of the last hidden layer are concatenated as the input of the output layer to obtain the encoding information of the encoder, and the encoding information is input into a pre-trained decoder to obtain the recognition result.
The decoder of the recurrent neural network takes the encoder's encoding information as its input sequence. It first initializes its hidden state, then computes the attention weight of each piece of encoding information at the current time step and obtains the context sequence information by weighted summation. Finally, the decoder takes the previous output, the previous hidden state and the context sequence information as input, updates its hidden state and passes it to the output layer, which outputs the character with the highest probability at each time step as the final predicted sequence.
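The sketch below illustrates the kind of bidirectional recurrent encoder and attention decoder described above, written in PyTorch. The layer sizes, module names and the choice of GRU cells are assumptions for the example, not the patented implementation.

```python
# Illustrative encoder/attention-decoder sketch; sizes and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, feat_dim=1024, hidden=256):
        super().__init__()
        # Bidirectional GRU reads the CNN feature sequence forwards and backwards.
        self.rnn = nn.GRU(feat_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, feats):                    # feats: (batch, time, feat_dim)
        enc_out, _ = self.rnn(feats)             # (batch, time, 2*hidden)
        return enc_out

class AttnDecoder(nn.Module):
    def __init__(self, vocab_size, enc_dim=512, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.attn = nn.Linear(enc_dim + hidden, 1)        # additive attention score
        self.cell = nn.GRUCell(enc_dim + hidden, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def step(self, prev_token, state, enc_out):
        emb = self.embed(prev_token)                       # (batch, hidden)
        # Attention weight of every encoding at the current time step.
        expanded = state.unsqueeze(1).expand(-1, enc_out.size(1), -1)
        scores = self.attn(torch.cat([enc_out, expanded], dim=-1))
        weights = F.softmax(scores, dim=1)                 # (batch, time, 1)
        context = (weights * enc_out).sum(dim=1)           # weighted sum = context info
        state = self.cell(torch.cat([emb, context], dim=-1), state)
        return self.out(state), state                      # character logits, new state
```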
Before step two, the convolutional neural network model, the encoder and the decoder are constructed, and the models are trained. The convolutional neural network takes as input the text image data transformed in the image preprocessing stage and outputs the spatial feature vectors of the image. The whole convolutional neural network consists of several operations, including Conv, ReLU, MaxPooling, BN and dropout; using a deeper and reasonably layered network structure helps obtain a more abstract representation of the image's spatial features. Conv denotes the convolutional layers; each convolutional layer has one or more convolution kernels, and these kernels are shared within the layer. In this embodiment each convolution kernel is initialized by an initializer whose strategy keeps the output variance of each layer as equal as possible. The feature maps obtained by taking the dot product of the convolutional layer's input with its kernels serve as the input of the next layer, and multiple convolutional layers extract spatial features of the input image at different levels. The convolutional layers of this embodiment use SAME padding for the input data and 3×3 convolution kernels; the smaller kernel makes the model focus more on local information and avoids losing image information through an overly large receptive field, so characters with similar appearance, such as g and q or a and u, are easier to distinguish.
This embodiment uses ReLU as the activation function, which introduces a non-linear factor into the neural network and makes it possible to handle more complex problems. Compared with other activation functions, ReLU iterates faster; when the input value is greater than 0 the gradient of the function is 1, so the vanishing-gradient phenomenon does not occur, which benefits model optimization and speeds up convergence.
After the convolution operations the feature maps have high dimensionality, and the pooling layers are used to reduce it. This embodiment uses MaxPooling, which retains the largest feature value in each pooling window, i.e. the most expressive of these values, while the smaller values are discarded. The pooling layers use VALID padding for the input data, and besides 2×2 windows the pooling also uses 2×1 windows, which is better suited to word images with large aspect ratios. Pooling reduces the number of model parameters, alleviates overfitting, and preserves a degree of invariance to translation and rotation of the features.
To reduce the risk of overfitting, this embodiment adds a BN (Batch Normalization) layer and a dropout operation after some of the layers. BN allows a larger learning rate to be used, which increases the training speed of the model; dropout makes the activation of each neuron stop working with a certain probability P. These methods improve the generalization ability of the model.
After the input image data passes through the convolutional neural network, the obtained output is the overall spatial feature of the input image, and the output is used as the input of the next stage.
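A minimal sketch of such a convolutional feature extractor is given below; the channel counts, input size and layer count are assumptions, and only the 3×3 SAME convolutions, ReLU, BN, dropout and the 2×2/2×1 pooling follow the description above.

```python
# Illustrative convolutional feature extractor; all sizes are assumptions.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=3, padding=1),        # padding=1 ~ SAME for a 3x3 kernel
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),              # 2x2 pooling window
    nn.Conv2d(64, 128, kernel_size=3, padding=1),
    nn.BatchNorm2d(128),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=(2, 1), stride=(2, 1)),    # 2x1: shrink height, keep width
    nn.Dropout2d(p=0.2),
)

img = torch.randn(1, 1, 32, 160)        # one grayscale address crop (H=32, W=160)
feats = cnn(img)                        # (1, 128, 8, 80)
# Collapse channels and height into one feature vector per horizontal step,
# giving the sequence that feeds the recurrent encoder.
seq = feats.permute(0, 3, 1, 2).flatten(2)   # (batch, time=80, 128*8)
```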
Step three: perform word segmentation and sentence processing on the English text information. The English character strings of the English text information are segmented into words based on the Porter stemming algorithm, and the corresponding English sentences are generated, for example: "MRSFENGMEI", "TIANHEQUZIJIGJIE 1 HAO", "901 FANG, GUANGZHOU", "CHINA".
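For illustration, token normalization with NLTK's Porter stemmer might look like the sketch below; the whitespace tokenization and the spaced example strings are assumptions, and the segmentation of unspaced strings such as "MRSFENGMEI" is not shown here.

```python
# Sketch of token normalization with the Porter stemmer (assumes nltk is installed).
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
recognized_lines = ["MRS FENG MEI", "TIANHEQU ZIJING JIE 1 HAO",
                    "901 FANG, GUANGZHOU", "CHINA"]        # placeholder OCR output
english_sentences = []
for line in recognized_lines:
    tokens = [stemmer.stem(tok) for tok in line.split()]   # stem each word
    english_sentences.append(" ".join(tokens))
print(english_sentences)
```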
Step four: translate the English sentences generated in step three using the Chinese-English translation module, filter out sentences irrelevant to the address information according to keywords that denote regional levels, such as "province", "city", "district", "street", "road" and "residential community", and pass the sentences belonging to the address information into the constructed Chinese address database for matching and checking to obtain the corresponding Chinese address.
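A simplified sketch of the step-four filtering and lookup follows; the keyword list, sample sentences and the in-memory table are placeholders standing in for the Chinese address database described above.

```python
# Sketch of keyword filtering and matching against a placeholder address table.
REGION_KEYWORDS = ("省", "市", "区", "街", "路", "小区", "号")

def is_address_sentence(sentence: str) -> bool:
    # Keep only sentences containing at least one region-level keyword.
    return any(kw in sentence for kw in REGION_KEYWORDS)

def match_address(sentences, address_table):
    """Return database entries found in the address-bearing sentences."""
    hits = []
    for s in filter(is_address_sentence, sentences):
        hits.extend(entry for entry in address_table if entry in s)
    return hits

translated = ["张三先生", "广东省广州市天河区某街1号", "中国"]   # placeholder translations
table = ["广东省", "广州市", "天河区"]                          # placeholder database rows
print(match_address(translated, table))                        # ['广东省', '广州市', '天河区']
```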
It should be apparent that the foregoing description and illustrations are given by way of example only and are not intended to limit the present disclosure or its application or uses. Although embodiments have been described herein and depicted in the drawings, the invention is not limited to the particular examples illustrated and described as the best mode presently contemplated for carrying out its teachings, and the scope of the invention includes any embodiments falling within the foregoing description and the appended claims.

Claims (8)

1. An English address recognition and analysis method for an international mail list, characterized by comprising the following steps:
step one: obtaining an original image of the international mail list, cropping the address area to obtain an original address image, and preprocessing the original address image;
step two: performing English text recognition on the grayscale original address image through a neural network to obtain the specific English text information in the image;
step three: performing word segmentation and sentence processing on the English text information to generate the corresponding English sentences;
step four: translating the English sentences generated in step three through a Chinese-English translation module to obtain the corresponding Chinese address.
2. The method for identifying and analyzing the English address of the international mail list as claimed in claim 1, wherein: in step one, image acquisition is carried out on the international mail list through image acquisition equipment such as a mobile phone, a scanner or a camera to obtain an original image of the international mail list, and the address position of the original image is cropped by a cropping module to obtain the original address image.
3. The method for identifying and analyzing the English address of the international mail list as claimed in claim 1, wherein: the preprocessing of the original address image comprises the following sub-steps:
step 1.1: denoising the original address image, and performing grayscale conversion after denoising to obtain a grayscale original address image;
step 1.2: performing binarization processing on the denoised grayscale image according to the maximum inter-class variance method to generate a binary image.
4. The method for identifying and analyzing the English address of the international mail list as claimed in claim 3, wherein: the threshold value T of the binarization processing is determined based on the maximum inter-class variance method; determining the threshold value T further comprises classifying pixels with gray values not greater than T and pixels with gray values greater than T into two classes, class 1 and class 2, where the number of pixels in class 1 is $W_1(T)$ with mean gray value $M_1(T)$ and variance $\sigma_1^2(T)$, the number of pixels in class 2 is $W_2(T)$ with mean gray value $M_2(T)$ and variance $\sigma_2^2(T)$, and the mean gray value of all pixels is $M_T$; calculating the intra-class variance by

$$\sigma_w^2 = W_1(T)\,\sigma_1^2(T) + W_2(T)\,\sigma_2^2(T)$$

calculating the inter-class variance by

$$\sigma_b^2 = W_1(T)\,(M_1(T)-M_T)^2 + W_2(T)\,(M_2(T)-M_T)^2 = W_1(T)\,W_2(T)\,(M_1(T)-M_2(T))^2$$

and taking as the threshold value the gray value T that maximizes $\sigma_b^2/\sigma_w^2$, i.e. that maximizes $\sigma_b^2$.
5. The method for identifying and analyzing the English address of the international mail list as claimed in claim 1, wherein: in step two, the grayscale original address image is input into a pre-trained convolutional neural network model to obtain spatial feature vectors, and the spatial feature vectors are input into a pre-trained encoder to obtain encoding information; the encoder uses the spatial feature vectors extracted by the convolutional neural network as the input sequence of a recurrent neural network, updates its hidden state, and propagates the spatial feature vectors bidirectionally over the time steps in order.
6. The method for identifying and analyzing the English address of the international mail list as claimed in claim 5, wherein: after the two hidden layers perform the same operations, the bidirectional hidden states of the last hidden layer are concatenated as the input of the output layer to obtain the encoding information of the encoder, and the encoding information is input into a pre-trained decoder to obtain the recognition result.
7. The method for identifying and analyzing the English address of the international mail list as claimed in claim 1, wherein: before step two, a convolutional neural network model, an encoder and a decoder are constructed and the models are trained; the convolutional neural network takes as input the text image data transformed in the image preprocessing stage and outputs the spatial feature vectors of the image.
8. The method for identifying and analyzing the English address of the international mail list as claimed in claim 1, wherein: in step four, sentences irrelevant to the address information are filtered out according to keywords that denote regional levels, and the sentences belonging to the address information are passed into the constructed Chinese address database for matching and checking to obtain the corresponding Chinese address.
CN202110725042.1A 2021-06-29 2021-06-29 English address recognition and analysis method for international mail list Pending CN113657374A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110725042.1A CN113657374A (en) 2021-06-29 2021-06-29 English address recognition and analysis method for international mail list

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110725042.1A CN113657374A (en) 2021-06-29 2021-06-29 English address recognition and analysis method for international mail list

Publications (1)

Publication Number Publication Date
CN113657374A true CN113657374A (en) 2021-11-16

Family

ID=78489146

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110725042.1A Pending CN113657374A (en) 2021-06-29 2021-06-29 English address recognition and analysis method for international mail list

Country Status (1)

Country Link
CN (1) CN113657374A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1840799A1 (en) * 2006-03-28 2007-10-03 Solystic Method using the multi-resolution of images for optical recognition of postal shipments
CN111639646A (en) * 2020-05-18 2020-09-08 山东大学 Test paper handwritten English character recognition method and system based on deep learning
CN112633079A (en) * 2020-12-02 2021-04-09 山东山大鸥玛软件股份有限公司 Handwritten English word recognition method and system
CN112633283A (en) * 2021-03-08 2021-04-09 广州市玄武无线科技股份有限公司 Method and system for identifying and translating English mail address

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI YONG et al.: "复杂情感分析方法及其应用" [Complex Sentiment Analysis Methods and Their Applications], Metallurgical Industry Press, pages 152-153 *

Similar Documents

Publication Publication Date Title
CN108664996B (en) Ancient character recognition method and system based on deep learning
CN107622104B (en) Character image identification and marking method and system
Bhattacharya et al. Offline recognition of handwritten Bangla characters: an efficient two-stage approach
CN103514170B (en) A kind of file classification method and device of speech recognition
Alotaibi et al. Optical character recognition for quranic image similarity matching
Roy et al. A system towards Indian postal automation
CN112069900A (en) Bill character recognition method and system based on convolutional neural network
CN109947936B (en) Method for dynamically detecting junk mails based on machine learning
CN112507800A (en) Pedestrian multi-attribute cooperative identification method based on channel attention mechanism and light convolutional neural network
Nikitha et al. Handwritten text recognition using deep learning
CN114580362B (en) System and method for generating return mark file
Dongre et al. Devnagari handwritten numeral recognition using geometric features and statistical combination classifier
CN113554021A (en) Intelligent seal identification method
Mariyathas et al. Sinhala handwritten character recognition using convolutional neural network
Dipu et al. Bangla optical character recognition (ocr) using deep learning based image classification algorithms
CN112507863B (en) Handwritten character and picture classification method based on quantum Grover algorithm
Hemanth et al. CNN-RNN BASED HANDWRITTEN TEXT RECOGNITION.
Ifhaam et al. Sinhala handwritten postal address recognition for postal sorting
CN117710996A (en) Data extraction, classification and storage method of unstructured table document based on deep learning
CN112036330A (en) Text recognition method, text recognition device and readable storage medium
CN117076455A (en) Intelligent identification-based policy structured storage method, medium and system
CN113657374A (en) English address recognition and analysis method for international mail list
Choudhary et al. Offline handwritten mathematical expression evaluator using convolutional neural network
CN115565078A (en) Remote sensing image scene classification and semantic segmentation method based on weighted cross entropy loss
Joshi et al. Combination of multiple image features along with KNN classifier for classification of Marathi Barakhadi

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination