CN114495106A - MOCR (micron optical character recognition) deep learning method applied to DFB (distributed feedback) laser chip - Google Patents

MOCR (micron optical character recognition) deep learning method applied to DFB (distributed feedback) laser chip

Info

Publication number
CN114495106A
CN114495106A
Authority
CN
China
Prior art keywords
character
chip
feature
laser chip
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210401938.9A
Other languages
Chinese (zh)
Inventor
王旭东
李晔彬
王昭睿
刘蔚
杜晓辉
张静
刘娟秀
刘霖
叶玉堂
刘永
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202210401938.9A priority Critical patent/CN114495106A/en
Publication of CN114495106A publication Critical patent/CN114495106A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Character Discrimination (AREA)

Abstract

The invention discloses a deep learning MOCR (micron optical character recognition) method applied to a DFB (distributed feedback) laser chip, belonging to the field of image processing. The method comprises the following steps. Step 1: acquire images of the DFB chip online. Step 2: preprocess the images. Step 3: correct the orientation of the characters on the chip with a character area detection network. Step 4: recognize the character content on the chip with a character recognition network. Step 5: judge whether the recognized character content is correct; if so, store it in the associated database; if not, reject the chip or send it for manual re-inspection. The invention can efficiently and accurately recognize the character information on the DFB laser chip without changing the original production process flow. Compared with traditional recognition methods, the recognition accuracy is greatly improved and reaches 98.8%.

Description

MOCR (micron optical character recognition) deep learning method applied to DFB (distributed feedback) laser chip
Technical Field
The invention relates to the field of optoelectronic semiconductor microscopic visual inspection, and in particular to a deep learning MOCR method applied to a DFB laser chip.
Background
Distributed feedback (DFB) laser diodes are widely used in fiber-optic communication systems because of their high side-mode suppression ratio and ultra-narrow spectral width. As communication capacity and bandwidth keep growing, laser chips are becoming ever smaller so that they can be integrated and packaged into optical communication devices; with the rapid development of 5G communication, FTTR and data centers, communication rates keep rising. As the most critical core chip in fiber-optic communication systems, the DFB laser's reliable and stable performance and quality are becoming increasingly important.
The continued progress of intelligent manufacturing and artificial intelligence has made manufacturing increasingly informatized, intelligent and traceable. Because of their technical importance and cost, DFB laser chips require quality traceability and process control throughout optical-device manufacturing. Key data of each DFB laser chip, such as optical power, extinction ratio, slope efficiency, center wavelength, cross point and side-mode suppression ratio, must be associated with the individual chip one by one. Since the character string on the DFB laser chip serves as its unique identity, recognizing it accurately, without error and efficiently becomes essential.
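For illustration only, the per-chip key data described above can be held in a simple record keyed by the character string read from the chip; the field names and units below are assumptions, since the patent lists the quantities but not a schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class ChipTestRecord:
    """One traceability record per chip, keyed by the character ID read from its surface.

    Field names and units are illustrative; the patent only lists the quantities.
    """
    chip_id: str                          # character string recognized on the chip
    optical_power_mw: float
    extinction_ratio_db: float
    slope_efficiency_w_per_a: float
    center_wavelength_nm: float
    cross_point_percent: float
    side_mode_suppression_ratio_db: float

record = ChipTestRecord("A1B2C3", 2.1, 9.5, 0.32, 1310.2, 50.0, 45.3)
print(asdict(record))   # ready to be stored and associated with this one chip
```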
However, the characters on a DFB laser chip differ greatly from conventional text. First, they are extremely small, below about 100 μm (the character area occupies roughly 10% of the chip area), so microscopic imaging is required to image them clearly. Second, there are no contextual semantics between characters, so semantics-based recognition methods cannot be used. Third, because the chip is tiny, its characters are not printed in a standard font, which adds difficulty. In addition, interference from dirt, defects and dust in the industrial environment and on the chip further increases the difficulty. Together, these features make character recognition on DFB laser chips a unique and difficult problem.
Traditional character recognition methods achieve only about 60% accuracy, so every character that cannot be recognized must be judged in real time by dedicated engineers at the recognition station. This is unacceptable on a real-time optical-communication-device production line: it wastes manpower and throughput, and human fatigue and mood introduce loopholes and risks into quality control.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a deep-learning method for automatic character-area localization and character recognition of the microscopic characters on a DFB laser chip during optical-device packaging, which can efficiently and accurately recognize the character information on the DFB laser chip without changing the original production process flow. Compared with traditional recognition methods, the recognition accuracy is greatly improved and reaches 98.8%.
The purpose of the invention is realized by the following technical scheme:
a deep learning MOCR method applied to a DFB laser chip comprises the following steps:
Step 1: acquire images of the DFB laser chip online;
Step 2: preprocess the images;
Step 3: correct the character orientation on the chip with a character area detection network;
Step 3.1: downsample the input image four times in succession; sum the fourth downsampling result with the third to obtain feature 1, sum feature 1 with the second downsampling result to obtain feature 2, and sum feature 2 with the first downsampling result to obtain feature 3;
Step 3.2: fuse the fourth downsampling result with features 1, 2 and 3 to obtain a fused feature;
Step 3.3: apply convolution and deconvolution to the fused feature to obtain a probability map and a threshold map;
Step 3.4: apply differentiable binarization to the probability map and the threshold map to obtain an approximate binary map (the standard formulation is recalled after this list);
Step 3.5: output detection boxes from the connected components of the approximate binary map, and rotate each box to the horizontal according to its aspect ratio;
Step 3.6: use a text direction classifier to determine whether the text is upside down; if so, rotate it by 180 degrees and then feed it to the character recognition network for recognition; otherwise, feed it to the character recognition network directly;
Step 4: recognize the character content on the chip with the character recognition network;
Step 5: judge whether the recognized character content is correct; if so, store it in the associated database; if not, reject the chip or send it for manual re-inspection.
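For reference, step 3.4 corresponds to the differentiable binarization introduced by DBNet, on which the detection network is based (see below). In its standard form, the approximate binary map B is computed per pixel from the probability map P and threshold map T as B(i, j) = 1 / (1 + exp(-k * (P(i, j) - T(i, j)))), where the amplification factor k is typically set to 50. This form is quoted from the DBNet literature as an assumption; the patent itself does not spell out the formula.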
Further, the data set used to train the character area detection network is built by manually labeling the original images and applying data augmentation.
Further, the character recognition network is a character-based fully convolutional neural network (CRNN), used to recognize the characters contained in each detected character box.
Further, the data set used to train the character recognition network is built by manually labeling the original images and applying data augmentation.
Further, the device used in step 1 to acquire images of the DFB laser chip online comprises a light source, a CMOS camera and a telecentric lens, and the surface character images of the DFB laser chip are acquired using an autofocus technique based on infrared ranging.
The invention has the following beneficial effects: it provides a deep-learning method for automatic character-area localization and character recognition of the microscopic characters on a DFB laser chip during optical-device packaging, which can efficiently and accurately recognize the character information on the DFB laser chip without changing the original production process flow. Compared with traditional recognition methods, the recognition accuracy is greatly improved and reaches 98.8%.
Drawings
FIG. 1 is a flow chart of image character region location and recognition.
Fig. 2 is a diagram of a character area detection network architecture.
Fig. 3 is a diagram of a character recognition network architecture.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In this embodiment, as shown in fig. 1, a deep learning MOCR method applied to a DFB laser chip includes the following steps:
a deep learning MOCR method applied to a DFB laser chip comprises the following steps:
Step 1: acquire images of the DFB laser chip online;
Step 2: preprocess the images;
Step 3: correct the character orientation on the chip with the character area detection network;
Step 3.1: downsample the input image four times in succession; sum the fourth downsampling result with the third to obtain feature 1, sum feature 1 with the second downsampling result to obtain feature 2, and sum feature 2 with the first downsampling result to obtain feature 3;
Step 3.2: fuse the fourth downsampling result with features 1, 2 and 3 to obtain a fused feature;
Step 3.3: apply convolution and deconvolution to the fused feature to obtain a probability map and a threshold map;
Step 3.4: apply differentiable binarization to the probability map and the threshold map to obtain an approximate binary map;
Step 3.5: output detection boxes from the connected components of the approximate binary map, and rotate each box to the horizontal according to its aspect ratio;
Step 3.6: use the text direction classifier to determine whether the text is upside down; if so, rotate it by 180 degrees and then feed it to the character recognition network for recognition; otherwise, feed it to the character recognition network directly;
Step 4: recognize the character content on the chip with the character recognition network;
Step 5: judge whether the recognized character content is correct; if so, store it in the associated database; if not, reject the chip or send it for manual re-inspection (a minimal sketch of this step follows the list).
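A minimal sketch of step 5 follows, assuming the correctness check is a pattern match against the expected chip-ID format and that the associated database is a local SQLite table; the patent specifies neither the check nor the database, so both are illustrative.

```python
import re
import sqlite3

# Illustrative pattern for a valid chip ID; the real format is defined by the production line.
CHIP_ID_PATTERN = re.compile(r"^[A-Z0-9]{6,12}$")

def store_or_flag(recognized_text: str, db_path: str = "chips.db") -> bool:
    """Store a valid recognition result; return False so the caller can reject or re-inspect."""
    if not CHIP_ID_PATTERN.fullmatch(recognized_text):
        return False                      # reject the chip or route it to manual re-inspection
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS chip_ids (chip_id TEXT PRIMARY KEY)")
        conn.execute("INSERT OR IGNORE INTO chip_ids (chip_id) VALUES (?)", (recognized_text,))
    return True
```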
The character area detection network is a DBNet deep learning network, optimized for the character characteristics of the DFB laser chip; the optimization starts from the latest DBNet model, which detects the character regions in the image and converts the text boxes to the horizontal orientation. A text direction classifier then determines whether the text is inverted; if so, the text is rotated by 180 degrees before recognition. Finally, the character recognition stage uses a fully convolutional, character-based recognition model to process each detected character box and recognize the characters it contains.
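For illustration, the rectification and re-orientation step can be sketched as follows; OpenCV is assumed for the perspective warp, and `is_reversed` stands in for the text direction classifier, whose implementation the patent does not detail:

```python
import cv2
import numpy as np

def order_points(quad: np.ndarray) -> np.ndarray:
    """Order 4 corner points as top-left, top-right, bottom-right, bottom-left."""
    s = quad.sum(axis=1)
    d = np.diff(quad, axis=1).ravel()          # y - x for each point
    return np.array([quad[np.argmin(s)], quad[np.argmin(d)],
                     quad[np.argmax(s)], quad[np.argmax(d)]], dtype=np.float32)

def rectify_box(image: np.ndarray, quad, is_reversed) -> np.ndarray:
    """Warp a detected quadrilateral to an axis-aligned crop and make the text horizontal."""
    pts = order_points(np.asarray(quad, dtype=np.float32))
    w = int(max(np.linalg.norm(pts[1] - pts[0]), np.linalg.norm(pts[2] - pts[3])))
    h = int(max(np.linalg.norm(pts[3] - pts[0]), np.linalg.norm(pts[2] - pts[1])))
    dst = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], dtype=np.float32)
    crop = cv2.warpPerspective(image, cv2.getPerspectiveTransform(pts, dst), (w, h))
    if h > w:                                    # aspect-ratio rule: keep the long side horizontal
        crop = cv2.rotate(crop, cv2.ROTATE_90_CLOCKWISE)
    if is_reversed(crop):                        # text direction classifier (assumed callable)
        crop = cv2.rotate(crop, cv2.ROTATE_180)  # flip upside-down text before recognition
    return crop
```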
The data set used to train the DBNet deep learning network is built by manually labeling the original images and applying data augmentation.
The character recognition network is a character-based fully convolutional neural network (CRNN) used to recognize the characters contained in each character box.
The data set used to train the character recognition network is built by manually labeling the original images and applying data augmentation.
The MOCR method is divided into two stages: text detection and text recognition. The detection stage is based on the latest DBNet deep learning model, further optimized for the character characteristics of the DFB laser chip. The optimized model detects and boxes the character area in the image and rotates the text box to the horizontal direction; the character area localization network is shown in Fig. 2, where "1/2", "1/4" and "1/32" denote the feature scales relative to the input image and "pred" consists of two deconvolution operators with stride 2 and one 3 × 3 convolution operator. A text direction classifier is then used to determine whether the text is reversed.
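The following PyTorch sketch shows one way the top-down summation (features 1 to 3), the feature fusion, and the "pred" branches could be wired together; the framework, channel counts, interpolation used for the summation, and the exact ordering of the 3 × 3 convolution and the two stride-2 deconvolutions are assumptions made in the spirit of DBNet, since the patent does not specify them:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DetectionHead(nn.Module):
    """Sketch of the fusion plus 'pred' head described above (assumed DBNet-style layout)."""
    def __init__(self, in_ch: int = 256, mid_ch: int = 64):
        super().__init__()
        self.fuse = nn.Conv2d(in_ch * 4, mid_ch, kernel_size=3, padding=1)   # feature fusion
        def pred_branch():
            return nn.Sequential(
                nn.Conv2d(mid_ch, mid_ch, kernel_size=3, padding=1),              # one 3x3 conv
                nn.ConvTranspose2d(mid_ch, mid_ch, kernel_size=2, stride=2),      # deconv, stride 2
                nn.ConvTranspose2d(mid_ch, 1, kernel_size=2, stride=2),           # deconv, stride 2
                nn.Sigmoid(),
            )
        self.prob_head = pred_branch()      # probability map P
        self.thresh_head = pred_branch()    # threshold map T

    def forward(self, c2, c3, c4, c5):
        """c2..c5: the four downsampling results, from shallowest (largest) to deepest (smallest)."""
        p5 = c5
        p4 = c4 + F.interpolate(p5, size=c4.shape[2:], mode="nearest")   # feature 1
        p3 = c3 + F.interpolate(p4, size=c3.shape[2:], mode="nearest")   # feature 2
        p2 = c2 + F.interpolate(p3, size=c2.shape[2:], mode="nearest")   # feature 3
        feats = [F.interpolate(f, size=c2.shape[2:], mode="nearest") for f in (p5, p4, p3, p2)]
        fused = self.fuse(torch.cat(feats, dim=1))                       # fused feature
        P, T = self.prob_head(fused), self.thresh_head(fused)
        B_hat = torch.sigmoid(50.0 * (P - T))    # differentiable binarization, k = 50 as in DBNet
        return P, T, B_hat
```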
If the orientation is reversed, the text is rotated by 180 degrees before recognition. The final character recognition stage uses a fully convolutional, character-based recognition model to process each detected character box and recognize the characters it contains; the recognition network is shown in Fig. 3.
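A minimal sketch of such a character-based, fully convolutional recognizer is given below; the layer sizes and the CTC-style per-column outputs are assumptions following common practice, since the patent only names the network type:

```python
import torch
import torch.nn as nn

class CharRecognizer(nn.Module):
    """Character-based, fully convolutional recognizer sketch (layer details assumed)."""
    def __init__(self, num_classes: int, img_h: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),    # H/2,  W/2
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),  # H/4,  W/4
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1), (2, 1)),                                     # H/8, keep width
        )
        self.sequence = nn.Sequential(           # 1-D convolutions along the width axis
            nn.Conv1d(256 * (img_h // 8), 256, 3, padding=1), nn.ReLU(),
            nn.Conv1d(256, num_classes + 1, 1),  # +1 for the CTC blank class
        )

    def forward(self, x):                        # x: (N, 1, img_h, W) grayscale text crop
        f = self.features(x)                     # (N, 256, img_h/8, W/4)
        n, c, h, w = f.shape
        seq = f.reshape(n, c * h, w)             # one image column per timestep
        return self.sequence(seq).permute(0, 2, 1)   # (N, W/4, num_classes + 1) logits
```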
In this embodiment, the device for acquiring images of the DFB laser chip online comprises a light source, a CMOS camera and a telecentric lens, and an autofocus technique based on infrared ranging is used to acquire the surface character image of the DFB laser chip. The camera uses a Daheng area-array CMOS image sensor (MER-502-79U3C; resolution: 2448 × 2048, frame rate: 79 fps, pixel size: 3.45 μm, C-mount, data interface: USB 3.0) together with a telecentric lens (XF-T6X65D; magnification: 6.0×, object field: Φ1.3 mm, depth of field: 0.06, telecentricity: <0.02°).
The characters on the DFB laser chip surface measure about 70 μm × 80 μm. With a maximum dimension below 100 μm, they count as micro-characters and are barely visible to the naked eye. An autofocus technique based on infrared ranging is therefore adopted, which preserves the real-time performance and efficiency of the production line and minimizes defocus blur caused by environmental variation and batch-to-batch error.
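As a rough illustration of the infrared-ranging autofocus idea (the patent names the technique but not an implementation), the focus stage can be driven open-loop from the range reading; `read_ir_distance_mm`, `move_focus_to` and `calibration` below are hypothetical stand-ins for the rangefinder driver, the focus-stage driver and a precomputed distance-to-focus mapping:

```python
def autofocus_from_ir(read_ir_distance_mm, move_focus_to, calibration):
    """Set the focus stage from the measured chip distance using a precomputed calibration.

    calibration: callable mapping distance (mm) -> in-focus stage position (mm),
    e.g. a linear fit obtained from a one-off focus sweep. All three callables are
    hypothetical; no specific hardware API is implied by the patent.
    """
    distance_mm = read_ir_distance_mm()       # single IR range measurement, no image needed
    move_focus_to(calibration(distance_mm))   # open-loop move, fast enough for the line rate
```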
In this embodiment, because the original image is 2448 × 2048 pixels and the character region occupies only a small portion of it, a two-stage deep learning training scheme (detection followed by recognition) is adopted; the original images are then annotated to produce the images and labels required by the detection and recognition networks.
First, the character area in an original image is marked with a rectangular box, and its four corner points are recorded clockwise starting from the upper-left corner; the coordinates of this box in the original image are the labels required by the detection network. The rectangular area is then cropped from the original image and its text content recorded, yielding the images and labels needed by the recognition network. Finally, the text labels used for recognition are replaced by "0 degrees" and "180 degrees" to obtain the labels required by the direction classifier.
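A minimal sketch of this label-generation procedure is shown below, assuming OpenCV for the cropping; the file names and dictionary keys are illustrative choices, not prescribed by the patent:

```python
import cv2

def make_labels(image_path, box, text):
    """Produce the three kinds of labels described above for one annotated character region.

    box:  four (x, y) corner points, clockwise from the top-left;
    text: the transcribed character content.
    """
    det_label = {"image": image_path, "points": [list(map(int, p)) for p in box]}  # detection label
    img = cv2.imread(image_path)
    xs = [int(p[0]) for p in box]
    ys = [int(p[1]) for p in box]
    crop = img[min(ys):max(ys), min(xs):max(xs)]          # axis-aligned crop of the character area
    crop_path = image_path + ".crop.png"
    cv2.imwrite(crop_path, crop)
    rec_label = {"crop": crop_path, "text": text}         # label for the recognition network
    rot_path = image_path + ".crop_rot.png"
    cv2.imwrite(rot_path, cv2.rotate(crop, cv2.ROTATE_180))
    dir_labels = [(crop_path, "0"), (rot_path, "180")]    # labels for the direction classifier
    return det_label, rec_label, dir_labels
```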
The above describes a specific method for realizing MOCR character recognition on DFB laser chips. The method is currently deployed on a DFB optical-device packaging and inspection production line. Extensive field data show that it offers good robustness and recognition accuracy, currently about 98.8%. Realized and successfully applied for the first time, this MOCR plays a key role in the intelligent manufacturing and quality tracing of DFB laser chips and devices in the field of optical communication, and is of significance for further accelerating the development of the industrial Internet.
It should be noted that, for simplicity of description, the above-mentioned embodiments of the method are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the order of acts described, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and elements referred to are not necessarily required in this application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a ROM, a RAM, etc.
The above disclosure describes only preferred embodiments of the present invention for the purpose of illustration; the scope of protection of the invention is defined by the appended claims and is not limited by these embodiments.

Claims (5)

1. A deep learning MOCR method applied to a DFB laser chip, characterized by comprising the following steps:
step 1: collecting images of the DFB laser chip online;
step 2: preprocessing the images;
step 3: correcting the character orientation on the chip through a character area detection network;
step 3.1: downsampling the input image four times in succession, summing the fourth downsampling result with the third to obtain feature 1, summing feature 1 with the second downsampling result to obtain feature 2, and summing feature 2 with the first downsampling result to obtain feature 3;
step 3.2: fusing the fourth downsampling result with features 1, 2 and 3 to obtain a fused feature;
step 3.3: applying convolution and deconvolution to the fused feature to obtain a probability map and a threshold map;
step 3.4: applying differentiable binarization to the probability map and the threshold map to obtain an approximate binary map;
step 3.5: outputting detection boxes according to connected components of the approximate binary map, and rotating each box to the horizontal according to its aspect ratio;
step 3.6: determining, by a text direction classifier, whether the text is upside down; if so, rotating it by 180 degrees and then feeding it to a character recognition network for recognition; otherwise, feeding it to the character recognition network directly;
step 4: recognizing the character content on the chip through the character recognition network;
step 5: judging whether the recognized character content is correct; if so, storing it in an associated database; if not, rejecting the chip or performing manual re-inspection.
2. The deep learning MOCR method applied to the DFB laser chip as claimed in claim 1, wherein the character area detection network is a DBNet deep learning network.
3. The deep learning MOCR method applied to the DFB laser chip as claimed in claim 1, wherein the character recognition network is a character-based fully convolutional neural network (CRNN) used for recognizing the characters contained in the character box.
4. The deep learning MOCR method applied to the DFB laser chip as claimed in claim 1, wherein the device used in step 1 for collecting images of the DFB laser chip online comprises a light source, a CMOS camera and a telecentric lens, and the surface character images of the DFB laser chip are acquired using an autofocus technique based on infrared ranging.
5. The deep learning MOCR method applied to the DFB laser chip as claimed in claim 2, wherein the data set used in the DBNet deep learning network training is formed by manually labeling an original image and performing data enhancement.
CN202210401938.9A 2022-04-18 2022-04-18 MOCR (micron optical character recognition) deep learning method applied to DFB (distributed feedback) laser chip Pending CN114495106A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210401938.9A CN114495106A (en) 2022-04-18 2022-04-18 MOCR (micron optical character recognition) deep learning method applied to DFB (distributed feedback) laser chip

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210401938.9A CN114495106A (en) 2022-04-18 2022-04-18 MOCR (micron optical character recognition) deep learning method applied to DFB (distributed feedback) laser chip

Publications (1)

Publication Number Publication Date
CN114495106A true CN114495106A (en) 2022-05-13

Family

ID=81489366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210401938.9A Pending CN114495106A (en) 2022-04-18 2022-04-18 MOCR (micron optical character recognition) deep learning method applied to DFB (distributed feedback) laser chip

Country Status (1)

Country Link
CN (1) CN114495106A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020010547A1 (en) * 2018-07-11 2020-01-16 深圳前海达闼云端智能科技有限公司 Character identification method and apparatus, and storage medium and electronic device
WO2020098250A1 (en) * 2018-11-12 2020-05-22 平安科技(深圳)有限公司 Character recognition method, server, and computer readable storage medium
CN112115948A (en) * 2020-09-15 2020-12-22 电子科技大学 Chip surface character recognition method based on deep learning
US20200401571A1 (en) * 2019-06-24 2020-12-24 Evolution Pathfinder Llc Human Experiences Ontology Data Model and its Design Environment
CN112580657A (en) * 2020-12-23 2021-03-30 陕西天诚软件有限公司 Self-learning character recognition method
WO2021115091A1 (en) * 2019-12-13 2021-06-17 华为技术有限公司 Text recognition method and apparatus
CN113221889A (en) * 2021-05-25 2021-08-06 中科芯集成电路有限公司 Anti-interference recognition method and device for chip characters
CN113221867A (en) * 2021-05-11 2021-08-06 北京邮电大学 Deep learning-based PCB image character detection method
WO2021190171A1 (en) * 2020-03-25 2021-09-30 腾讯科技(深圳)有限公司 Image recognition method and apparatus, terminal, and storage medium
WO2021196873A1 (en) * 2020-03-30 2021-10-07 京东方科技集团股份有限公司 License plate character recognition method and apparatus, electronic device, and storage medium
CN113850157A (en) * 2021-09-08 2021-12-28 精锐视觉智能科技(上海)有限公司 Character recognition method based on neural network
WO2022017245A1 (en) * 2020-07-24 2022-01-27 华为技术有限公司 Text recognition network, neural network training method, and related device

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020010547A1 (en) * 2018-07-11 2020-01-16 深圳前海达闼云端智能科技有限公司 Character identification method and apparatus, and storage medium and electronic device
WO2020098250A1 (en) * 2018-11-12 2020-05-22 平安科技(深圳)有限公司 Character recognition method, server, and computer readable storage medium
US20200401571A1 (en) * 2019-06-24 2020-12-24 Evolution Pathfinder Llc Human Experiences Ontology Data Model and its Design Environment
WO2021115091A1 (en) * 2019-12-13 2021-06-17 华为技术有限公司 Text recognition method and apparatus
WO2021190171A1 (en) * 2020-03-25 2021-09-30 腾讯科技(深圳)有限公司 Image recognition method and apparatus, terminal, and storage medium
WO2021196873A1 (en) * 2020-03-30 2021-10-07 京东方科技集团股份有限公司 License plate character recognition method and apparatus, electronic device, and storage medium
WO2022017245A1 (en) * 2020-07-24 2022-01-27 华为技术有限公司 Text recognition network, neural network training method, and related device
CN112115948A (en) * 2020-09-15 2020-12-22 电子科技大学 Chip surface character recognition method based on deep learning
CN112580657A (en) * 2020-12-23 2021-03-30 陕西天诚软件有限公司 Self-learning character recognition method
CN113221867A (en) * 2021-05-11 2021-08-06 北京邮电大学 Deep learning-based PCB image character detection method
CN113221889A (en) * 2021-05-25 2021-08-06 中科芯集成电路有限公司 Anti-interference recognition method and device for chip characters
CN113850157A (en) * 2021-09-08 2021-12-28 精锐视觉智能科技(上海)有限公司 Character recognition method based on neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XUDONG WANG et al.: "Intelligent Micron Optical Character Recognition of DFB Chip Using Deep Convolutional Neural Network", IEEE Transactions on Instrumentation and Measurement *
LUO Yuetong et al.: "Chip surface character recognition method based on deep learning", Journal of Hefei University of Technology (Natural Science) *

Similar Documents

Publication Publication Date Title
CN111160352B (en) Workpiece metal surface character recognition method and system based on image segmentation
CN111179251A (en) Defect detection system and method based on twin neural network and by utilizing template comparison
CN103185730B (en) Method for building rule of thumb of defect classification, and methods for classifying defect and judging killer defect
JP2018077786A (en) Image processing apparatus, image processing method, program, drive control system, and vehicle
CN111242904A (en) Optical fiber end face detection method and device
CN111767780B (en) AI and vision combined intelligent integrated card positioning method and system
Saha et al. Automatic localization and recognition of license plate characters for Indian vehicles
CN113191358B (en) Metal part surface text detection method and system
CN114580515A (en) Neural network training method for intelligent detection of semiconductor desoldering
CN116071294A (en) Optical fiber surface defect detection method and device
CN110942063B (en) Certificate text information acquisition method and device and electronic equipment
CN117455917B (en) Establishment of false alarm library of etched lead frame and false alarm on-line judging and screening method
Suh et al. Fusion of global-local features for image quality inspection of shipping label
US20240112781A1 (en) Ai-based product surface inspecting apparatus and method
CN112784675B (en) Target detection method and device, storage medium and terminal
Salunkhe et al. Recognition of multilingual text from signage boards
CN117197010A (en) Method and device for carrying out workpiece point cloud fusion in laser cladding processing
CN114495106A (en) MOCR (micron optical character recognition) deep learning method applied to DFB (distributed feedback) laser chip
CN116912872A (en) Drawing identification method, device, equipment and readable storage medium
CN115393290A (en) Edge defect detection method, device and equipment
CN114299533A (en) Power grid wiring diagram element and line identification system and method based on artificial intelligence
CN112967166A (en) OpenCV-based automatic image watermark identification processing method and system
CN113781449A (en) Textile flaw classification method based on multi-scale feature fusion
CN113034432A (en) Product defect detection method, system, device and storage medium
CN112085747A (en) Image segmentation method based on local relation guidance

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220513

RJ01 Rejection of invention patent application after publication