CN111166371A - Diagnostic support method - Google Patents


Info

Publication number
CN111166371A
Authority
CN
China
Prior art keywords
result
algorithm
image data
generate
stethoscope
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811331136.5A
Other languages
Chinese (zh)
Inventor
王峻国
许银雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Acer Inc
Original Assignee
Acer Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Acer Inc filed Critical Acer Inc
Priority to CN201811331136.5A priority Critical patent/CN111166371A/en
Publication of CN111166371A publication Critical patent/CN111166371A/en
Pending legal-status Critical Current

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B7/00Instruments for auscultation
    • A61B7/02Stethoscopes
    • A61B7/04Electric stethoscopes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B7/00Instruments for auscultation
    • A61B7/006Detecting skeletal, cartilage or muscle noise
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5215Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B8/5238Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image
    • A61B8/5261Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image combining images from different diagnostic modalities, e.g. ultrasound and X-ray

Abstract

The invention provides a diagnosis assistance method. The diagnosis assistance method includes: generating sound data through a stethoscope; generating image data through an ultrasonic device; processing the sound data through a first processing module to generate a first result; processing the image data through a second processing module to generate a second result; and generating an auxiliary diagnosis result according to the first result and the second result.

Description

Diagnostic support method
Technical Field
The present disclosure relates generally to diagnosis assistance, and more particularly to a diagnosis assistance method that generates a diagnosis assistance result based on both sound data generated by a stethoscope and image data generated by an ultrasound device.
Background
Traditionally, the ultrasonic device and the stethoscope are operated independently and are not used simultaneously. As a result, the accuracy of some diagnostic assessments may be reduced.
However, the results from the ultrasonic device and the stethoscope are complementary. For example, although a stethoscope may be used to hear possible symptoms, it may not accurately identify the location where the symptoms actually occur, whereas an ultrasound device can provide an image of that location. Conversely, although the ultrasound device can obtain a clear image of the symptom location, it may be less reliable than a stethoscope in recognizing the symptom from the image.
Therefore, if the advantages of the ultrasonic device and the stethoscope can be combined, the accuracy of the medical auxiliary diagnosis can be further improved.
Disclosure of Invention
In view of the above-mentioned problems of the prior art, the present invention provides a diagnosis assistance technique, and more particularly, to a diagnosis assistance method for generating a diagnosis assistance result based on both sound data generated by a stethoscope and image data generated by an ultrasonic device.
A diagnostic assistance method is provided according to an embodiment of the invention. The diagnosis assistance method includes: generating sound data through a stethoscope; generating image data through an ultrasonic device; processing the sound data through a first processing module to generate a first result; processing the image data through a second processing module to generate a second result; and generating an auxiliary diagnosis result according to the first result and the second result.
According to some embodiments of the present invention, the first processing module processes the audio data according to a first algorithm to generate the first result, and the second processing module processes the image data according to a second algorithm to generate the second result.
According to some embodiments of the present invention, the diagnosis assisting method further comprises analyzing the first result and the second result by a third processing module according to a third algorithm to generate the diagnosis assisting result.
With respect to other additional features and advantages of the present invention, those skilled in the art will appreciate that various modifications and adaptations of the disclosed diagnostic aids are possible without departing from the scope and spirit of the present invention.
Drawings
Fig. 1 is a block diagram illustrating a diagnostic assistance system 100 according to an embodiment of the present invention.
Fig. 2 is a flow chart 200 of a diagnostic assistance method according to an embodiment of the invention.
Wherein the reference numerals are as follows:
100 diagnostic support system
110 stethoscope
120 ultrasonic device
130 diagnosis assisting device
131 processing device
132 memory device
133 display device
200 flow chart
S210 to S250
Detailed Description
The best mode for carrying out the invention is set forth in this section for the purpose of illustrating the concepts of the invention and not for the purpose of limiting the scope of the invention as defined by the appended claims.
Fig. 1 is a block diagram illustrating a diagnostic assistance system 100 according to an embodiment of the present invention. As shown in fig. 1, the diagnostic aid system 100 may include a stethoscope 110, an ultrasound device 120, and a diagnostic aid 130. It should be noted that the block diagram shown in fig. 1 is only for convenience of describing the embodiment of the present invention, but the present invention is not limited thereto.
As shown in fig. 1, the diagnosis assistance device 130 according to an embodiment of the present invention may include a processing device 131, a storage device 132, and a display device 133. According to an embodiment of the present invention, the diagnosis assistance device 130 may be a smart phone, a tablet computer, a desktop computer, a telephone, etc. Note that the diagnosis assistance device 130 shown in fig. 1 is presented only for convenience of describing an embodiment of the present invention, and the present invention is not limited thereto. Other components may also be included in the diagnosis assistance device 130.
According to an embodiment of the present invention, the stethoscope 110 may be a digital stethoscope. The stethoscope 110 may be used to obtain sound data (or sound signals) related to organs (e.g., heart, lungs, stomach, etc.) in the human body. After the stethoscope 110 obtains the sound data, the stethoscope 110 can transmit the obtained sound data to the diagnosis assistance device 130 through a wired or wireless transmission method. According to an embodiment of the present invention, the sound data generated by the stethoscope 110 may be temporarily stored in the storage device 132 of the diagnosis assistance device 130.
According to an embodiment of the present invention, the ultrasonic device 120 may be an ultrasonic probe. The ultrasonic device 120 may include a transmitter and a receiver (not shown). The transmitter of the ultrasonic device 120 converts an electrical signal into a sound wave signal (i.e., an ultrasonic signal) and transmits the sound wave signal into the human body. The receiver of the ultrasonic device 120 receives the reflected sound wave signal from the human body and converts it into an electrical signal. Then, the receiver of the ultrasonic device 120 converts the electrical signal into a 2-dimensional (2D) image (i.e., image data). After the ultrasonic device 120 acquires the image data related to the organs in the human body, the ultrasonic device 120 may transmit the acquired image data to the diagnosis assistance device 130 through a wired or wireless transmission method. According to an embodiment of the present invention, the image data generated by the ultrasonic device 120 can be temporarily stored in the storage device 132 of the diagnosis assistance device 130.
According to an embodiment of the present invention, the storage device 132 may be a volatile memory (e.g., a Random Access Memory (RAM)), a non-volatile memory (e.g., a flash memory or a Read Only Memory (ROM)), a hard disk, or a combination thereof. According to one embodiment of the present invention, the storage device 132 may be used to store software and firmware program codes, trained sound data, trained image data, and the like. In an embodiment of the present invention, the trained sound data is sound data in which a problem was previously marked by a doctor. For example, during previous diagnoses of conditions of different organs, the sound data in which the doctor labeled problematic sound waveforms is stored in the storage device 132 as trained sound data. Likewise, in an embodiment of the present invention, the trained image data is image data in which a problem was previously marked by the doctor. For example, during previous diagnoses of conditions of different organs, the image data in which the doctor labeled problematic image features is stored in the storage device 132 as trained image data.
According to an embodiment of the present invention, after the diagnosis assistance device 130 obtains the sound data and the image data from the stethoscope 110 and the ultrasonic device 120, respectively, a first processing module (not shown) of the processing device 131 of the diagnosis assistance device 130 obtains the trained sound data and the sound data from the stethoscope 110 from the storage device 132, and processes and analyzes the sound data from the stethoscope 110 against the trained sound data to generate a first result. Specifically, the processing device 131 compares the trained sound data with the sound data from the stethoscope 110 to determine which portions of the sound data from the stethoscope 110 may be problematic, and the processing device 131 marks those portions to generate the first result.
In addition, a second processing module (not shown) of the processing device 131 of the diagnosis assistance device 130 obtains the trained image data and the image data from the ultrasonic device 120 from the storage device 132, and processes and analyzes the image data from the ultrasonic device 120 against the trained image data to generate a second result. Specifically, the processing device 131 compares the trained image data with the image data from the ultrasound device 120 to determine which portions of the image data from the ultrasound device 120 may be problematic, and the processing device 131 marks those portions to generate the second result.
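The compare-and-mark step described above can be illustrated with a minimal sketch. All names, the segment length, and the distance threshold below are hypothetical choices for illustration, not taken from the patent: segments of the incoming data whose distance to any doctor-labelled "problem" template falls below a threshold are marked as possibly problematic.

```python
import numpy as np

def mark_problem_segments(data, templates, seg_len=4, threshold=0.5):
    """Mark fixed-length segments of incoming data (e.g., a sound waveform)
    that lie close to any previously doctor-labelled 'problem' template."""
    marks = []
    for start in range(0, len(data) - seg_len + 1, seg_len):
        seg = np.asarray(data[start:start + seg_len], dtype=float)
        # Distance to the nearest trained "problem" template.
        dist = min(np.linalg.norm(seg - np.asarray(t, dtype=float))
                   for t in templates)
        if dist < threshold:
            marks.append((start, start + seg_len))  # possibly problematic portion
    return marks
```

The same pattern applies to the image branch, with image patches in place of waveform segments; the patent's actual comparison is performed by the deep-learning modules described below rather than by a raw distance measure.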
According to an embodiment of the present invention, the first processing module of the processing device 131 processes and analyzes the sound signal from the stethoscope 110 according to a first algorithm to generate the first result, and the second processing module of the processing device 131 processes and analyzes the image signal from the ultrasonic device 120 according to a second algorithm to generate the second result. According to an embodiment of the present invention, the first algorithm is a Recurrent Neural Network (RNN) deep learning (deep learning) algorithm, and the second algorithm is a Convolutional Neural Network (CNN) deep learning algorithm, but the present invention is not limited thereto. According to some embodiments of the invention, the first algorithm may also be a CNN deep learning algorithm or another deep learning algorithm, and the second algorithm may also be an RNN deep learning algorithm or another deep learning algorithm. According to some embodiments of the present invention, the first algorithm and the second algorithm may each also be a combination of two different deep learning algorithms; for example, the first algorithm may include a combination of the CNN deep learning algorithm and the RNN deep learning algorithm, but the invention is not limited thereto.
The RNN deep learning algorithm exploits sequence information: it performs the same operation on each element of a sequence while retaining an internal state, so the current output is affected by previous outputs; its parameters are trained through back propagation. The first processing module of the processing device 131 may employ an RNN deep learning algorithm to compare the trained sound data with the sound data from the stethoscope 110 to generate the first result.
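A minimal forward pass of a recurrent cell makes the "current output is affected by previous outputs" property concrete. This is an illustrative sketch only (the weight names and shapes are hypothetical, and training by back propagation is omitted): the hidden state carries information forward, so the final score depends on the order of the sound samples.

```python
import numpy as np

def rnn_classify(sequence, W_h, W_x, W_out):
    """Minimal recurrent forward pass over a 1-D sound sequence.

    Each step applies the same operation, but the hidden state h depends on
    the previous step, so earlier samples influence the final output."""
    h = np.zeros(W_h.shape[0])
    for x_t in sequence:               # one amplitude sample (or frame) at a time
        h = np.tanh(W_h @ h + W_x * x_t)
    logit = float(W_out @ h)           # final hidden state summarizes the sequence
    return 1.0 / (1.0 + np.exp(-logit))  # probability the sound is "problematic"
```

Because the hidden state is order-dependent, feeding the same samples in reverse generally yields a different score, which is why a recurrent model suits sequential stethoscope data.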
The architecture of the CNN deep learning algorithm can be divided mainly into a convolutional layer (Convolutional Layer), a pooling layer (Pooling Layer), and a fully connected layer (Fully Connected Layer). The convolutional layer performs a convolution operation between the image and a feature detector (Feature Detector) to extract features from the image. The pooling layer divides the image processed by the convolutional layer into a plurality of regions by a pooling method (for example, max pooling, but the invention is not limited thereto) and picks out the maximum value of each region. The fully connected layer then processes the flattened (flatten) output of the pooling layer. Furthermore, the CNN deep learning algorithm may be of different types, such as Region-based CNN (R-CNN), Fast R-CNN, and Faster R-CNN. The second processing module of the processing device 131 may employ a CNN deep learning algorithm to compare the trained image data with the image data from the ultrasound device 120 to generate the second result.
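The three layers named above can be sketched in a few lines of NumPy. This is a toy single-channel forward pass under assumed shapes, not the patent's implementation: a convolution with one feature detector, 2x2 max pooling, flattening, and a fully connected scoring layer.

```python
import numpy as np

def conv2d(img, kernel):
    """Convolutional layer: slide a feature detector over the image."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fm, size=2):
    """Pooling layer: keep the maximum value of each size x size region."""
    h, w = fm.shape[0] // size, fm.shape[1] // size
    return fm[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def cnn_forward(img, kernel, w_fc):
    """Convolution + ReLU, max pooling, flatten, fully connected score."""
    fm = np.maximum(conv2d(img, kernel), 0)   # convolutional layer + ReLU
    pooled = max_pool(fm)                     # pooling layer (max pooling)
    flat = pooled.flatten()                   # flatten before the dense layer
    return float(flat @ w_fc)                 # fully connected layer
```

A real second processing module would stack many such layers with learned kernels; the sketch only shows how the three layer types compose.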
According to an embodiment of the present invention, the user can adjust parameters of the deep learning algorithms (e.g., the RNN deep learning algorithm and the CNN deep learning algorithm) according to the first result and the second result, such as the number of epochs (number of epochs), the learning rate (learning rate), the objective function (objective function), the weight initialization (weight initialization), and the normalization method (normalization), but the invention is not limited thereto.
According to an embodiment of the present invention, after the first result and the second result are generated, a third processing module (not shown) of the processing device 131 receives the first result and the second result and generates an auxiliary diagnosis result according to the first result and the second result. According to an embodiment of the present invention, the third processing module of the processing device 131 analyzes the first result and the second result according to a third algorithm to generate the auxiliary diagnosis result. According to an embodiment of the present invention, the third algorithm may be an Ensemble Learning (Ensemble Learning) algorithm, but the present invention is not limited thereto. In the ensemble learning algorithm, the prediction results (i.e., the first result and the second result) of different classifiers are considered together, and different weights are given to the different prediction results to obtain a better prediction result (i.e., an auxiliary diagnosis result).
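The weighting idea in the ensemble step can be sketched as a weighted soft vote. The weights and threshold below are hypothetical placeholders (a real ensemble learning algorithm would learn or tune them), but the structure matches the description: each classifier's prediction contributes according to its weight.

```python
def ensemble_diagnosis(p_sound, p_image, w_sound=0.4, w_image=0.6, threshold=0.5):
    """Combine the first result (sound classifier probability) and the second
    result (image classifier probability) with per-classifier weights.

    Returns the combined score and whether it crosses the decision threshold."""
    total = w_sound + w_image
    score = (w_sound * p_sound + w_image * p_image) / total  # weighted soft vote
    return score, score >= threshold
```

Giving the image branch a larger weight here reflects the earlier observation that each modality is stronger on a different aspect; in practice the weights would be chosen by the third algorithm's training procedure.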
When the processing device 131 generates the auxiliary diagnosis result, the processing device 131 outputs the auxiliary diagnosis result to the display device 133. After receiving the auxiliary diagnosis result, the display device 133 displays the auxiliary diagnosis result for the doctor's reference. According to an embodiment of the present invention, the auxiliary diagnosis result may be marked sound data, marked image data, or text data, but the present invention is not limited thereto. For example, if the auxiliary diagnosis result is text data, it includes descriptions of symptoms that may appear in the human body, such as the location where a symptom may appear or the probability that the symptom appears.
Fig. 2 is a flow chart 200 of a diagnosis assistance method according to an embodiment of the invention. The diagnosis assistance method can be applied to the diagnosis assistance system 100 of the present invention. In step S210, sound data is generated by a stethoscope of the diagnosis assistance system 100. In step S220, image data is generated by an ultrasonic device of the diagnosis assistance system 100. In step S230, the sound data generated by the stethoscope is processed by a first processing module of the diagnosis assistance device of the diagnosis assistance system 100 to generate a first result. In step S240, the image data generated by the ultrasonic device is processed by a second processing module of the diagnosis assistance device of the diagnosis assistance system 100 to generate a second result. In step S250, an auxiliary diagnosis result is generated by the diagnosis assistance device of the diagnosis assistance system 100 according to the first result and the second result.
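Steps S210 through S250 can be expressed as a simple pipeline. This is a structural sketch only; the module functions are injected as parameters, and the names are hypothetical stand-ins for the first, second, and third processing modules.

```python
def diagnosis_pipeline(sound_data, image_data, first_module, second_module, combiner):
    """Steps S210-S250 as data flow: sound and image inputs (S210, S220) are
    processed by their respective modules (S230, S240), then combined into a
    single auxiliary diagnosis result (S250)."""
    first_result = first_module(sound_data)       # S230: process stethoscope data
    second_result = second_module(image_data)     # S240: process ultrasound data
    return combiner(first_result, second_result)  # S250: auxiliary diagnosis result
```

Any classifiers with compatible outputs can be plugged in, which mirrors the patent's point that the choice of first, second, and third algorithms is interchangeable.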
According to an embodiment of the present invention, in the diagnosis assistance method, the first processing module processes the sound data generated by the stethoscope according to a first algorithm to generate a first result, and the second processing module processes the image data generated by the ultrasonic device according to a second algorithm to generate a second result. According to an embodiment of the present invention, the first algorithm may be a Recurrent Neural Network (RNN) deep learning algorithm and the second algorithm may be a Convolutional Neural Network (CNN) deep learning algorithm. According to an embodiment of the present invention, in the diagnosis assistance method, the first processing module compares the trained sound data with the sound data generated by the stethoscope according to the first algorithm to generate the first result, and the second processing module compares the trained image data with the image data generated by the ultrasonic device according to the second algorithm to generate the second result.
According to an embodiment of the present invention, the diagnosis assisting method further includes analyzing the first result and the second result according to a third algorithm by a third processing module of the diagnosis assisting apparatus of the diagnosis assisting system 100 to generate the diagnosis assisting result. According to an embodiment of the present invention, the third algorithm may be an ensemble learning algorithm.
According to an embodiment of the present invention, the method further includes displaying the auxiliary diagnosis result through a display device of the diagnosis assistance system 100. According to an embodiment of the present invention, the auxiliary diagnosis result may be marked sound data, marked image data, or text data, but the present invention is not limited thereto.
According to the diagnosis assistance method provided by the embodiments of the invention, the results obtained by the ultrasonic device and the stethoscope are integrated, and the correlation between them is reinforced through deep learning computation, so that an auxiliary diagnosis result can be provided more accurately and effectively for the doctor's reference.
Reference numerals, such as "first", "second", etc., in the description and in the claims are used for convenience of description and do not have a sequential relationship with each other.
The steps of the methods and algorithms disclosed in this specification may be implemented directly in hardware, in software modules executed by a processor, or in a combination of the two. A software module (including executable instructions and associated data) and other data may be stored in a data memory such as a Random Access Memory (RAM), flash memory (flash memory), Read Only Memory (ROM), Erasable Programmable Read Only Memory (EPROM), Electrically Erasable Programmable Read Only Memory (EEPROM), registers, a hard disk, a portable diskette, a compact disc read-only memory (CD-ROM), a DVD, or any other computer-readable storage medium format known in the art. A storage medium may be coupled to a machine such as a computer/processor (for convenience of description, referred to herein as a "processor") so that the processor can read information (such as program code) from, and write information to, the storage medium. A storage medium may also be integral to the processor. An Application Specific Integrated Circuit (ASIC) may include the processor and the storage medium. A user equipment may include the ASIC. Alternatively, the processor and the storage medium may reside in the user equipment as discrete components. In addition, in some embodiments, any suitable computer program product includes a readable storage medium including program code associated with one or more of the disclosed embodiments. In some embodiments, the computer program product may include packaging materials.
The above paragraphs provide descriptions at various levels. It should be apparent that the teachings herein may be implemented in a wide variety of ways, and that any specific architecture or functionality disclosed in the examples is merely representative. Based on the teachings herein, any person skilled in the art will appreciate that each aspect disclosed herein can be implemented independently, or that two or more aspects can be implemented in combination.
Although the present disclosure has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the disclosure, and therefore, the scope of the invention is to be determined by the appended claims.

Claims (10)

1. A diagnostic assistance method comprising:
generating sound data through a stethoscope;
generating image data by an ultrasonic device;
processing the sound data through a first processing module to generate a first result;
processing the image data through a second processing module to generate a second result; and
generating an auxiliary diagnosis result according to the first result and the second result.
2. The diagnostic assistance method of claim 1, wherein the first processing module processes the sound data according to a first algorithm to generate the first result, and the second processing module processes the image data according to a second algorithm to generate the second result.
3. The diagnostic assistance method of claim 2, wherein the first algorithm is a Recurrent Neural Network (RNN) deep learning algorithm and the second algorithm is a Convolutional Neural Network (CNN) deep learning algorithm.
4. The diagnostic assistance method of claim 2, further comprising:
comparing the trained sound data with the sound data according to the first algorithm to generate the first result; and
according to the second algorithm, the trained image data is compared with the image data to generate the second result.
5. The diagnostic assistance method of claim 1, further comprising:
the first result and the second result are analyzed by a third processing module according to a third algorithm to generate the auxiliary diagnosis result.
6. The diagnostic assistance method of claim 5, wherein the third algorithm is an ensemble learning algorithm.
7. The diagnostic assistance method of claim 1, wherein said stethoscope is a digital stethoscope.
8. The diagnostic assistance method of claim 1, wherein the ultrasonic device is an ultrasonic probe.
9. The diagnostic assistance method of claim 1, further comprising:
displaying the auxiliary diagnosis result on a display device.
10. The diagnostic assistance method of claim 1, wherein the auxiliary diagnosis result is marked sound data, marked image data, or text data.
CN201811331136.5A 2018-11-09 2018-11-09 Diagnostic support method Pending CN111166371A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811331136.5A CN111166371A (en) 2018-11-09 2018-11-09 Diagnostic support method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811331136.5A CN111166371A (en) 2018-11-09 2018-11-09 Diagnostic support method

Publications (1)

Publication Number Publication Date
CN111166371A true CN111166371A (en) 2020-05-19

Family

ID=70622052

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811331136.5A Pending CN111166371A (en) 2018-11-09 2018-11-09 Diagnostic support method

Country Status (1)

Country Link
CN (1) CN111166371A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5666953A (en) * 1993-01-10 1997-09-16 Wilk; Peter J. System and associated method for providing information for use in forming medical diagnosis
US20040225476A1 (en) * 2003-05-08 2004-11-11 Tien Der Yang Inspection apparatus for diagnosis
CN105701331A (en) * 2014-12-11 2016-06-22 三星电子株式会社 Computer-aided diagnosis apparatus and computer-aided diagnosis method
WO2016206704A1 (en) * 2015-06-25 2016-12-29 Abdalla Magd Ahmed Kotb The smart stethoscope
WO2017147635A1 (en) * 2016-03-02 2017-09-08 Mathias Zirm Remote diagnosis assistance method
US20170323064A1 (en) * 2016-05-05 2017-11-09 James Stewart Bates Systems and methods for automated medical diagnostics
US20180049716A1 (en) * 2016-08-17 2018-02-22 California Institute Of Technology Enhanced stethoscope devices and methods
US20180108440A1 (en) * 2016-10-17 2018-04-19 Jeffrey Stevens Systems and methods for medical diagnosis and biomarker identification using physiological sensors and machine learning

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5666953A (en) * 1993-01-10 1997-09-16 Wilk; Peter J. System and associated method for providing information for use in forming medical diagnosis
US20040225476A1 (en) * 2003-05-08 2004-11-11 Tien Der Yang Inspection apparatus for diagnosis
CN105701331A (en) * 2014-12-11 2016-06-22 三星电子株式会社 Computer-aided diagnosis apparatus and computer-aided diagnosis method
WO2016206704A1 (en) * 2015-06-25 2016-12-29 Abdalla Magd Ahmed Kotb The smart stethoscope
WO2017147635A1 (en) * 2016-03-02 2017-09-08 Mathias Zirm Remote diagnosis assistance method
US20170323064A1 (en) * 2016-05-05 2017-11-09 James Stewart Bates Systems and methods for automated medical diagnostics
US20180049716A1 (en) * 2016-08-17 2018-02-22 California Institute Of Technology Enhanced stethoscope devices and methods
US20180108440A1 (en) * 2016-10-17 2018-04-19 Jeffrey Stevens Systems and methods for medical diagnosis and biomarker identification using physiological sensors and machine learning

Similar Documents

Publication Publication Date Title
CN109919928B (en) Medical image detection method and device and storage medium
CN110678933B (en) Ultrasound clinical feature detection and associated devices, systems, and methods
US8046058B2 (en) Heart beat signal recognition
US11207055B2 (en) Ultrasound Cardiac Doppler study automation
US20160015365A1 (en) System and method for ultrasound elastography and method for dynamically processing frames in real time
US20190159762A1 (en) System and method for ultrasound elastography and method for dynamically processing frames in real time
US20210093301A1 (en) Ultrasound system with artificial neural network for retrieval of imaging parameter settings for recurring patient
KR20150111697A (en) Method and ultrasound apparatus for recognizing an ultrasound image
WO2014024453A1 (en) Medical data processing device, medical data processing method, and ultrasound diagnostic device
CN112561908B (en) Mammary gland image focus matching method, device and storage medium
KR20170064960A (en) Disease diagnosis apparatus and method using a wave signal
CN112292086A (en) Ultrasound lesion assessment and associated devices, systems, and methods
CN111370120B (en) Heart diastole dysfunction detection method based on heart sound signals
US20210090734A1 (en) System, device and method for detection of valvular heart disorders
Lucassen et al. Deep learning for detection and localization of B-lines in lung ultrasound
TWI682770B (en) Diagnostic assistance method
CN111166371A (en) Diagnostic support method
Joshi et al. AI-CardioCare: Artificial Intelligence Based Device for Cardiac Health Monitoring
US20220257195A1 (en) Audio pill
US20200178840A1 (en) Method and device for marking adventitious sounds
CN111260606B (en) Diagnostic device and diagnostic method
CN116529765A (en) Predicting a likelihood that an individual has one or more lesions
US10952625B2 (en) Apparatus, methods and computer programs for analyzing heartbeat signals
CN113052930A (en) Chest DR dual-energy digital subtraction image generation method
WO2020175356A1 (en) Storage medium, image diagnosis assistance device, learning device, and learned model generation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20200519)