CN111292296A - Training set acquisition method and device based on eye recognition model - Google Patents

Training set acquisition method and device based on eye recognition model

Info

Publication number
CN111292296A
Authority
CN
China
Prior art keywords
image
pixel point
region
optic disc
connected region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010066887.XA
Other languages
Chinese (zh)
Inventor
李龙飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202010066887.XA
Publication of CN111292296A

Classifications

    • G06T 7/0012 Biomedical image inspection (under G06T 7/00 Image analysis, G06T 7/0002 Inspection of images)
    • G06T 7/11 Region-based segmentation (under G06T 7/10 Segmentation; Edge detection)
    • G06T 7/12 Edge-based segmentation
    • G06T 7/136 Segmentation or edge detection involving thresholding
    • G06T 7/187 Segmentation involving region growing, region merging, or connected component labelling
    • G06T 2207/20081 Training; Learning (indexing scheme for image analysis or enhancement)
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30041 Eye; Retina; Ophthalmic (under G06T 2207/30004 Biomedical image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

One or more embodiments of the present specification provide a training set acquisition method and apparatus based on an eye recognition model, where the method includes: performing graying processing on a fundus image to obtain a grayscale image; performing binarization processing on the grayscale image to obtain a binarized image; identifying the optic disc region in the binarized image to obtain an identification result; and, based on the identification result, cutting out the optic disc region from the fundus image as a sample image in a training set of the eye recognition model. In other words, in this scheme the neural network is trained not on the whole fundus image but only on the optic disc region, which occupies a small part of the fundus image, so both the training time of the neural network and the memory it occupies can be reduced.

Description

Training set acquisition method and device based on eye recognition model
Technical Field
One or more embodiments of the present disclosure relate to the field of image processing technologies, and in particular, to a training set obtaining method and apparatus based on an eye recognition model.
Background
A fundus image shows the tissues of the posterior segment of the eyeball, including the inner membrane (retina), optic disc, macula, and the central retinal artery and vein. From fundus images, a doctor can diagnose various diseases such as glaucoma, diabetic retinopathy, and arteriosclerosis.
At present, in some schemes, a plurality of fundus images are acquired as training data, and a neural network is trained with the training data to obtain an eye recognition model; a doctor can then use the eye recognition model for auxiliary diagnosis.
However, a fundus image carries a large amount of information, so training a neural network with whole fundus images takes a long time and occupies a large amount of memory during training.
Disclosure of Invention
In view of this, one or more embodiments of the present disclosure provide a training set obtaining method and apparatus based on an eye recognition model to reduce training time and memory usage.
In view of the above, one or more embodiments of the present specification provide a training set obtaining method based on an eye recognition model, including:
acquiring a fundus image;
carrying out graying processing on the fundus image to obtain a grayscale image;
carrying out binarization processing on the gray level image to obtain a binarized image;
identifying the optic disc area in the binary image to obtain an identification result;
and, based on the identification result, cutting out an optic disc region from the fundus image as a sample image in a training set of the eye recognition model.
Optionally, the gray image includes a gray value of each pixel point; the graying processing of the fundus image to obtain a grayscale image includes:
for each pixel point in the fundus image, calculating the Y component of the pixel point in YUV space based on its RGB values, and taking the Y component as the gray value of the pixel point.
Optionally, the binarizing the grayscale image to obtain a binarized image includes:
calculating the average value of the gray values of all pixel points in the gray image;
determining a binarization threshold value based on the average value;
and carrying out binarization processing on the gray level image based on the binarization threshold value to obtain a binarization image.
Optionally, the identifying the optic disc region in the binarized image to obtain an identification result includes:
performing morphological operation on the binary image to obtain an operated image;
in the operated image, searching for a connected region whose pixel values are not 0, and determining the optic disc region based on the search result.
Optionally, the performing morphological operation on the binarized image to obtain an operated image includes:
performing dilation and erosion processing on the binarized image to obtain an operated image.
Optionally, the determining the optic disc region based on the search result includes:
counting the number of pixel points in each searched connected region;
judging whether the quantity meets a preset quantity condition or not;
if so, determining that the connected region is the optic disc region.
Optionally, the cutting out, based on the identification result, of an optic disc region from the fundus image as a sample image in a training set of the eye recognition model includes:
determining, in the operated image, a rectangular region consisting of an upper boundary, a lower boundary, a left boundary, and a right boundary of the optic disc region;
and mapping the rectangular region into the fundus image to obtain a mapping region, and cutting out the mapping region as a sample image in the training set of the eye recognition model.
Optionally, the determining, in the operated image, a rectangular region consisting of an upper boundary, a lower boundary, a left boundary, and a right boundary of the optic disc region includes:
searching for the leftmost pixel point of the connected region in the operated image; judging whether the leftmost pixel point is an edge point of the operated image; if so, removing the leftmost pixel point from the connected region and returning to the step of searching for the leftmost pixel point of the connected region; if not, determining the left boundary of the optic disc region based on the leftmost pixel point;
searching for the rightmost pixel point of the connected region; judging whether the rightmost pixel point is an edge point of the operated image; if so, removing the rightmost pixel point from the connected region and returning to the step of searching for the rightmost pixel point of the connected region; if not, determining the right boundary of the optic disc region based on the rightmost pixel point;
searching for the uppermost pixel point of the connected region; judging whether the uppermost pixel point is an edge point of the operated image; if so, removing the uppermost pixel point from the connected region and returning to the step of searching for the uppermost pixel point of the connected region; if not, determining the upper boundary of the optic disc region based on the uppermost pixel point;
searching for the lowermost pixel point of the connected region; judging whether the lowermost pixel point is an edge point of the operated image; if so, removing the lowermost pixel point from the connected region and returning to the step of searching for the lowermost pixel point of the connected region; if not, determining the lower boundary of the optic disc region based on the lowermost pixel point;
determining a rectangular area consisting of the upper boundary, the lower boundary, the left boundary, and the right boundary.
In view of the above, one or more embodiments of the present specification further provide an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements any one of the above-mentioned methods for obtaining a training set based on an eye recognition model when executing the computer program.
In view of the above, one or more embodiments of the present specification further provide a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform any one of the above-mentioned methods for training set acquisition based on an eye recognition model.
By applying the embodiments of the present invention, the optic disc region in the fundus image is identified, and the optic disc region is cut out as a sample image in the training set of the eye recognition model. In other words, in this scheme the neural network is trained not on the whole fundus image but only on the optic disc region, which occupies a small part of the fundus image, so both the training time of the neural network and the memory occupied can be reduced.
Drawings
In order to illustrate one or more embodiments of the present specification or prior-art solutions more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below relate only to one or more embodiments of the present specification; for a person skilled in the art, other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flowchart of a first method for obtaining a training set based on an eye recognition model according to an embodiment of the present invention;
fig. 2 is a schematic view of a fundus image (gray scale image) according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a binarized image according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an operated image obtained by performing an opening operation on a binarized image according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an operated image obtained by performing a closing operation on a binarized image according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a connected region and a rectangular region provided in an embodiment of the present invention;
FIG. 7 is a schematic diagram of cutting out an optic disc region from a fundus image according to an embodiment of the present invention;
fig. 8 is a schematic flowchart of a second method for obtaining a training set based on an eye recognition model according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
It is to be noted that unless otherwise defined, technical or scientific terms used in one or more embodiments of the present specification should have the ordinary meaning as understood by those of ordinary skill in the art to which this disclosure belongs. The use of "first," "second," and similar terms in one or more embodiments of the specification is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
In order to achieve the above object, embodiments of the present invention provide a training set obtaining method and device based on an eye recognition model, where the method may be applied to various electronic devices, and is not limited specifically. First, the method for obtaining the training set based on the eye recognition model will be described in detail below.
Fig. 1 is a first flowchart of a training set obtaining method based on an eye recognition model according to an embodiment of the present invention, including:
s101: fundus images are acquired.
For example, the fundus image may be as shown in fig. 2, which shows the tissues of the posterior segment of the eyeball: the inner membrane (retina), optic disc, macula, central retinal artery and vein, and so forth. The fundus image is generally a color image; fig. 2 does not express the color effect and can also be regarded as a grayscale image of the fundus image.
S102: and carrying out graying processing on the fundus image to obtain a grayscale image.
For example, the fundus image may be processed by various graying methods such as a component method, a maximum value method, or an average value method to obtain a grayscale image. The gray image comprises the gray value of each pixel point, the gray value is an integer between 0 and 255, the larger the numerical value is, the closer the display color is to white, and conversely, the smaller the numerical value is, the closer the color is to black.
In one embodiment, S102 may include: and aiming at each pixel point in the eye fundus image, calculating the Y value component of the pixel point in a YUV space based on the RGB value of the pixel point, and taking the Y value component as the gray value of the pixel point. The gray image comprises the gray value of each pixel point, so that the gray value of each pixel point is obtained, and the gray image is obtained.
In the YUV space, Y represents brightness (Luma) and gray scale values, and "U" and "V" represent Chrominance (Chroma) for describing the color and saturation of an image and for specifying the color of a pixel. In this embodiment, the Y value component in the YUV space is used as the gray value of the pixel.
For example, the RGB values of a pixel point can be converted into a Y value using the following equation: Y = 0.3R + 0.59G + 0.11B.
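As a hedged illustration (not code from the patent), the per-pixel luma conversion can be vectorized with numpy; the helper name `to_gray` and the toy two-pixel image are assumptions of this sketch:

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    # Luma weights from the equation above: Y = 0.3R + 0.59G + 0.11B
    weights = np.array([0.3, 0.59, 0.11])
    return np.rint(rgb.astype(np.float64) @ weights).astype(np.uint8)

# Toy 1x2 "fundus image": one pure-red pixel, one pure-white pixel
img = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=np.uint8)
gray = to_gray(img)  # red maps to roughly 76, white to 255
```

The same weighting is what most libraries apply internally when converting RGB to a single luminance channel.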
S103: and carrying out binarization processing on the gray level image to obtain a binarized image.
The Image Binarization (Image Binarization) processing may be understood as: the gray value of the pixel points in the image is set to be 0 or 255, namely the image presents obvious black and white effect. The binarization processing reduces the data amount in the image, and can highlight the contour of the target.
The binarization processing procedure may include: for each pixel point in the gray-scale image, if the gray-scale value of the pixel point is greater than the binarization threshold, the gray-scale value of the pixel point is set to 255, and if the gray-scale value of the pixel point is not greater than the binarization threshold, the gray-scale value of the pixel point is set to 0.
In one embodiment, the binary threshold may be a fixed value.
In another embodiment, a dynamic binarization threshold may be set, and the dynamic binarization threshold is suitable for different gray level images. For example, an average value of gray values of each pixel in the gray image may be calculated; determining a binarization threshold value based on the average value; and carrying out binarization processing on the gray level image based on the binarization threshold value to obtain a binarization image.
For example, the binarization threshold may be calculated according to the following equation:
f_mean = (1 / (H × W)) × Σ_{i=1}^{H} Σ_{j=1}^{W} f(i, j)

num_thresh = f_mean + C;

where num_thresh denotes the binarization threshold; f_mean denotes the average gray value of all pixel points in the grayscale image; H denotes the height of the grayscale image; W denotes the width of the grayscale image; f(i, j) denotes the gray value of the pixel point in row i, column j of the grayscale image; and C denotes a preset value with 0 < C < 255 - f_mean. For example, C may be 60, or 30, etc.; the specific value is not limited.
Because image brightness can vary with the shooting mode, the shooting equipment, environmental factors, and the like, this embodiment sets a dynamic binarization threshold suited to different grayscale images, which gives a better binarization effect than a fixed threshold.
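A minimal sketch of the dynamic threshold, assuming numpy and the example value C = 60 from the description; `binarize` is a hypothetical helper name:

```python
import numpy as np

def binarize(gray: np.ndarray, c: int = 60) -> np.ndarray:
    # Dynamic threshold from the description: num_thresh = f_mean + C
    thresh = gray.mean() + c
    out = np.zeros_like(gray)   # pixels at or below the threshold become 0
    out[gray > thresh] = 255    # pixels above the threshold become 255
    return out

gray = np.array([[10, 20, 30, 200]], dtype=np.uint8)  # f_mean = 65, thresh = 125
binary = binarize(gray)  # only the 200 pixel exceeds the threshold
```

Because the threshold tracks the image's own mean, a uniformly brighter capture of the same fundus yields the same binarization result.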
S104: and identifying the optic disc area in the binary image to obtain an identification result.
The binarized image may be as shown in fig. 3, and the white area in fig. 3 is the optic disc area. In one embodiment, a connected component having a pixel value other than 0 in the binarized image may be identified as the optic disc region.
Alternatively, in another embodiment, a morphological operation may be performed on the binarized image to obtain an operated image; in the operated image, a connected region whose pixel values are not 0 is searched for, and the optic disc region is determined based on the search result.
For example, morphological operations may include erosion and dilation. The process of erosion followed by dilation is called the opening operation; the opening operation generally eliminates fine objects, separates objects at fine connections, and smooths the boundaries of larger objects. The process of dilation followed by erosion is called the closing operation; the closing operation generally fills fine holes in an object, connects neighboring objects, and smooths boundaries. The operated image obtained by performing the opening operation on the binarized image may be as shown in fig. 4, and the operated image obtained by performing the closing operation may be as shown in fig. 5.
As shown in fig. 4 and 5, the optic disc region is white with a gray value of 255, and the non-disc region is black with a gray value of 0. In this embodiment, a connected region whose pixel values are not 0, i.e., the optic disc region, is searched for.
In one embodiment, the morphological operation performed on the binarized image may be the closing operation, that is, the binarized image is dilated first and then eroded to obtain the operated image.
The optic disc region in the binarized image may be split into several small regions, and the closing operation can connect these small regions into one complete optic disc region. Therefore, when the subsequently cut-out optic disc region is used as a sample image to train the eye recognition model, the sample image contains a more complete optic disc region, which can improve the training effect.
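In practice this step would likely be a call such as `cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)`; purely to make the idea concrete, here is a minimal numpy sketch of closing (dilation followed by erosion) with a 3x3 structuring element, under the assumption of a 0/255 binary input:

```python
import numpy as np

def dilate(img: np.ndarray, k: int = 3) -> np.ndarray:
    # A pixel becomes 255 if any pixel in its k x k neighbourhood is 255
    p, (h, w) = k // 2, img.shape
    padded = np.pad(img, p, mode="constant")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, padded[dy:dy + h, dx:dx + w])
    return out

def erode(img: np.ndarray, k: int = 3) -> np.ndarray:
    # A pixel stays 255 only if every pixel in its k x k neighbourhood is 255
    p, (h, w) = k // 2, img.shape
    padded = np.pad(img, p, mode="constant")
    out = np.full_like(img, 255)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, padded[dy:dy + h, dx:dx + w])
    return out

def closing(img: np.ndarray) -> np.ndarray:
    # Closing = dilation then erosion: joins fragments of the optic disc region
    return erode(dilate(img))

# Two white pixels separated by a one-pixel gap; closing fills the gap
img = np.zeros((3, 5), dtype=np.uint8)
img[1, 1] = img[1, 3] = 255
closed = closing(img)
```

After closing, the gap pixel at (1, 2) is white, so the two fragments form a single connected region, which is exactly the effect the description relies on.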
In one embodiment, determining the optic disc region based on the search result may include: counting the number of pixel points in each searched connected region; judging whether the number meets a preset number condition; and if so, determining that the connected region is the optic disc region.
For example, the preset number condition may be: greater than a preset threshold; or: greater than a preset percentage of the total number of pixels in the image, such as one percent or two percent. The preset number condition can be understood as judging whether the connected region contains a relatively large number of pixel points.
In some cases, there is edge light leakage when the fundus image is captured, which may produce small bright spots in the fundus image; after the graying and binarization processing (and the morphological operation), these small bright spots become connected regions with nonzero pixel values, so the areas corresponding to them may be erroneously identified as the optic disc region. In this embodiment, if the number of pixel points in a searched nonzero connected region is small, the region can be regarded as a small bright spot caused by edge light leakage rather than the optic disc region, which reduces misidentification and improves identification accuracy.
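A sketch of the search-and-filter step, assuming numpy; the BFS labelling, the helper names, and the one-percent default (`min_fraction=0.01`, one of the example conditions above) are illustrative choices, and a real implementation might instead use `cv2.connectedComponentsWithStats`:

```python
from collections import deque
import numpy as np

def connected_regions(binary: np.ndarray) -> list:
    # 4-connected components of nonzero pixels, found by BFS flood fill
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    regions = []
    for y in range(h):
        for x in range(w):
            if binary[y, x] and not seen[y, x]:
                queue, region = deque([(y, x)]), []
                seen[y, x] = True
                while queue:
                    cy, cx = queue.popleft()
                    region.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                regions.append(region)
    return regions

def pick_disc_region(binary: np.ndarray, min_fraction: float = 0.01):
    # Discard small regions (e.g. bright spots from edge light leakage);
    # keep the largest region whose pixel count exceeds the preset condition
    min_pixels = binary.size * min_fraction
    candidates = [r for r in connected_regions(binary) if len(r) > min_pixels]
    return max(candidates, key=len) if candidates else None

binary = np.zeros((20, 20), dtype=np.uint8)
binary[5:10, 5:10] = 255   # candidate optic disc region: 25 pixels
binary[0, 0] = 255         # stray 1-pixel bright spot: filtered out
disc = pick_disc_region(binary)
```

Here the single stray pixel fails the one-percent condition (4 pixels for a 20x20 image) and is rejected, while the 25-pixel block survives.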
S105: based on the identification result, an optic disc region is cut out from the fundus image as a sample image in the training set of the eye recognition model.
In one case, the optic disc region identified in S104 may be mapped into the fundus image, and the optic disc region may then be cut out of the fundus image directly. For example, the fundus image, the grayscale image, the binarized image, and the operated image may use the same coordinate system, that is, corresponding pixel points have the same coordinates in all of these images. In this case, the coordinates of the optic disc region in the binarized image or the operated image can be mapped directly into the fundus image, and the optic disc region can be cut out of the fundus image based on the mapped coordinates.
Alternatively, in another case, the optic disc region identified in S104 may be expanded, which helps to cut out a more complete optic disc region.
For example, in one embodiment, S105 may include: determining, in the operated image, a rectangular region consisting of an upper boundary, a lower boundary, a left boundary, and a right boundary of the optic disc region; mapping the rectangular region into the fundus image to obtain a mapping region; and cutting out the mapping region as a sample image in the training set of the eye recognition model.
In this embodiment, the optic disc region is expanded into a rectangular region. In other embodiments, the optic disc region may be expanded into a region of another shape, such as a circular or elliptical region, or the shape of the optic disc region may be retained and the region expanded proportionally; this is not specifically limited.
For the case of expanding into a rectangular region: searching for the leftmost pixel point of the connected region in the operated image; judging whether the leftmost pixel point is an edge point of the operated image; if so, removing the leftmost pixel point from the connected region and returning to the step of searching for the leftmost pixel point of the connected region; if not, determining the left boundary of the optic disc region based on the leftmost pixel point;
searching for the rightmost pixel point of the connected region; judging whether the rightmost pixel point is an edge point of the operated image; if so, removing the rightmost pixel point from the connected region and returning to the step of searching for the rightmost pixel point of the connected region; if not, determining the right boundary of the optic disc region based on the rightmost pixel point;
searching for the uppermost pixel point of the connected region; judging whether the uppermost pixel point is an edge point of the operated image; if so, removing the uppermost pixel point from the connected region and returning to the step of searching for the uppermost pixel point of the connected region; if not, determining the upper boundary of the optic disc region based on the uppermost pixel point;
searching for the lowermost pixel point of the connected region; judging whether the lowermost pixel point is an edge point of the operated image; if so, removing the lowermost pixel point from the connected region and returning to the step of searching for the lowermost pixel point of the connected region; if not, determining the lower boundary of the optic disc region based on the lowermost pixel point;
determining a rectangular region consisting of the upper boundary, the lower boundary, the left boundary, and the right boundary.
For example, starting from the center point of the connected region, the first pixel point whose value is 0 may be searched for in each of the four directions (up, down, left, and right). If that pixel point is not an edge point of the image, the corresponding boundary of the rectangular region is determined from it; if it is an edge point of the image, the search continues in the same direction for the next pixel point whose value is 0, and the boundary of the rectangular region is determined from that pixel point.
In some cases, edge light leakage when capturing the fundus image may cause the connected region to reach the image edge. In this embodiment, if a boundary point (upper, lower, left, or right) of the connected region reaches the image edge, the boundary point is re-determined, so that the determined optic disc region does not reach the image edge; this reduces the influence of the edge light leakage phenomenon and improves identification accuracy.
Referring to FIG. 6, each small square in FIG. 6 represents one pixel point. Suppose the leftmost pixel point of the connected region has coordinates (x_left, y_left), the rightmost pixel point (x_right, y_right), the uppermost pixel point (x_up, y_up), and the lowermost pixel point (x_down, y_down). The coordinates of the four vertices of the rectangular region can then be determined as (x_left, y_up), (x_left, y_down), (x_right, y_up), and (x_right, y_down).
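The four-direction boundary search with edge-point rejection can be sketched in plain Python; the function name and its return convention are assumptions of this sketch, not the patent's code:

```python
def bounding_box(region, h, w):
    # region: list of (row, col) pixels of the connected region in an h x w image.
    # For each direction, take the extreme pixel; if it lies on the image
    # border (edge light leakage), drop it and retry, as described in the text.
    def extreme(key, on_edge):
        pts = list(region)
        while pts:
            p = min(pts, key=key)
            if on_edge(p):
                pts.remove(p)
            else:
                return p
        return None  # every candidate touched the border

    left   = extreme(lambda p: p[1],  lambda p: p[1] == 0)
    right  = extreme(lambda p: -p[1], lambda p: p[1] == w - 1)
    top    = extreme(lambda p: p[0],  lambda p: p[0] == 0)
    bottom = extreme(lambda p: -p[0], lambda p: p[0] == h - 1)
    # Returns (x_left, x_right, y_up, y_down) as in the coordinate example
    return left[1], right[1], top[0], bottom[0]

# Pixels at (0, 3) and (2, 0) touch the border of a 10 x 10 image and are skipped
region = [(0, 3), (2, 0), (2, 3), (3, 3), (2, 4)]
box = bounding_box(region, 10, 10)
```

With the two border pixels rejected, the surviving extremes give x_left = 3, x_right = 4, y_up = 2, y_down = 3.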
In one embodiment, these four vertex coordinates may be mapped directly into the fundus image to obtain the mapping region. As described above, when the fundus image, the grayscale image, the binarized image, and the operated image use the same coordinate system, the four vertex coordinates can be marked directly on the fundus image, and the region they enclose is the mapping region.
Alternatively, in another embodiment, the expansion may be performed when the four vertex coordinates are mapped into the fundus image. For example, each vertex may be extended outward by one tenth of the original rectangular region's size, so that the four vertex coordinates of the optic disc region in the fundus image become:

upper left: (x_left - (x_right - x_left)/10, y_up - (y_down - y_up)/10)

lower left: (x_left - (x_right - x_left)/10, y_down + (y_down - y_up)/10)

upper right: (x_right + (x_right - x_left)/10, y_up - (y_down - y_up)/10)

lower right: (x_right + (x_right - x_left)/10, y_down + (y_down - y_up)/10)
The specific proportion by which the four vertex coordinates are extended outward is not limited; for example, it may be one eighth, one twelfth, and so on.
After extending these four vertex coordinates outward, the optic disc region can be cut out of the fundus image as shown in fig. 7, where the rectangular frame completely contains the optic disc region. The rectangular frame may be cropped out as a sample image in the training set of the eye recognition model. The fundus image is generally a color image, so the cut-out optic disc region is also a color image; fig. 7 does not express the color effect.
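A sketch of the final expand-and-crop step, assuming numpy arrays and the one-tenth extension described above; `expand_and_crop` is a hypothetical helper, and the clamping to image bounds is an added safety assumption:

```python
import numpy as np

def expand_and_crop(img, x_left, x_right, y_up, y_down, frac=0.1):
    # Extend each side of the optic disc rectangle outward by `frac`
    # of its width/height, clamp to the image, and cut out the sample
    h, w = img.shape[:2]
    dx = int((x_right - x_left) * frac)
    dy = int((y_down - y_up) * frac)
    y0, y1 = max(0, y_up - dy), min(h, y_down + dy + 1)
    x0, x1 = max(0, x_left - dx), min(w, x_right + dx + 1)
    return img[y0:y1, x0:x1]

fundus = np.zeros((40, 40, 3), dtype=np.uint8)    # stand-in colour fundus image
sample = expand_and_crop(fundus, 10, 30, 10, 30)  # 21x21 box grown by 2 px per side
```

Cropping through numpy slicing keeps the sample in the fundus image's original colour space, so the training set receives colour optic disc patches as the description requires.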
With the above embodiments of the present invention, in the first aspect, the optic disc region in the fundus image is located quickly, which contributes to the diagnosis of various diseases. In the second aspect, the neural network is trained not on the whole fundus image but only on the optic disc region, which occupies a small part of the fundus image, so the training time of the neural network and the memory occupied can be reduced. In the third aspect, because the neural network is trained only on the optic disc region, interference from other information in the fundus image is reduced, and the resulting eye recognition model has high identification accuracy.
Fig. 8 is a second flowchart of the method for obtaining a training set based on an eye recognition model according to the embodiment of the present invention, including:
S801: a fundus image is acquired.
For example, the fundus image may be as shown in fig. 2, showing the tissue at the back of the eyeball, including the inner membrane (retina), the optic disc, the macula, and the central retinal artery and vein, and so on. The fundus image is generally a color image; fig. 2 does not express the color effect and may be regarded as a grayscale rendering of the fundus image.
S802: and carrying out graying processing on the fundus image to obtain a grayscale image.
For example, the fundus image may be processed by various graying methods, such as the component method, the maximum value method, or the average value method, to obtain a grayscale image. The grayscale image comprises the gray value of each pixel point; the gray value is an integer between 0 and 255, where a larger value means the displayed color is closer to white and a smaller value means it is closer to black.
In one embodiment, S802 may include: and aiming at each pixel point in the eye fundus image, calculating the Y value component of the pixel point in a YUV space based on the RGB value of the pixel point, and taking the Y value component as the gray value of the pixel point. The gray image comprises the gray value of each pixel point, so that the gray value of each pixel point is obtained, and the gray image is obtained.
In the YUV space, Y represents luminance (luma), that is, the grayscale value, while U and V represent chrominance (chroma), which describes the color and saturation of the image and specifies the color of a pixel. In this embodiment, the Y component in the YUV space is used as the gray value of the pixel point.
For example, the RGB values of a pixel point can be converted into a Y value using the following equation: Y = 0.3R + 0.59G + 0.11B.
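The graying step described above can be sketched as follows (a minimal NumPy sketch; the patent gives only the formula Y = 0.3R + 0.59G + 0.11B, so the function name and the H x W x 3 array layout are illustrative assumptions):

```python
import numpy as np

def to_gray(rgb_image: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to a grayscale image using the
    weights from the description: Y = 0.3R + 0.59G + 0.11B."""
    r = rgb_image[..., 0].astype(np.float64)  # red channel
    g = rgb_image[..., 1].astype(np.float64)  # green channel
    b = rgb_image[..., 2].astype(np.float64)  # blue channel
    y = 0.3 * r + 0.59 * g + 0.11 * b
    # clip to the valid gray range and store as 8-bit values
    return np.clip(y, 0, 255).astype(np.uint8)
```

Because the three weights sum to 1.0, a pure white pixel maps to 255 and a pure black pixel maps to 0, as expected for an 8-bit grayscale image.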
S803: and carrying out binarization processing on the gray level image to obtain a binarized image.
Image binarization processing may be understood as: setting the gray value of each pixel point in the image to 0 or 255, so that the image presents an obvious black-and-white effect. Binarization reduces the amount of data in the image and can highlight the contour of the target.
The binarization processing procedure may include: for each pixel point in the gray-scale image, if the gray-scale value of the pixel point is greater than the binarization threshold, the gray-scale value of the pixel point is set to 255, and if the gray-scale value of the pixel point is not greater than the binarization threshold, the gray-scale value of the pixel point is set to 0.
In one embodiment, the binary threshold may be a fixed value.
In another embodiment, a dynamic binarization threshold may be set, and the dynamic binarization threshold is suitable for different gray level images. For example, an average value of gray values of each pixel in the gray image may be calculated; determining a binarization threshold value based on the average value; and carrying out binarization processing on the gray level image based on the binarization threshold value to obtain a binarization image.
For example, the binarization threshold may be calculated according to the following equations:

f_mean = (1 / (H × W)) × Σ_{i=1}^{H} Σ_{j=1}^{W} f(i, j);

num_thresh = f_mean + C;

where num_thresh represents the binarization threshold, f_mean represents the average gray value of all pixel points in the grayscale image, H represents the height of the grayscale image, W represents the width of the grayscale image, and f(i, j) represents the gray value of the pixel point in row i, column j of the grayscale image; C represents a preset value with 0 < C < 255 − f_mean. For example, C may be 60, or 30, etc.; the specific value is not limited.
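The dynamic-threshold binarization of S803 can be sketched as follows (a NumPy sketch; the function name and the default C = 60 are taken from the example values in the text, not mandated by it):

```python
import numpy as np

def binarize(gray: np.ndarray, c: int = 60) -> np.ndarray:
    """Binarize a grayscale image with the dynamic threshold
    num_thresh = f_mean + C, where f_mean is the image's mean gray value.
    Pixels above the threshold become 255, all others become 0."""
    f_mean = gray.mean()
    num_thresh = f_mean + c
    return np.where(gray > num_thresh, 255, 0).astype(np.uint8)
```

Because the threshold tracks the mean brightness of each image, the same C works across fundus photographs of different overall brightness.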
S804: dilation and erosion processing is performed on the binarized image to obtain a calculated image.
For example, the morphological operations may include erosion and dilation, and performing dilation followed by erosion is referred to as a closing operation. The closing operation generally fills fine holes in an object, connects adjacent objects, and smooths boundaries. The optic disc area in the binarized image may be broken into many small regions, and the closing operation can connect these small regions into a complete optic disc region. Therefore, when the subsequently cropped optic disc region is used as a sample image to train the eye recognition model, the sample image contains a more complete optic disc region, which can improve the training effect.
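The closing operation (dilation followed by erosion) can be sketched with a 3x3 structuring element as below. In practice a library routine such as OpenCV's morphologyEx would typically be used; this self-contained NumPy version, with illustrative function names, only makes the mechanics explicit:

```python
import numpy as np

def dilate(binary: np.ndarray) -> np.ndarray:
    """3x3 dilation: a pixel becomes 255 if any pixel in its 3x3
    neighbourhood is 255 (the border is zero-padded)."""
    h, w = binary.shape
    padded = np.pad(binary, 1, mode="constant")
    out = np.zeros_like(binary)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out = np.maximum(out, padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w])
    return out

def erode(binary: np.ndarray) -> np.ndarray:
    """3x3 erosion: a pixel stays 255 only if its whole 3x3
    neighbourhood is 255 (the border is zero-padded)."""
    h, w = binary.shape
    padded = np.pad(binary, 1, mode="constant")
    out = np.full_like(binary, 255)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out = np.minimum(out, padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w])
    return out

def close_op(binary: np.ndarray) -> np.ndarray:
    """Closing = dilation followed by erosion; bridges one-pixel gaps."""
    return erode(dilate(binary))
```

A one-pixel gap between two white pixels is bridged by the closing operation, which is exactly the behavior used here to merge the fragmented optic disc pieces into one connected region.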
S805: in the calculated image, a connected region having a pixel value other than 0 is searched for.
The calculated image obtained by performing the closing operation on the binarized image may be as shown in fig. 5, in which the optic disc region is white with a gray value of 255, and the non-disc region is black with a gray value of 0. Therefore, searching for a connected region whose pixel value is not 0 amounts to searching for the optic disc region.
S806: and counting the number of pixel points in each searched connected region.
S807: judging whether the quantity meets a preset quantity condition or not; if so, execution proceeds to S808.
S808: the connected component is determined as the optic disc area.
For example, the preset number condition may be: greater than a preset threshold; or: greater than a preset percentage of the total number of pixel points in the image, such as one percent, two percent, etc. The preset number condition here can be understood as judging whether the number of pixel points in the connected region is large.
In some cases, edge light leakage occurs when acquiring a fundus image, which may produce small bright spots in the fundus image; after graying, binarization, and the morphological operation, these small bright spots may become connected regions whose pixel values are not 0. As a result, the area corresponding to a small bright spot may be erroneously recognized as the optic disc region. In this embodiment, if the number of pixel points in a searched connected region with non-zero pixel values is small, the region can be judged to be a small bright spot caused by edge light leakage rather than the optic disc region, which reduces misidentification and improves recognition accuracy.
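Steps S805–S808 (searching for non-zero connected regions and keeping only those with enough pixels) can be sketched with a breadth-first flood fill. The 4-connectivity and the helper name are assumptions, since the patent does not specify the labeling algorithm:

```python
from collections import deque

import numpy as np

def find_disc_candidates(binary: np.ndarray, min_pixels: int) -> list:
    """Label 4-connected regions of non-zero pixels and keep only those
    whose pixel count exceeds min_pixels; tiny regions are treated as
    bright spots from edge light leakage, not the optic disc.
    Returns a list of regions, each a list of (y, x) coordinates."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    regions = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] == 0 or seen[sy, sx]:
                continue
            # breadth-first flood fill from the unvisited white pixel
            queue, pixels = deque([(sy, sx)]), []
            seen[sy, sx] = True
            while queue:
                y, x = queue.popleft()
                pixels.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and binary[ny, nx] != 0 and not seen[ny, nx]:
                        seen[ny, nx] = True
                        queue.append((ny, nx))
            if len(pixels) > min_pixels:  # the preset number condition
                regions.append(pixels)
    return regions
```

With min_pixels set to, say, one percent of the image area, an isolated bright spot of a few pixels is discarded while the optic disc region survives.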
S809: in the calculated image, a rectangular region composed of the upper, lower, left and right boundaries of the optic disc region is determined.
For example, S809 may include: searching the leftmost pixel point of the connected region in the calculated image; judging whether the leftmost pixel point is the edge point of the calculated image or not; if so, removing the leftmost pixel point from the connected region, and returning to the step of searching the leftmost pixel point of the connected region; if not, determining the left boundary of the optic disc area based on the leftmost pixel point;
searching the rightmost pixel point of the connected region; judging whether the rightmost pixel point is the edge point of the calculated image or not; if so, removing the rightmost pixel points from the connected region, and returning to the step of searching the rightmost pixel points of the connected region; if not, determining the right boundary of the optic disc area based on the rightmost pixel point;
searching the uppermost pixel point of the connected region; judging whether the pixel point at the uppermost side is the edge point of the calculated image or not; if so, removing the uppermost pixel point from the connected region, and returning to the step of searching the uppermost pixel point of the connected region; if not, determining the upper boundary of the optic disc area based on the uppermost pixel point;
searching the lowest pixel point of the connected region; judging whether the pixel point at the lowest side is the edge point of the calculated image or not; if so, removing the lowermost pixel point from the connected region, and returning to the step of searching the lowermost pixel point of the connected region; if not, determining the lower boundary of the optic disc area based on the lowermost pixel point;
determining a rectangular area consisting of the upper boundary, the lower boundary, the left boundary, and the right boundary.
In some cases, edge light leakage occurs when acquiring a fundus image, which may cause the connected region to reach the image edge. In this embodiment, if a boundary point of the connected region (an upper, lower, left, or right extreme point) reaches the edge of the image, the boundary point is determined again, so that the determined optic disc region does not reach the image edge; this reduces the influence of edge light leakage and improves recognition accuracy.
Referring to fig. 6, assume that the leftmost pixel point of the connected region has coordinates (x_left, y_left), the rightmost pixel point has coordinates (x_right, y_right), the uppermost pixel point has coordinates (x_up, y_up), and the lowermost pixel point has coordinates (x_down, y_down). The coordinates of the four vertices of the rectangular area can then be determined as (x_left, y_up), (x_left, y_down), (x_right, y_up), and (x_right, y_down).
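The boundary search of S809, including re-determining extreme points that touch the image edge, might look as follows (a sketch; the patent describes the loop in prose, so representing the region as a list of (y, x) pixel coordinates and the function name are assumptions):

```python
def disc_bounding_box(pixels, height, width):
    """Determine the rectangle bounding a connected region given as a
    list of (y, x) coordinates. Extreme pixels lying on the image edge
    (attributed in the text to edge light leakage) are removed from the
    region and the search is repeated."""
    pts = set(pixels)

    def extreme(key, on_edge):
        while pts:
            p = min(pts, key=key)  # current extreme pixel of the region
            if on_edge(p):
                pts.discard(p)     # edge point: drop it, search again
            else:
                return p
        raise ValueError("region lies entirely on the image edge")

    left = extreme(lambda p: p[1], lambda p: p[1] == 0)
    right = extreme(lambda p: -p[1], lambda p: p[1] == width - 1)
    top = extreme(lambda p: p[0], lambda p: p[0] == 0)
    bottom = extreme(lambda p: -p[0], lambda p: p[0] == height - 1)

    x_left, x_right = left[1], right[1]
    y_up, y_down = top[0], bottom[0]
    # four vertices of the rectangular area, as in fig. 6
    return (x_left, y_up), (x_left, y_down), (x_right, y_up), (x_right, y_down)
```

In the example below, the pixel at column 0 touches the left image edge and is discarded, so the left boundary moves inward to column 1.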
S810: and mapping the rectangular region to the fundus image to obtain a mapping region, and intercepting the mapping region to be used as a sample image in a training set of the eye part identification model.
In one embodiment, the four vertex coordinates may be mapped directly into the fundus image to obtain the mapping region. As described above, when the fundus image, the grayscale image, the binarized image, and the calculated image use the same coordinate system, the four vertex coordinates can be marked directly on the fundus image, and the region they enclose is the mapping region.
Alternatively, in another embodiment, the four vertex coordinates may be expanded when they are mapped to the fundus image. For example, each vertex may be moved outward by one tenth of the width and height of the original rectangular area. The four vertex coordinates of the optic disc region in the fundus image then become:

Upper left: (x_left - (x_right - x_left)/10, y_up - (y_down - y_up)/10)

Lower left: (x_left - (x_right - x_left)/10, y_down + (y_down - y_up)/10)

Upper right: (x_right + (x_right - x_left)/10, y_up - (y_down - y_up)/10)

Lower right: (x_right + (x_right - x_left)/10, y_down + (y_down - y_up)/10)
The specific proportion of the outward extension of the four vertex coordinates is not limited; for example, it may also be one eighth, one twelfth, and so on.
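The outward expansion can be sketched as follows, assuming "one tenth of the original rectangular area" means one tenth of the rectangle's width and height on each side (the exact formula appears only in the patent's figures, so this interpretation and the function name are assumptions):

```python
def expand_box(x_left, y_up, x_right, y_down, ratio=0.1):
    """Expand the rectangle outward by `ratio` of its width and height
    on every side, so the crop fully contains the optic disc.
    Returns (x_left, y_up, x_right, y_down) of the expanded box."""
    w = x_right - x_left   # width of the original rectangle
    h = y_down - y_up      # height of the original rectangle
    return (x_left - ratio * w, y_up - ratio * h,
            x_right + ratio * w, y_down + ratio * h)
```

Passing ratio=0.125 or ratio=1/12 would give the one-eighth or one-twelfth variants mentioned in the text; the expanded coordinates would then be clamped to the fundus image bounds before cropping.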
By extending the four vertex coordinates outward, the optic disc region can be cut out of the fundus image as shown in fig. 7, where the rectangular frame completely encloses the optic disc region. The content of the rectangular frame may be cut out as a sample image in the training set of the eye recognition model. The fundus image is generally a color image, so the extracted optic disc region is also a color image; fig. 7 does not show the color effect.
With the above embodiments of the present invention, in the first aspect, the optic disc region in the fundus image can be located quickly, which aids the diagnosis of various diseases.
In the second aspect, the neural network is trained not on the whole fundus image but only on the optic disc region, which occupies a small portion of the fundus image, so the training time and memory consumption of the neural network can be reduced.
In the third aspect, training the neural network only on the optic disc region in the fundus image reduces interference from other information in the fundus image, so the obtained eye recognition model has high recognition accuracy.
In the fourth aspect, image brightness may vary due to the shooting mode, the shooting equipment, environmental factors, and the like. In one embodiment, a dynamic binarization threshold suited to different grayscale images is set, which yields a better binarization effect than a single fixed threshold.
In the fifth aspect, the optic disc area in the binary image may be divided into many small areas, and the small areas may be connected by a closing operation to form a complete optic disc area. Therefore, when the subsequently intercepted optic disc area is used as a sample image to train the eye recognition model, the sample image comprises a more complete optic disc area, and the training effect can be improved.
In the sixth aspect, in some cases edge light leakage occurs when acquiring a fundus image, which may produce small bright spots in the fundus image; after graying, binarization, and the morphological operation, these small bright spots may become connected regions whose pixel values are not 0. As a result, the area corresponding to a small bright spot may be erroneously recognized as the optic disc region. In one embodiment, if the number of pixel points in a searched connected region with non-zero pixel values is small, the region can be judged to be a small bright spot caused by edge light leakage rather than the optic disc region, which reduces misidentification and improves recognition accuracy.
In the seventh aspect, in some cases, there is a phenomenon of edge light leakage when acquiring a fundus image, which causes a connected region to reach an image edge. In one embodiment, if the boundary points (the upper, lower, left and right boundary points) of the connected region reach the edge of the image, the boundary points are determined again, so that the determined optic disc region does not reach the edge of the image, the influence caused by the edge light leakage phenomenon is reduced, and the identification accuracy is improved.
It should be noted that the method of one or more embodiments of the present disclosure may be performed by a single device, such as a computer or server. The method of the embodiment can also be applied to a distributed scene and completed by the mutual cooperation of a plurality of devices. In such a distributed scenario, one of the devices may perform only one or more steps of the method of one or more embodiments of the present disclosure, and the devices may interact with each other to complete the method.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Corresponding to the above method embodiments, an embodiment of the present invention further provides an electronic device, as shown in fig. 9, including a memory 902, a processor 901, and a computer program stored on the memory 902 and executable on the processor 901, where the processor 901 implements any one of the above training set obtaining methods based on an eye recognition model when executing the program.
The processor 901 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solutions provided in the embodiments of the present specification.
The memory 902 may be implemented in the form of ROM (Read-Only Memory), RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 902 may store an operating system and other application programs; when the technical solutions provided by the embodiments of the present specification are implemented by software or firmware, the relevant program code is stored in the memory 902 and called by the processor 901 for execution.
It should be noted that although the above device only shows the processor 901 and the memory 902, in a specific implementation, the device may also include other components necessary for normal operation. In addition, those skilled in the art will appreciate that the above-described apparatus may also include only those components necessary to implement the embodiments of the present description, and not necessarily all of the components shown in the figures.
An embodiment of the present invention further provides a non-transitory computer-readable storage medium, which stores computer instructions for causing a computer to execute any one of the above-mentioned training set acquisition methods based on an eye recognition model.
Computer-readable media of the present embodiments, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to intimate that the scope of the disclosure, including the claims, is limited to these examples; within the spirit of the present disclosure, features from the above embodiments or from different embodiments may also be combined, steps may be implemented in any order, and there are many other variations of different aspects of one or more embodiments of the present description as described above, which are not provided in detail for the sake of brevity.
In addition, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown in the provided figures, for simplicity of illustration and discussion, and so as not to obscure one or more embodiments of the disclosure. Furthermore, devices may be shown in block diagram form in order to avoid obscuring the understanding of one or more embodiments of the present description, and this also takes into account the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform within which the one or more embodiments of the present description are to be implemented (i.e., specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that one or more embodiments of the disclosure can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative instead of restrictive.
While the present disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the discussed embodiments.
It is intended that the one or more embodiments of the present specification embrace all such alternatives, modifications and variations as fall within the broad scope of the appended claims. Therefore, any omissions, modifications, substitutions, improvements, and the like that may be made without departing from the spirit and principles of one or more embodiments of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (10)

1. A training set acquisition method based on an eye recognition model is characterized by comprising the following steps:
acquiring a fundus image;
carrying out graying processing on the fundus image to obtain a grayscale image;
carrying out binarization processing on the gray level image to obtain a binarized image;
identifying the optic disc area in the binary image to obtain an identification result;
and intercepting an optic disc region in the fundus image as a sample image in a training set of the eye recognition model based on the identification result.
2. The method of claim 1, wherein the gray scale image comprises a gray scale value for each pixel; the graying processing of the fundus image to obtain a grayscale image includes:
and aiming at each pixel point in the eye fundus image, calculating the Y value component of the pixel point in a YUV space based on the RGB value of the pixel point, and taking the Y value component as the gray value of the pixel point.
3. The method according to claim 1, wherein the binarizing the grayscale image to obtain a binarized image comprises:
calculating the average value of the gray values of all pixel points in the gray image;
determining a binarization threshold value based on the average value;
and carrying out binarization processing on the gray level image based on the binarization threshold value to obtain a binarization image.
4. The method according to claim 1, wherein the identifying of the optic disc region in the binarized image, resulting in an identification result, comprises:
performing morphological operation on the binary image to obtain an operated image;
in the calculated image, searching for a connected region having a pixel value not equal to 0, and determining the optic disc region based on the search result.
5. The method according to claim 4, wherein said performing a morphological operation on said binarized image to obtain an operated image comprises:
and carrying out dilation and erosion processing on the binarized image to obtain the operated image.
6. The method of claim 4, wherein determining the optic disc region based on the search result comprises:
counting the number of pixel points in each searched connected region;
judging whether the quantity meets a preset quantity condition or not;
if so, the connected component is determined to be the optic disc area.
7. The method according to claim 4, wherein the intercepting of the optic disc region in the fundus image based on the recognition result as a sample image in a training set of an eye portion recognition model includes:
determining a rectangular area consisting of an upper boundary, a lower boundary, a left boundary and a right boundary of the optic disc area in the calculated image;
and mapping the rectangular region to the fundus image to obtain a mapping region, and intercepting the mapping region to be used as a sample image in a training set of the eye part identification model.
8. The method of claim 7, wherein determining a rectangular region consisting of an upper boundary, a lower boundary, a left boundary, and a right boundary of a disc region in the computed image comprises:
searching the leftmost pixel point of the connected region in the calculated image; judging whether the leftmost pixel point is the edge point of the calculated image or not; if so, removing the leftmost pixel point from the connected region, and returning to the step of searching the leftmost pixel point of the connected region; if not, determining the left boundary of the optic disc area based on the leftmost pixel point;
searching the rightmost pixel point of the connected region; judging whether the rightmost pixel point is the edge point of the calculated image or not; if so, removing the rightmost pixel points from the connected region, and returning to the step of searching the rightmost pixel points of the connected region; if not, determining the right boundary of the optic disc area based on the rightmost pixel point;
searching the uppermost pixel point of the connected region; judging whether the pixel point at the uppermost side is the edge point of the calculated image or not; if so, removing the uppermost pixel point from the connected region, and returning to the step of searching the uppermost pixel point of the connected region; if not, determining the upper boundary of the optic disc area based on the uppermost pixel point;
searching the lowest pixel point of the connected region; judging whether the pixel point at the lowest side is the edge point of the calculated image or not; if so, removing the lowermost pixel point from the connected region, and returning to the step of searching the lowermost pixel point of the connected region; if not, determining the lower boundary of the optic disc area based on the lowermost pixel point;
determining a rectangular area consisting of the upper boundary, the lower boundary, the left boundary, and the right boundary.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 8 when executing the program.
10. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 8.


