CN113592948A - Detection method of bar vision - Google Patents

Detection method of bar vision

Info

Publication number
CN113592948A
CN113592948A (Application CN202110943058.XA)
Authority
CN
China
Prior art keywords
image
bar
pixel
grayscale
grid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110943058.XA
Other languages
Chinese (zh)
Inventor
陈大伟
陈志�
王追
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sibionics Technology Co Ltd
Original Assignee
Shenzhen Sibionics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sibionics Technology Co Ltd filed Critical Shenzhen Sibionics Technology Co Ltd
Publication of CN113592948A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/77 Determining position or orientation of objects or cameras using statistical methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a bar-grid vision detection method. The method includes: obtaining an initial image; performing graying processing on the initial image to obtain a grayscale image; obtaining a histogram of the grayscale image; obtaining feature information of the grayscale image from the grayscale image and the histogram; judging, based on a first-class predetermined threshold, whether the grayscale image is a bar-grid vision image; if so, performing binarization processing on the grayscale image to obtain a binary image; obtaining, from the binary image and a plurality of predetermined mapping lines, the number of corresponding pixel points with target-pixel-value projections on each predetermined mapping line; and determining the target predetermined mapping line from those counts and a second-class predetermined threshold, so as to determine the bar-grid direction of the initial image, the bar-grid direction being perpendicular to the direction of the target predetermined mapping line.

Description

Detection method of bar vision
This application is a divisional application of the patent application filed on April 1, 2020, with Application No. 202010251451.8, entitled "Method for detecting bar-grid vision of a retinal stimulator".
Technical Field
The present disclosure relates in particular to a method for detecting bar-grid vision.
Background
Retinal diseases such as RP (retinitis pigmentosa) and AMD (age-related macular degeneration) are important blinding diseases: patients suffer visual deterioration or blindness because the light-sensing pathway is obstructed. With advances in research and development, technical means have appeared for addressing these retinal diseases using a retinal stimulator or the like. Existing retinal stimulators generally include a camera device disposed outside the patient's body, an image processing device, and an intraocular implant (also referred to as an "implant device") placed in the patient's eyeball. The camera device outside the body captures an image of the outside world to obtain an image signal; the image processing device processes the image signal and transmits the processed image signal (also referred to as a "visual signal") to the implant. The implant then converts these signals into electrical stimulation signals that stimulate ganglion cells or bipolar cells on the retina, thereby producing light perception for the patient.
In the prior art, bar-grid vision is generally used as a main evaluation index of retinal stimulators. However, because a conventional image processing apparatus generally processes the image signal directly by compression, it is difficult for the retinal stimulator to recognize bar-grid vision images with higher vision requirements, for example those corresponding to a logarithm of the minimum angle of resolution (logMAR) of less than 2.3.
Disclosure of Invention
The present disclosure has been made in view of the above circumstances, and an object thereof is to provide a bar-grid vision detection method for a retinal stimulator that can effectively recognize bar-grid vision images with a small amount of computation.
To this end, the present disclosure provides a method for detecting bar-grid vision of a retinal stimulator, the retinal stimulator including an imaging device located outside the body, the method comprising the steps of: (a) acquiring an initial image with the imaging device; (b) performing graying processing on the initial image to obtain a grayscale image, obtaining a histogram of the pixel distribution of the grayscale image from the grayscale image, and obtaining feature information of the grayscale image from the grayscale image and the histogram; (c) judging whether the grayscale image is a bar-grid vision image according to the feature information and a first-class predetermined threshold; (d) if the grayscale image is a bar-grid vision image, performing binarization processing on the grayscale image to obtain a binary image; (e) selecting a target pixel value from the binary image, and obtaining the number of corresponding pixel points with target-pixel-value projections on each of a plurality of predetermined mapping lines arranged in the binary image, according to whether pixels with the target pixel value exist in the direction perpendicular to each line; and (f) determining a target predetermined mapping line according to the number of corresponding pixel points projected onto each predetermined mapping line and a second-class predetermined threshold, so as to determine the bar-grid direction of the binary image, the bar-grid direction being perpendicular to the direction of the target predetermined mapping line.
In the present disclosure, the retinal stimulator acquires an initial image with the imaging device, performs graying processing on the initial image to obtain a grayscale image, obtains a histogram of the grayscale image, obtains feature information of the grayscale image from the grayscale image and the histogram, and then judges whether the grayscale image is a bar-grid vision image based on a first-class predetermined threshold. If it is, the grayscale image is binarized to obtain a binary image, the number of corresponding pixel points with target-pixel-value projections on each predetermined mapping line is obtained from the binary image and a plurality of predetermined mapping lines, and the target predetermined mapping line is determined from those counts and a second-class predetermined threshold, thereby determining the bar-grid direction of the binary image. In this way, bar-grid vision images can be recognized effectively with a small amount of computation.
In the detection method related to the present disclosure, optionally, the feature information includes a first number of pixels, a gray-scale mean value and a region centroid coordinate obtained from the gray-scale image, where the first number of pixels is the number of pixels in each pixel interval of a gray-scale value in the gray-scale image obtained based on the histogram, and the gray-scale mean value is an average value of gray-scale values of all pixels of the gray-scale image. Thereby, the characteristic information of the gradation image can be obtained.
In the detection method related to the present disclosure, optionally, the feature information further includes a second number of pixels, where the second number of pixels is the number of pixels in the grayscale image whose grayscale values are smaller than the grayscale mean value and/or the number of pixels whose grayscale values are larger than the grayscale mean value. Whereby the second number of pixels can be obtained.
In the detection method according to the present disclosure, optionally, the pixel intervals include a black pixel interval, a white pixel interval, and a plurality of other pixel intervals, and the feature information includes the difference between the number of pixels in the black pixel interval and the number of pixels in the white pixel interval, as well as the sum of the numbers of pixels in the other pixel intervals, where the black pixel interval covers grayscale values 0 to 50 and the white pixel interval covers grayscale values 201 to 255. Thereby, feature information of the grayscale image can be obtained.
In the detection method related to the present disclosure, optionally, the feature information includes the first number of pixels, the grayscale mean, the area centroid coordinate, and the second number of pixels, and the first type of predetermined threshold has a threshold range corresponding to each type of feature information. Thereby enabling a corresponding comparison of the first type of predetermined threshold value and the characteristic information.
In the detection method related to the present disclosure, optionally, the bar grid direction of the bar grid vision image is selected from one of a horizontal direction, a vertical direction, a first direction having an angle of 45 ° with the horizontal direction, and a second direction having an angle of 135 ° with the horizontal direction. Thereby facilitating subsequent determination of the bar orientation of the bar vision image.
In the detection method according to the present disclosure, optionally, the predetermined mapping lines run from one side of the binary image to the other, and their directions are respectively a horizontal direction, a vertical direction, a first direction at an angle of 45° to the horizontal direction, and a second direction at an angle of 135° to the horizontal direction. This facilitates determining the predetermined mapping lines.
In the detection method according to the present disclosure, optionally, the target pixel value is a minimum grayscale value in the binary image. This enables the target pixel value to be selected from the binary image.
Optionally, in step (f), whether corresponding pixel points on each predetermined mapping line are continuous is determined by comparing the number of corresponding pixel points projected by the target pixel value on each predetermined mapping line with the second predetermined threshold, so as to determine the target predetermined mapping line. Thereby enabling the determination of the target predetermined mapping line.
In the detection method related to the present disclosure, optionally, before the step (e), the binary image is further subjected to an expansion and erosion process. Therefore, the bar grid direction in the bar grid vision image can be conveniently identified subsequently.
According to the detection method, the bar-grid vision image can be effectively identified and the calculation amount is small.
Drawings
Fig. 1 is a schematic diagram showing a structure of a retinal stimulator according to an example of the present disclosure.
Fig. 2 is a flowchart illustrating a method of detecting bar-and-grid vision of a retinal stimulator according to an example of the present disclosure.
Fig. 3 is a grayscale image and histogram diagram illustrating a detection method of bar vision of a retinal stimulator according to an example of the present disclosure.
Fig. 4 is an application diagram illustrating a bar-grid vision testing method of a retinal stimulator according to an example of the present disclosure.
Fig. 5 is a schematic diagram illustrating a direction template to which examples of the present disclosure relate.
Fig. 6 is a schematic diagram showing the configuration positions of a stimulation electrode array according to an example of the present disclosure.
Detailed Description
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In the following description, the same components are denoted by the same reference numerals, and redundant description thereof is omitted. The drawings are schematic and the ratio of the dimensions of the components and the shapes of the components may be different from the actual ones.
The present disclosure provides a bar vision detection method. In the present disclosure, a bar-grid vision image can be recognized efficiently and with a small amount of computation. The present disclosure is described in detail below with reference to the attached drawings.
Fig. 1 is a schematic diagram showing a structure of a retinal stimulator 1 according to an example of the present disclosure. The retinal stimulator 1 of the present disclosure may be particularly useful, for example, for patients whose retinopathy has led to blindness but whose visual pathway cells, such as bipolar cells and ganglion cells, remain intact. In the present disclosure, the retinal stimulator 1 is also sometimes referred to as an "artificial retina", "artificial retinal system", or the like.
In some examples, as shown in fig. 1, the retinal stimulator 1 may include an implant device 10, a camera device 20, and an image processing device 30. The implant device 10 may receive the visual signal and generate an electrical stimulation signal based on the visual signal to create a sensation of light in the patient. Wherein, the visual signal can be collected by the camera device 20 and processed by the image processing device 30.
In some examples, the implant device 10 may include a stimulation electrode array 11 (see fig. 6). The stimulation electrode array 11 may include a predetermined number of stimulation electrodes (also referred to as "electrodes") such as the stimulation electrodes 101, 102, 103, and the like in fig. 6. The stimulation electrode may generate an electrical stimulation signal based on the visual signal. In particular, the implant device 10 may receive visual signals and the stimulation electrodes may convert the received visual signals into bi-directional pulsed current signals as electrical stimulation signals, thereby delivering bi-directional pulsed current signals to ganglion cells or bipolar cells of the retina to produce light sensation. Alternatively, the implant device 10 may be implanted in a human body, such as an eyeball. In some examples, image processing device 30 may have a transmitting antenna for transmitting visual signals and implant device 10 may have a receiving antenna for receiving visual signals.
In some examples, the visual signals received by the implant device 10 may be captured and processed by the camera device 20 and the image processing device 30.
In some examples, the camera 20 may be used to capture images and convert the captured images into visual signals. For example, the camera 20 may capture images of the environment in which the patient is located.
In some examples, the image capture device 20 may be an apparatus having an image capture function, such as a video camera, a still camera, or the like. For ease of use, a camera of smaller volume may be designed on (e.g., embedded in) the eyewear.
In other examples, the patient may capture images by wearing lightweight camera-enabled glasses as the camera device 20; the imaging device 20 may be implemented by Google Glass or the like. In addition, the camera device 20 may be mounted on smart wearable devices such as smart glasses, smart headsets, and smart bracelets.
In some examples, the image processing device 30 may receive visual signals generated by the camera device 20. The image processing device 30 may process the visual signal and send it to the implant device 10 via the transmitting antenna.
In some examples, the image processing device 30 may be connected with the camera device 20. The connection between the imaging device 20 and the image processing device 30 may be a wired connection or a wireless connection. The wired connection can be data line connection, the wireless connection can be Bluetooth connection, WiFi connection, infrared connection, NFC connection or radio frequency connection and the like.
In some examples, the camera device 20 and the image processing device 30 may be configured outside the patient's body. For example, the patient may wear the imaging device 20 on glasses. The patient may also wear the camera device 20 on a wearable accessory such as a headgear, hair band, or brooch. The patient can wear the image processing device 30 on the waist, and the patient can wear the image processing device 30 on the arm, leg, or the like. Examples of the present disclosure are not limited thereto, and for example, the patient may also place the image processing device 30 in, for example, a handbag or a backpack that is carried around.
Hereinafter, the procedure of the bar vision testing method of the retinal stimulator 1 will be described in detail with reference to the drawings. Fig. 2 is a flowchart illustrating a detection method of bar vision of the retinal stimulator 1 according to an example of the present disclosure. Fig. 3 is a grayscale image and a histogram diagram illustrating a detection method of bar vision of the retinal stimulator 1 according to an example of the present disclosure.
In the present embodiment, as shown in fig. 2, the method for detecting the bar-grid vision of the retinal stimulator 1 includes the following steps: (a) acquiring an initial image with the camera device 20 (step S10); (b) performing graying processing on the initial image to obtain a grayscale image, obtaining a histogram of the pixel distribution of the grayscale image from the grayscale image, and obtaining feature information of the grayscale image from the grayscale image and the histogram (step S20); (c) judging whether the grayscale image is a bar-grid vision image according to the feature information and a first-class predetermined threshold (step S30); (d) if the grayscale image is a bar-grid vision image, performing binarization processing on the grayscale image to obtain a binary image (step S40); (e) selecting a target pixel value from the binary image, and obtaining the number of corresponding pixel points with target-pixel-value projections on each of a plurality of predetermined mapping lines arranged in the binary image, according to whether pixels with the target pixel value exist in the direction perpendicular to each line (step S50); and (f) determining a target predetermined mapping line according to the number of corresponding pixel points projected onto each predetermined mapping line and a second-class predetermined threshold, thereby determining the bar-grid direction of the binary image, the bar-grid direction being perpendicular to the direction of the target predetermined mapping line (step S60).
In the present disclosure, the retinal stimulator 1 may acquire an initial image with the camera device 20, perform graying processing on the initial image to obtain a grayscale image, obtain a histogram of the grayscale image, obtain feature information of the grayscale image from the grayscale image and the histogram, and then judge whether the grayscale image is a bar-grid vision image based on a first-class predetermined threshold. If it is, the grayscale image is binarized to obtain a binary image, the number of corresponding pixel points with target-pixel-value projections on each predetermined mapping line is obtained from the binary image and a plurality of predetermined mapping lines, and the target predetermined mapping line is determined from those counts and a second-class predetermined threshold, thereby determining the bar-grid direction of the binary image. In this way, bar-grid vision images can be recognized effectively with a small amount of computation.
In step S10, an initial image may be acquired with the camera device 20. In some examples, the initial image is, for example, an external environment in which the patient is located, such as a life scene, a traffic scene, and so forth. By photographing the external environment by the camera device 20, a desired initial image can be captured. In other examples, the camera 20 may capture an initial image every preset time T. In the present embodiment, the initial image may be a bar-grid vision image for evaluating the retinal stimulator 1.
In this embodiment, the number of pixels of the initial image may be, for example, 300,000, 1,000,000, 2,000,000, 5,000,000, or 12,000,000 (i.e., roughly 0.3 to 12 megapixels), but this embodiment is not limited thereto.
In some examples, the initial image may be an image captured by the camera 20 without any processing. In general, the initial image obtained by capturing the surrounding environment by the image capturing device 20 is a color image. That is, the initial image captured by the image capturing apparatus 20 without any processing may be a color image. In some examples, the color image may be an HSI image. The color image may also be an RGB image. However, examples of the present disclosure are not limited thereto, and the initial image captured by the image capture device 20 may be a grayscale image, a binary image, or the like.
In some examples, the initial image captures the external environment in which the patient is located, where the objects or obstacles present are the information of main interest to the patient; in particular, identifying the outlines of objects or obstacles helps blind or low-vision patients act. Because information such as the color features of a color image is not always needed to reflect the morphological features of objects in the initial image, the outlines of objects or obstacles can be retained relatively well even if part of the color information is removed. In the present embodiment, the initial image may be a bar-grid vision image, and the bar-grid direction to be recognized is the information of main interest to the patient. Since color information is likewise not needed to reflect the shape of the bars (e.g., the bar width, i.e., the corresponding logarithm of the minimum angle of resolution) or the bar direction in the initial image, the shape and direction of the bars can be maintained relatively well even if part of the color information is removed.
In some examples, steps S20 to S60 may be performed in the image processing apparatus 30.
In step S20, a graying process may be performed on the initial image to obtain a grayscale image, a histogram of pixel distribution of the grayscale image may be obtained from the grayscale image, and feature information of the grayscale image may be obtained from the grayscale image and the histogram.
In some examples, in step S20, a graying process may be performed on the initial image to obtain a grayscale image (see fig. 3(a)). A grayscale image can be regarded as a special color image whose three components R, G, B are equal (i.e., R = G = B); it carries less information than an ordinary color image. Each pixel of a grayscale image has a corresponding grayscale value. In some examples, each grayscale value may be represented using, for example, an 8-bit binary number, i.e., the grayscale values range from 0 to 255. In other examples, each grayscale value may be represented by, for example, a 16-bit binary number, or by a binary number of 24 or more bits.
In some examples, the graying processing in step S20 mainly processes the color information of the initial image, and does not change the initial image information other than the color information. In this case, the graying processing can reduce the amount of calculation of the subsequent processing, and is helpful for subsequently recognizing whether the initial image is a bar-grid vision image.
In some examples, the graying method may be the component method, i.e., the value of any one of the three components R, G, B may be selected as the grayscale value. For example, for a pixel in the initial image with R = 70, G = 110, and B = 150, 70 may be selected as the grayscale value of the pixel (using the R component); alternatively, 110 (the G component) or 150 (the B component) may be selected. In this case, a grayscale image can be obtained by processing each pixel of the initial image in turn.

In addition, in some examples, the graying method may be the maximum value method, i.e., the maximum of the three components R, G, B may be selected as the grayscale value. For example, for a pixel with R = 70, G = 110, and B = 150, 150 may be selected as the grayscale value. In this case, a grayscale image can be obtained by processing each pixel of the initial image in turn.

In addition, in some examples, the graying method may be the average value method, i.e., the average of the three components R, G, B may be used as the grayscale value. For example, for a pixel with R = 70, G = 110, and B = 150, the average of the three values is 110, so 110 may be selected as the grayscale value. In this case, a grayscale image can be obtained by processing each pixel of the initial image in turn.

In addition, in some examples, the graying method may be the weighting method, i.e., the three components R, G, B may be combined with different weighting coefficients to obtain the grayscale value. For example, for a pixel with R = 70, G = 110, and B = 150, with weighting coefficients 0.3 for R, 0.5 for G, and 0.2 for B, the grayscale value of the pixel is 0.3 × 70 + 0.5 × 110 + 0.2 × 150 = 106. In this case, a grayscale image can be obtained by processing each pixel of the initial image in turn.
In the above example, the graying process can reduce the data amount (or information amount) of the initial image, facilitate the subsequent processing of the image, and facilitate the subsequent identification of whether the initial image is a bar-grid vision image.
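As an illustration, the following Python sketch implements the four graying methods just described. The RGB channel order and the weights 0.3/0.5/0.2 follow the worked example; everything else is an assumption, not the patent's reference implementation.

```python
import numpy as np

def to_gray(rgb: np.ndarray, method: str = "weighted") -> np.ndarray:
    """Convert an HxWx3 RGB image to grayscale by one of the four methods."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    if method == "component":    # pick a single component, e.g. R
        gray = r
    elif method == "maximum":    # maximum of the three components
        gray = np.maximum(np.maximum(r, g), b)
    elif method == "average":    # mean of the three components
        gray = (r + g + b) / 3.0
    elif method == "weighted":   # weighted sum, as in the worked example
        gray = 0.3 * r + 0.5 * g + 0.2 * b
    else:
        raise ValueError(f"unknown method: {method}")
    return gray.astype(np.uint8)

# For a pixel with R=70, G=110, B=150 the weighted method gives
# 0.3 * 70 + 0.5 * 110 + 0.2 * 150 = 106, matching the example above.
```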
In some examples, as shown in fig. 3, a histogram of the pixel distribution of the grayscale image may be obtained from the grayscale image in step S20, for example using MATLAB, as shown in figs. 3(a) and 3(c). In some examples, as shown in figs. 3(b) and 3(d), the grayscale image may first be divided into grid regions, and a histogram then obtained from the divided grayscale image.
In some examples, the feature information of the grayscale image may be obtained from the grayscale image and the histogram in step S20. In some examples, it may be determined whether the grayscale image (or initial image) is a bar-grid-vision image (described later) based on the feature information.
In some examples, the feature information may include a first number of pixels, which is the number of pixels having a gray value within each pixel interval in the gray image obtained based on the histogram, a gray mean, which is an average of gray values corresponding to all pixels of the gray image, and a region centroid coordinate obtained from the gray image. Thereby, the characteristic information of the gradation image can be obtained.
In some examples, the first number of pixels may be the number of pixels whose grayscale values fall within each pixel interval of the grayscale image. That is, the grayscale range may be divided into several pixel intervals, and the number of pixels in each interval counted from the histogram. For example, the range may be divided into 5 pixel intervals: the first pixel interval (also referred to as the "black pixel interval") may cover grayscale values 0 to 50, the second 51 to 100, the third 101 to 150, the fourth 151 to 200, and the fifth (also referred to as the "white pixel interval") 201 to 255. The number of pixels in each interval may then be counted from the histogram.
In some examples, as above, the pixel intervals may include a black pixel interval, a white pixel interval, and a plurality of other pixel intervals. In some examples, the feature information may also include derived feature information obtained from the histogram and the divided pixel intervals, for example the difference between the number of pixels in the black pixel interval (the "first pixel interval", grayscale values 0 to 50) and the number of pixels in the white pixel interval (the "fifth pixel interval", grayscale values 201 to 255), together with the sum of the numbers of pixels in the other pixel intervals. Thereby, feature information of the grayscale image can be obtained.
In some examples, the feature information further includes a second number of pixels, wherein the second number of pixels is the number of pixels in the grayscale image having a grayscale value less than the grayscale mean and/or the number of pixels having a grayscale value greater than the grayscale mean. Whereby the second number of pixels can be obtained. Specifically, the gray level average value of the gray level image, that is, the average value of the gray level values corresponding to all pixels of the gray level image, may be obtained based on the gray level image, and the number of pixels in the gray level image whose gray level value is greater than the gray level average value may be counted, or the number of pixels in the gray level image whose gray level value is less than the gray level average value may be counted. The second number of pixels may include the number of pixels in the grayscale image having a grayscale value greater than the grayscale mean of the grayscale image, and may also include the number of pixels in the grayscale image having a grayscale value less than the grayscale mean.
In some examples, the feature information in step S20 may include region centroid coordinates obtained from the grayscale image. Specifically, the grayscale image may be binarized, and then the processed grayscale image is processed by an area centroid module (for example, a preset image processing program) to obtain an area centroid coordinate corresponding to the grayscale image.
As above, in step S20, the feature information of the grayscale image may be obtained, whereby it is possible to determine whether the grayscale image is a bar-vision image based on the obtained feature information in the subsequent process.
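The following sketch shows one way to compute the feature information of step S20 under the five-interval split described above. The use of Otsu binarization and OpenCV image moments for the region centroid is an assumption; the patent's "area centroid module" is not specified.

```python
import cv2
import numpy as np

INTERVALS = [(0, 50), (51, 100), (101, 150), (151, 200), (201, 255)]

def extract_features(gray: np.ndarray) -> dict:
    hist = np.bincount(gray.ravel(), minlength=256)
    # first number of pixels: count per grayscale interval
    first_counts = [int(hist[lo:hi + 1].sum()) for lo, hi in INTERVALS]
    mean = float(gray.mean())
    # second number of pixels: below and/or above the grayscale mean
    below = int((gray < mean).sum())
    above = int((gray > mean).sum())
    # derived features: black-minus-white difference and sum of the rest
    derived = (first_counts[0] - first_counts[4], sum(first_counts[1:4]))
    # region centroid from a binarized image (Otsu threshold assumed)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    m = cv2.moments(binary, binaryImage=True)
    centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"]) if m["m00"] else None
    return {"first_counts": first_counts, "mean": mean, "below": below,
            "above": above, "derived": derived, "centroid": centroid}
```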
In step S30, it may be determined whether the grayscale image is a bar-grid-vision image according to the feature information and the first-class predetermined threshold. Specifically, the feature information in step S20 may be compared with a first predetermined threshold, so that it can be determined whether the grayscale image is a bar-vision image.
In some examples, the feature information obtained at step S20 may include the first number of pixels, the grayscale mean, and the region centroid coordinates. In other examples, the feature information obtained at step S20 may also include derived feature information and/or a second number of pixels. In this case, there may be a difference in the specific contents contained in the characteristic information obtained in step S20, and therefore the contents contained in the first-class predetermined threshold may also be changed. In some examples, the feature information includes a first number of pixels, a grayscale mean, a region centroid coordinate, and a second number of pixels, the first class of predetermined threshold having a threshold range corresponding to the class of feature information. Thereby enabling a corresponding comparison of the first type of predetermined threshold value and the characteristic information. In some examples, the first class of predetermined threshold may be obtained based on an image database. Specifically, the first predetermined threshold may be obtained by analyzing images in the image database, and the obtained first predetermined threshold may correspond to each type of feature information obtained in step S20.
In some examples, the images of the image database may include bar-grid vision images with different directions and morphologies (e.g., different bar widths, i.e., different logarithms of the minimum angle of resolution) as well as non-bar-grid images (e.g., digital images easily confused with bar-grid vision images, window images, daily-life images, etc.). By analyzing the images in the image database, for example, the histograms, grayscale means, and region centroid coordinates corresponding to those images may be obtained, and the distinguishing features of bar-grid vision images relative to other images may be found statistically. These distinguishing features correspond to the types of feature information obtained in step S20; the threshold ranges that separate bar-grid vision images from other images for each type of feature information constitute the first-class predetermined threshold, so that the first-class predetermined threshold has a threshold range corresponding to each type of feature information. In this case, whether the grayscale image is a bar-grid vision image can be determined by judging whether each type of feature information obtained in step S20 falls within its corresponding threshold range.
In some examples, the first type predetermined threshold may be obtained in other devices, and the characteristic information may be compared with a preset first type predetermined threshold in step S30. This can reduce the amount of computation. In some examples, the size of the image in the image database may be the same as the size of the grayscale image, e.g., if the grayscale image is 160 × 120, then the size of the image in the image database is also 160 × 120. In some examples, the first type of predetermined threshold may be obtained by performing the same processing as the grayscale image on the images in the image database. In this case, the first-class predetermined threshold value obtained based on the image database can be made valid for the feature information obtained in step S20.
In some examples, the first-class predetermined threshold corresponds to the feature information of step S20. For example, if the feature information includes the first number of pixels, the grayscale mean, and the region centroid coordinates, then the first-class predetermined threshold obtained by statistical analysis of the image database includes a threshold range for each of these quantities. Whether the grayscale image is a bar-grid vision image is determined by checking each type of feature information against its corresponding range: whether the first number of pixels lies within the range for the first number of pixels, whether the grayscale mean lies within the range for the grayscale mean, and whether the region centroid coordinates lie within the range for the centroid coordinates. If the first number of pixels, the grayscale mean, and the region centroid coordinates are all within their respective ranges, the grayscale image is a bar-grid vision image. If the feature information obtained in step S20 further includes derived feature information, the first-class predetermined threshold may likewise include the corresponding range for the derived feature information obtained from the images in the image database.
In some examples, the grayscale image is determined to be a bar-grid vision image if every type of feature information obtained in step S20 is within the corresponding threshold range of the first-class predetermined threshold; otherwise it is not. If the grayscale image is a bar-grid vision image, step S40 may be executed next; if not, the retinal stimulator 1 may process the initial image in other ways (e.g., compression, binarization) and stimulate the retina through the stimulation electrode array 11 so that the patient can still obtain information such as object contours (not described in detail). This makes it possible to distinguish bar-grid vision images from other, non-bar-grid images (e.g., daily-life images) and process them differently, so that the retinal stimulator 1 copes better with different images and recognizes bar-grid vision images better.
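A minimal sketch of the first-class threshold comparison follows. The feature names and numeric ranges are placeholders: the patent derives its ranges statistically from an image database and does not publish concrete values.

```python
# Placeholder ranges; real values would come from image-database statistics.
FIRST_CLASS_THRESHOLDS = {
    "mean": (80.0, 180.0),     # hypothetical range for the grayscale mean
    "below": (7000, 12000),    # hypothetical range for the second pixel count
}

def is_bar_vision_image(features: dict, thresholds: dict) -> bool:
    """True only if every listed feature falls inside its threshold range."""
    for name, (lo, hi) in thresholds.items():
        if not (lo <= features[name] <= hi):
            return False
    return True
```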
In step S40, if the grayscale image is a grid-vision image, the grayscale image may be binarized to obtain a binary image. In some examples, the binarization process may include comparing the magnitude of a grayscale value of each pixel in the grayscale image with a preset grayscale value. The gray values in the gray image can be set into two types, namely a maximum gray value and a minimum gray value, and the binary image can be obtained after the gray values are changed. In some examples, the preset gray value may be set by the relevant person or determined by the relevant algorithm of the software used.
In some examples, the obtained binary image may be subjected to an optimization process. Therefore, the bar grid direction in the bar grid vision image can be conveniently identified subsequently. In some examples, a more suitable binary image may be obtained by adjusting (e.g., increasing) the binarization threshold.
In some examples, the binarized image may be optimized by adding morphological algorithms. In some examples, the binary image is also subject to dilation and erosion processing prior to step S50. For example, the binarized image may be subjected to 3 × 3 expansion processing and then subjected to 3 × 3 erosion processing. Therefore, the bar grid direction in the bar grid vision image can be conveniently identified subsequently. In some examples, the binary image may be processed by 2 x 2 erosion.
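The sketch below combines the binarization of step S40 with the optional 3 × 3 dilation followed by 3 × 3 erosion described above (together a morphological closing). The global threshold of 127 is an assumption; the patent leaves the preset gray value to the implementer or to software defaults.

```python
import cv2
import numpy as np

def binarize_and_clean(gray: np.ndarray) -> np.ndarray:
    # step S40: split gray values into the maximum and minimum gray value
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    # optional morphology: 3x3 expansion, then 3x3 erosion
    kernel = np.ones((3, 3), np.uint8)
    binary = cv2.dilate(binary, kernel)
    binary = cv2.erode(binary, kernel)
    return binary
```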
Fig. 4 is an application diagram illustrating a bar-grid vision detection method of the retinal stimulator 1 according to an example of the present disclosure.
In step S50, a target pixel value may be selected from the binary image, and the number of corresponding pixels on each predetermined mapping line having the projection of the target pixel value is obtained according to whether the target pixel value for projection exists in the vertical direction of the plurality of predetermined mapping lines disposed in the binary image. In some examples, the minimum grayscale value in the binary image may be selected as the target pixel value. This enables the target pixel value to be selected from the binary image. But examples of the present disclosure are not limited thereto, and the maximum gradation value in the binary image may also be selected as the target pixel value.
In some examples, the bar direction of the bar vision image is selected from one of a horizontal direction, a vertical direction, a first direction and a second direction, wherein the first direction and the second direction are respectively at an angle to the horizontal direction. In some examples, as shown in fig. 4, the angle γ between the first direction and the horizontal direction (i.e., the horizontal line L) may be 45 °, and the angle θ between the second direction and the horizontal direction (i.e., the horizontal line L) may be 135 °. In some examples, the bar direction of the bar vision image may refer to the bar direction in fig. 5. Thereby facilitating subsequent determination of the bar orientation of the bar vision image. In some examples, in step S50, the binary image may be used as a coordinate map, and each pixel point in the binary image may be used as a coordinate point having specific coordinates, for example, as shown in fig. 4, the coordinate of the pixel point corresponding to the point O is (0, 0).
In some examples, the directions of the plurality of predetermined mapping lines may be the horizontal direction, the vertical direction, the first direction, and the second direction, respectively, where the first and second directions each form an angle with the horizontal direction. The predetermined mapping lines may be drawn in the binary image according to these directions, each running from one side of the binary image to the other (see fig. 4; for example, line C runs from the lower side to the upper side of the binary image), with the directions being the horizontal direction, the vertical direction, the first direction at γ = 45° to the horizontal (see line C in fig. 4), and the second direction at θ = 135° to the horizontal (see line D in fig. 4). This facilitates determining the predetermined mapping lines. In some examples, part or all of each predetermined mapping line within the binary image may be taken as its effective mapping portion, and the number of corresponding pixel points with target-pixel-value projections may be counted on the effective mapping portion of each line. If the perpendicular from a target pixel to a predetermined mapping line has no intersection with the line's effective mapping portion, the target pixel has no corresponding pixel point on that line. In this case, the subsequent counting of corresponding pixel points with target-pixel-value projections on each predetermined mapping line is facilitated.
In some examples, the number of corresponding pixel points with target-pixel-value projections on each predetermined mapping line may be obtained by checking, along the direction perpendicular to each line, whether pixels having the target pixel value exist. Specifically, for each pixel point on a predetermined mapping line, it is judged whether a target pixel point (a pixel having the target pixel value) lies on the perpendicular through that point; if so, that point on the line is a corresponding pixel point. The number of corresponding pixel points on each predetermined mapping line can then be counted. In this way, every point on every predetermined mapping line can be classified, and the number of corresponding pixel points with target-pixel-value projections on each line can be obtained.
In some examples, a perpendicular may be dropped from each target pixel point (a pixel having the target pixel value) in the binary image to each predetermined mapping line, and the intersection point taken as the corresponding pixel point of that target pixel point on that line. Since the coordinate points of each predetermined mapping line are pixel points (integer abscissa and ordinate), the corresponding pixel point of each target pixel point on each line should also be a pixel point. If the foot of the perpendicular from a target pixel point has a non-integer abscissa or ordinate, an adjacent pixel point of the intersection on the mapping line may be used as the corresponding pixel point. However, examples of the present disclosure are not limited to this: such a target pixel point may instead be treated as having no projection on the line, i.e., no corresponding pixel point, when counting. For example, as shown in fig. 4, in a binary image of size 160 × 120, let the target pixel point have coordinates (x0, y0). Line A is the effective mapping portion of the horizontal predetermined mapping line and satisfies y = 110; the corresponding pixel point on line A is (x0, 110). Line B is the effective mapping portion of the vertical predetermined mapping line and satisfies x = 150; the corresponding pixel point on line B is (150, y0). Line C is the effective mapping portion of the predetermined mapping line in the first direction and satisfies y = x − 24; the corresponding pixel point on line C is ((x0 + y0)/2 + 12, (x0 + y0)/2 − 12). Line D is the effective mapping portion of the predetermined mapping line in the second direction and satisfies y = −x + 144; the corresponding pixel point on line D is (72 − (y0 − x0)/2, (y0 − x0)/2 + 72). If the target pixel point M in fig. 4 has coordinates (153, 0), then M has no corresponding pixel point on line D, and the corresponding pixel point of M on line C should theoretically be (177/2, 129/2); since no such point exists on line C, an adjacent point (88, 64) or (89, 65) may be used as the corresponding pixel point of M on line C, or M may be regarded as having no corresponding pixel point on line C, i.e., not counted.
In some examples, when counting the number of corresponding pixels on the predetermined mapping line, the same corresponding pixels on the predetermined mapping line of the target pixels may be counted only once.
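The following sketch counts corresponding pixel points on the four example mapping lines of fig. 4 (y = 110, x = 150, y = x − 24, y = −x + 144) for a 160 × 120 binary image. Non-integer perpendicular feet are discarded and duplicates are counted once via sets, as described above; treating feet that land outside the image as having no corresponding pixel point is an assumption consistent with the example of point M.

```python
import numpy as np

W, H = 160, 120  # image size taken from the fig. 4 example

def in_image(x: int, y: int) -> bool:
    return 0 <= x < W and 0 <= y < H

def projection_counts(binary: np.ndarray, target: int = 0) -> dict:
    """Count unique projected points per mapping line for target-valued pixels."""
    ys, xs = np.nonzero(binary == target)
    hits = {"A": set(), "B": set(), "C": set(), "D": set()}
    for x0, y0 in zip(xs.tolist(), ys.tolist()):
        hits["A"].add((x0, 110))              # horizontal line: y = 110
        hits["B"].add((150, y0))              # vertical line:   x = 150
        if (x0 + y0) % 2 == 0:                # line C: y = x - 24
            s = (x0 + y0) // 2
            if in_image(s + 12, s - 12):
                hits["C"].add((s + 12, s - 12))
        if (y0 - x0) % 2 == 0:                # line D: y = -x + 144
            d = (y0 - x0) // 2
            if in_image(72 - d, d + 72):
                hits["D"].add((72 - d, d + 72))
    return {name: len(points) for name, points in hits.items()}
```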
As described above, in step S50, the number of corresponding pixel points projected onto each predetermined mapping line by the target pixel points (pixels having the target pixel value) in the binary image may be obtained.
In step S60, a target predetermined mapping line is determined according to the number of corresponding pixel points projected by the target pixel value on each predetermined mapping line and the second predetermined threshold, so as to determine a bar grid direction on the binary image, where the bar grid direction on the binary image is perpendicular to the direction of the target predetermined mapping line.
In some examples, whether the corresponding pixel points on each predetermined mapping line are continuous may be determined by comparing the number of the corresponding pixel points projected by the target pixel point having the target pixel value on each predetermined mapping line obtained in step S50 with the second type predetermined threshold, and the target predetermined mapping line may be determined from the plurality of predetermined mapping lines. Thereby enabling the determination of the target predetermined mapping line.
In some examples, the bar direction may be one of a horizontal direction, a vertical direction, a direction (or a first direction) having an angle γ of 45 ° with the horizontal direction, and a direction (or a second direction) having an angle θ of 135 ° with the horizontal direction. The plurality of predetermined mapping lines may be in a horizontal direction, a vertical direction, a direction having an angle of 45 ° with the horizontal direction, and a direction having an angle of 135 ° with the horizontal direction. In some examples, the plurality of target pixel points may be discontinuous at corresponding pixel points on a predetermined mapping line perpendicular to the bar grid direction of the binary image, which may be a target predetermined mapping line. For example, as shown in fig. 4, if the bar grid direction is a direction having an angle of 135 ° with the horizontal direction, the corresponding pixel points of the plurality of target pixel points on the line C may be discontinuous, in which case, the predetermined mapping line corresponding to the line C is the target predetermined mapping line.
In some examples, since the number of pixel points on the effective mapping portion of each predetermined mapping line is finite, whether the corresponding pixel points of the target pixel points on a predetermined mapping line are continuous can be judged from the number of corresponding pixel points on its effective mapping portion; the target predetermined mapping line, and hence the bar-grid direction, can thus be determined. In some examples, the number of corresponding pixel points on each predetermined mapping line may be compared with the second-class predetermined threshold to judge whether the corresponding pixel points on that line are continuous.
In some examples, the second type of predetermined threshold may be obtained in other devices, and the number of the target pixels on each predetermined mapping line in the binary image obtained in step S50 may be compared with the second type of predetermined threshold set in advance in step S60.
In some examples, the second-class predetermined threshold may be obtained based on an image database. Simulation experiments may be performed on multiple bar-grid vision images with different bar widths and bar directions in the image database; for example, the logarithm of the minimum angle of resolution may range from 1.5 to 2.9, with bar directions being the horizontal direction, the vertical direction, the first direction at γ = 45° to the horizontal, and the second direction at θ = 135° to the horizontal. These bar-grid vision images under different conditions (i.e., different bar widths and/or directions) are processed in the same way as the binary image, for example through steps S40 and S50; the number of corresponding pixel points with target-pixel-value projections on each predetermined mapping line is then counted for each condition, and the second-class predetermined threshold is obtained by analysis and statistics. In some examples, the second-class predetermined threshold includes, for each bar direction (horizontal, vertical, 45°, 135°), a threshold range for the number of corresponding pixel points on each predetermined mapping line. In this case, the counts obtained in step S50 can be compared with the second-class predetermined threshold to determine the target predetermined mapping line (i.e., the line on which the corresponding pixel points are discontinuous) and thus the bar-grid direction of the binary image; the target predetermined mapping line is perpendicular to the bar-grid direction.
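A hypothetical sketch of the step-S60 decision follows: a mapping line whose projected-point count falls below its second-class threshold is treated as the discontinuous (target) line, and the bar-grid direction is taken perpendicular to it. The numeric thresholds are placeholders, and reading "discontinuous" as "count below threshold" is an interpretation of the statistics described above, not the patent's stated rule.

```python
# Placeholder thresholds; real values would come from image-database statistics.
SECOND_CLASS_THRESHOLDS = {"A": 150, "B": 110, "C": 130, "D": 130}

# A bar direction is perpendicular to its target mapping line.
PERPENDICULAR_DIRECTION = {
    "A": "vertical",       # horizontal target line -> vertical bars
    "B": "horizontal",     # vertical target line   -> horizontal bars
    "C": "135 degrees",    # 45-degree target line  -> 135-degree bars
    "D": "45 degrees",     # 135-degree target line -> 45-degree bars
}

def pick_bar_direction(counts: dict, thresholds: dict):
    for name, count in counts.items():
        if count < thresholds[name]:   # discontinuous projection on this line
            return PERPENDICULAR_DIRECTION[name]
    return None  # no target line found: not a recognizable bar-grid image
```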
In some examples, the above steps convert the initial image into a binary image without changing the bar-grid direction; that is, the bar-grid direction of the initial image is the same as that of the binary image, so the bar-grid direction of the initial image can thereby be determined.
Fig. 5 is a schematic diagram illustrating a direction template to which examples of the present disclosure relate. Fig. 6 is a schematic diagram showing the arrangement position of the stimulation electrode array 11 according to an example of the present disclosure.
In some examples, the image processing apparatus 30 may set in advance a plurality of direction templates corresponding to the bar grid directions. For example, as shown in fig. 5, the plurality of direction templates include a horizontal template corresponding to the horizontal direction (see fig. 5(a)), a vertical template corresponding to the vertical direction (see fig. 5(b)), a first direction template corresponding to the first direction (see fig. 5(c)), and a second direction template corresponding to the second direction (see fig. 5(d)). In some examples, the image processing apparatus 30 may select, from the preset direction templates and based on the determined bar grid direction, the direction template corresponding to that bar grid direction, for example the second direction template corresponding to the bar grid direction of fig. 4 (see fig. 5(d)).
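A minimal sketch of the template registry and lookup follows, assuming a small square electrode grid and stripe patterns loosely modelled on fig. 5; the array size and stripe geometry are illustrative, not taken from the patent.

```python
import numpy as np

N = 6  # illustrative electrode-grid size; the patent does not fix one

def _stripes(direction: str, n: int = N) -> np.ndarray:
    """Boolean stripe template for one of the four directions of fig. 5."""
    r, c = np.indices((n, n))
    return {
        "horizontal": r % 2 == 0,        # alternating rows, cf. fig. 5(a)
        "vertical":   c % 2 == 0,        # alternating columns, cf. fig. 5(b)
        "45deg":      (r + c) % 2 == 0,  # first direction, cf. fig. 5(c)
        "135deg":     (r - c) % 2 == 0,  # second direction, cf. fig. 5(d)
    }[direction]

DIRECTION_TEMPLATES = {d: _stripes(d)
                       for d in ("horizontal", "vertical", "45deg", "135deg")}

def select_template(bar_direction: str) -> np.ndarray:
    """Pick the preset direction template matching the determined bar grid
    direction (e.g. "135deg" for the second direction of fig. 4)."""
    return DIRECTION_TEMPLATES[bar_direction]
```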
In some examples, the plurality of direction templates configured in the image processing apparatus 30 may match the arrangement position of the stimulation electrode array 11. For example, as shown in fig. 5 and fig. 6, the stimulation electrode array 11 is horizontally disposed, and the plurality of direction templates preset in the image processing apparatus 30 may be those shown in fig. 5.
In some examples, the image processing device 30 may transmit a visual signal to the implant device 10 through the transmitting antenna according to the selected direction template (e.g., the second direction template), and the implant device 10 may convert the visual signal into an electrical stimulation signal that stimulates the ganglion cells or bipolar cells of the retina through the stimulation electrode array 11, thereby enabling the patient to recognize the bar grid direction of the initial image. For example, as shown in figs. 4 to 6, the stimulation electrode array 11 is horizontally disposed; the image processing device 30 transmits a visual signal to the implant device 10 according to the second direction template (see fig. 5(d)) selected for the bar grid direction of fig. 4; the implant device 10 converts the visual signal into an electrical stimulation signal, and the black electrodes in fig. 6 (e.g., the stimulation electrodes 101 and 103) are operated to stimulate the ganglion cells or bipolar cells of the retina, thereby enabling the patient to recognize the bar grid direction.
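For completeness, one way the selected template could gate the stimulation electrodes is sketched below. Here `array` and `pulse` are hypothetical handles for the electrode hardware; the patent only states that the implant device converts the visual signal into an electrical stimulation signal through the stimulation electrode array 11.

```python
def drive_electrodes(template, array, pulse):
    """Operate the electrodes selected by the template (the 'black'
    electrodes of fig. 6, e.g. stimulation electrodes 101 and 103).
    `array[r][c]` and `pulse` are assumed, illustrative interfaces."""
    n_rows, n_cols = template.shape
    for r in range(n_rows):
        for c in range(n_cols):
            if template[r, c]:
                pulse(array[r][c])  # stimulate ganglion/bipolar cells at this site
```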
While the present disclosure has been described in detail in connection with the drawings and examples, it should be understood that the above description is not intended to limit the disclosure in any way. Those skilled in the art can make modifications and variations to the present disclosure as needed without departing from the true spirit and scope of the disclosure, which fall within the scope of the disclosure.

Claims (10)

1. A bar-grid vision detection method, characterized in that
the method comprises the following steps:
(a) acquiring an initial image;
(b) carrying out graying processing on the initial image to obtain a grayscale image, obtaining a histogram of pixel distribution of the grayscale image according to the grayscale image, and obtaining feature information of the grayscale image according to the grayscale image and the histogram;
(c) judging whether the grayscale image is a bar-grid vision image according to the feature information and a first-type predetermined threshold;
(d) if the grayscale image is a bar-grid vision image, performing binarization processing on the grayscale image to obtain a binary image;
(e) selecting a target pixel value from the binary image, and obtaining the number of corresponding pixel points projected by the target pixel points onto each predetermined mapping line according to whether target pixel points having the target pixel value exist in the direction perpendicular to each of a plurality of predetermined mapping lines arranged in the binary image; and
(f) determining a target predetermined mapping line according to the number of corresponding pixel points projected by the target pixel points onto each predetermined mapping line and a second-type predetermined threshold, thereby determining the bar grid direction on the binary image, wherein the bar grid direction on the binary image is perpendicular to the direction of the target predetermined mapping line.
2. The detection method according to claim 1, characterized in that:
The first-type predetermined threshold comprises the threshold ranges, obtained by statistics distinguishing bar-grid vision images from other images, corresponding to each kind of feature information; the target pixel value is the minimum gray value or the maximum gray value in the binary image; each predetermined mapping line extends from one side of the binary image to the other side; and the directions of the predetermined mapping lines are respectively the horizontal direction, the vertical direction, a first direction at an included angle of 45° with the horizontal direction, and a second direction at an included angle of 135° with the horizontal direction.
3. The detection method according to claim 2, characterized in that:
A plurality of bar-grid vision images in an image database, having different bar grid widths and bar grid directions, are subjected to the same processing as the binary image; the number of corresponding pixel points of the pixel points having the target pixel value on each predetermined mapping line is counted separately for each of these images; the second-type predetermined threshold is then obtained through analysis and statistics; and whether the corresponding pixel points on each predetermined mapping line are continuous is judged by comparing their number with the second-type predetermined threshold, so as to determine the target predetermined mapping line.
4. The detection method according to claim 1, characterized in that:
The feature information comprises a first pixel number, a gray mean value and region centroid coordinates obtained from the grayscale image, wherein the first pixel number is the number of pixels whose gray values fall within each pixel interval of the grayscale image, obtained based on the histogram; the gray mean value is the mean of the gray values of all pixels of the grayscale image; the pixel intervals comprise a black pixel interval, a white pixel interval and a plurality of other pixel intervals; the black pixel interval comprises pixels with gray values of 0-50, and the white pixel interval comprises pixels with gray values of 201-255.
5. The detection method according to claim 4, characterized in that:
the feature information further includes a second number of pixels, where the second number of pixels is the number of pixels in the grayscale image having grayscale values smaller than the grayscale mean and/or the number of pixels having grayscale values larger than the grayscale mean.
6. The detection method according to claim 5, characterized in that:
The feature information further includes the difference between the number of pixels in the black pixel interval and the number of pixels in the white pixel interval, and the sum of the numbers of pixels in the plurality of other pixel intervals.
7. The detection method according to claim 1, characterized in that:
The bar grid direction of the bar-grid vision image is one of the horizontal direction, the vertical direction, the first direction at an included angle of 45° with the horizontal direction, and the second direction at an included angle of 135° with the horizontal direction.
8. The detection method according to claim 1, characterized in that:
Partial line segments or all line segments of the predetermined mapping lines of the binary image are selected as effective corresponding portions, and the number of corresponding pixel points projected by the target pixel points onto the predetermined mapping lines is counted within the effective corresponding portions.
9. The detection method according to claim 1, characterized in that:
In step (d), the binary image is further optimized by applying a morphological algorithm.
10. The detection method according to claim 1, characterized in that:
The bar grid direction of the initial image is the same as the bar grid direction of the binary image.
CN202110943058.XA 2019-12-31 2020-04-01 Detection method of bar vision Pending CN113592948A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201911415252X 2019-12-31
CN201911415252 2019-12-31
CN202010251451.8A CN111445527B (en) 2019-12-31 2020-04-01 Method for detecting bar-grid vision of retina stimulator

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010251451.8A Division CN111445527B (en) 2019-12-31 2020-04-01 Method for detecting bar-grid vision of retina stimulator

Publications (1)

Publication Number Publication Date
CN113592948A true CN113592948A (en) 2021-11-02

Family

ID=71649536

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202110943058.XA Pending CN113592948A (en) 2019-12-31 2020-04-01 Detection method of bar vision
CN202010251451.8A Active CN111445527B (en) 2019-12-31 2020-04-01 Method for detecting bar-grid vision of retina stimulator
CN202110944423.9A Pending CN113643373A (en) 2019-12-31 2020-04-01 Retina stimulator with bar vision detection

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN202010251451.8A Active CN111445527B (en) 2019-12-31 2020-04-01 Method for detecting bar-grid vision of retina stimulator
CN202110944423.9A Pending CN113643373A (en) 2019-12-31 2020-04-01 Retina stimulator with bar vision detection

Country Status (1)

Country Link
CN (3) CN113592948A (en)

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100586403C (en) * 2008-03-06 2010-02-03 上海交通大学 Visual sense prosthesis image processing device and method
US9037252B2 (en) * 2009-02-27 2015-05-19 Pixium Vision Sa Visual prosthesis and retina stimulation device for same
EP2482760B1 (en) * 2009-09-30 2020-03-25 National ICT Australia Limited Object tracking for artificial vision
CN106037627B (en) * 2016-05-20 2017-12-22 上海青研科技有限公司 A kind of full-automatic eyesight exam method of infant and device
CN107818553B (en) * 2016-09-12 2020-04-07 京东方科技集团股份有限公司 Image gray value adjusting method and device
CN109224291B (en) * 2017-12-29 2021-03-02 深圳硅基仿生科技有限公司 Image processing method and device of retina stimulator and retina stimulator
CN109248378B (en) * 2018-09-09 2020-10-16 深圳硅基仿生科技有限公司 Video processing device and method of retina stimulator and retina stimulator
CN109146985B (en) * 2018-09-09 2019-06-14 深圳硅基仿生科技有限公司 Image processing method, device and the retina stimulator of retina stimulator
CN111311625A (en) * 2018-09-09 2020-06-19 深圳硅基仿生科技有限公司 Image processing method and image processing apparatus

Also Published As

Publication number Publication date
CN111445527B (en) 2021-09-07
CN111445527A (en) 2020-07-24
CN113643373A (en) 2021-11-12

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000 Area A, 4/F, 3 #, Tingwei Industrial Park, No. 6, Liufang Road, Xin'an Street, Shenzhen, Guangdong

Applicant after: Shenzhen Silicon Bionics Technology Co.,Ltd.

Address before: 518000 Area A, 4/F, 3 #, Tingwei Industrial Park, No. 6, Liufang Road, Xin'an Street, Shenzhen, Guangdong

Applicant before: SHENZHEN SIBIONICS TECHNOLOGY Co.,Ltd.