CN109308708B - Low-pixel image processing method and device and retina stimulator - Google Patents


Info

Publication number
CN109308708B
CN109308708B CN201811047463.8A CN201811047463A
Authority
CN
China
Prior art keywords
target
area
target object
image processing
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811047463.8A
Other languages
Chinese (zh)
Other versions
CN109308708A (en
Inventor
陈大伟
陈志�
王追
钟灿武
Current Assignee
Shenzhen Silicon Bionics Technology Co ltd
Original Assignee
Shenzhen Sibionics Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Sibionics Technology Co Ltd filed Critical Shenzhen Sibionics Technology Co Ltd
Priority to CN202010048619.5A priority Critical patent/CN111311625A/en
Priority to CN201811047463.8A priority patent/CN109308708B/en
Publication of CN109308708A publication Critical patent/CN109308708A/en
Application granted granted Critical
Publication of CN109308708B publication Critical patent/CN109308708B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure describes a low-pixel image processing method, comprising: acquiring an initial image; distinguishing a background region and a target region including a plurality of target objects from the initial image, and removing the background region; calculating the area of each target object from the target region; and removing the target objects whose area is smaller than the prescribed threshold value, and retaining the target objects whose area is greater than or equal to the prescribed threshold value. Thereby, image information of the target object required by the patient can be effectively retained to help the patient recognize the target object.

Description

Low-pixel image processing method and device and retina stimulator
Technical Field
The disclosure relates to the technical field of bionics, in particular to a low-pixel image processing method and device and a retina stimulator.
Background
Normal vision develops when the photoreceptor cells on the retina within the eyeball convert external light signals into visual signals. Visual signals reach the cerebral cortex via bipolar cells and ganglion cells, creating light sensation. However, various retinal diseases such as RP (retinitis pigmentosa) and AMD (age-related macular degeneration) can obstruct this light-sensing pathway, resulting in decreased vision or blindness.
With advances in technology, the above-mentioned retinal diseases can now be addressed with an artificial retina or retinal stimulator, through which the brain can receive external stimulus signals and obtain improved vision. Existing retinal stimulator systems typically include a camera device and a video processing device disposed outside the patient's body, and an implant that is placed into the patient's eyeball. The external camera device captures an initial image and converts the obtained image into a visual signal, the video processing device processes the visual signal and sends the processed signal to the implant, and the implant further converts the visual signal into an electric stimulation signal to stimulate ganglion cells or bipolar cells on the retina, so that light sensation is generated for the patient.
In a conventional video processing apparatus, the visual signal must be compressed so that it can be matched to the limited stimulation electrodes of the implant. However, when the camera device captures a complex scene, some small objects may shrink to only one or two pixel points after compression, which is not enough for the patient to distinguish, and the pixel points obtained from such small objects may become "noise" that interferes with the patient's recognition ability.
Disclosure of Invention
The present disclosure has been made in view of the above-mentioned state of the art, and an object thereof is to provide a low-pixel image processing method, apparatus, and retinal stimulator capable of effectively retaining image information of a target object required by a patient to help the patient recognize the target object.
To this end, a first aspect of the present disclosure provides a low-pixel image processing method, including: acquiring an initial image; distinguishing a background region and a target region including a plurality of target objects from the initial image, and removing the background region; calculating an area of each of the target objects from the target region; and removing the target objects with areas smaller than a specified threshold value, and retaining the target objects with areas greater than or equal to the specified threshold value.
In the present disclosure, a background region and a target region including a plurality of target objects are distinguished, the background region is removed to retain the target region, the area of each target object in the target region is calculated, and the target objects with areas greater than or equal to a prescribed threshold value are retained. In this case, removing the background region preliminarily reduces the image pixels, and removing the target objects smaller than the prescribed threshold effectively retains the image information of the target objects required by the patient, helping the patient identify them.
In the image processing method according to the present disclosure, optionally, the area of the target object is calculated by counting the number of pixels of the target object. Thereby, the area of the target object can be obtained based on the number of pixels.
In addition, in the image processing method according to the present disclosure, it is preferable that, in calculating the area of the target object, a plurality of coordinate points along a periphery of an edge of the target object are selected so that a region formed by connecting the plurality of coordinate points fits the area of the target object, and the area of the region is calculated. Thereby, the area of the target object can be obtained based on the fitting region of the coordinate points.
In addition, in the image processing method according to the present disclosure, optionally, in calculating the area of the target object, edge recognition is performed on the target object, and the number of pixels at the edge of the target object is counted. Thus, the area of the target object can be inferred from the number of edge pixels.
In the image processing method according to the present disclosure, the areas of the target objects may be sorted in descending or ascending order, and a median value in the arrangement order may be selected as the predetermined threshold value. Thereby, the predetermined threshold value can be obtained easily.
In the image processing method according to the present disclosure, an average value of the areas of the target objects may be calculated, and the average value may be the predetermined threshold. Thereby, the predetermined threshold value can be obtained easily.
In the image processing method according to the present disclosure, the pixel value of the target object whose area of the target object is smaller than the predetermined threshold may be set to zero. This enables the removal of a target object having an area smaller than a predetermined threshold.
A second aspect of the present disclosure provides a low-pixel image processing apparatus, comprising: an acquisition module for acquiring an initial image; a segmentation module for distinguishing a background region from a target region including a plurality of target objects from the initial image, removing the background region; a calculation module for calculating an area of each of the target objects from the target region; and the selection module is used for removing the target objects with the areas smaller than a specified threshold value and reserving the target objects with the areas larger than or equal to the specified threshold value.
In the present disclosure, the segmentation module distinguishes a background region and a target region including a plurality of target objects and removes the background region to retain the target region, the calculation module calculates the area of each target object in the target region, and the selection module retains the target objects with areas greater than or equal to a prescribed threshold. In this case, removing the background region preliminarily reduces the image pixels, and removing the target objects smaller than the prescribed threshold effectively retains the image information of the target objects required by the patient, helping the patient identify them.
In the image processing apparatus according to the present disclosure, the calculation module may calculate the area of the target object by counting the number of pixels of the target object. Thereby, the area of the target object can be obtained based on the number of pixels.
In the image processing apparatus according to the present disclosure, the calculation module may be configured to select a plurality of coordinate points along a periphery of the edge of the target object, fit a region formed by connecting the plurality of coordinate points to an area of the target object, and calculate the area of the region. Thereby, the area of the target object can be obtained based on the fitting region of the coordinate points.
In addition, in the image processing apparatus according to the present disclosure, optionally, the calculation module performs edge recognition on the target object, and counts the number of pixels on the edge of the target object. Thus, the area of the target object can be inferred from the number of edge pixels.
In the image processing apparatus according to the present disclosure, the selection module may sort the areas of the target objects in descending order, and select a median value in the sort order as the predetermined threshold value. Thereby, the predetermined threshold value can be obtained easily.
In the image processing apparatus according to the present disclosure, the selection module may calculate an average value of the areas of the target objects, and the average value may be the predetermined threshold. Thereby, the predetermined threshold value can be obtained easily.
In the image processing apparatus according to the present disclosure, the selection module may set a pixel value of a target object having an area of the target object smaller than a predetermined threshold to zero. This enables the removal of a target object having an area smaller than a predetermined threshold.
A third aspect of the present disclosure provides a retinal stimulator, characterized by comprising: a camera device for capturing a video image and converting the video image into a visual signal; a video processing device at least comprising the image processing device of any one of the above, the video processing device being connected with the image pickup device, the video processing device being configured to process the visual signal and send the processed visual signal to an implanted device via a transmitting antenna; and the implant device is used for converting the received visual signals into pulse current signals so as to deliver the pulse current signals to the retina.
According to the present disclosure, it is possible to provide a low-pixel image processing method, apparatus, and retinal stimulator that can effectively retain image information of a target object required by a patient to help the patient recognize the target object.
Drawings
Embodiments of the present disclosure will now be explained in further detail, by way of example only, with reference to the accompanying drawings, in which:
fig. 1 is a schematic view of a retina stimulator according to the present disclosure.
Fig. 2 is a schematic structural diagram of a low-pixel image processing apparatus according to the present disclosure.
Fig. 3 is a schematic structural diagram of a low-pixel image processing apparatus according to the present disclosure.
Fig. 4 is a flow chart of a low-pixel image processing method according to the present disclosure.
Fig. 5 is a flowchart illustrating an example of an area calculation method according to the present disclosure.
Fig. 6 is a flowchart schematically illustrating a modification 1 of the area calculation method according to the present disclosure.
Fig. 7 is a flowchart schematically illustrating a modification 2 of the area calculation method according to the present disclosure.
Detailed Description
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the following description, the same components are denoted by the same reference numerals, and redundant description thereof is omitted. The drawings are schematic and the ratio of the dimensions of the components and the shapes of the components may be different from the actual ones.
In addition, the subtitles and the like referred to in the following description of the present invention are not intended to limit the content or the scope of the present invention, and serve only as a cue for reading. Such a subtitle should neither be understood as a content for segmenting an article, nor should the content under the subtitle be limited to only the scope of the subtitle.
Fig. 1 is a schematic view of a retina stimulator according to the present disclosure. The retinal stimulator 1 according to the present disclosure may be suitable for patients who have retinal pathology leading to blindness, but have intact visual pathways such as bipolar cells, ganglion cells, etc. The retinal stimulator 1 may sometimes also be referred to as a "retinal stimulator system", "artificial retina", "artificial retinal system", or the like.
In some examples, as shown in fig. 1, the retinal stimulator 1 may include a camera device 10, a video processing device 20, and an implant device 30. The implant device 30 may receive the visual signal and generate a pulsed current signal based on the visual signal. Wherein, the visual signal can be collected by the camera device 10 and processed by the video processing device 20.
In some examples, the camera 10 may be used to capture video images and convert the video images into visual signals. For example, the camera 10 may capture video images of the environment in which the patient is located.
In some examples, the image capture apparatus 10 may be a device having an image capture function, such as a video camera, a still camera, or the like. For ease of use, a camera of smaller volume may be designed on (e.g., embedded in) the eyewear.
In other examples, the patient may also capture video images by wearing lightweight camera-enabled glasses as the camera 10. For example, the image pickup device 10 may be implemented with Google Glass or the like. In addition, the imaging device 10 may be mounted on smart wearable devices such as smart glasses, smart headwear, and smart bracelets.
In some examples, the video processing device 20 may be connected with the camera device 10. The camera device 10 may be connected to the video processing device 20 by a wired connection or a wireless connection.
In some examples, the wired connection may be a data line connection. Additionally, in some examples, the wireless connection may be a bluetooth connection, a WiFi connection, an infrared connection, an NFC connection, or a radio frequency connection, among others.
In other examples, the video processing device 20 and the camera device 10 may be disposed outside the patient's body. For example, the patient may place the imaging device 10 on glasses, or on a wearable accessory such as a headgear, hair band, or brooch. The patient may wear the video processing device 20 at the waist, or on the arm, leg, or the like. Examples of the present disclosure are not limited thereto; for example, the patient may also place the video processing device 20 in a handbag or backpack that is carried around.
In some examples, the video processing device 20 may receive visual signals generated by the camera device 10. Video processing device 20 processes the visual signals and sends them to implanted device 30 via the transmitting antenna.
Additionally, in some examples, video processing device 20 may include a low-pixel image processing device. The relevant modules and functions of the low-pixel image processing apparatus are described in detail later.
In some examples, implant device 30 may be used to convert the received visual signal into a pulsed current signal to deliver the pulsed current signal to the retina.
In some examples, implant device 30 may include a prescribed number of stimulation electrodes. Stimulation electrodes (sometimes simply referred to as "electrodes") may generate electrical stimulation signals based on the visual signals. In particular, the implant device 30 may receive a visual signal, and the stimulation electrodes may convert the received visual signal into a pulsed current signal, e.g., a bi-directional pulsed current signal, as an electrical stimulation signal, thereby delivering the bi-directional pulsed current signal to tissue of the retina, e.g., ganglion cells or bipolar cells of the retina, to produce light sensation. Alternatively, implant device 30 may be implanted within a human body, such as an eyeball.
Fig. 2 is a schematic structural diagram of a low-pixel image processing apparatus according to the present disclosure. The low-pixel image processing apparatus 200 (which may be simply referred to as the image processing apparatus 200) according to the present disclosure may be used for the retinal stimulator 1 as a functional block of image processing. The low-pixel image processing apparatus 200 may be included in the video processing apparatus 20.
In some examples, as shown in fig. 2, the image processing apparatus 200 may include an acquisition module 211, a segmentation module 212, a calculation module 213, and a selection module 214. The segmentation module 212 may distinguish the background region from the target region of the initial image obtained by the acquisition module 211, remove the background region, and reserve the target region. The calculation module 213 may calculate the area of the target object in the target region and the selection module 214 may retain the target object greater than or equal to a prescribed threshold.
In some examples, the acquisition module 211 may be used to acquire an initial image. The initial image may be a color image or a grayscale image. Specifically, the acquisition module 211 may obtain the initial image based on a visual signal output by the image pickup device 10. The pixels of the initial image may be determined by the pixels of an imaging lens (not shown) of the imaging device 10. For example, the imaging lens may have 300,000, 500,000, 1,000,000, 2,000,000, 5,000,000, or 12,000,000 pixels. The number of pixels of the initial image may accordingly match the lens, for example, 300,000, 500,000, 1,000,000, 2,000,000, 5,000,000, or 12,000,000.
In some examples, the segmentation module 212 may be configured to distinguish a background region from a target region comprising a plurality of target objects from the initial image, removing the background region.
In some examples, the initial image may generally include a background region and a target region. The target area may be an area containing information (e.g., objects or obstacles) required by the patient, and the background region may be the region of the initial image other than the target region. For example, when an initial image is obtained based on the environment in which the patient is located, the area in which an object or obstacle is located may be the target area, and the remaining area may be the background area.
In some examples, the target area may include one, two, or more target objects. For example, the object or obstacle may be a target object of the target area. The number of target objects is the number of objects or obstacles.
In some examples, the pixels of the initial image include pixels of the background area, since the background area is a part of the initial image. In this case, the pixels of the background region are also processed when the initial image is processed.
Generally, due to the limitations of the implantation space and the design process, the number of stimulation electrodes disposed on the implant in the patient's eyeball is limited, for example, 60, 120, or 256 electrodes, and the stimulation electrodes have very limited information-receiving capability. If the image information directly captured by the camera device 10 of the retinal stimulator 1 were mapped onto these limited stimulation electrodes, a large amount of information would be lost, resulting in image distortion. In addition, the background region of the initial image captured by the imaging device 10 occupies part of the initial image, reducing the proportion of pixels belonging to the target region required by the patient, so that the image information of the target region cannot be well transferred to the patient. In this case, in the present embodiment, the information contained in the target region is useful to the patient relative to the background region, and the background region may be removed by the segmentation module 212. This increases the proportion in subsequent images of the target objects whose areas are greater than or equal to the specified threshold, optimizes the subsequent processing of the target region, and better helps the patient identify the target objects.
In some examples, segmentation module 212 may distinguish between the background region and the target region in the initial image by threshold comparison. Specifically, a preset threshold may be set and compared against the pixels of the initial image to distinguish the background region from the target region.
In some examples, the preset threshold may be an average pixel value of the initial image. Wherein the average pixel value may be an average pixel value of all pixels of the initial image. Examples of the present disclosure are not limited thereto, and the preset threshold may also be an average gradation value of the initial image.
In some examples, the average pixel value may also be the average pixel value of a portion of the pixels of the initial image. The partial pixels may be obtained based on the mode of the pixel values of the initial image. Specifically, the pixel values of the initial image may be arranged, the mode of the pixel values may be acquired, and the pixels whose values equal the mode may be selected as the partial pixels. Examples of the present disclosure are not limited thereto; the partial pixels may also be obtained based on a preset ratio. For example, the pixels of the initial image may be arranged, and a preset ratio of the pixels may be acquired as the partial pixels in the arrangement order. The preset ratio is the ratio of the partial pixels to all pixels of the initial image, and may be, for example, 50%, 60%, or 80%.
In some examples, the arrangement order may be ascending or descending, or may be arranged from left to right, top to bottom, according to the position of the pixels. But the arrangement order is not limited to the above.
In some examples, the preset threshold may be a median of pixel values of the initial image. Specifically, the pixels of the initial image may be arranged in an ascending order or a descending order, and a median value of the pixels of the initial image may be determined to obtain the preset threshold value.
In some examples, the preset threshold may be obtained based on various regions of the initial image. Specifically, the initial image may be divided into a plurality of regions, an average pixel value of each region may be calculated, and a target region among the plurality of regions and an average pixel value of the target region may be determined. Wherein the difference between the average pixel values of the target region and the other regions is the largest. The average pixel value of the target area may be used as a preset threshold value of the initial image.
In some examples, based on the preset threshold described above, the segmentation module 212 may compare the preset threshold with a pixel value of each pixel of the initial image, and divide the initial image into a first region greater than or equal to the preset threshold and a second region smaller than the preset threshold. The background area is usually larger than the target area, and therefore, the first area and the second area can be compared, and the larger area of the first area and the second area is selected as the background area.
In some examples, segmentation module 212 may also set the pixel values of the pixels of the background region to zero. Setting the pixel value to zero may be considered that the pixel is not present, i.e., the segmentation module 212 may remove the background region. Thus, the background region is removed, and the complexity of subsequent image processing can be reduced.
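The threshold-comparison segmentation and background zeroing described above can be sketched in Python (a hypothetical illustration, not code from the patent; the function name and the use of the all-pixel average as the preset threshold are assumptions):

```python
def remove_background(image, preset_threshold=None):
    """Split an image (2-D list of gray values) into two regions by a
    preset threshold, then zero out the larger region as background."""
    pixels = [p for row in image for p in row]
    if preset_threshold is None:
        # One option from the description: the average pixel value.
        preset_threshold = sum(pixels) / len(pixels)
    # First region: pixels >= threshold; second region: pixels < threshold.
    first = sum(1 for p in pixels if p >= preset_threshold)
    second = len(pixels) - first
    # The larger of the two regions is taken to be the background.
    background_is_first = first > second
    return [
        [0 if ((p >= preset_threshold) == background_is_first) else p
         for p in row]
        for row in image
    ]
```

For a mostly dark scene with a few bright object pixels, the dark majority falls below the average and is zeroed, leaving only the bright target region.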
In some examples, calculation module 213 may be used to calculate the area of each target object from the target region. The target area may contain one, two or more target objects. For example, the object or obstacle may be a target object of the target area. The number of target objects is the number of objects or obstacles.
In some examples, the calculation module 213 may calculate the area of the target object by counting the number of pixels of the target object. Specifically, different target objects typically occupy different areas, with large target objects typically occupying a larger area than small target objects. In addition, the number of pixels is in a proportional relationship with the area of the target object, and in this case, the number of pixels of a large target object is also larger than the number of pixels of a small target object. The number of pixels of different target objects can be counted, and the number of pixels of each target object can be analogized to the area of each target object. Thereby, the area of the target object can be obtained according to the number of pixels.
In some examples, the calculation module 213 may select a plurality of coordinate points along the periphery of the edge of the target object so that a region formed by connecting the plurality of coordinate points fits the area of the target object, and calculate the area of the region. Wherein the edge periphery may include the edge and the vicinity of the edge. That is, the selected coordinate point may be a point on the edge of the target object or may be near the edge of the target object.
In some examples, the number of coordinate points selected may be, for example, three, four, and more than four. For example, the number of the selected coordinate points may be three, and a region formed by connecting the three coordinate points may be fitted with a triangular region approximating the target object, and the area of the triangular region may be calculated. The area of the triangular region may approximately reflect the area of the target object.
In some examples, the number of the selected coordinate points may be four, and a region formed by connecting the four coordinate points may be fitted to a quadrilateral region approximating the target object, and the area of the quadrilateral region may be calculated. The area of the quadrilateral region may approximately reflect the area of the target object.
In some examples, the number of coordinate points may approach the number of edge points of the target object. In this case, connecting the coordinate points fits a region closely approximating the target object, and the area of that region can be calculated to approximately reflect the area of the target object.
In this case, the larger the number of coordinate points, the closer the fitted region is to the target object. That is, the larger the number of coordinate points is, the closer the area of the region to be fitted is to the area of the target object. Thus, the area of the target object can be obtained from the fitted region.
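The area of the region formed by connecting the selected coordinate points can be computed with the shoelace formula (a standard polygon-area method offered here as one possible realization; the patent does not name a specific formula):

```python
def polygon_area(points):
    """Shoelace formula: area of the polygon formed by connecting
    coordinate points chosen along the object's edge, in order."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```

Three points yield the triangular fit described above, four points the quadrilateral fit, and more points an increasingly close approximation of the object's true area.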
In some examples, the calculation module 213 may perform edge identification for the target object, and count the number of pixels of the edge of the target object. Edges may also sometimes be referred to as "contours".
In some examples, different target objects typically occupy different areas, a large target object typically occupies a larger area than a small target object, and a large target object correspondingly has a larger outline than a small target object. In this case, the number of pixels of the contour of each target object can stand in for the area of that target object, and thus the area can be inferred from the number of contour pixels.
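Counting contour pixels can be sketched as follows (an assumed definition: a non-zero pixel is an edge pixel if any 4-neighbour is zero or outside the image; the patent leaves the edge-detection method open):

```python
def edge_pixel_count(image):
    """Count contour pixels: non-zero pixels with at least one zero or
    out-of-image 4-neighbour. Larger objects yield longer contours, so
    the count can stand in for relative area."""
    h, w = len(image), len(image[0])
    count = 0
    for y in range(h):
        for x in range(w):
            if not image[y][x]:
                continue
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if not (0 <= ny < h and 0 <= nx < w) or not image[ny][nx]:
                    count += 1  # boundary pixel found
                    break
    return count
```

For a solid 4×4 block, only the 12 border pixels are counted; the 4 interior pixels are not.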
Additionally, in some examples, the selection module 214 may be configured to remove target objects whose area is less than a prescribed threshold and retain target objects whose area is greater than or equal to the prescribed threshold.
In some examples, the prescribed threshold may be derived from the areas of the target objects. The selection module 214 may sort the areas of the respective target objects in descending (or ascending) order and select the median of the sorted areas as the prescribed threshold.
In some examples, the prescribed threshold may be derived from the average of the areas of the target objects. The selection module 214 may calculate the average of the areas of the target objects and use that average as the prescribed threshold. Here, the average refers to the average area of the target objects within the target region.
In some examples, the prescribed threshold may be derived from the average area of a subset of the target objects. The areas of all target objects are sorted in descending order, the several top-ranked target objects are selected as the subset, and the average of their areas is calculated. This average is then used as the prescribed threshold.
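The three ways of obtaining the prescribed threshold described above (median of the sorted areas, mean of all areas, and mean of the largest few areas) can be sketched as follows; the function names are illustrative, not from the patent:

```python
def median_threshold(areas):
    """Prescribed threshold as the median of the sorted areas."""
    s = sorted(areas)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def mean_threshold(areas):
    """Prescribed threshold as the average of all object areas."""
    return sum(areas) / len(areas)

def top_k_mean_threshold(areas, k):
    """Prescribed threshold as the average area of the k largest objects."""
    top = sorted(areas, reverse=True)[:k]
    return sum(top) / len(top)
```

A top-k mean makes the threshold track the large objects, so small clutter is removed more aggressively than with the overall mean.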
Additionally, in some examples, the selection module 214 may set the pixel values of any target object whose area is less than the prescribed threshold to zero. This removes target objects whose area falls below the prescribed threshold.
In some examples, the number of stimulation electrodes may correspond to the pixels of the compressed image. Because the number of stimulation electrodes disposed on the implant in the patient's eyeball is limited, their capacity to receive and convey image information is also limited. In this case, the prescribed threshold can be used to select the target objects most useful to the patient.
In some examples, when there are two or more target objects (e.g., objects or obstacles), information about a target object with a larger area may matter more to the patient than information about one with a smaller area. In this case, target objects smaller than the prescribed threshold may be removed; that is, their pixel values may be set to zero.
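The removal step just described, zeroing the pixels of every object whose area falls below the prescribed threshold, can be sketched as follows. The sketch assumes a label map and per-object areas have already been computed; the data layout (lists of lists, a dict of areas) is illustrative:

```python
def zero_small_objects(image, labels, areas, threshold):
    """Set pixel values to zero wherever a pixel belongs to an object
    whose area is below `threshold`; larger objects are retained.

    `image` and `labels` are row-major lists of lists; labels[y][x] is 0
    for background or the id of the target object covering that pixel,
    and `areas` maps object id -> area.
    """
    out = [row[:] for row in image]  # copy so the input is untouched
    for y, row in enumerate(labels):
        for x, lbl in enumerate(row):
            if lbl != 0 and areas[lbl] < threshold:
                out[y][x] = 0
    return out
```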
In some examples, as the patient moves closer to a large target object, the initial image and the target region within it change, and the image processing apparatus 200 performs the processing described above on the changed initial image. For example, a target object that had a small area before the change may have a large area in the changed initial image; in that case, the patient can then see the target object that was previously small.
In this case, removing the background region provides an initial reduction in image pixels, and removing target objects smaller than the prescribed threshold reduces the pixels further. This streamlines the subsequent processing of the target region and better helps the patient identify target objects.
Here, the functions of the above-mentioned respective units of the image processing apparatus 200, including the acquisition module 211, the segmentation module 212, the calculation module 213, and the selection module 214, can be realized by the image processing apparatus 200 of fig. 3 described below. This is explained in detail below.
Fig. 3 is a schematic structural diagram of a low-pixel image processing apparatus according to the present disclosure. In some examples, as shown in fig. 3, the image processing apparatus 200 may include a processor 221, a memory 222, and a communication interface 223.
In some examples, the processor 221 may be configured to control and manage actions performed by the image processing apparatus 200. For example, the processor 221 may be configured to implement the functions of the respective units of the image processing apparatus 200 described above. In addition, the processor 221 may also be used to support the image processing apparatus 200 in performing steps S110-S140 in fig. 4 and/or other processes for the techniques described herein.
In some examples, the processor 221 may be a Central Processing Unit (CPU), a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, transistor logic, a hardware component, or any combination thereof. It may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor 221 may also be a combination of computing devices, for example one or more microprocessors, or a DSP combined with a microprocessor.
In some examples, the communication interface 223 may be used to support communication of the image processing apparatus 200 with other devices (e.g., the camera apparatus 10).
Additionally, in some examples, the communication interface 223 may be an interface, a transceiver, a transceiving circuit, or the like. "Communication interface" is used here as a generic term that may cover one or more interfaces.
In some examples, memory 222 may be used to store program codes and data for image processing apparatus 200.
Additionally, in some examples, the image processing apparatus 200 may also include a communication bus 224. The communication bus 224 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 224 may also be divided into an address bus, a data bus, a control bus, etc. There may be one or more communication buses 224. For ease of illustration, only one line is shown in FIG. 3, but this does not represent only one bus or one type of bus.
The above is the image processing apparatus 200 according to the present disclosure, and the image processing method according to the present disclosure is described in detail below with reference to a flowchart. The respective steps of the image processing method may correspond to the respective units of the image processing apparatus 200.
Fig. 4 is a flowchart of a low-pixel image processing method according to the present disclosure. Fig. 5 is a flowchart illustrating an example of an area calculation method according to the present disclosure. Fig. 6 is a flowchart schematically illustrating a modification 1 of the area calculation method according to the present disclosure. Fig. 7 is a flowchart schematically illustrating a modification 2 of the area calculation method according to the present disclosure.
In some examples, the steps of the low-pixel image processing method may correspond respectively to the functions of the modules of the low-pixel image processing apparatus 200 described above.
In some examples, as shown in fig. 4, the low-pixel image processing method includes acquiring an initial image (step S110).
In step S110, an initial image may be obtained based on the visual signal output by the camera device 10. The pixels of the initial image may be determined by the pixels of an imaging lens (not shown) of the camera device 10. The initial image may be a color image or a grayscale image.
In some examples, as shown in fig. 4, the low-pixel image processing method may further include distinguishing a background region and a target region including a plurality of target objects from the initial image, and removing the background region (step S120).
In step S120, the initial image may generally include a background region and a target region. The target region is the region containing information (e.g., objects or obstacles) required by the patient, and the background region is the region of the initial image other than the target region.
In some examples, the background region and the target region in the initial image may be distinguished by threshold comparison. Specifically, a preset threshold may be set, and the background region may be distinguished from the target region by comparing the initial image against the preset threshold. The preset threshold may be obtained in a manner similar to that used by the segmentation module 212.
In some examples, the preset threshold may be compared with the pixel value of each pixel of the initial image, dividing the initial image into a first region whose pixel values are greater than or equal to the preset threshold and a second region whose pixel values are less than it. Because the background region is usually larger than the target region, the first and second regions can be compared and the larger one selected as the background region.
In some examples, after the background region has been distinguished, the pixel values of its pixels may be set to zero, which can be regarded as removing those pixels. The background region is thereby removed, reducing the complexity of subsequent image processing.
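Step S120 as described above can be sketched as follows. This is a simplified illustration: the preset threshold is assumed to be given, and the larger of the two regions is taken as the background and zeroed:

```python
def remove_background(image, preset_threshold):
    """Split pixels into a first region (>= threshold) and a second
    region (< threshold), take the larger region as the background,
    and set its pixel values to zero."""
    first, second = [], []
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            (first if value >= preset_threshold else second).append((y, x))
    background = first if len(first) > len(second) else second
    out = [row[:] for row in image]
    for y, x in background:
        out[y][x] = 0
    return out
```

In a real pipeline the preset threshold would be derived from the image (e.g. as the segmentation module 212 does) rather than hard-coded.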
In some examples, as shown in fig. 4, the low-pixel image processing method may further include calculating the area of each target object from the target region (step S130).
In step S130, the target region may contain one, two, or more than two target objects. For example, an object or obstacle may be a target object of the target region, and the number of target objects is the number of such objects or obstacles.
In some examples, as shown in fig. 5, step S130 may include counting the number of pixels of the target object to calculate the area of the target object (step S1310).
In step S1310, a large target object typically occupies a larger area than a small one. Because the number of pixels is proportional to the area of the target object, a large target object also has more pixels than a small one. The number of pixels of each target object can therefore be counted and used as a proxy for its area, so the area of the target object can be obtained from its pixel count.
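A minimal sketch of step S1310 follows, assuming the target region has already been binarized (non-zero = object). It uses 4-connected flood fill to separate the target objects and count their pixels; the patent does not fix a particular labelling procedure, so this is one common choice:

```python
def object_pixel_counts(binary):
    """Label 4-connected components of a binary image (lists of lists)
    and return a dict mapping object id -> pixel count (its area proxy)."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    counts = {}
    next_label = 1
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not labels[y][x]:
                # flood-fill a new object, counting its pixels
                labels[y][x] = next_label
                stack, count = [(y, x)], 0
                while stack:
                    cy, cx = stack.pop()
                    count += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_label
                            stack.append((ny, nx))
                counts[next_label] = count
                next_label += 1
    return counts
```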
In other examples, as shown in fig. 6, step S130 may include selecting a plurality of coordinate points along the periphery of the edge of the target object (step S1320).
In step S1320, the edge periphery includes the edge itself and its vicinity. That is, a selected coordinate point may be a point on the edge of the target object or a point near it. The number of selected coordinate points may be, for example, three, four, or more than four.
In some examples, as shown in fig. 6, step S130 may include fitting a region formed by connecting a plurality of coordinate points to an area of the target object (step S1321).
In step S1321, when three coordinate points are selected, the region formed by connecting them may be fitted as a triangular region approximating the target object, and the area of the triangular region approximately reflects the area of the target object.
In some examples, when four coordinate points are selected, the region formed by connecting them may be fitted as a quadrilateral region approximating the target object. The area of the quadrilateral region approximately reflects the area of the target object.
In this case, the larger the number of coordinate points, the closer the fitted region is to the target object. That is, the larger the number of coordinate points, the closer the area of the fitted region is to the area of the target object.
In some examples, as shown in fig. 6, step S130 may include calculating an area of the region (step S1322).
In step S1322, the area of the region is calculated according to the shape of the fitted region. For example, if the fitted region is a triangle, its area can be calculated using the triangle area formula.
Additionally, in some examples, as shown in fig. 7, step S130 may include performing edge recognition for the target object (step S1330).
In step S1330, a large target object typically occupies a larger area than a small one, and correspondingly has a larger contour. Because the number of contour pixels is proportional to the size of the contour, the contour pixel count of each target object may be used as a proxy for its area.
In some examples, the contour of the target object may be identified by a method of edge detection.
In some examples, as shown in fig. 7, step S130 may further include counting the number of pixels of the edge of the target object (step S1331). The area of the target object can thus be inferred from its contour pixel count.
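A simple sketch of step S1331 follows. It counts a foreground pixel as a contour pixel if any 4-neighbour is background or lies outside the image; this is one common convention, since the patent does not fix a specific edge-detection method:

```python
def contour_pixel_count(binary):
    """Count the contour pixels of a binary image (lists of lists):
    foreground pixels with at least one background or out-of-bounds
    4-neighbour."""
    h, w = len(binary), len(binary[0])
    count = 0
    for y in range(h):
        for x in range(w):
            if not binary[y][x]:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not binary[ny][nx]:
                    count += 1  # touches background: contour pixel
                    break
    return count
```

For a filled 3x3 square this counts the 8 border pixels and skips the interior one, so the count tracks the contour length rather than the filled area.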
In some examples, as shown in fig. 4, the low-pixel image processing method may further include removing a target object whose area is smaller than a prescribed threshold value, and leaving a target object whose area is greater than or equal to the prescribed threshold value (step S140).
In step S140, the areas of the respective target objects may be sorted in descending or ascending order, and the median of the sorted areas may be selected as the prescribed threshold.
In step S140, the average of the areas of the target objects may be calculated and used as the prescribed threshold, where the average refers to the average area of the target objects within the target region.
In some examples, the areas of the respective target objects may be sorted in descending order, the several top-ranked target objects may be selected as a subset, and the average of their areas may be calculated and used as the prescribed threshold.
In addition, step S140 may set the pixel values of any target object whose area is less than the prescribed threshold to zero.
In the present disclosure, a background region and a target region including a plurality of target objects are distinguished, the background region is removed so that the target region is retained, the area of each target object in the target region is calculated, and target objects with areas greater than or equal to a prescribed threshold are retained. Removing the background region provides an initial reduction in image pixels, removing target objects smaller than the prescribed threshold reduces them further, and subsequent processing of the target region is thereby streamlined, helping the patient identify target objects.
While the invention has been specifically described above with reference to the drawings and examples, it will be understood that the above description is not intended to limit the invention in any way. Those skilled in the art can make modifications and variations as needed without departing from the true spirit and scope of the invention, and such modifications and variations fall within the scope of the invention.

Claims (15)

1. A low-pixel image processing method is an image processing method for a retina stimulator including an image pickup device, a video processing device, and an implantation device having a prescribed number of stimulation electrodes,
the image processing method comprises the following steps:
acquiring an initial image of an environment in which a patient wearing the retinal stimulator is located;
distinguishing a background region and a target region including a plurality of target objects from the initial image, and removing the background region from the initial image to obtain the target region including the plurality of target objects;
calculating an area of each of the target objects from the target region; and
Removing the target objects with the area of the target objects smaller than a specified threshold value, and reserving the target objects with the area of the target objects larger than or equal to the specified threshold value,
wherein the number of stimulation electrodes corresponds to pixels of the compressed initial image.
2. The image processing method according to claim 1,
calculating the area of the target object by counting the number of pixels of the target object.
3. The image processing method according to claim 1,
in calculating the area of the target object, a plurality of coordinate points along the periphery of the edge of the target object are selected so that a region formed by connecting the plurality of coordinate points fits the area of the target object, and the area of the region is calculated.
4. The image processing method according to claim 1,
in the calculation of the area of the target object, edge recognition is performed on the target object, and the number of pixels on the edge of the target object is counted.
5. The image processing method according to claim 1,
sorting the areas of the target objects in descending or ascending order, and selecting the median in the sorted order as the prescribed threshold.
6. The image processing method according to claim 1,
calculating an average value of the area of the target object, the average value being the prescribed threshold value.
7. The image processing method according to claim 1,
setting the pixel value of the target object with the area of the target object smaller than a specified threshold value to be zero.
8. A low-pixel image processing device for a retinal stimulator including an image pickup device, a video processing device having the image processing device, and an implantation device having a predetermined number of stimulation electrodes,
the image processing apparatus includes:
an acquisition module for acquiring an initial image of an environment in which a patient wearing the retinal stimulator is located;
a segmentation module for distinguishing a background region and a target region comprising a plurality of target objects from the initial image, and removing the background region from the initial image to obtain the target region comprising the plurality of target objects;
a calculation module for calculating an area of each of the target objects from the target region; and
a selection module for removing target objects having an area of the target objects smaller than a prescribed threshold, retaining target objects having an area of the target objects greater than or equal to the prescribed threshold,
wherein the number of stimulation electrodes corresponds to pixels of the compressed initial image.
9. The image processing apparatus according to claim 8,
the calculation module calculates the area of the target object by counting the number of pixels of the target object.
10. The image processing apparatus according to claim 8,
the calculation module selects a plurality of coordinate points along the periphery of the edge of the target object so that a region formed by connecting the plurality of coordinate points fits the area of the target object, and calculates the area of the region.
11. The image processing apparatus according to claim 8,
and the calculation module carries out edge identification on the target object and counts the number of pixels on the edge of the target object.
12. The image processing apparatus according to claim 8,
the selection module sorts the areas of the target objects in descending order and selects the median in the sorted order as the prescribed threshold.
13. The image processing apparatus according to claim 8,
the selection module calculates an average value of the areas of the target objects, the average value being the prescribed threshold value.
14. The image processing apparatus according to claim 8,
the selection module sets a pixel value of a target object having an area of the target object smaller than a prescribed threshold to zero.
15. A retinal stimulator, characterized in that,
the method comprises the following steps:
a camera device for capturing a video image and converting the video image into a visual signal;
video processing means comprising at least an image processing device according to any one of claims 8 to 14, connected to said camera means, for processing said visual signals and sending them to an implanted device via a transmitting antenna; and
the implant device for converting the received visual signal into a pulsed current signal for delivery to a retina.
CN201811047463.8A 2018-09-09 2018-09-09 Low-pixel image processing method and device and retina stimulator Active CN109308708B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010048619.5A CN111311625A (en) 2018-09-09 2018-09-09 Image processing method and image processing apparatus
CN201811047463.8A CN109308708B (en) 2018-09-09 2018-09-09 Low-pixel image processing method and device and retina stimulator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811047463.8A CN109308708B (en) 2018-09-09 2018-09-09 Low-pixel image processing method and device and retina stimulator

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202010048619.5A Division CN111311625A (en) 2018-09-09 2018-09-09 Image processing method and image processing apparatus

Publications (2)

Publication Number Publication Date
CN109308708A CN109308708A (en) 2019-02-05
CN109308708B true CN109308708B (en) 2020-04-03

Family

ID=65225011

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010048619.5A Pending CN111311625A (en) 2018-09-09 2018-09-09 Image processing method and image processing apparatus
CN201811047463.8A Active CN109308708B (en) 2018-09-09 2018-09-09 Low-pixel image processing method and device and retina stimulator

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202010048619.5A Pending CN111311625A (en) 2018-09-09 2018-09-09 Image processing method and image processing apparatus

Country Status (1)

Country Link
CN (2) CN111311625A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111445527B (en) * 2019-12-31 2021-09-07 深圳硅基仿生科技有限公司 Method for detecting bar-grid vision of retina stimulator
CN116929311B (en) * 2023-09-19 2024-02-02 中铁第一勘察设计院集团有限公司 Section deformation monitoring method, device and system for zoom imaging and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070237408A1 (en) * 2006-04-05 2007-10-11 Kabushiki Kaisha Toshiba Image processing apparatus and image processing method
CN101826228B (en) * 2010-05-14 2012-05-30 上海理工大学 Detection method of bus passenger moving objects based on background estimation
CN104599502B (en) * 2015-02-13 2017-01-25 重庆邮电大学 Method for traffic flow statistics based on video monitoring
CN104748684B (en) * 2015-04-13 2017-04-05 北方工业大学 Visual detection method and device for crankshaft shoulder back chipping
CN105136070A (en) * 2015-08-21 2015-12-09 上海植物园 Method for utilizing electronic device to measure plant leaf area
CN106390285B (en) * 2016-09-30 2017-10-17 深圳硅基仿生科技有限公司 Charge compensating circuit, charge compensation method and retinal prosthesis system
CN107657639A (en) * 2017-08-09 2018-02-02 武汉高德智感科技有限公司 A kind of method and apparatus of quickly positioning target
CN108230341B (en) * 2018-03-07 2021-12-17 汕头大学 Eyeground image blood vessel segmentation method based on layered matting algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on artificial visual information processing and optimized representation in retinal prostheses; Wang Jing; China Doctoral Dissertations Full-text Database, Medicine and Health Sciences; 2016-01-15; dissertation body *

Also Published As

Publication number Publication date
CN109308708A (en) 2019-02-05
CN111311625A (en) 2020-06-19

Similar Documents

Publication Publication Date Title
US10013599B2 (en) Face detection, augmentation, spatial cueing and clutter reduction for the visually impaired
CN109308708B (en) Low-pixel image processing method and device and retina stimulator
EP1933256B1 (en) Eye closure recognition system and method
KR102561991B1 (en) Eyewear-mountable eye tracking device
CN109993115B (en) Image processing method and device and wearable device
CN108629261B (en) Remote identity recognition method and system and computer readable recording medium
WO2016044296A1 (en) Method and system for detecting obstacles for a visual prosthesis
WO2012124146A1 (en) Image processing device and image processing method
EP1868138A2 (en) Method of tracking a human eye in a video image
JP2012190350A (en) Image processing device and image processing method
CN110060311B (en) Image processing device of retina stimulator
US10624538B2 (en) Eyelid detection device, drowsiness determination device, and eyelid detection method
JP2018007792A (en) Expression recognition diagnosis support device
CN110462625B (en) Face recognition device
CN111105368A (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN110796632B (en) Pig counting device
JP2003317084A (en) System, method and program for detecting gaze from face image
KR20010016242A (en) Pupil acquisition method using eye image
CN112972889B (en) Image processing device and method, and retina stimulator
CN111445527B (en) Method for detecting bar-grid vision of retina stimulator
CN112972889A (en) Image processing device and method and retina stimulator
CN112396667B (en) Method for matching electrode positions of retina stimulator
KR102305387B1 (en) Nystagmus test device and test method for the diagnosis of benign paroxysmal positional vertigo
WO2022015411A1 (en) On-eye image processing
CN113538462A (en) Image processing method and device, computer readable storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Chen Dawei

Inventor after: Chen Zhi

Inventor after: Wang Zhui

Inventor after: Zhong Canwu

Inventor before: Chen Dawei

Inventor before: Wang Zhui

Inventor before: Chen Zhi

Inventor before: Zhong Canwu

GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 518000 A, 4 building, 3 Ting Wei Industrial Park, 6 Baoan District Road, Xin'an, Shenzhen, Guangdong.

Patentee after: Shenzhen Silicon Bionics Technology Co.,Ltd.

Address before: 518000 A, 4 building, 3 Ting Wei Industrial Park, 6 Baoan District Road, Xin'an, Shenzhen, Guangdong.

Patentee before: SHENZHEN SIBIONICS TECHNOLOGY Co.,Ltd.