WO2020248389A1 - Region recognition method, apparatus, computing device and computer-readable storage medium - Google Patents


Info

Publication number
WO2020248389A1
WO2020248389A1 (application PCT/CN2019/103439)
Authority
WO
WIPO (PCT)
Prior art keywords: area, target, information, fundus, color
Prior art date
Application number
PCT/CN2019/103439
Other languages
English (en)
French (fr)
Inventor
王立龙
王瑞
刘莉芬
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2020248389A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Definitions

  • This application relates to the field of image recognition technology, and in particular to an area recognition method, device, computing device, and computer-readable storage medium based on an edge recognition algorithm.
  • a color fundus camera is usually used to take pictures of the fundus of the user to obtain a color photo of the fundus of the user.
  • the inventor of the present application has found in practice that when a user needs to identify the areas contained in a color fundus photograph, the current color fundus camera cannot automatically recognize those areas, so they can only be identified manually, which makes the current way of identifying the areas contained in color fundus photographs poorly intelligent.
  • the present application provides an area recognition method, device, computing device, and computer-readable storage medium based on an edge recognition algorithm.
  • an area recognition method based on an edge recognition algorithm including:
  • the target area including a drusen area, a pigment-enhanced area, and a depigmented area;
  • target information of the target area including at least the area of the target area and the long diameter of the target area;
  • an area recognition device based on an edge recognition algorithm including:
  • the cropping unit is used to crop the macular area from the obtained color fundus photos of the user;
  • a segmentation unit configured to segment a target area in the macula area according to an edge recognition algorithm, the target area including a drusen area, a pigment-enhanced area, and a depigmented area;
  • the statistical unit is configured to perform statistics on the target information of each of the target regions, and comprehensively obtain the quantitative information of the target region.
  • a computing device including a memory and a processor, the memory stores computer-readable instructions, and when the computer-readable instructions are executed by the processor, the processor executes the steps of the above-mentioned area recognition method based on the edge recognition algorithm.
  • a computer-readable storage medium storing computer-readable instructions.
  • when the computer-readable instructions are executed by one or more processors, the one or more processors execute the steps of the aforementioned area recognition method based on the edge recognition algorithm.
  • the above-mentioned area recognition method, device, computing device, and computer-readable storage medium based on edge recognition algorithm can identify the drusen area, the pigment-enhanced area, and the depigmented area in the user's fundus color photos that need to be identified, and Specific quantitative information such as the area, length, and quantity of the target area can also be collected, which improves the accuracy of target area recognition.
  • the intelligence of recognizing the areas contained in the color fundus photos can be improved based on the quantitative information of the target area.
  • Fig. 1 is a schematic diagram showing a device according to an exemplary embodiment
  • Fig. 2 is a flow chart showing an area recognition method based on an edge recognition algorithm according to an exemplary embodiment
  • Fig. 3 is a flow chart showing a method of region recognition based on an edge recognition algorithm according to another exemplary embodiment
  • Fig. 4 is a block diagram showing an area recognition device based on an edge recognition algorithm according to an exemplary embodiment
  • Fig. 5 is a block diagram showing an area recognition device based on an edge recognition algorithm according to another exemplary embodiment.
  • the implementation environment of this application can be portable mobile devices, such as smart phones, tablet computers, and desktop computers.
  • the images stored in the portable mobile device can be: images downloaded from the Internet; images received through a wireless connection or wired connection; images captured by its built-in camera.
  • Fig. 1 is a schematic diagram showing a device according to an exemplary embodiment.
  • the apparatus 100 may be the aforementioned portable mobile device.
  • the device 100 may include one or more of the following components: a processing component 102, a memory 104, a power supply component 106, a multimedia component 108, an audio component 110, a sensor component 114, and a communication component 116.
  • the processing component 102 generally controls the overall operations of the device 100, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 102 may include one or more processors 118 to execute instructions to complete all or part of the steps of the following method.
  • the processing component 102 may include one or more modules to facilitate the interaction between the processing component 102 and other components.
  • the processing component 102 may include a multimedia module to facilitate the interaction between the multimedia component 108 and the processing component 102.
  • the memory 104 is configured to store various types of data to support operations in the device 100. Examples of these data include instructions for any application or method operating on the device 100.
  • the memory 104 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the memory 104 also stores one or more modules, and the one or more modules are configured to be executed by the one or more processors 118 to complete all or part of the steps in the method shown below.
  • the power supply component 106 provides power to various components of the device 100.
  • the power supply component 106 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 100.
  • the multimedia component 108 includes a screen that provides an output interface between the device 100 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel. If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure related to the touch or swipe operation.
  • the screen may also include an organic light-emitting display (OLED).
  • the audio component 110 is configured to output and/or input audio signals.
  • the audio component 110 includes a microphone (MIC).
  • the microphone is configured to receive an external audio signal.
  • the received audio signal can be further stored in the memory 104 or sent via the communication component 116.
  • the audio component 110 further includes a speaker for outputting audio signals.
  • the sensor component 114 includes one or more sensors for providing the device 100 with various aspects of state evaluation.
  • the sensor component 114 can detect the open/close state of the device 100 and the relative positioning of components.
  • the sensor component 114 can also detect the position change of the device 100 or a component of the device 100 and the temperature change of the device 100.
  • the sensor component 114 may also include a magnetic sensor, a pressure sensor or a temperature sensor.
  • the communication component 116 is configured to facilitate wired or wireless communication between the apparatus 100 and other devices.
  • the device 100 can access a wireless network based on a communication standard, such as WiFi.
  • the communication component 116 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 116 further includes a near field communication (Near Field Communication, NFC for short) module for facilitating short-range communication.
  • the NFC module can be based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth technology, and other technologies.
  • the apparatus 100 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors, digital signal processing devices, programmable logic devices, field programmable gate arrays, controllers, microcontrollers, microprocessors, or other electronic components to implement the following method.
  • Fig. 2 is a flow chart showing a method for region recognition based on an edge recognition algorithm according to an exemplary embodiment. As shown in Figure 2, this method includes the following steps.
  • Step 201 Cut out the macular area from the acquired color fundus photos of the user.
  • the user's fundus color photos can be obtained by shooting with a dedicated fundus color camera.
  • the macular area can be included in the color fundus photo, and the macular area can be cut out from the color fundus photo.
  • Step 202 Segment the target area in the macula area according to the edge recognition algorithm.
  • the target area includes a drusen area, a pigment-enhanced area, and a depigmented area.
  • AMD (age-related macular degeneration) can be diagnosed from the state of the drusen area, the pigment-enhanced area, and the depigmented area, so recognition can focus on these three target areas. Since these three target regions differ in appearance from the normal macular area, they have obvious boundaries, which allows them to be identified by the edge recognition algorithm, thereby improving the accuracy of target region recognition.
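As a rough illustration of how regions with obvious boundaries might be picked out, the sketch below thresholds a Sobel gradient magnitude on a small grayscale grid. This is a minimal sketch and not the patent's actual algorithm; the threshold value is a hypothetical parameter.

```python
def sobel_edges(img, thresh=2.0):
    """Mark pixels whose Sobel gradient magnitude exceeds thresh.

    img: 2-D list of grayscale values; returns a same-sized 0/1 edge map.
    Border pixels are left as 0 for simplicity.
    """
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses.
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 >= thresh:
                edges[y][x] = 1
    return edges
```

In practice the edge map would then be traced into closed region boundaries; a production system would more likely use an image library's edge detector rather than this hand-rolled kernel.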
  • Step 203 Collect target information of the target area, where the target information includes at least the area of the target area and the long diameter of the target area.
  • the shape of the target area may be a regular figure or an irregular figure, which is not limited in the embodiment of the present application.
  • the area recognition device based on the edge recognition algorithm can calculate the area of each target area and obtain the long diameter of each target area, and the long diameter may be the longest diameter of the target area.
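The area and long diameter described above can be sketched in code as follows, treating a segmented target region as a binary pixel mask. The choice of "largest pairwise distance" for the long diameter is an assumption consistent with the text ("the longest diameter of the target area"), not a formula given in the patent.

```python
def region_metrics(mask):
    """Return (area, long_diameter) for the 1-pixels of a binary mask.

    Area is the pixel count; the long diameter is taken as the largest
    pairwise Euclidean distance between region pixels (O(n^2), which is
    acceptable for small lesion-sized regions).
    """
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    if not pts:
        return 0, 0.0
    area = len(pts)
    diam = max(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for ax, ay in pts for bx, by in pts)
    return area, diam
```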
  • Step 204 Perform statistics on the target information of each target area, and comprehensively obtain quantitative information of the target area.
  • the intelligence of recognizing the area contained in the color fundus photograph can be improved based on the quantitative information of the target area.
  • the above three target regions can be identified through the edge recognition algorithm, thereby improving the accuracy of disease target region recognition.
  • Fig. 3 is a flow chart showing a method for region recognition based on an edge recognition algorithm according to another exemplary embodiment. As shown in Figure 3, this method includes the following steps:
  • Step 301 Capture a color fundus photo of the user with a color fundus camera.
  • before step 301, the following steps may also be performed:
  • the average optic disc radius is calculated according to the optic disc radius and stored as the pre-stored average optic disc radius.
  • a large number of optic disc radii can be acquired from the large set of collected color fundus photographs, and the average optic disc radius can be calculated from these radii, making the pre-stored average optic disc radius more accurate.
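Maintaining the pre-stored average optic disc radius can be sketched as a running average that is updated whenever new optic disc radii are measured. This is one plausible bookkeeping scheme, not a procedure specified by the patent.

```python
def update_average_radius(prev_avg, prev_n, new_radii):
    """Fold newly measured optic disc radii into a stored running average.

    prev_avg: previously stored average radius; prev_n: number of radii
    it was computed from; new_radii: iterable of new measurements.
    Returns the updated (average, count) pair to store.
    """
    total = prev_avg * prev_n + sum(new_radii)
    n = prev_n + len(new_radii)
    return total / n, n
```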
  • Step 302 Use the deep learning detection method to detect the fovea of the macula in the color fundus photograph.
  • a shallow funnel-shaped depression about 2 mm in diameter at the posterior pole of the human eye is the macular area, and the small depression at the center of the macular area is the fovea.
  • the deep learning detection method can detect the macular area in the color fundus photo, and then detect the fovea within the macular area.
  • Step 303 Crop the macular area in the fundus color photograph with the macular fovea as the center and the first preset long diameter as the radius, wherein the first preset long diameter is determined according to a prestored average optic disc radius.
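Step 303's crop can be sketched as extracting a window centered on the detected fovea, with the window radius derived from the pre-stored average optic disc radius. The square window and boundary clamping are implementation assumptions; the patent only specifies a center and a radius.

```python
def crop_centered(img, cx, cy, r):
    """Crop a square window of half-width r centered at (cx, cy).

    img: 2-D list of pixel values; the window is clamped to the image
    bounds, so crops near the edge are simply smaller.
    """
    h, w = len(img), len(img[0])
    x0, x1 = max(0, cx - r), min(w, cx + r + 1)
    y0, y1 = max(0, cy - r), min(h, cy + r + 1)
    return [row[x0:x1] for row in img[y0:y1]]
```

For example, a crop of radius 2 around pixel (5, 5) of a 10x10 image yields a 5x5 patch.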
  • the range of the macular area can be clarified, so that the area recognition device based on the edge recognition algorithm can crop the macular area of the current user according to the preset rules for cropping the macular area, which improves the accuracy of macular area acquisition.
  • Step 304 Acquire personal information of the user.
  • the user's personal information can be output by the user himself, or it can be retrieved by identifying the user's iris and searching for the personal information associated with the identified iris. Since the iris is unique, the user's personal information corresponding to the iris can be uniquely identified, which ensures the accuracy of personal information recognition.
  • Step 305 Determine the user's ethnic information from the personal information.
  • Step 306 Generate a target image optimization algorithm corresponding to the race information according to the automatic image optimization algorithm.
  • Step 307 Perform image optimization on the macula area according to the target image optimization algorithm.
  • the user's personal information can be obtained, and the user's ethnic information can be determined from the user's personal information. Because different races have different pigment concentrations in the macular area of the fundus, the corresponding image optimization algorithm can be generated according to the race information, so that the image optimization effect is the best.
  • Step 308 Segment the target area in the macula area according to the edge recognition algorithm.
  • the target area includes a drusen area, a pigment-enhanced area, and a depigmented area.
  • the method of segmenting the target area in the macula according to the edge recognition algorithm may include the following steps:
  • the area with the fovea as the center and the second preset long diameter as the radius is determined as the parafoveal area, where the second preset long diameter is determined according to the average optic disc radius;
  • in addition to determining the macular area, the parafoveal area can also be determined. Only the drusen in the macular area and the pigment-enhanced and depigmented areas in the parafoveal area have statistical value; therefore, determining the extent of the macular area and the parafoveal area can make the diagnosis of AMD more accurate.
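The two concentric zones described here amount to classifying each pixel by its distance from the fovea: within the first radius it belongs to the macular area, within the second to the parafoveal ring. A minimal sketch of that test follows; the radius values would come from the optic-disc-derived presets and are not fixed by the patent.

```python
def classify_zone(px, py, fx, fy, r_macular, r_parafoveal):
    """Classify a pixel by its Euclidean distance from the fovea (fx, fy).

    r_macular and r_parafoveal are the first and second preset radii,
    with r_macular < r_parafoveal.
    """
    d = ((px - fx) ** 2 + (py - fy) ** 2) ** 0.5
    if d <= r_macular:
        return "macular"
    if d <= r_parafoveal:
        return "parafoveal"
    return "outside"
```

A segmented target region could then be attributed to a zone by classifying, say, its centroid.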
  • Step 309 Collect target information of the target area, where the target information includes at least the area of the target area and the long diameter of the target area.
  • the manner of collecting target information of the target area may include the following steps:
  • target information of the target area is generated.
  • each index of each drusen area, pigment enhanced area, and pigment loss area can be recorded in detail, so that the generated target information is more accurate.
  • Step 310 Perform statistics on the target information of each target area, and comprehensively obtain the quantitative information of the target area.
  • Step 311 Identify the region quantization level corresponding to the quantization information.
  • Step 312 Obtain a regional analysis report corresponding to the regional quantitative level.
  • the corresponding area quantization level can be obtained according to the quantitative information, and then the corresponding area analysis report can be generated.
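The mapping from quantitative information to a quantization level can be sketched as a simple threshold scheme over the per-region counts. The field names, weights, and cutoffs below are entirely hypothetical illustrations; the patent does not specify a grading formula.

```python
def quantization_level(info):
    """Map quantitative region info to a coarse level (hypothetical thresholds).

    info: dict of counts per target-region type, e.g.
    {"drusen_count": 3, "pigment_enhanced_count": 1, "depigmented_count": 0}.
    """
    # Weight pigment changes more heavily than drusen (an assumption).
    score = (info.get("drusen_count", 0)
             + 2 * info.get("pigment_enhanced_count", 0)
             + 2 * info.get("depigmented_count", 0))
    if score == 0:
        return 0
    if score < 5:
        return 1
    if score < 15:
        return 2
    return 3
```

The resulting level would then select a pre-written regional analysis report template.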
  • the generated area analysis report provides a professional analysis of the user's fundus image, so that users can better understand their own fundus.
  • the intelligence of recognizing the area contained in the color fundus photograph can be improved based on the quantitative information of the target area.
  • implementing the method described in FIG. 3 can make the pre-stored average optic disc radius more accurate.
  • implementing the method described in FIG. 3 can improve the accuracy of obtaining the macular area.
  • implement the method described in FIG. 3 to generate a corresponding image optimization algorithm based on race information, so that the effect of image optimization is the best.
  • implementing the method described in FIG. 3 can make the generated target information more accurate.
  • implementing the method described in FIG. 3 can enable users to better understand their own fundus.
  • the present application also provides an area recognition device based on an edge recognition algorithm.
  • the following are device embodiments of the present application.
  • Fig. 4 is a block diagram showing an area recognition device based on an edge recognition algorithm according to an exemplary embodiment. As shown in Figure 4, the device includes:
  • the cropping unit 401 is configured to crop the macular area from the acquired color fundus photos of the user.
  • the segmentation unit 402 is configured to segment the target area in the macula area obtained by the cropping unit 401 according to the edge recognition algorithm.
  • the target area includes a drusen area, a pigmented enhancement area, and a depigmented area.
  • the method for the segmentation unit 402 to segment the target area in the macula according to the edge recognition algorithm may specifically be:
  • the area with the fovea as the center and the second preset long diameter as the radius is determined as the parafoveal area, where the second preset long diameter is determined according to the average optic disc radius;
  • in addition to determining the macular area, the parafoveal area can also be determined. Only the drusen in the macular area and the pigment-enhanced and depigmented areas in the parafoveal area have statistical value; therefore, determining the extent of the macular area and the parafoveal area can make the diagnosis of AMD more accurate.
  • the collecting unit 403 is configured to collect the target information of the target area obtained by the segmentation unit 402, and the target information includes at least the area of the target area and the long diameter of the target area.
  • the manner in which the collection unit 403 collects the target information of the target area may include the following steps:
  • target information of the target area is generated.
  • each index of each drusen area, pigment enhanced area, and pigment loss area can be recorded in detail, so that the generated target information is more accurate.
  • the statistics unit 404 is configured to perform statistics on the target information of each target area collected by the collection unit 403, and comprehensively obtain the quantitative information of the target area.
  • the intelligence of recognizing the area included in the color fundus photograph can be improved according to the quantitative information of the target area.
  • diagnosis of AMD disease can be made more accurate.
  • generated target information can be made more accurate.
  • Fig. 5 is a block diagram showing an area recognition device based on an edge recognition algorithm according to another exemplary embodiment.
  • the area recognition device based on the edge recognition algorithm shown in FIG. 5 is obtained by optimizing the area recognition device based on the edge recognition algorithm shown in FIG. 4.
  • the area recognition device based on the edge recognition algorithm shown in FIG. 5 may further include:
  • the identification unit 405 is configured to identify the region quantization level corresponding to the quantitative information after the statistics unit 404 performs statistics on the target information of each target region collected by the collection unit 403 and comprehensively obtains the quantitative information of the target region.
  • the first obtaining unit 406 is configured to obtain an area analysis report corresponding to the area quantization level level recognized by the recognition unit 405.
  • the corresponding regional quantification level can be obtained according to the quantitative information, and then the corresponding regional analysis report can be generated.
  • the generated regional analysis report provides a professional analysis of the user's fundus image, so that the user can better understand his own fundus.
  • the cropping unit 401 of the region recognition apparatus based on the edge recognition algorithm shown in FIG. 5 may include:
  • the photographing sub-unit 4011 is used to obtain the user's fundus color photos by shooting with the fundus color camera;
  • the detection subunit 4012 is used to detect the fovea of the macula in the fundus color photograph obtained by the shooting subunit 4011 by using a deep learning detection method;
  • the cropping subunit 4013 is used to crop the macular area in the fundus color photograph obtained by the shooting subunit 4011, with the fovea detected by the detection subunit 4012 as the center and the first preset long diameter as the radius, wherein the first preset long diameter is determined according to the pre-stored average optic disc radius.
  • the range of the macular area can be clarified, so that the area recognition device based on the edge recognition algorithm can crop the macular area of the current user according to the preset rules for cropping the macular area, which improves the accuracy of obtaining the macular area.
  • the photographing subunit 4011 of the area recognition device based on the edge recognition algorithm shown in FIG. 5 may also be used for:
  • the average optic disc radius is calculated according to the optic disc radius and stored as the pre-stored average optic disc radius.
  • a large number of optic disc radii can be acquired based on the massive collected fundus color photos, and the average optic disc radius can be calculated according to the acquired massive optic disc radius, so that the pre-stored average optic disc radius can be more accurate.
  • the region recognition apparatus based on the edge recognition algorithm shown in FIG. 5 may further include:
  • the second obtaining unit 407 is configured to obtain the user's personal information after the cropping subunit 4013 crops the macular area with the macular fovea as the center and the first preset major diameter as the radius in the fundus color photograph;
  • the determining unit 408 is configured to determine the racial information of the user from the personal information obtained by the second obtaining unit 407;
  • the second generating unit 409 is configured to generate a target image optimization algorithm corresponding to the race information determined by the determining unit 408 according to the automatic image optimization algorithm;
  • the optimization unit 410 is configured to perform image optimization on the macula area according to the target image optimization algorithm generated by the second generation unit 409, and trigger the segmentation unit 402 to perform segmentation of the target area in the macula area according to the edge recognition algorithm.
  • the user's personal information can be obtained, and the user's ethnic information can be determined from the user's personal information. Because the pigment concentration of the macular area of the fundus differs between races, the corresponding image optimization algorithm can be generated according to the ethnic information, which makes the image optimization effect the best.
  • the intelligence of recognizing the area included in the color fundus photograph can be improved according to the quantitative information of the target area.
  • the user can have a better understanding of his own fundus.
  • the accuracy of obtaining the macular area can be improved.
  • the pre-stored average optic disc radius can be made more accurate.
  • a corresponding image optimization algorithm can be generated according to race information, so that the effect of image optimization is the best.
  • a computing device which executes all or part of the steps of any of the above-mentioned region recognition methods based on edge recognition algorithms.
  • the computing equipment includes:
  • at least one processor;
  • a memory communicatively connected with the at least one processor; wherein,
  • the memory stores instructions that can be executed by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the area recognition method based on the edge recognition algorithm shown in any one of the above exemplary embodiments.
  • the computing device may be the apparatus 100 shown in FIG. 1.
  • a computer-readable storage medium storing computer-readable instructions.
  • when the computer-readable instructions are executed by one or more processors, the one or more processors can perform the steps in the above embodiments of the area recognition method based on the edge recognition algorithm.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

A region recognition method, apparatus, computing device, and computer-readable storage medium based on an edge recognition algorithm. The method includes: cropping the macular area from an acquired color fundus photograph of a user (201); segmenting target areas in the macular area according to an edge recognition algorithm (202), the target areas including a drusen area, a pigment-enhanced area, and a depigmented area; collecting target information of the target areas (203), the target information including at least the area of the target area and the long diameter of the target area; and performing statistics on the target information of each target area to comprehensively obtain quantitative information of the target areas (204). With this method, based on the edge recognition algorithm used in image-recognition-based image extraction, specific quantitative information such as the area, long diameter, and number of the target areas can be collected, which improves the accuracy of target area recognition. In summary, the intelligence of recognizing the areas contained in color fundus photographs can be improved according to the quantitative information of the target areas.

Description

区域识别方法、装置、计算设备和计算机可读存储介质 技术领域
本申请基于并要求2019年6月13日申请的、申请号为CN 201910510868.9、名称为“一种基于边缘识别算法的区域识别方法、装置及电子设备”的中国专利申请的优先权,其全部内容在此并入作为参考。
本申请涉及图像识别技术领域,特别是涉及一种基于边缘识别算法的区域识别方法、装置、计算设备和计算机可读存储介质。
背景技术
目前,通常使用眼底彩照相机对用户的眼底进行拍照,以得到用户的眼底彩照。然而,本申请的发明人在实践中发现,当用户需要识别眼底彩照中包含的区域时,由于当前的眼底彩照相机无法自动的对眼底彩照中包含的区域进行识别,因此只能通过人工的方式识别出眼底彩照中包含的区域,从而导致当前的识别眼底彩照中包含的区域的方式智能性较差。
技术问题
为了解决相关技术中存在的识别眼底彩照中包含的区域的方式智能性较差的技术问题,本申请提供了一种基于边缘识别算法的区域识别方法、装置、计算设备和计算机可读存储介质。
技术解决方案
第一方面,提供了一种基于边缘识别算法的区域识别方法,包括:
从获取的用户的眼底彩照中裁剪得到黄斑区;
根据边缘识别算法分割所述黄斑区中的目标区域,所述目标区域包括玻璃膜疣区域、色素增强区域以及色素脱失区域;
采集所述目标区域的目标信息,所述目标信息至少包括所述目标区域的面积和所述目标区域的长径;
对各个所述目标区域的所述目标信息进行统计,综合得到所述目标区域的量化信息。
第二方面,提供了一种基于边缘识别算法的区域识别装置,包括:
裁剪单元,用于从获取的用户的眼底彩照中裁剪得到黄斑区;
分割单元,用于根据边缘识别算法分割所述黄斑区中的目标区域,所述目标区域包括玻璃膜疣区域、色素增强区域以及色素脱失区域;
采集单元,用于采集所述目标区域的目标信息,所述目标信息至少包括所述目标区域的面积和所述目标区域的长径;
统计单元,用于对各个所述目标区域的所述目标信息进行统计,综合得到所述目标区域的量化信息。
第三方面,提供了一种计算设备,包括存储器和处理器,所述存储器中存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行上述基于边缘识别算法的区域识别方法的步骤。
第四方面,提供了一种存储有计算机可读指令的计算机可读存储介质,所述计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行上述基于边缘识别算法的区域识别方法的步骤。
有益效果
本申请的实施例提供的技术方案可以包括以下有益效果:
上述基于边缘识别算法的区域识别方法、装置、计算设备和计算机可读存储介质,可以识别出用户眼底彩照中的玻璃膜疣区域、色素增强区域以及色素脱失区域等需要识别的目标区域,并且还可以采集到目标区域的面积、长径以及数量等具体的量化信息,提高了目标区域识别的精确度。综上,可以根据目标区域的量化信息提高识别眼底彩照中包含的区域的智能性。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性的,并不能限制本申请。
附图说明
图1是根据一示例性实施例示出的一种装置的示意图;
图2是根据一示例性实施例示出的一种基于边缘识别算法的区域识别方法的流程图;
图3是根据另一示例性实施例示出的一种基于边缘识别算法的区域识别方法的流程图;
图4是根据一示例性实施例示出的一种基于边缘识别算法的区域识别装置的框图;
图5是根据另一示例性实施例示出的一种基于边缘识别算法的区域识别装置的框图。
本发明的实施方式
这里将详细地对示例性实施例执行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本申请相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本申请的一些方面相一致的装置和方法的例子。
本申请的实施环境可以是便携移动设备,例如智能手机、平板电脑、台式电脑。便携移动设备中所存储的图像可以是:从互联网下载的图像;通过无线连接或有线连接接收的图像;通过自身所内置摄像头拍摄得到的图像。
图1是根据一示例性实施例示出的一种装置的示意图。装置100可以是上述便携移动设备。如图1所示,装置100可以包括以下一个或多个组件:处理组件102,存储器104,电源组件106,多媒体组件108,音频组件110,传感器组件114以及通信组件116。
处理组件102通常控制装置100的整体操作，诸如与显示，电话呼叫，数据通信，相机操作以及记录操作相关联的操作等。处理组件102可以包括一个或多个处理器118来执行指令，以完成下述的方法的全部或部分步骤。此外，处理组件102可以包括一个或多个模块，以便于处理组件102和其他组件之间的交互。例如，处理组件102可以包括多媒体模块，以方便多媒体组件108和处理组件102之间的交互。
存储器104被配置为存储各种类型的数据以支持装置100的操作。这些数据的示例包括用于在装置100上操作的任何应用程序或方法的指令。存储器104可以由任何类型的易失性或非易失性存储设备或者它们的组合实现，如静态随机存取存储器(Static Random Access Memory,简称SRAM)，电可擦除可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,简称EEPROM)，可擦除可编程只读存储器(Erasable Programmable Read Only Memory,简称EPROM)，可编程只读存储器(Programmable Read-Only Memory,简称PROM)，只读存储器(Read-Only Memory,简称ROM)，磁存储器，快闪存储器，磁盘或光盘。存储器104中还存储有一个或多个模块，该一个或多个模块被配置成由该一个或多个处理器118执行，以完成如下所示方法中的全部或者部分步骤。
电源组件106为装置100的各种组件提供电力。电源组件106可以包括电源管理系统,一个或多个电源,及其他与为装置100生成、管理和分配电力相关联的组件。
多媒体组件108包括在所述装置100和用户之间提供一个输出接口的屏幕。在一些实施例中，屏幕可以包括液晶显示器(Liquid Crystal Display,简称LCD)和触摸面板。如果屏幕包括触摸面板，屏幕可以被实现为触摸屏，以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界，而且还检测与所述触摸或滑动操作相关的持续时间和压力。屏幕还可以包括有机电致发光显示器(Organic Light Emitting Display,简称OLED)。
音频组件110被配置为输出和/或输入音频信号。例如,音频组件110包括一个麦克风(Microphone,简称MIC),当装置100处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器104或经由通信组件116发送。在一些实施例中,音频组件110还包括一个扬声器,用于输出音频信号。
传感器组件114包括一个或多个传感器，用于为装置100提供各个方面的状态评估。例如，传感器组件114可以检测到装置100的打开/关闭状态和组件的相对定位；传感器组件114还可以检测装置100或装置100某一组件的位置改变以及装置100的温度变化。在一些实施例中，该传感器组件114还可以包括磁传感器，压力传感器或温度传感器。
通信组件116被配置为便于装置100和其他设备之间有线或无线方式的通信。装置100可以接入基于通信标准的无线网络，如WiFi(Wireless-Fidelity,无线保真)。在一个示例性实施例中，通信组件116经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中，所述通信组件116还包括近场通信(Near Field Communication,简称NFC)模块，以促进短程通信。例如，NFC模块可基于射频识别(Radio Frequency Identification,简称RFID)技术，红外数据协会(Infrared Data Association,简称IrDA)技术，超宽带(Ultra Wideband,简称UWB)技术，蓝牙技术和其他技术来实现。
在示例性实施例中,装置100可以被一个或多个应用专用集成电路(Application Specific Integrated Circuit,简称ASIC)、数字信号处理器、数字信号处理设备、可编程逻辑器件、现场可编程门阵列、控制器、微控制器、微处理器或其他电子元件实现,用于执行下述方法。
图2是根据一示例性实施例示出的一种基于边缘识别算法的区域识别方法的流程图。如图2所示,此方法包括以下步骤。
步骤201,从获取的用户的眼底彩照中裁剪得到黄斑区。
本申请实施例中,用户的眼底彩照可以通过专用的眼底彩照相机拍摄得到。眼底彩照中可以包含黄斑区,并且可以从眼底彩照中裁剪出黄斑区。
步骤202,根据边缘识别算法分割黄斑区中的目标区域,目标区域包括玻璃膜疣区域、色素增强区域以及色素脱失区域。
本申请实施例中，目标区域可以包含玻璃膜疣区域、色素增强区域以及色素脱失区域。举例来说，由于AMD疾病可以通过玻璃膜疣区域、色素增强区域以及色素脱失区域等目标区域的状态进行诊断，因此可以着重识别上述三种目标区域。由于上述三种目标区域与正常黄斑区的表现形式不同，上述三种目标区域有明显的边界，从而可以通过边缘识别算法识别出上述三种目标区域，提高了目标区域识别的准确性。
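作为一个最简示意（并非本申请限定的实现），"有明显边界、可由边缘识别算法提取"可以理解为：在目标区域的二值掩码上，边缘像素是区域内且四邻域中存在背景的像素。以下 Python 片段中的函数名与构造的示例区域均为假设：

```python
import numpy as np

def edge_mask(region: np.ndarray) -> np.ndarray:
    """提取二值区域掩码的边缘像素：属于区域、且上下左右四邻域中
    至少有一个背景像素的位置。示意实现，假设目标区域与正常黄斑区
    已能通过前置的阈值/分割步骤区分开。"""
    m = region.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    # 四个方向的邻居是否都在区域内
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return m & ~interior  # 被完全包围的内部像素不是边缘

# 构造一个 7x7 图像中的 3x3 方形"目标区域"
demo = np.zeros((7, 7), dtype=np.uint8)
demo[2:5, 2:5] = 1
edges = edge_mask(demo)  # 3x3 区域中只有中心像素不是边缘
```

实际工程中也可以改用成熟的边缘/轮廓算子（如 Canny 加轮廓跟踪），这里只展示"有明显边界"这一前提如何转化为可计算的判断。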
步骤203,采集目标区域的目标信息,目标信息至少包括目标区域的面积和目标区域的长径。
本申请实施例中,目标区域的形状可以为规则图形或不规则图形,对此,本申请实施例不做限定。基于边缘识别算法的区域识别装置可以计算各个目标区域的面积以及获取各个目标区域的长径,该长径可以为目标区域最长的直径。
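面积与长径的采集可以按如下思路示意：面积为区域像素数乘以像素物理尺度的平方，长径为区域内任意两点距离的最大值（对不规则图形同样适用）。以下 Python 片段为假设性草图，其中 mm_per_pixel 需由实际眼底相机标定给出：

```python
import numpy as np
from itertools import combinations

def region_metrics(mask: np.ndarray, mm_per_pixel: float = 1.0):
    """计算单个目标区域（二值掩码）的面积与长径。
    长径取区域内任意两像素点间的最大欧氏距离，即"最长的直径"。"""
    ys, xs = np.nonzero(mask)
    area = len(xs) * mm_per_pixel ** 2
    pts = np.stack([xs, ys], axis=1).astype(float)
    # 小区域直接枚举两两距离；大区域可先取凸包再枚举以加速
    long_d = max((np.linalg.norm(p - q) for p, q in combinations(pts, 2)),
                 default=0.0)
    return area, long_d * mm_per_pixel

mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1  # 3x3 的目标区域
area, long_d = region_metrics(mask)
```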
步骤204，对各个目标区域的目标信息进行统计，综合得到目标区域的量化信息。
在图2所描述的方法中，可以根据目标区域的量化信息提高识别眼底彩照中包含的区域的智能性。此外，实施图2所描述的方法，可以通过边缘识别算法识别出上述三种目标区域，从而提高了目标区域识别的准确性。
图3是根据另一示例性实施例示出的一种基于边缘识别算法的区域识别方法的流程图。如图3所示,此方法包括以下步骤:
步骤301,通过眼底彩照相机拍摄得到用户的眼底彩照。
作为一种可选的实施方式,步骤301之前,还可以执行以下步骤:
获取若干张预存储眼底彩照;
识别各个预存储眼底彩照的视盘,并计算各个视盘的视盘半径;
根据视盘半径计算得到平均视盘半径,并存储为预存的平均视盘半径。
其中,实施这种实施方式,可以根据采集到的海量的眼底彩照,获取海量的视盘半径,并且根据获取到的海量的视盘半径计算得到平均视盘半径,以使预存储的平均视盘半径更加准确。
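上述三个步骤的核心计算可以示意如下。视盘的识别由检测模型完成（此处略去），下面仅示意由若干视盘半径求平均并作为预存值的过程：

```python
def average_disc_radius(radii):
    """由多张预存储眼底彩照识别出的视盘半径（单位：像素）
    计算平均视盘半径，作为后续裁剪黄斑区的预存参数。"""
    if not radii:
        raise ValueError("至少需要一张眼底彩照的视盘半径")
    return sum(radii) / len(radii)

# 假设三张预存储眼底彩照的视盘半径如下（示例值）
avg = average_disc_radius([40.0, 42.0, 44.0])
```

样本越多（"海量的视盘半径"），该平均值越稳定，与正文的描述一致。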
步骤302,利用深度学习检测方法检测眼底彩照中的黄斑中心凹。
本申请实施例中,人体的视网膜后极部有一直径约2mm的浅漏斗状小凹陷区为黄斑区,黄斑区中央有一小凹为黄斑中心凹,深度学习检测方法可以检测出眼底彩照中的黄斑区,进而检测出黄斑区中的黄斑中心凹。
步骤303,裁剪得到眼底彩照中以黄斑中心凹为圆心、以第一预设长径为半径的黄斑区,其中,第一预设长径根据预存的平均视盘半径确定。
本申请实施例中,实施上述的步骤301~步骤303,可以明确黄斑区的范围,以使基于边缘识别算法的区域识别装置可以根据预设的裁剪黄斑区的规则裁剪得到当前用户的黄斑区,提高了黄斑区获取的准确性。
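步骤303的裁剪可以示意为：以黄斑中心凹为圆心取外接正方形，再把圆外像素置零。以下 Python 片段为假设性草图，radius（第一预设长径）与平均视盘半径之间的比例系数由具体实现确定：

```python
import numpy as np

def crop_macula(fundus: np.ndarray, fovea_xy, radius: int) -> np.ndarray:
    """从眼底彩照中裁剪以黄斑中心凹 fovea_xy=(cx, cy) 为圆心、
    radius 为半径的圆形黄斑区；越界部分自动截断，圆外像素置零。"""
    cx, cy = fovea_xy
    h, w = fundus.shape[:2]
    y0, y1 = max(cy - radius, 0), min(cy + radius + 1, h)
    x0, x1 = max(cx - radius, 0), min(cx + radius + 1, w)
    patch = fundus[y0:y1, x0:x1].copy()
    yy, xx = np.ogrid[y0:y1, x0:x1]
    patch[(yy - cy) ** 2 + (xx - cx) ** 2 > radius ** 2] = 0
    return patch

# 10x10 的示例"眼底图"，假设中心凹位于 (5, 5)，半径为 2
img = np.arange(100, dtype=np.float64).reshape(10, 10)
macula = crop_macula(img, fovea_xy=(5, 5), radius=2)
```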
步骤304,获取用户的个人信息。
本申请实施例中，用户的个人信息可以为用户自己输入的个人信息，还可以为通过识别用户的虹膜、根据识别到的虹膜搜索得到的用户的个人信息。由于虹膜具有唯一性，因此可以通过虹膜唯一地识别出与该虹膜对应的用户的个人信息，从而保证用户的个人信息识别的准确率。
步骤305,从个人信息中确定用户的人种信息。
步骤306,根据图像自动优化算法生成与人种信息对应的目标图像优化算法。
步骤307,根据目标图像优化算法对黄斑区进行图像优化。
本申请实施例中,实施上述的步骤304~步骤307,可以获取到用户的个人信息,并从用户的个人信息中确定用户的人种信息,由于不同人种眼底黄斑区的色素浓度不同,因此可以根据人种信息生成对应的图像优化算法,从而使得图像优化的效果最好。
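作为一个高度简化的示意（并非本申请所述"图像自动优化算法"本身），步骤306~307可以理解为：按人种信息选取不同的校正参数，对黄斑区做相应的图像增强。下面用按人种选伽马值做亮度校正来示意，映射表中的键与系数均为假设值：

```python
import numpy as np

# 假设的人种→伽马参数映射：眼底色素浓度越高，提亮越多（系数为示意值）
GAMMA_BY_ETHNICITY = {"asian": 0.9, "african": 0.8, "caucasian": 1.0}

def optimize_macula(macula: np.ndarray, ethnicity: str) -> np.ndarray:
    """根据人种信息选取伽马参数并对黄斑区图像做伽马校正。
    实际的目标图像优化算法可能远比伽马校正复杂，此处仅示意
    "不同人种对应不同优化参数"这一思路。"""
    gamma = GAMMA_BY_ETHNICITY.get(ethnicity, 1.0)
    norm = macula.astype(np.float64) / 255.0
    return np.rint(norm ** gamma * 255.0).astype(np.uint8)

gray = np.full((2, 2), 128, dtype=np.uint8)
out = optimize_macula(gray, "african")       # 色素浓度高，图像被提亮
same = optimize_macula(gray, "caucasian")    # gamma=1.0，图像不变
```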
步骤308,根据边缘识别算法分割黄斑区中的目标区域,目标区域包括玻璃膜疣区域、色素增强区域以及色素脱失区域。
作为一种可选的实施方式,根据边缘识别算法分割黄斑区中的目标区域的方式可以包括以下步骤:
根据边缘识别算法将以黄斑中心凹为圆心、以第二预设长径为半径的区域确定为旁中心凹区域,其中,第二预设长径根据平均视盘半径确定;
通过深度学习分割网络分割得到黄斑区中的玻璃膜疣区域,以及分割得到旁中心凹区域中的色素增强区域和色素脱失区域;
综合玻璃膜疣区域、色素增强区域以及色素脱失区域,得到目标区域。
其中，实施这种实施方式，除了确定黄斑区，还可以确定旁中心凹区，只有黄斑区内的玻璃膜疣以及旁中心凹区内的色素增强区域和色素脱失区域才有统计的价值，因此，确定黄斑区和旁中心凹区的范围可以使AMD疾病的诊断更加准确。
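上述"只统计落在对应区域内的目标"可以示意为按质心位置过滤候选区域。以下 Python 片段为假设性草图，第二预设长径的具体取值由平均视盘半径及实现中的比例系数确定：

```python
def in_parafovea(centroid_xy, fovea_xy, second_radius: float) -> bool:
    """判断一个候选区域（以质心坐标表示）是否落在旁中心凹区域内：
    旁中心凹区为以黄斑中心凹为圆心、第二预设长径为半径的圆。"""
    dx = centroid_xy[0] - fovea_xy[0]
    dy = centroid_xy[1] - fovea_xy[1]
    return dx * dx + dy * dy <= second_radius ** 2

# 两个候选的色素增强/脱失区域质心（示例坐标），仅保留圆内的
candidates = [(3.0, 4.0), (30.0, 40.0)]
kept = [c for c in candidates if in_parafovea(c, (0.0, 0.0), second_radius=10.0)]
```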
步骤309,采集目标区域的目标信息,目标信息至少包括目标区域的面积和目标区域的长径。
作为一种可选的实施方式,采集目标区域的目标信息的方式可以包括以下步骤:
分别采集目标区域中的玻璃膜疣区域、色素增强区域以及色素脱失区域的颜色、面积和长径,以及分别确定玻璃膜疣区域、色素增强区域以及色素脱失区域的数量;
根据玻璃膜疣区域、色素增强区域以及色素脱失区域的数量、以及各个玻璃膜疣区域、色素增强区域以及色素脱失区域的颜色、面积和长径,生成目标区域的目标信息。
其中，实施这种实施方式，可以将每个玻璃膜疣区域、色素增强区域以及色素脱失区域的各项指标都详细地进行记录，以使生成的目标信息更加准确。
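按类型逐一记录各区域的颜色、面积、长径并分别计数的过程可以示意如下（数据结构与字段名均为示例假设）：

```python
def build_target_info(regions):
    """生成目标信息：regions 为 (类型, 颜色, 面积, 长径) 元组的列表，
    类型如玻璃膜疣(drusen)、色素增强(hyperpigment)、色素脱失(hypopigment)。
    按类型分别计数，并逐条记录每个区域的颜色、面积和长径。"""
    info = {}
    for kind, color, area, long_d in regions:
        entry = info.setdefault(kind, {"count": 0, "items": []})
        entry["count"] += 1
        entry["items"].append(
            {"color": color, "area": area, "long_diameter": long_d})
    return info

info = build_target_info([
    ("drusen", "yellow", 12.5, 4.2),
    ("drusen", "yellow", 3.0, 1.8),
    ("hyperpigment", "dark", 6.0, 2.9),
])
```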
步骤310,对各个目标区域的目标信息进行统计,综合得到目标区域的量化信息。
步骤311,识别量化信息对应的区域量化水平等级。
步骤312,获取与区域量化水平等级对应的区域分析报告。
本申请实施例中,实施上述的步骤311~步骤312,可以根据量化信息得到对应的区域量化水平等级,进而生成对应的区域分析报告,生成的区域分析报告对用户的眼底图像进行专业的分析,以使用户可以对自身的眼底更加了解。
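步骤311~312的"量化信息→区域量化水平等级→区域分析报告"映射可以示意如下。阈值与报告文案均为假设值，不代表任何医学分级标准：

```python
def quantization_level(total_area: float, max_long_d: float) -> str:
    """由量化信息（总面积、最大长径，单位为假设的像素/毫米尺度）
    映射到区域量化水平等级的最简示意。"""
    if total_area < 5 and max_long_d < 2:
        return "low"
    if total_area < 20:
        return "medium"
    return "high"

# 等级→区域分析报告模板（示意文案）
REPORTS = {
    "low": "目标区域少且小，建议定期复查。",
    "medium": "目标区域中等，建议进一步检查。",
    "high": "目标区域较多或较大，建议尽快就诊。",
}

level = quantization_level(total_area=12.0, max_long_d=3.5)
report = REPORTS[level]
```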
在图3所描述的方法中,可以根据目标区域的量化信息提高识别眼底彩照中包含的区域的智能性。此外,实施图3所描述的方法,可以使预存储的平均视盘半径更加准确。此外,实施图3所描述的方法,可以提高黄斑区获取的准确性。此外,实施图3所描述的方法,根据人种信息生成对应的图像优化算法,从而使得图像优化的效果最好。此外,实施图3所描述的方法,可以使生成的目标信息更加的准确。此外,实施图3所描述的方法,能够使用户可以对自身的眼底更加了解。
在一个实施例中,本申请还提供了一种基于边缘识别算法的区域识别装置,以下是本申请的装置实施例。
图4是根据一示例性实施例示出的一种基于边缘识别算法的区域识别装置的框图。如图4所示,该装置包括:
裁剪单元401,用于从获取的用户的眼底彩照中裁剪得到黄斑区。
分割单元402,用于根据边缘识别算法分割裁剪单元401得到的黄斑区中的目标区域,该目标区域包括玻璃膜疣区域、色素增强区域以及色素脱失区域。
作为一种可选的实施方式,分割单元402根据边缘识别算法分割黄斑区中的目标区域的方式具体可以为:
根据边缘识别算法将以黄斑中心凹为圆心、以第二预设长径为半径的区域确定为旁中心凹区域,其中,第二预设长径根据平均视盘半径确定;
通过深度学习分割网络分割得到黄斑区中的玻璃膜疣区域,以及分割得到旁中心凹区域中的色素增强区域和色素脱失区域;
综合玻璃膜疣区域、色素增强区域以及色素脱失区域,得到目标区域。
其中，实施这种实施方式，除了确定黄斑区，还可以确定旁中心凹区，只有黄斑区内的玻璃膜疣以及旁中心凹区内的色素增强区域和色素脱失区域才有统计的价值，因此，确定黄斑区和旁中心凹区的范围可以使AMD疾病的诊断更加准确。
采集单元403,用于采集分割单元402得到的目标区域的目标信息,该目标信息至少包括目标区域的面积和目标区域的长径。
作为一种可选的实施方式,采集单元403采集目标区域的目标信息的方式可以包括以下步骤:
分别采集目标区域中的玻璃膜疣区域、色素增强区域以及色素脱失区域的颜色、面积和长径,以及分别确定玻璃膜疣区域、色素增强区域以及色素脱失区域的数量;
根据玻璃膜疣区域、色素增强区域以及色素脱失区域的数量、以及各个玻璃膜疣区域、色素增强区域以及色素脱失区域的颜色、面积和长径,生成目标区域的目标信息。
其中，实施这种实施方式，可以将每个玻璃膜疣区域、色素增强区域以及色素脱失区域的各项指标都详细地进行记录，以使生成的目标信息更加准确。
统计单元404，用于对采集单元403采集的各个目标区域的目标信息进行统计，综合得到目标区域的量化信息。
在图4所示的基于边缘识别算法的区域识别装置中，可以根据目标区域的量化信息提高识别眼底彩照中包含的区域的智能性。此外，在图4所示的装置中，可以使AMD疾病的诊断更加准确。此外，在图4所示的装置中，可以使生成的目标信息更加准确。
图5是根据另一示例性实施例示出的一种基于边缘识别算法的区域识别装置的框图。其中,图5所示的基于边缘识别算法的区域识别装置是由图4所示的基于边缘识别算法的区域识别装置进行优化得到的。与图4所示的基于边缘识别算法的区域识别装置相比,图5所示的基于边缘识别算法的区域识别装置还可以包括:
识别单元405,用于在统计单元404对采集单元403采集的各个目标区域的目标信息进行统计,综合得到目标区域的量化信息之后,识别量化信息对应的区域量化水平等级。
第一获取单元406,用于获取与识别单元405识别的区域量化水平等级对应的区域分析报告。
本申请实施例中，可以根据量化信息得到对应的区域量化水平等级，进而生成对应的区域分析报告，生成的区域分析报告对用户的眼底图像进行专业的分析，以使用户可以对自身的眼底更加了解。
作为一种可选的实施方式,图5所示的基于边缘识别算法的区域识别装置的裁剪单元401可以包括:
拍摄子单元4011,用于通过眼底彩照相机拍摄得到用户的眼底彩照;
检测子单元4012,用于利用深度学习检测方法检测拍摄子单元4011得到的眼底彩照中的黄斑中心凹;
裁剪子单元4013,用于裁剪得到拍摄子单元4011得到的眼底彩照中以检测子单元4012检测出的黄斑中心凹为圆心、以第一预设长径为半径的黄斑区,其中,第一预设长径根据预存的平均视盘半径确定。
其中,实施这种实施方式,可以明确黄斑区的范围,以使基于边缘识别算法的区域识别装置可以根据预设的裁剪黄斑区的规则裁剪得到当前用户的黄斑区,提高了黄斑区获取的准确性。
作为一种可选的实施方式,图5所示的基于边缘识别算法的区域识别装置的拍摄子单元4011还可以用于:
获取若干张预存储眼底彩照;
识别各个预存储眼底彩照的视盘,并计算各个视盘的视盘半径;
根据视盘半径计算得到平均视盘半径,并存储为预存的平均视盘半径。
其中,实施这种实施方式,可以根据采集到的海量的眼底彩照,获取海量的视盘半径,并且根据获取到的海量的视盘半径计算得到平均视盘半径,以使预存储的平均视盘半径更加准确。
作为一种可选的实施方式,图5所示的基于边缘识别算法的区域识别装置还可以包括:
第二获取单元407,用于在裁剪子单元4013裁剪得到眼底彩照中以黄斑中心凹为圆心、以第一预设长径为半径的黄斑区之后,获取用户的个人信息;
确定单元408,用于从第二获取单元407获取的个人信息中确定用户的人种信息;
第二生成单元409,用于根据图像自动优化算法生成与确定单元408确定的人种信息对应的目标图像优化算法;
优化单元410,用于根据第二生成单元409生成的目标图像优化算法对黄斑区进行图像优化,并触发分割单元402执行根据边缘识别算法分割黄斑区中的目标区域。
其中,实施这种实施方式,可以获取到用户的个人信息,并从用户的个人信息中确定用户的人种信息,由于不同人种眼底黄斑区的色素浓度不同,因此可以根据人种信息生成对应的图像优化算法,从而使得图像优化的效果最好。
在图5所示的基于边缘识别算法的区域识别装置中，可以根据目标区域的量化信息提高识别眼底彩照中包含的区域的智能性。此外，在图5所示的装置中，能够使用户可以对自身的眼底更加了解。此外，在图5所示的装置中，可以提高黄斑区获取的准确性。此外，在图5所示的装置中，可以使预存储的平均视盘半径更加准确。此外，在图5所示的装置中，可以根据人种信息生成对应的图像优化算法，从而使得图像优化的效果最好。
在一个实施例中,提出了一种计算设备,执行上述任一所示的基于边缘识别算法的区域识别方法的全部或者部分步骤。该计算设备包括:
至少一个处理器;以及
与所述至少一个处理器通信连接的存储器;其中,
所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行如上述任一个示例性实施例所示出的基于边缘识别算法的区域识别方法。
该计算设备可以是图1所示装置100。
在一个实施例中,提出了一种存储有计算机可读指令的计算机可读存储介质,该计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行上述基于边缘识别算法的区域识别方法实施例中的步骤。
应当理解的是,本申请并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围执行各种修改和改变。本申请的范围仅由所附的权利要求来限制。

Claims (20)

  1. 一种基于边缘识别算法的区域识别方法,其特征在于,所述方法包括:
    从获取的用户的眼底彩照中裁剪得到黄斑区;
    根据边缘识别算法分割所述黄斑区中的目标区域,所述目标区域包括玻璃膜疣区域、色素增强区域以及色素脱失区域;
    采集所述目标区域的目标信息,所述目标信息至少包括所述目标区域的面积和所述目标区域的长径;
    对各个所述目标区域的所述目标信息进行统计,综合得到所述目标区域的量化信息。
  2. 根据权利要求1所述的方法,其特征在于,所述对各个所述目标区域的所述目标信息进行统计,综合得到所述目标区域的量化信息之后,所述方法还包括:
    识别所述量化信息对应的区域量化水平等级;
    获取与所述区域量化水平等级对应的区域分析报告。
  3. 根据权利要求2所述的方法,其特征在于,所述从获取的用户的眼底彩照中裁剪得到黄斑区,包括:
    通过眼底彩照相机拍摄得到用户的眼底彩照;
    利用深度学习检测方法检测所述眼底彩照中的黄斑中心凹;
    裁剪得到所述眼底彩照中以所述黄斑中心凹为圆心、以第一预设长径为半径的黄斑区,其中,所述第一预设长径根据预存的平均视盘半径确定。
  4. 根据权利要求3所述的方法,其特征在于,所述通过眼底彩照相机拍摄得到用户的眼底彩照之前,所述方法还包括:
    获取若干张预存储眼底彩照;
    识别各个所述预存储眼底彩照的视盘,并计算各个所述视盘的视盘半径;
    根据所述视盘半径计算得到平均视盘半径,并存储为所述预存的平均视盘半径。
  5. 根据权利要求3或4所述的方法,其特征在于,所述裁剪得到所述眼底彩照中以所述黄斑中心凹为圆心、以第一预设长径为半径的黄斑区之后,以及所述根据边缘识别算法分割所述黄斑区中的目标区域之前,所述方法还包括:
    获取所述用户的个人信息;
    从所述个人信息中确定所述用户的人种信息;
    根据图像自动优化算法生成与所述人种信息对应的目标图像优化算法;
    根据所述目标图像优化算法对所述黄斑区进行图像优化。
  6. 根据权利要求5所述的方法,其特征在于,所述根据边缘识别算法分割所述黄斑区中的目标区域,包括:
    根据边缘识别算法将以所述黄斑中心凹为圆心、以第二预设长径为半径的区域确定为旁中心凹区域,其中,所述第二预设长径根据所述平均视盘半径确定;
    通过深度学习分割网络分割得到所述黄斑区中的玻璃膜疣区域,以及分割得到所述旁中心凹区域中的色素增强区域和色素脱失区域;
    综合所述玻璃膜疣区域、所述色素增强区域以及所述色素脱失区域,得到目标区域。
  7. 根据权利要求6所述的方法,其特征在于,所述采集所述目标区域的目标信息,包括:
    分别采集所述目标区域中的所述玻璃膜疣区域、所述色素增强区域以及所述色素脱失区域的颜色、面积和长径,以及分别确定所述玻璃膜疣区域、所述色素增强区域以及所述色素脱失区域的数量;
    根据所述玻璃膜疣区域、所述色素增强区域以及所述色素脱失区域的数量、以及各个所述玻璃膜疣区域、所述色素增强区域以及所述色素脱失区域的颜色、面积和长径,生成所述目标区域的目标信息。
  8. 一种基于边缘识别算法的区域识别装置,其特征在于,所述装置包括:
    裁剪单元,用于从获取的用户的眼底彩照中裁剪得到黄斑区;
    分割单元,用于根据边缘识别算法分割所述黄斑区中的目标区域,所述目标区域包括玻璃膜疣区域、色素增强区域以及色素脱失区域;
    采集单元,用于采集所述目标区域的目标信息,所述目标信息至少包括所述目标区域的面积和所述目标区域的长径;
    统计单元,用于对各个所述目标区域的所述目标信息进行统计,综合得到所述目标区域的量化信息。
  9. 根据权利要求8所述的装置,其特征在于,所述装置还包括:
    识别单元,用于在所述统计单元对各个所述目标区域的所述目标信息进行统计,综合得到所述目标区域的量化信息之后,识别所述量化信息对应的区域量化水平等级;
    第一获取单元,用于获取与所述区域量化水平等级对应的区域分析报告。
  10. 一种计算设备,包括存储器和处理器,所述存储器中存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行:
    从获取的用户的眼底彩照中裁剪得到黄斑区;
    根据边缘识别算法分割所述黄斑区中的目标区域,所述目标区域包括玻璃膜疣区域、色素增强区域以及色素脱失区域;
    采集所述目标区域的目标信息,所述目标信息至少包括所述目标区域的面积和所述目标区域的长径;
    对各个所述目标区域的所述目标信息进行统计,综合得到所述目标区域的量化信息。
  11. 根据权利要求10所述的计算设备,其特征在于,所述对各个所述目标区域的所述目标信息进行统计,综合得到所述目标区域的量化信息之后,所述计算机可读指令被所述处理器执行时,使得所述处理器还执行:
    识别所述量化信息对应的区域量化水平等级;
    获取与所述区域量化水平等级对应的区域分析报告。
  12. 根据权利要求11所述的计算设备,其特征在于,所述从获取的用户的眼底彩照中裁剪得到黄斑区,包括:
    通过眼底彩照相机拍摄得到用户的眼底彩照;
    利用深度学习检测方法检测所述眼底彩照中的黄斑中心凹;
    裁剪得到所述眼底彩照中以所述黄斑中心凹为圆心、以第一预设长径为半径的黄斑区,其中,所述第一预设长径根据预存的平均视盘半径确定。
  13. 根据权利要求12所述的计算设备,其特征在于,所述通过眼底彩照相机拍摄得到用户的眼底彩照之前,所述计算机可读指令被所述处理器执行时,使得所述处理器还执行:
    获取若干张预存储眼底彩照;
    识别各个所述预存储眼底彩照的视盘,并计算各个所述视盘的视盘半径;
    根据所述视盘半径计算得到平均视盘半径,并存储为所述预存的平均视盘半径。
  14. 根据权利要求12或13所述的计算设备,其特征在于,所述裁剪得到所述眼底彩照中以所述黄斑中心凹为圆心、以第一预设长径为半径的黄斑区之后,以及所述根据边缘识别算法分割所述黄斑区中的目标区域之前,所述计算机可读指令被所述处理器执行时,使得所述处理器还执行:
    获取所述用户的个人信息;
    从所述个人信息中确定所述用户的人种信息;
    根据图像自动优化算法生成与所述人种信息对应的目标图像优化算法;
    根据所述目标图像优化算法对所述黄斑区进行图像优化。
  15. 根据权利要求14所述的计算设备,其特征在于,所述根据边缘识别算法分割所述黄斑区中的目标区域,包括:
    根据边缘识别算法将以所述黄斑中心凹为圆心、以第二预设长径为半径的区域确定为旁中心凹区域,其中,所述第二预设长径根据所述平均视盘半径确定;
    通过深度学习分割网络分割得到所述黄斑区中的玻璃膜疣区域,以及分割得到所述旁中心凹区域中的色素增强区域和色素脱失区域;
    综合所述玻璃膜疣区域、所述色素增强区域以及所述色素脱失区域,得到目标区域。
  16. 根据权利要求15所述的计算设备,其特征在于,所述采集所述目标区域的目标信息,包括:
    分别采集所述目标区域中的所述玻璃膜疣区域、所述色素增强区域以及所述色素脱失区域的颜色、面积和长径,以及分别确定所述玻璃膜疣区域、所述色素增强区域以及所述色素脱失区域的数量;
    根据所述玻璃膜疣区域、所述色素增强区域以及所述色素脱失区域的数量、以及各个所述玻璃膜疣区域、所述色素增强区域以及所述色素脱失区域的颜色、面积和长径,生成所述目标区域的目标信息。
  17. 一种存储有计算机可读指令的计算机可读存储介质,所述计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行:
    从获取的用户的眼底彩照中裁剪得到黄斑区;
    根据边缘识别算法分割所述黄斑区中的目标区域,所述目标区域包括玻璃膜疣区域、色素增强区域以及色素脱失区域;
    采集所述目标区域的目标信息,所述目标信息至少包括所述目标区域的面积和所述目标区域的长径;
    对各个所述目标区域的所述目标信息进行统计,综合得到所述目标区域的量化信息。
  18. 根据权利要求17所述的计算机可读存储介质,其特征在于,所述对各个所述目标区域的所述目标信息进行统计,综合得到所述目标区域的量化信息之后,所述计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器还执行:
    识别所述量化信息对应的区域量化水平等级;
    获取与所述区域量化水平等级对应的区域分析报告。
  19. 根据权利要求18所述的计算机可读存储介质,其特征在于,所述从获取的用户的眼底彩照中裁剪得到黄斑区,包括:
    通过眼底彩照相机拍摄得到用户的眼底彩照;
    利用深度学习检测方法检测所述眼底彩照中的黄斑中心凹;
    裁剪得到所述眼底彩照中以所述黄斑中心凹为圆心、以第一预设长径为半径的黄斑区,其中,所述第一预设长径根据预存的平均视盘半径确定。
  20. 根据权利要求19所述的计算机可读存储介质,其特征在于,所述通过眼底彩照相机拍摄得到用户的眼底彩照之前,所述计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器还执行:
    获取若干张预存储眼底彩照;
    识别各个所述预存储眼底彩照的视盘,并计算各个所述视盘的视盘半径;
    根据所述视盘半径计算得到平均视盘半径,并存储为所述预存的平均视盘半径。
PCT/CN2019/103439 2019-06-13 2019-08-29 区域识别方法、装置、计算设备和计算机可读存储介质 WO2020248389A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910510868.9A CN110363782B (zh) 2019-06-13 2019-06-13 一种基于边缘识别算法的区域识别方法、装置及电子设备
CN201910510868.9 2019-06-13

Publications (1)

Publication Number Publication Date
WO2020248389A1 true WO2020248389A1 (zh) 2020-12-17

Family

ID=68216201

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103439 WO2020248389A1 (zh) 2019-06-13 2019-08-29 区域识别方法、装置、计算设备和计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN110363782B (zh)
WO (1) WO2020248389A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127431A (zh) * 2019-12-24 2020-05-08 杭州求是创新健康科技有限公司 一种基于区域自适应多任务神经网络的干眼症分级评估系统
CN111915541B (zh) * 2020-07-31 2021-08-17 平安科技(深圳)有限公司 基于人工智能的图像增强处理方法、装置、设备及介质
CN116777794B (zh) * 2023-08-17 2023-11-03 简阳市人民医院 一种角膜异物图像的处理方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170140544A1 (en) * 2010-01-20 2017-05-18 Duke University Segmentation and identification of layered structures in images
CN108416344A (zh) * 2017-12-28 2018-08-17 中山大学中山眼科中心 眼底彩照视盘与黄斑定位识别方法
CN109308701A (zh) * 2018-08-31 2019-02-05 南京理工大学 深度级联模型的sd-oct图像ga病变分割方法
CN109493954A (zh) * 2018-12-20 2019-03-19 广东工业大学 一种基于类别判别定位的sd-oct图像视网膜病变检测系统
CN109784337A (zh) * 2019-03-05 2019-05-21 百度在线网络技术(北京)有限公司 一种黄斑区识别方法、装置及计算机可读存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5996959B2 (ja) * 2012-07-30 2016-09-21 株式会社トプコン 眼底解析装置
JP6158535B2 (ja) * 2013-02-28 2017-07-05 国立大学法人大阪大学 眼底解析装置
CN104102899B (zh) * 2014-05-23 2017-07-14 首都医科大学附属北京同仁医院 视网膜血管识别方法及装置
CN108765379B (zh) * 2018-05-14 2019-11-19 深圳明眸科技有限公司 眼底病变区域面积的计算方法、装置、医疗设备和存储介质
CN108717696B (zh) * 2018-05-16 2022-04-22 上海鹰瞳医疗科技有限公司 黄斑影像检测方法和设备
CN109829894B (zh) * 2019-01-09 2022-04-26 平安科技(深圳)有限公司 分割模型训练方法、oct图像分割方法、装置、设备及介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170140544A1 (en) * 2010-01-20 2017-05-18 Duke University Segmentation and identification of layered structures in images
CN108416344A (zh) * 2017-12-28 2018-08-17 中山大学中山眼科中心 眼底彩照视盘与黄斑定位识别方法
CN109308701A (zh) * 2018-08-31 2019-02-05 南京理工大学 深度级联模型的sd-oct图像ga病变分割方法
CN109493954A (zh) * 2018-12-20 2019-03-19 广东工业大学 一种基于类别判别定位的sd-oct图像视网膜病变检测系统
CN109784337A (zh) * 2019-03-05 2019-05-21 百度在线网络技术(北京)有限公司 一种黄斑区识别方法、装置及计算机可读存储介质

Also Published As

Publication number Publication date
CN110363782B (zh) 2023-06-16
CN110363782A (zh) 2019-10-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19932328

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19932328

Country of ref document: EP

Kind code of ref document: A1