WO2022267300A1 - Method, system and storage medium for automatically extracting a target area in an image - Google Patents

Method, system and storage medium for automatically extracting a target area in an image

Info

Publication number
WO2022267300A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
area
region
target area
target
Prior art date
Application number
PCT/CN2021/129447
Other languages
English (en)
French (fr)
Inventor
蒋婕
汪琪
万春玲
Original Assignee
上海添音生物科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海添音生物科技有限公司 filed Critical 上海添音生物科技有限公司
Publication of WO2022267300A1 publication Critical patent/WO2022267300A1/zh


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/187 - Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/90 - Determination of colour characteristics

Definitions

  • the invention relates to the field of image processing, in particular to a method, a system and a computer-readable storage medium for automatically extracting a target area in an image.
  • the present invention provides the following technical solutions.
  • a method for automatically extracting a target area in an image includes the following steps:
  • the feature information of the region at least including image pixel color information
  • in step A of the method for automatically extracting a target area in an image according to an embodiment of the present invention, the region is selected based on a feature image used for positioning and recognition in the image to be processed.
  • the target area is substantially circular.
  • the feature image includes at least two border images located at an edge region of the image to be processed and spaced apart from each other, and in step A:
  • an edge-region image of the image to be processed is determined; the at least two border images are identified from the edge-region image; and, based on the identified border images, at least one feature position on the image to be processed is determined, the at least one feature position being associated with the target area.
  • in step B of the method for automatically extracting a target area in an image, at least one color space model is constructed based on the region, so as to acquire the color information of the image pixels in the region according to the color space model.
  • in step B, an HIS color model and a Lab color model are respectively constructed based on the region, and the H component of the HIS color model and the a component of the Lab color model are respectively acquired; in step C, the acquired H component is compared with an H-component threshold and the part reaching the H-component threshold is identified as a first candidate area, the acquired a component is compared with an a-component threshold and the part reaching the a-component threshold is identified as a second candidate area, the perimeter-to-area ratios of the first candidate area and the second candidate area are respectively calculated, and the one of the two with the smaller ratio is determined as the target area.
  • the preset standard is a preset threshold
  • the used threshold is corrected by the following formula:
  • T1 = (x - y × S) × T0;
  • x and y are the first coefficient and the second coefficient respectively
  • S is the score obtained by classifying the region through machine learning
  • T0 is the initial threshold
  • T1 is the corrected threshold used to replace the initial threshold T0 in the comparison.
  • the machine learning is based on a deep learning network with the AlexNet architecture, and the training sample library of the machine learning includes input data produced by manual grade scoring.
  • the manually completed grade scoring includes four grades, whose respective grade scores correspond to the cases where the area of the target area is zero, the area of the target area is smaller than a preset area threshold, the area of the target area is equal to the area threshold, and the area of the target area is greater than the area threshold.
  • the grade scoring of the machine learning includes the four grades, or the machine learning determines the grade score by weighted summation using the corresponding weight value of each of the four grades.
  • in step C of the method for automatically extracting a target area in an image according to an embodiment of the present invention, morphological processing is further performed on the part meeting the preset standard.
  • the morphological processing includes:
  • an opening operation and a closing operation are performed in sequence on the image of the part and the largest connected region therein is retained; an opening operation is then performed on that largest connected region, and the largest connected region remaining is retained as the target area.
  • the feature information includes color, area, perimeter and/or shape.
  • a system for automatically extracting a target area in an image includes:
  • a memory configured to store instructions
  • a processor configured to implement the method for automatically extracting a target area in an image according to any embodiment of the present invention when the instructions are executed.
  • the system for automatically extracting a target area in an image further includes:
  • detection means arranged to detect a region of interest on the target object
  • an imaging device is configured to capture the region of interest detected by the detection device as an image, and the imaging device is connected to the processor to provide it with the captured image.
  • the detection device includes a niacin skin reactor, and the region of interest includes the skin of the target object .
  • the system for automatically extracting a target area in an image further includes a communication device.
  • the communication means is connected to the processor and arranged to send data out of the system for processing or storing the data.
  • the data includes the image to be processed, the identified target area and/or the acquired feature information.
  • a computer-readable storage medium is used to store instructions which, when executed, implement the method for automatically extracting a target area in an image according to any embodiment of the present invention.
  • the technical solution of the present invention for automatically extracting a target area in an image realizes automatic extraction of the target area, greatly saves extraction cost, improves extraction efficiency, and makes batch and automatic extraction possible.
  • Fig. 1 shows an image 100 to be extracted to which a method for automatically extracting a target region in an image according to an embodiment of the present invention can be applied.
  • FIG. 2 shows an embodiment of a method for selecting a region 120 in the image to be extracted 100 in FIG. 1 .
  • Fig. 3 shows a method 300 for automatically extracting a target area in an image according to an embodiment of the present invention.
  • Fig. 4 shows a system 400 for automatically extracting a target area in an image according to an embodiment of the present invention.
  • a method for automatically extracting a target area in an image comprises the following steps: selecting at least one region from the image to be processed, the region comprising at least a part of the target region, and the target region is limited to be at least related to the color of the image pixel; obtaining characteristic information possessed by the region, The feature information at least includes image pixel color information; comparing the acquired feature information with a corresponding preset standard, identifying a part of the area that meets the preset standard as a target area; and acquiring feature information of the target area.
  • the technical solution according to the present invention can be applied to many technical fields such as petroleum, chemical industry, automobile, biology, medicine and the like.
  • the method for automatically extracting a target area in an image can be used to extract a target area from an image 100 to be processed as shown in FIG. 1 .
  • the image to be processed 100 is obtained by photographing the test area subjected to the niacin skin test.
  • niacin reagents (for example, nicotinic acid ester solutions)
  • in contact with human skin, they cause visible changes, such as redness and swelling, on the skin of some subjects, and these changes may be related to the subject's emotional and mental health status. Therefore, it is necessary to perform image analysis on the tested area of the subject.
  • the target area of the niacin skin test may be substantially circular in consideration of factors such as ease of production and testing. Also, due to the specifics of the test, the target area may still have burrs.
  • the target area claimed in the present invention is not limited to such a shape, but may be any suitable shape depending on the situation, such as an ellipse, triangle or rectangle, and even suitable irregular shapes are allowed in some applications.
  • a skin test reactor can be used to simultaneously conduct skin tests of 4 concentrations of niacin on a subject, thus obtaining an image 100 to be processed containing 4 target areas as shown in FIG. 1 .
  • the number of target areas is not limited thereto, and may also be 1, 3, 6 or any other appropriate number.
  • a method 300 for automatically extracting the target area 110 in the image to be processed 100 will be described in detail below with reference to FIG. 3 .
  • an area 120 is selected from the image 100 to be processed.
  • the area 120 includes the target area 110 . It should be noted that in other embodiments, the selected area may only include a part of the target area 110 .
  • the selection of the region 120 is implemented based on the feature image used for positioning and recognition in the image to be processed 100.
  • the feature image may include two frame images 130 and 131 located at the edge region of the image to be processed 100 and spaced apart from each other.
  • the selected area 120 can be implemented by the following methods.
  • the sliding window 140 may be used to traverse upward from the middle point of the lower edge of the image 100 to be processed.
  • the sliding window 140 is a square, and its side length can optionally be set to 2-3 times the gap between the border images 130 and 131, with the sliding step optionally set to 1/6.
  • when two parallel lines (i.e., the lower sides of the border images 130 and 131) are detected within the sliding window 140, the later-detected line is determined as the lower side of the border image 131.
  • the upper side, the left side and the right side of the frame image 131 are determined by a similar method, which will not be repeated here.
  • the area in the border image 131 is equally divided to obtain the area 120 .
  • the number of target areas contained in the image to be processed 100 is 4, so the area within the border image 131 is divided into four equal parts, thereby obtaining the regions 120, 121, 122 and 123.
  • the characteristic information of the target area 110 is acquired.
  • the feature information mentioned here includes image pixel color information.
  • a color space model is constructed based on the area 120 to obtain image pixel color information of the area 120 according to the color space model.
  • the color space model may be a HIS color model, and the acquired image pixel color information at this time is the H component in the HIS color model.
  • the color space model may also be a Lab color model, and the acquired image pixel color information in that case is the a component of the Lab color model.
  • the acquired feature information (for example, the H component or the a component)
  • the target region 110 can be effectively and automatically extracted from the image to be processed 100 by acquiring and comparing feature information such as the H component or the a component, and its accuracy can exceed manual recognition.
  • the comparison operation is performed separately for each pixel in the area 120 . It should be understood that the comparison operation in the present invention is not limited thereto, and it may also be performed on some pixels in the area. For example, the pixels in the area may be evenly divided into 100 groups, a comparison operation is performed on one pixel in each group, and the group of pixels meeting the preset standard is identified as the target area.
  • the preset standard may be a preset threshold
  • the comparison operation and the identification operation in S13 may be performed based on threshold segmentation.
  • morphological processing can also be performed on the parts that meet the preset standard.
  • the opening operation and closing operation can be performed sequentially on the part of the image that meets the preset standard, and the largest connected region therein is then retained; next, an opening operation is performed on that largest connected region, and the largest connected region remaining is retained as the final target area.
  • through this morphological processing, the noise in the final target area can be effectively reduced.
  • feature information of the identified target area 110 is acquired.
  • the feature information acquired here includes color, area, perimeter or shape and so on.
  • adopting the method for automatically extracting the target area in the image according to the present invention can effectively realize automatic extraction of the target area, thereby greatly saving extraction cost, improving extraction efficiency, and making batch, intelligent recognition and extraction possible.
  • target regions 111, 112 or 113 in FIG. 1 can be automatically extracted using a similar method, which will not be repeated here. It should also be noted that although the other target areas 111, 112 and 113 shown in FIG. 1 have the same shape as the target area 110, the claimed technical solution is not limited thereto; depending on the actual situation, other target areas may have shapes different from that of the target area 110.
  • more than one color space model may be constructed in S12.
  • a HIS color model and a Lab color model are respectively constructed for the region 120 shown in FIG. 1 , and the H component in the HIS color model and the a component in the Lab color model are obtained respectively.
  • the H component and the a component are processed separately, for example, by threshold segmentation.
  • the first candidate area is identified for the H component
  • the second candidate area is identified for the a component
  • the perimeter-to-area ratios of the first candidate area and the second candidate area are calculated respectively, and the one of the two with the smaller ratio is determined as the target area.
  • the finally determined target region is more accurate. It should be noted that in the concept of the present invention, multiple color space models can also be used to optimize the determination of the target area in other ways, not limited to the above-mentioned ways.
  • the threshold may be corrected by machine learning.
  • the used threshold is corrected by the following formula.
  • T1 = (x - y × S) × T0;
  • x and y are the first coefficient and the second coefficient respectively
  • S is the score obtained by performing grade scoring processing on the region (for example, region 120) through machine learning,
  • T0 is the initial threshold
  • T1 is the corrected threshold, which is used to replace the initial threshold T0 in the above comparison.
  • the first coefficient and the second coefficient are empirical factors, which can be obtained through methods such as experimental testing or provision by a third party (for example, a research institution).
  • the first coefficient x may take values in [1.2, 1.6]
  • the second coefficient y may take values in [0.05, 0.2].
  • the above-mentioned machine learning is based on a deep learning network of the AlexNet architecture, and the training sample library of the machine learning includes input data from manual grade scoring processing.
  • the manually completed grade scoring includes four grades, for example, grade 0 corresponds to the area of the target area being zero, grade 1 to the area being smaller than a preset area threshold, grade 2 to the area being equal to the area threshold, and grade 3 to the area being greater than the area threshold.
  • machine learning may also perform weighted summation processing on the basis of the corresponding weight values of each of the above four levels to determine the level score.
  • the weight value here can be selected, set and adjusted according to the actual application situation. For example, class 0 has a weight value of 50%, class 1 has a weight value of 18%, class 2 has a weight value of 18%, and class 3 has a weight value of 14%.
  • Fig. 4 shows a system 400 for automatically extracting a target area in an image according to an embodiment of the present invention.
  • the system 400 comprises detection means 401 arranged to detect a region of interest on a target object.
  • the detection device 401 is the above-mentioned skin test reactor, and the region of interest is the skin of the subject.
  • the detection device can also be any other suitable device that can cause image-recognizable changes in the region of interest on the target object, such as a light test reaction detection device, a sound test reaction detection device, and the like.
  • the region of interest may also be any suitable region of the target object where image-recognizable changes have occurred, such as eyes, head, limbs, and the like.
  • the system 400 further includes an imaging device 402 configured to capture the region of interest detected by the detection device into an image.
  • the imaging device may be a CCD or CMOS camera, a digital camera, etc., which capture the subject's skin detected by the skin test reactor into images.
  • the system 400 may further include a processor 403 .
  • the imaging device 402 can be connected to the processor 403 so as to provide the processor 403 with the captured image.
  • the processor 403 can be implemented by any feasible hardware such as chips, units, modules, etc., of course, it is also allowed to be implemented by combining software and hardware.
  • system 400 may further include a memory 404 for storing instructions.
  • the processor 403 may be configured to implement the method according to any one of the above embodiments of the present application when the instructions are executed, so as to automatically extract the target area from the captured image.
  • the processor 403 may include one or more processing devices, and the memory 404 may include one or more tangible non-transitory machine-readable media.
  • machine-readable media can include RAM, ROM, EPROM, EEPROM, optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of machine-executable instructions or data structures and that can be accessed by the processor 403 or by other processor-based devices.
  • some block diagrams shown in Fig. 4 are only used to schematically represent functional entities and do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different network and/or processor devices and/or microcontroller devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

Provided are a method, a system and a computer-readable storage medium for automatically extracting a target area in an image. The method comprises the steps of: A. selecting at least one region from an image to be processed, the region containing at least a part of a target area, the target area being defined as being related at least to image pixel color; B. acquiring feature information of the region, the feature information comprising at least image pixel color information; C. comparing the acquired feature information with a corresponding preset standard, and identifying the part of the region that meets the preset standard as the target area; and D. acquiring feature information of the target area. The method saves extraction cost and improves extraction efficiency.

Description

Method, system and storage medium for automatically extracting a target area in an image
Technical Field
The present invention relates to the field of image processing, and in particular to a method, a system and a computer-readable storage medium for automatically extracting a target area in an image.
Background Art
As the visual basis of human perception of the world, images are an important means for humans to acquire, express and convey information. Since the 20th century, with the continuous development of computer technology, image processing technology has developed accordingly.
In practical applications, it is often necessary to extract a specific target area from an initially acquired image. Traditionally, this can be done by manual recognition. However, once the images to be processed grow in sample size or complexity, the time and labor consumed by recognition increase correspondingly. Moreover, the accuracy and consistency of manual recognition tend to fluctuate with the perceptual ability and working state of different recognizers and are difficult to control. This also makes the recognition work difficult.
Summary of the Invention
In order to solve or at least alleviate one or more of the existing problems described above, the present invention provides the following technical solutions.
According to one aspect of the present invention, a method for automatically extracting a target area in an image is provided. The method comprises the following steps:
A. selecting at least one region from an image to be processed, the region containing at least a part of a target area, the target area being defined as being related at least to image pixel color;
B. acquiring feature information of the region, the feature information comprising at least image pixel color information;
C. comparing the acquired feature information with a corresponding preset standard, and identifying the part of the region that meets the preset standard as the target area; and
D. acquiring feature information of the target area.
As an alternative or supplement to the above solution, in step A of the method according to an embodiment of the present invention, the region is selected based on a feature image used for positioning and recognition in the image to be processed.
As an alternative or supplement to the above solution, in the method according to an embodiment of the present invention, the target area is substantially circular.
As an alternative or supplement to the above solution, in the method according to an embodiment of the present invention, the feature image comprises at least two border images located at an edge region of the image to be processed and spaced apart from each other, and in step A:
an edge-region image of the image to be processed is determined;
the at least two border images are identified from the edge-region image; and
based on the identified at least two border images, at least one feature position on the image to be processed is determined, the at least one feature position being associated with the target area.
As an alternative or supplement to the above solution, in step B of the method according to an embodiment of the present invention, at least one color space model is constructed based on the region, so as to acquire the image pixel color information of the region according to the color space model.
As an alternative or supplement to the above solution, in the method according to an embodiment of the present invention, in step B, an HIS color model and a Lab color model are respectively constructed based on the region, and the H component of the HIS color model and the a component of the Lab color model are respectively acquired; in step C, the acquired H component is compared with an H-component threshold and the part reaching the H-component threshold is identified as a first candidate area, the acquired a component is compared with an a-component threshold and the part reaching the a-component threshold is identified as a second candidate area, the perimeter-to-area ratios of the first candidate area and the second candidate area are respectively calculated, and the one of the two with the smaller ratio is determined as the target area.
As an alternative or supplement to the above solution, in the method according to an embodiment of the present invention, the preset standard is a preset threshold, and the threshold used is corrected by the following formula:
T1 = (x - y × S) × T0,
where x and y are a first coefficient and a second coefficient respectively,
S is the score obtained by grade-scoring the region through machine learning,
T0 is an initial threshold, and
T1 is the corrected threshold, which replaces the initial threshold T0 in the comparison.
As an alternative or supplement to the above solution, in the method according to an embodiment of the present invention, the machine learning is based on a deep learning network with the AlexNet architecture, and the training sample library of the machine learning includes input data produced by manual grade scoring.
As an alternative or supplement to the above solution, in the method according to an embodiment of the present invention, the manually completed grade scoring comprises four grades, whose respective grade scores correspond to the cases where the area of the target area is zero, the area of the target area is smaller than a preset area threshold, the area of the target area is equal to the area threshold, and the area of the target area is greater than the area threshold.
As an alternative or supplement to the above solution, in the method according to an embodiment of the present invention, the grade scoring of the machine learning comprises the four grades, or the machine learning determines the grade score by weighted summation using the corresponding weight value of each of the four grades.
As an alternative or supplement to the above solution, in step C of the method according to an embodiment of the present invention, morphological processing is further performed on the part that meets the preset standard.
As an alternative or supplement to the above solution, in the method according to an embodiment of the present invention, the morphological processing comprises:
performing an opening operation and a closing operation in sequence on the image of the part, and then retaining the largest connected region therein; and
performing an opening operation on the largest connected region, and then retaining the largest connected region therein as the target area.
As an alternative or supplement to the above solution, in the method according to an embodiment of the present invention, the feature information includes color, area, perimeter and/or shape.
As an alternative or supplement to the above solution, in the method according to an embodiment of the present invention, there are at least four target areas.
In addition, according to another aspect of the present invention, a system for automatically extracting a target area in an image is provided. The system comprises:
a memory configured to store instructions; and
a processor configured to implement, when the instructions are executed, the method for automatically extracting a target area in an image according to any embodiment of the present invention.
As an alternative or supplement to the above solution, the system according to an embodiment of the present invention further comprises:
a detection device configured to detect a region of interest on a target object; and
an imaging device configured to capture the region of interest detected by the detection device as an image, the imaging device being connected to the processor to provide it with the captured image.
As an alternative or supplement to the above solution, in the system according to an embodiment of the present invention, the detection device comprises a niacin skin reactor, and the region of interest comprises the skin of the target object.
As an alternative or supplement to the above solution, the system according to an embodiment of the present invention further comprises a communication device. The communication device is connected to the processor and configured to send data out of the system for processing or storage. The data includes the image to be processed, the identified target area and/or the acquired feature information.
Moreover, according to yet another aspect of the present invention, a computer-readable storage medium is provided for storing instructions which, when executed, implement the method for automatically extracting a target area in an image according to any embodiment of the present invention.
The technical solution of the present invention for automatically extracting a target area in an image realizes automatic extraction of the target area, greatly saves extraction cost, improves extraction efficiency, and makes batch and automated extraction possible.
Brief Description of the Drawings
The above and other objects and advantages of the present invention will become more complete and clear from the following detailed description taken in conjunction with the accompanying drawings.
Fig. 1 shows an image 100 to be processed to which a method for automatically extracting a target area in an image according to an embodiment of the present invention can be applied.
Fig. 2 shows an embodiment of a method for selecting a region 120 in the image 100 of Fig. 1.
Fig. 3 shows a method 300 for automatically extracting a target area in an image according to an embodiment of the present invention.
Fig. 4 shows a system 400 for automatically extracting a target area in an image according to an embodiment of the present invention.
Detailed Description
It should be understood that the terms "first", "second", etc. in the specification and claims of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. In addition, unless otherwise specified, the terms "include", "comprise", "have" and similar expressions are intended to denote non-exclusive inclusion.
First, according to the design concept of the present invention, a method for automatically extracting a target area in an image is provided. The method comprises the following steps: selecting at least one region from an image to be processed, the region containing at least a part of a target area, the target area being defined as being related at least to image pixel color; acquiring feature information of the region, the feature information comprising at least image pixel color information; comparing the acquired feature information with a corresponding preset standard, and identifying the part of the region that meets the preset standard as the target area; and acquiring feature information of the target area. In practical applications, the technical solution according to the present invention can be applied to many technical fields such as petroleum, chemical industry, automobile, biology and medicine.
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. Referring to Fig. 1, the method for automatically extracting a target area in an image according to an embodiment of the present invention can be used to extract a target area from the image 100 to be processed shown in Fig. 1.
The image 100 to be processed is obtained by photographing a test area subjected to a niacin skin test. Research has shown that, under certain conditions, niacin reagents (for example, nicotinic acid ester solutions) in contact with human skin cause visible changes, such as redness and swelling, on the skin of some subjects, and these changes may be related to the subject's emotional and mental health status. It is therefore necessary to perform image analysis on the tested area of the subject. To this end, it is desirable to extract target areas (for example, the reddened areas 110, 111, 112 and 113) from the acquired image of the tested area (for example, the image 100 in Fig. 1). Conventionally, manual recognition is used to extract such target areas from the image to be processed. However, manual extraction is time-consuming, labor-intensive and inaccurate. A method for automatically extracting such target areas is therefore desirable, so as to acquire their feature information, such as color, area, perimeter or shape.
It should be noted that, for ease of production and testing, the target area of the niacin skin test may be substantially circular. Moreover, owing to the specifics of the test, the target area may have burrs. However, the target area claimed in the present invention is not limited to such a shape; depending on the situation it may be any appropriate shape, such as an ellipse, a triangle or a rectangle, and suitable irregular shapes are even allowed in some applications.
It should also be noted that, for the purpose of testing different reagent concentrations simultaneously, or of recording test results of different durations, multiple tests may be conducted within roughly the same period. For example, one skin test reactor can be used to conduct niacin skin tests at four concentrations on a subject simultaneously, thus obtaining the image 100 containing four target areas as shown in Fig. 1. In the claimed technical solution, however, the number of target areas is not limited thereto and may also be 1, 3, 6 or any other appropriate number.
Taking the target area 110 as an example, a method 300 for automatically extracting the target area 110 in the image 100 according to an embodiment of the present application will be described in detail below with reference to Fig. 3.
In step S11, a region 120 is selected from the image 100. The region 120 contains the target area 110. It should be noted that in other embodiments the selected region may contain only a part of the target area 110.
Optionally, selecting the region 120 is implemented based on a feature image used for positioning and recognition in the image 100. As an optional case, as shown in Fig. 2, the feature image may include two border images 130 and 131 located at the edge region of the image 100 and spaced apart from each other.
Specifically, referring to Fig. 2, selecting the region 120 can be implemented as follows. First, a sliding window 140 may be used to traverse upward from the midpoint of the lower edge of the image 100. As an optional case, the sliding window 140 is a square whose side length can optionally be set to 2-3 times the gap between the border images 130 and 131, with a sliding step of 1/6. When two parallel lines (i.e., the lower sides of the border images 130 and 131) are detected within the sliding window 140, the later-detected line is determined as the lower side of the border image 131. The upper, left and right sides of the border image 131 are determined in a similar way, which will not be repeated here. Then, according to the number of target areas contained in the image 100, the area inside the border image 131 is divided equally to obtain the region 120. In the embodiment shown in Fig. 2, the image 100 contains four target areas, so the area inside the border image 131 is divided into four equal parts, obtaining the regions 120, 121, 122 and 123 from left to right.
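The upward border search described above can be sketched as follows. This is a simplified illustration rather than the patent's implementation: it assumes a pre-binarized image (1 = border pixel, 0 = background), treats any sufficiently long horizontal run within a row as a candidate border line, and returns the later-detected (second) line, as the text prescribes. The names `find_inner_border_row` and `min_run` are invented for this sketch.

```python
def _longest_run(row):
    """Length of the longest consecutive run of 1s in a row."""
    best = cur = 0
    for v in row:
        cur = cur + 1 if v else 0
        if cur > best:
            best = cur
    return best

def find_inner_border_row(binary, min_run=5):
    """Scan upward from the bottom; return the row of the second line found.

    Consecutive hit rows are treated as one thick stroke, so a 2-pixel-thick
    border still counts as a single line.
    """
    hits = 0
    prev = None
    for r in range(len(binary) - 1, -1, -1):       # bottom -> top
        if _longest_run(binary[r]) >= min_run:
            if prev is None or prev != r + 1:      # start of a new line
                hits += 1
                if hits == 2:
                    return r                       # later-detected line
            prev = r
        else:
            prev = None
    return None
```

Against a toy image with border lines at rows 8 and 5, the function skips the outer line (row 8) and reports the inner one (row 5), mirroring how the lower side of border image 131 is picked.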
Returning to Fig. 3, in S12, feature information of the target area 110 is acquired. The feature information here includes image pixel color information. For example, a color space model is constructed based on the region 120 so as to acquire the image pixel color information of the region 120 according to the color space model. The color space model may be an HIS color model, in which case the acquired image pixel color information is the H component of the HIS color model. The color space model may also be a Lab color model, in which case the acquired image pixel color information is the a component of the Lab color model.
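A minimal per-pixel sketch of the two components named in S12. The HIS hue formula and the sRGB-to-Lab conversion below are textbook definitions, not taken from the patent; the function names and the D65 white point are assumptions of this sketch.

```python
import math

def hsi_hue(r, g, b):
    """H component of the HIS (hue, saturation, intensity) model, in degrees."""
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) or 1e-12  # avoid /0 on grays
    theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    return theta if b <= g else 360.0 - theta

def lab_a(r, g, b):
    """a* component of CIE Lab (D65) for 8-bit sRGB input; positive a* is redder."""
    def lin(c):  # sRGB gamma expansion to linear light
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # sRGB -> XYZ (D65); a* only needs the X and Y tristimulus values
    x = (0.4124 * rl + 0.3576 * gl + 0.1805 * bl) / 0.95047
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    f = lambda t: t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0
    return 500.0 * (f(x) - f(y))
```

For a reddened-skin detector both components point the same way: pure red gives a hue of 0 degrees and a strongly positive a*, while green gives a strongly negative a*.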
In S13, the acquired feature information (for example, the H component or the a component) is compared with a corresponding preset standard, and the part of the region 120 that meets the preset standard is identified as the target area 110. By acquiring and comparing feature information such as the H component or the a component, the target area 110 can be effectively and automatically extracted from the image 100, with an accuracy that can exceed manual recognition.
Here, the comparison is performed separately for each pixel in the region 120. It should be understood that the comparison in the present invention is not limited thereto; it may also be performed on only some pixels in the region. For example, the pixels in the region may be evenly divided into 100 groups, the comparison performed on one pixel of each group, and the groups whose pixels meet the preset standard identified as the target area.
Optionally, the preset standard may be a preset threshold, and the comparison and identification in S13 may be performed based on threshold segmentation.
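The per-pixel threshold segmentation can be illustrated as below. The text does not fully specify the direction of "reaching" the threshold (for the a component larger values are redder, while for the H component near red the polarity may be inverted), so the comparator is assumed to be `>=` and left pluggable; `threshold_mask` is a name invented for this sketch.

```python
def threshold_mask(component, thresh, reaches=lambda v, t: v >= t):
    """Binary mask over a component plane: 1 where the value reaches the threshold.

    `component` is a 2D list of per-pixel values (e.g. the a component);
    `reaches` can be swapped for `<=` when lower values indicate the target.
    """
    return [[1 if reaches(v, thresh) else 0 for v in row] for row in component]
```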
Optionally, morphological processing may also be performed on the part that meets the preset standard. For example, an opening operation and a closing operation may be performed in sequence on the image of that part, after which the largest connected region is retained; next, an opening operation may be performed on that largest connected region, after which the largest connected region remaining is retained as the final target area. This morphological processing effectively reduces noise in the final target area.
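One building block of that morphological step, keeping only the largest connected region, can be sketched with a breadth-first flood fill (the opening and closing operations themselves are standard erosion/dilation compositions and are omitted here). This is an illustrative 4-connected version, not the patent's code.

```python
from collections import deque

def largest_component(mask):
    """Return a copy of a binary mask keeping only its largest 4-connected region."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                comp, q = [], deque([(i, j)])   # flood-fill one component
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out
```

Applied to a mask containing a 2x2 blob plus an isolated noise pixel, the blob survives and the stray pixel is cleared, which is exactly the denoising effect the paragraph describes.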
In S14, feature information of the identified target area 110 is acquired. Optionally, the feature information acquired here includes color, area, perimeter, shape, and so on.
Thus, the method according to the present invention for automatically extracting a target area in an image can effectively realize automatic extraction of the target area, thereby greatly saving extraction cost, improving extraction efficiency, and making batch, intelligent recognition and extraction possible.
It should be understood that the other target areas in Fig. 1 (the reddened areas 111, 112 or 113) can be extracted automatically with a similar method, which will not be repeated here. It should also be noted that although the other target areas 111, 112 and 113 shown in Fig. 1 have the same shape as the target area 110, the claimed technical solution is not limited thereto; depending on the actual situation, other target areas may have shapes different from that of the target area 110.
It should be noted that, in the method for automatically extracting a target area in an image according to an embodiment of the present invention, more than one color space model may be constructed in S12. For example, an HIS color model and a Lab color model are respectively constructed for the region 120 shown in Fig. 1, and the H component of the HIS color model and the a component of the Lab color model are respectively acquired. In S13, the H component and the a component are processed separately, for example by threshold segmentation. In S14, according to the threshold segmentation results, a first candidate area is identified for the H component and a second candidate area for the a component, the perimeter-to-area ratios of the first and second candidate areas are calculated respectively, and the one with the smaller ratio is determined as the target area. Comparing the two candidate areas makes the finally determined target area more accurate. Note that, within the concept of the present invention, multiple color space models may also be used in other ways to optimize the determination of the target area, not limited to the above.
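The candidate-selection rule, keeping whichever of the two candidate masks has the smaller perimeter-to-area ratio, can be sketched as follows. The perimeter is counted here as the number of exposed pixel edges, one plausible discretization among several; the function names are invented for this sketch.

```python
def perimeter_area_ratio(mask):
    """Perimeter (exposed pixel edges) divided by area (pixel count) of a mask."""
    h, w = len(mask), len(mask[0])
    area = perim = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j]:
                area += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if not (0 <= ni < h and 0 <= nj < w) or not mask[ni][nj]:
                        perim += 1
    return perim / area if area else float("inf")

def pick_target(candidate_h, candidate_a):
    """Keep the candidate mask with the smaller perimeter-to-area ratio."""
    return min((candidate_h, candidate_a), key=perimeter_area_ratio)
```

The rule favors compact, smooth regions: a solid 3x3 square (ratio 12/9) beats a thin 1x5 line (ratio 12/5), which matches the intuition that the redness area is roughly circular rather than elongated or ragged.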
As a further illustration, in the method for automatically extracting a target area in an image according to an embodiment of the present invention, where the preset standard is a preset threshold, that threshold may be corrected by machine learning. Specifically, the threshold used is corrected by the following formula.
T1 = (x - y × S) × T0,
where x and y are a first coefficient and a second coefficient respectively,
S is the score obtained by grade-scoring the region (for example, the region 120) through machine learning,
T0 is the initial threshold, and
T1 is the corrected threshold, which replaces the initial threshold T0 in the above comparison.
In the above formula, the first and second coefficients are empirical factors; they can be obtained by means such as experimental testing or provision by a third party (for example, a research institution). By way of example, in the embodiment shown in Fig. 1, the first coefficient x may take a value in [1.2, 1.6] and the second coefficient y a value in [0.05, 0.2]. Correcting the threshold in this way can eliminate, or at least mitigate, the influence of the system's inherent bias on the extraction result, thereby helping to improve the accuracy of target area extraction.
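A worked numeric reading of the correction formula, with the mid-range coefficients x = 1.4 and y = 0.1 chosen arbitrarily from the stated intervals: a higher grade score S (a larger redness area) pulls the threshold down toward the initial value, making detection more permissive, while S = 0 leaves the threshold raised.

```python
def corrected_threshold(t0, score, x=1.4, y=0.1):
    """T1 = (x - y * S) * T0; x in [1.2, 1.6] and y in [0.05, 0.2] are empirical."""
    return (x - y * score) * t0
```

With T0 = 100, a score S = 0 gives T1 = 140, while S = 3 gives T1 = 110.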
Optionally, the above machine learning is based on a deep learning network with the AlexNet architecture, and the training sample library of the machine learning includes input data produced by manual grade scoring.
Optionally, the manually completed grade scoring comprises four grades: for example, grade 0 corresponds to the area of the target area being zero, grade 1 to the area being smaller than a preset area threshold, grade 2 to the area being equal to the area threshold, and grade 3 to the area being greater than the area threshold. It should be understood that the grade scoring in the present invention is not limited to a 0-1-2-3 scheme; any suitable scoring scheme may be used.
Optionally, the machine learning may also determine the grade score by weighted summation using the corresponding weight value of each of the above four grades. The weight values can be chosen, set and adjusted for the actual application. For example, grade 0 has a weight of 50%, grade 1 a weight of 18%, grade 2 a weight of 18%, and grade 3 a weight of 14%.
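The weighted-summation variant is only outlined in the text. One possible reading, assumed here, is that the network outputs a probability per grade and the score S is the sum over grades of weight times probability times grade index; both the function name and this interpretation are hypothetical, not the patent's specification.

```python
def grade_score(probabilities, weights=(0.50, 0.18, 0.18, 0.14)):
    """One assumed reading: S = sum_k weight_k * p_k * k over the four grades.

    `probabilities` are the classifier's per-grade outputs (grades 0..3);
    the default weights are the 50/18/18/14 percentages given in the text.
    """
    return sum(w * p * k for k, (w, p) in enumerate(zip(weights, probabilities)))
```

Under this reading, a confident grade-3 prediction yields S = 0.14 × 3 = 0.42, and a confident grade-0 prediction yields S = 0, which then feeds into the threshold-correction formula above.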
Fig. 4 shows a system 400 for automatically extracting a target area in an image according to an embodiment of the present invention. The system 400 includes a detection device 401 configured to detect a region of interest on a target object. Optionally, the detection device 401 is the above-mentioned skin test reactor and the region of interest is the subject's skin. It should be understood that, according to different application requirements, the detection device may also be any other suitable device capable of causing image-recognizable changes in the region of interest on the target object, such as a light test reaction detection device or a sound test reaction detection device. It should also be understood that the region of interest may be any suitable region of the target object where image-recognizable changes have occurred, such as the eyes, head or limbs.
As shown in Fig. 4, the system 400 further includes an imaging device 402 configured to capture the region of interest detected by the detection device as an image. For example, the imaging device may be a CCD or CMOS camera, a digital camera, etc., which captures the subject's skin detected by the skin test reactor as an image.
Further, the system 400 may include a processor 403. The imaging device 402 may be connected to the processor 403 so as to provide it with the captured image. In specific applications, the processor 403 may be implemented with any feasible hardware such as chips, units or modules, and implementation by a combination of software and hardware is of course also allowed.
Further, the system 400 may include a memory 404 for storing instructions. The processor 403 may implement, when the instructions are executed, the method according to any of the above embodiments of the present application, so as to automatically extract the target area from the captured image.
The processor 403 may include one or more processing devices, and the memory 404 may include one or more tangible, non-transitory machine-readable media. By way of example, such machine-readable media can include RAM, ROM, EPROM, EEPROM, optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of machine-executable instructions or data structures and that can be accessed by the processor 403 or by other processor-based devices.
Further, the system may include a communication device 405 configured to send data such as the image to be processed (for example, the photographed image 100), the identified target area (for example, the target area 110) and the acquired feature information (for example, the color, area, perimeter and shape of the target area 110) out of the system 400 by wired or wireless means (for example, to local, remote and/or cloud memories, processors or servers) for storage or processing.
It should be pointed out that some blocks shown in Fig. 4 are merely schematic representations of functional entities and do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different network and/or processor devices and/or microcontroller devices.
It should also be understood that in some alternative embodiments the functions/steps included in the method may occur in an order different from that shown in the flowchart. For example, two functions/steps shown in sequence may be executed substantially simultaneously or even in reverse order, depending on the functions/steps involved.
Although only some embodiments of the present invention have been described above, those of ordinary skill in the art will appreciate that the present invention can be embodied in many other forms without departing from its spirit and scope. Therefore, the illustrated examples and embodiments are to be regarded as illustrative rather than restrictive, and the present invention may cover various modifications and substitutions without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (16)

  1. A method for automatically extracting a target area in an image, characterized in that the method comprises the steps of:
    A. selecting at least one region from an image to be processed, the region containing at least a part of a target area, the target area being defined as being related at least to image pixel color;
    B. acquiring feature information of the region, the feature information comprising at least image pixel color information;
    C. comparing the acquired feature information with a corresponding preset standard, and identifying the part of the region that meets the preset standard as the target area; and
    D. acquiring feature information of the target area.
  2. The method according to claim 1, characterized in that in step A the region is selected based on a feature image used for positioning and recognition in the image to be processed, and/or the target area is substantially circular.
  3. The method according to claim 2, characterized in that the feature image comprises at least two border images located at an edge region of the image to be processed and spaced apart from each other, and in step A:
    an edge-region image of the image to be processed is determined;
    the at least two border images are identified from the edge-region image; and
    based on the identified at least two border images, at least one feature position on the image to be processed is determined, the at least one feature position being associated with the target area.
  4. The method according to claim 1, characterized in that in step B at least one color space model is constructed based on the region, so as to acquire the image pixel color information of the region according to the color space model.
  5. The method according to claim 4, characterized in that:
    in step B, an HIS color model and a Lab color model are respectively constructed based on the region, and the H component of the HIS color model and the a component of the Lab color model are respectively acquired; and
    in step C, the acquired H component is compared with an H-component threshold and the part reaching the H-component threshold is identified as a first candidate area, the acquired a component is compared with an a-component threshold and the part reaching the a-component threshold is identified as a second candidate area, the perimeter-to-area ratios of the first candidate area and the second candidate area are respectively calculated, and the one of the two with the smaller ratio is determined as the target area.
  6. The method according to claim 1, characterized in that the preset standard is a preset threshold, and the threshold used is corrected by the following formula:
    T1 = (x - y × S) × T0,
    where x and y are a first coefficient and a second coefficient respectively,
    S is the score obtained by grade-scoring the region through machine learning,
    T0 is an initial threshold, and
    T1 is the corrected threshold, which replaces the initial threshold T0 in the comparison.
  7. The method according to claim 6, characterized in that the machine learning is based on a deep learning network with the AlexNet architecture, and the training sample library of the machine learning includes input data produced by manual grade scoring.
  8. The method according to claim 7, characterized in that the manually completed grade scoring comprises four grades, whose respective grade scores correspond to the cases where the area of the target area is zero, the area of the target area is smaller than a preset area threshold, the area of the target area is equal to the area threshold, and the area of the target area is greater than the area threshold; and/or the grade scoring of the machine learning comprises the four grades, or the machine learning determines the grade score by weighted summation using the corresponding weight value of each of the four grades.
  9. The method according to claim 1, characterized in that in step C morphological processing is further performed on the part that meets the preset standard.
  10. The method according to claim 9, characterized in that the morphological processing comprises:
    performing an opening operation and a closing operation in sequence on the image of the part, and then retaining the largest connected region therein; and
    performing an opening operation on the largest connected region, and then retaining the largest connected region therein as the target area.
  11. The method according to any one of claims 1-10, characterized in that the feature information includes color, area, perimeter and/or shape, and/or there are at least four target areas in the image to be processed.
  12. A system for automatically extracting a target area in an image, characterized in that the system comprises:
    a memory configured to store instructions; and
    a processor configured to implement, when the instructions are executed, the method for automatically extracting a target area in an image according to any one of claims 1-11.
  13. The system according to claim 12, characterized in that the system further comprises:
    a detection device configured to detect a region of interest on a target object; and
    an imaging device configured to capture the region of interest detected by the detection device as an image, the imaging device being connected to the processor to provide it with the captured image.
  14. The system according to claim 13, characterized in that the detection device comprises a niacin skin reactor, and the region of interest comprises the skin of the target object.
  15. The system according to claim 13 or 14, characterized in that the system further comprises:
    a communication device connected to the processor and configured to send data out of the system for processing or storage,
    the data including the image to be processed, the identified target area and/or the acquired feature information.
  16. A computer-readable storage medium for storing instructions, characterized in that the instructions, when executed, implement the method for automatically extracting a target area in an image according to any one of claims 1-11.
PCT/CN2021/129447 2021-06-25 2021-11-09 Method, system and storage medium for automatically extracting a target area in an image WO2022267300A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110707754.0 2021-06-25
CN202110707754.0A CN113223041B (zh) 2021-06-25 2021-06-25 Method, system and storage medium for automatically extracting a target area in an image

Publications (1)

Publication Number Publication Date
WO2022267300A1 true WO2022267300A1 (zh) 2022-12-29

Family

ID=77080937

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/129447 WO2022267300A1 (zh) 2021-06-25 2021-11-09 Method, system and storage medium for automatically extracting a target area in an image

Country Status (2)

Country Link
CN (1) CN113223041B (zh)
WO (1) WO2022267300A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113223041B (zh) 2021-06-25 2024-01-12 上海添音生物科技有限公司 Method, system and storage medium for automatically extracting a target area in an image
CN113608512B (zh) 2021-10-08 2022-02-22 齐鲁工业大学 Control method, system, device and terminal for making traditional Chinese medicine water pills
CN117122271A (zh) 2022-05-18 2023-11-28 上海添音生物科技有限公司 Wearable device for skin testing

Citations (5)

Publication number Priority date Publication date Assignee Title
US20170148154A1 (en) * 2015-11-24 2017-05-25 Keyence Corporation Positioning Method, Positioning Apparatus, Program, And Computer Readable Recording Medium
CN110148121A (zh) * 2019-05-09 2019-08-20 Tencent Technology (Shenzhen) Co., Ltd. Skin image processing method and apparatus, electronic device, and medium
CN110443811A (zh) * 2019-07-26 2019-11-12 Guangzhou University of Chinese Medicine (Guangzhou Institute of Chinese Medicine) Fully automatic segmentation method for leaf images with complex backgrounds
CN111557672A (zh) * 2020-05-15 2020-08-21 Shanghai Mental Health Center (Shanghai Psychological Counseling Training Center) Niacin skin reaction image analysis method and device
CN113223041A (zh) * 2021-06-25 2021-08-06 Shanghai Tianyin Biotechnology Co., Ltd. Method, system and storage medium for automatically extracting target area in image

Family Cites Families (15)

Publication number Priority date Publication date Assignee Title
CN101403703B (zh) * 2008-11-07 2010-11-03 Tsinghua University Real-time detection method for foreign fibers in lint cotton
CN102999916B (zh) * 2012-12-12 2015-07-29 Graduate School at Shenzhen, Tsinghua University Edge extraction method for color images
CN103839283A (zh) * 2014-03-11 2014-06-04 Zhejiang Provincial Special Equipment Inspection and Research Institute Non-destructive area and perimeter measurement method for small irregular objects
JP6552613B2 (ja) * 2015-05-21 2019-07-31 Olympus Corporation Image processing device, operating method of image processing device, and image processing program
CN105761283B (zh) * 2016-02-14 2018-12-25 Guangzhou Shenma Mobile Information Technology Co., Ltd. Method and apparatus for extracting the dominant color of an image
CN107506738A (zh) * 2017-08-30 2017-12-22 Shenzhen Intellifusion Technologies Co., Ltd. Feature extraction method, image recognition method, apparatus, and electronic device
WO2019108888A1 (en) * 2017-11-30 2019-06-06 The Research Foundation For The State University Of New York SYSTEM AND METHOD TO QUANTIFY TUMOR-INFILTRATING LYMPHOCYTES (TILs) FOR CLINICAL PATHOLOGY ANALYSIS
CN109978810B (zh) * 2017-12-26 2024-03-12 Nantong Robert Medical Technology Co., Ltd. Mole detection method, system, device, and storage medium
CN108388833A (zh) * 2018-01-15 2018-08-10 Alibaba Group Holding Ltd. Image recognition method, apparatus, and device
US10991067B2 (en) * 2019-09-19 2021-04-27 Zeekit Online Shopping Ltd. Virtual presentations without transformation-induced distortion of shape-sensitive areas
CN111079741A (zh) * 2019-12-02 2020-04-28 Tencent Technology (Shenzhen) Co., Ltd. Image border position detection method and apparatus, electronic device, and storage medium
CN111414877B (zh) * 2020-03-26 2023-06-20 Yaoxiang Technology Development (Beijing) Co., Ltd. Table cropping method for removing colored borders, image processing device, and storage medium
CN111680681B (zh) * 2020-06-10 2022-06-21 China Construction Third Engineering Bureau First Construction Engineering Co., Ltd. Image post-processing method and system for excluding abnormally recognized targets, and counting method
CN111695540B (zh) * 2020-06-17 2023-05-30 Beijing ByteDance Network Technology Co., Ltd. Video border recognition method, cropping method, apparatus, electronic device, and medium
CN112752023B (zh) * 2020-12-29 2022-07-15 Shenzhen Tianshitong Vision Co., Ltd. Image adjustment method, apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN113223041B (zh) 2024-01-12
CN113223041A (zh) 2021-08-06

Similar Documents

Publication Publication Date Title
WO2022267300A1 (zh) Method, system and storage medium for automatically extracting target area in image
WO2021047232A1 (zh) Interactive behavior recognition method and apparatus, computer device, and storage medium
AU2020200835B2 (en) System and method for reviewing and analyzing cytological specimens
Hore et al. Finding contours of hippocampus brain cell using microscopic image analysis
JP2021524630A (ja) マルチ分解能登録を介したマルチサンプル全体スライド画像処理
US10395091B2 (en) Image processing apparatus, image processing method, and storage medium identifying cell candidate area
CN106408566B (zh) Fetal ultrasound image quality control method and system
US20180342078A1 (en) Information processing device, information processing method, and information processing system
CN110287862B (zh) Deep learning-based anti-candid-photography detection method
Rossi et al. FishAPP: A mobile App to detect fish falsification through image processing and machine learning techniques
CN111126143A (zh) 一种基于深度学习的运动评判指导方法及系统
Chen et al. AI-PLAX: AI-based placental assessment and examination using photos
Lee et al. Image analysis using machine learning for automated detection of hemoglobin H inclusions in blood smears-a method for morphologic detection of rare cells
US9785848B2 (en) Automated staining and segmentation quality control
Shah et al. Automatic detection and classification of tuberculosis bacilli from camera-enabled Smartphone microscopic images
Punitha et al. Detection of malarial parasite in blood using image processing
WO2017145172A1 (en) System and method for extraction and analysis of samples under a microscope
Aris et al. Fast k-means clustering algorithm for malaria detection in thick blood smear
CN116229236A (zh) Mycobacterium tuberculosis detection method based on an improved YOLO v5 model
CN105184244B (zh) Video face detection method and apparatus
CN115035086A (zh) Deep learning-based intelligent screening and analysis method and apparatus for tuberculin skin tests
CN111768439A (zh) Method and apparatus for determining experiment scores, electronic device, and medium
TWI602155B (zh) Method for enhancing object detection using image content discontinuity
Hayashi et al. Significant feature descriptors for dementia evaluation using simple graphics
WO2013118436A1 (ja) Biological image analysis system, biological image analysis method, and biological image analysis program

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21946782

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE