WO2017128977A1 - Image processing method and apparatus - Google Patents

Image processing method and apparatus

Info

Publication number
WO2017128977A1
WO2017128977A1, PCT/CN2017/071256
Authority
WO
WIPO (PCT)
Prior art keywords
picture
sub
sample
target
pictures
Prior art date
Application number
PCT/CN2017/071256
Other languages
English (en)
French (fr)
Inventor
林桐
Original Assignee
阿里巴巴集团控股有限公司
林桐
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司, 林桐 filed Critical 阿里巴巴集团控股有限公司
Priority to JP2018557180A priority Critical patent/JP6937782B2/ja
Priority to MYPI2018702589A priority patent/MY192394A/en
Priority to KR1020187024465A priority patent/KR102239588B1/ko
Priority to SG11201806345QA priority patent/SG11201806345QA/en
Priority to EP17743592.2A priority patent/EP3410389A4/en
Publication of WO2017128977A1 publication Critical patent/WO2017128977A1/zh
Priority to US16/043,636 priority patent/US10706555B2/en
Priority to PH12018501579A priority patent/PH12018501579A1/en
Priority to US16/719,474 priority patent/US10769795B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/174Segmentation; Edge detection involving the use of two or more images

Definitions

  • the present application relates to the field of image processing technologies, and in particular, to a picture processing method and apparatus.
  • the present application provides a picture processing method and apparatus.
  • a picture processing method comprising:
  • the interference factor in the original picture is removed in a plurality of different ways to obtain a plurality of sample pictures
  • the determining the target sub-picture in the sample sub-picture of the same attribute includes:
  • a sample sub-picture of the same attribute is divided into a plurality of picture sets by using a clustering algorithm, where each picture set includes one or more sample sub-pictures;
  • the target sub-picture is determined in the set of pictures including the most sample sub-pictures.
  • the determining the mathematical parameters of each sample sub-picture includes:
  • determining the target sub-picture includes:
  • the sample sub-picture corresponding to the center point in the picture set after clustering is determined as the target sub-picture.
  • the combining multiple target sub-pictures of different attributes into the target picture includes:
  • a picture processing device comprising:
  • the interference removal unit removes the interference factor in the original picture in a plurality of different manners to obtain a plurality of sample pictures
  • the picture segmentation unit divides each sample picture into a plurality of sample sub-pictures according to a preset segmentation rule
  • a target determining unit that determines a target sub-picture in a sample sub-picture of the same attribute
  • the target synthesis unit combines multiple target sub-pictures of different attributes into a target picture.
  • the target determining unit includes:
  • each picture set includes one or more sample sub-pictures
  • the target determining sub-unit determines the target sub-picture in the picture set including the most sample sub-pictures.
  • the parameter determining subunit generates an RGB vector for the sample sub-picture as a mathematical parameter of the sample sub-picture according to the RGB information of each pixel point in the sample sub-picture.
  • the target determining subunit determines, in the group of pictures that includes the most sample sub-pictures, the sample sub-picture corresponding to the center point in the picture set after the clustering as the target sub-picture.
  • the target synthesizing unit combines multiple target sub-pictures of different attributes into a target picture according to position coordinates of each pixel point in the target sub-picture.
  • the application can first remove the interference factor in the original picture in a plurality of different manners to obtain a plurality of sample pictures, then divide each sample picture into a plurality of sample sub-pictures according to a preset segmentation rule, and determine a target sub-picture among the sample sub-pictures of the same attribute; in this way, the target sub-picture closest to the real image can be determined among the plurality of sample sub-pictures of the same attribute, and by merging the plurality of target sub-pictures of different attributes into the target picture, the obtained target picture can highly restore the real image, thereby improving the accuracy of subsequent image recognition.
  • FIG. 1 is a schematic flow chart of a picture processing method according to an exemplary embodiment of the present application.
  • FIG. 2 is a schematic diagram showing a segmentation of a sample picture according to an exemplary embodiment of the present application.
  • FIG. 3 is a schematic flow chart of determining a target sub-picture in a sample sub-picture of the same attribute according to an exemplary embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a picture processing apparatus according to an exemplary embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a picture processing apparatus according to an exemplary embodiment of the present application.
  • although the terms first, second, third, etc. may be used to describe various information in this application, such information should not be limited to these terms; these terms are only used to distinguish information of the same type from one another.
  • for example, first information may also be referred to as the second information without departing from the scope of the present application; similarly, second information may also be referred to as the first information.
  • depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
  • in the related art, image processing algorithms or image processing tools, such as Photoshop, may be used to remove the mesh texture or watermark in the original image.
  • however, the image obtained after removing the mesh texture or watermark often cannot faithfully restore the image in the original picture, thereby affecting the accuracy of subsequent image recognition.
  • FIG. 1 is a schematic flow chart of a picture processing method according to an exemplary embodiment of the present application.
  • the image processing method may be applied to a terminal, and the terminal may include a smart device such as a smart phone, a tablet computer, or a PDA (Personal Digital Assistant).
  • the image processing method can also be applied to the server. This application does not specifically limit this.
  • the image processing method may include the following steps:
  • step 101 the interference factor in the original picture is removed in a plurality of different manners to obtain a plurality of sample pictures.
  • the original picture is usually a picture to be identified, and the original picture includes an interference factor, and the interference factor is usually an interference pattern such as a texture, a watermark, or the like added subsequently on the basis of the real image.
  • the interference factor in the original picture may be removed by using multiple different interference-removal methods provided in the related art, to obtain multiple sample pictures with the interference factor removed; for example, image processing software such as Photoshop may be used to remove the interference factor in the original picture.
  • Step 102 Divide each sample picture into multiple sample sub-pictures according to a preset segmentation rule.
  • each sample picture may be separately divided into a plurality of sub-pictures according to a preset segmentation rule; for ease of description, the sub-pictures obtained by the segmentation are referred to as sample sub-pictures.
  • the preset segmentation rule may be set by a developer, and may be expressed in units of the size of a sample sub-picture or in units of the number of sample sub-pictures; there are no special restrictions on this.
  • the preset segmentation rule may be to divide the sample picture into 25 sample sub-pictures, for example, divide the sample picture into 25 sample sub-pictures according to a rule of 5 by 5.
  • assuming the interference factor is removed in N different ways to obtain N sample pictures, and each sample picture is divided into M sample sub-pictures, a total of N×M sample sub-pictures can be obtained, where M and N are both natural numbers greater than 1.
  • Step 103 Determine a target sub-picture in a sample sub-picture of the same attribute.
  • each sample sub-picture includes a corresponding attribute, and the attribute is used to indicate location information of the sample sub-picture in the belonging sample picture.
  • assume picture A is a sample picture obtained after the interference is removed from the original picture; the sample picture can be divided into 9 sample sub-pictures in a 3-by-3 grid, and the attributes of these 9 sample sub-pictures are A11, A12, A13, A21, ..., A33, respectively.
  • still taking the segmentation rule shown in FIG. 2 as an example, if the interference factors in the original picture are removed in N different ways and N sample pictures are obtained, a total of N×9 sample sub-pictures can be obtained, with N sample sub-pictures for each of the attributes A11 to A33.
  • the target sub-picture with attribute A11 can be determined among the N sample sub-pictures with attribute A11, the target sub-picture with attribute A12 can be determined among the N sample sub-pictures with attribute A12, and so on, so that 9 target sub-pictures with attributes A11 to A33 can be determined.
  • Step 104 Combine multiple target sub-pictures of different attributes into a target picture.
  • multiple target sub-pictures of different attributes may be merged into the target picture; for example, the multiple target sub-pictures may be merged into the target picture according to the attribute of each target sub-picture, or according to the position coordinates of each pixel in each target sub-picture, which is not specifically limited in this application.
  • the 9 target sub-pictures with attributes A11 to A33 can be combined into one target picture.
  • the application may first remove the interference factor in the original picture in a plurality of different manners to obtain a plurality of sample pictures, then divide each sample picture into a plurality of sample sub-pictures according to a preset segmentation rule, and determine a target sub-picture among the sample sub-pictures of the same attribute; in this way, the target sub-picture closest to the real image can be determined among the plurality of sample sub-pictures of the same attribute, and by merging the plurality of target sub-pictures of different attributes into the target picture, the obtained target picture can highly restore the real image, thereby improving the accuracy of subsequent image recognition.
  • the process of determining a target sub-picture in a sample sub-picture of the same attribute may include the following steps:
  • Step 301 Determine mathematical parameters of each sample sub-picture.
  • the mathematical parameters of each sample sub-picture can be determined for subsequent calculation.
  • an RGB vector may be generated for the sample sub-picture according to the RGB information of each pixel in the sample sub-picture as a mathematical parameter of the sample sub-picture.
  • the RGB information of each pixel in the sample sub-picture, such as an RGB value, may first be acquired, and the RGB vector is then generated from the RGB information of the pixels.
  • assuming the sample sub-picture includes K pixels, where the RGB value of the i-th pixel is Ri and i ranges from 1 to K, the RGB vector of the sample sub-picture is {R1, R2, ..., RK}.
  • Step 302 According to the mathematical parameter, use a clustering algorithm to divide the sample sub-picture of the same attribute into a plurality of picture sets, where each picture set includes one or more sample sub-pictures.
  • the plurality of sample sub-pictures may be divided into a plurality of picture sets by using a clustering algorithm.
  • the clustering algorithm may include: a DBSCAN clustering algorithm (Density-Based Spatial Clustering of Applications with Noise), a K-means clustering algorithm, etc., which is not specifically limited in this application.
  • the scan radius (eps) and the minimum number of included points (minPts) may be preset; each sample sub-picture corresponds to one point in the clustering process, and the minimum number of included points is the minimum number of sample sub-pictures that a divided picture set can contain.
  • during clustering, relevant calculations may be performed based on the mathematical parameters of the sample sub-pictures; for example, the distance between the RGB vectors of two sample sub-pictures may be used as the distance between the two sample sub-pictures.
  • Step 303 Determine the target sub-picture in the picture set that includes the most sample sub-pictures.
  • after dividing the sample sub-pictures of the same attribute into a plurality of picture sets in step 302, the number of sample sub-pictures included in each picture set is determined, and the target sub-picture is then determined in the picture set that includes the most sample sub-pictures.
  • the sample sub-picture corresponding to the center point in the picture set after the clustering may be determined as the target sub-picture.
  • the clustering algorithm may be used to determine the target sub-picture in the sample sub-picture of the same attribute, thereby ensuring that the determined target sub-picture is closer to the real image.
  • the present application also provides an embodiment of a picture processing apparatus.
  • the embodiment of the picture processing apparatus of the present application can be applied to a terminal or a server.
  • the device embodiment may be implemented by software, or by hardware, or by a combination of hardware and software; taking software implementation as an example, a device in a logical sense is formed by the processor of the terminal or server in which it is located reading the corresponding computer program instructions from the non-volatile memory into memory and running them.
  • FIG. 4 is a hardware structure diagram of the terminal or server in which the image processing apparatus of the present application is located; in addition to the processor, memory, network interface, and non-volatile memory shown in FIG. 4, the terminal or server in which the device of the embodiment is located may also include other hardware according to its actual functions, and details are not described herein.
  • FIG. 5 is a schematic structural diagram of a picture processing apparatus according to an exemplary embodiment of the present application.
  • the image processing apparatus 400 may be applied to the terminal or the server shown in FIG. 4, and includes an interference removal unit 401, a picture segmentation unit 402, a target determining unit 403, and a target synthesis unit 404.
  • the target determining unit 403 may further include: a parameter determining subunit 4031, a set dividing subunit 4032, and a target determining subunit 4033.
  • the interference removal unit 401 removes interference factors in the original picture in a plurality of different manners to obtain multiple sample pictures.
  • the picture segmentation unit 402 divides each sample picture into a plurality of sample sub-pictures according to a preset segmentation rule.
  • the target determining unit 403 determines a target sub-picture in a sample sub-picture of the same attribute.
  • the target synthesis unit 404 combines multiple target sub-pictures of different attributes into a target picture.
  • the parameter determining subunit 4031 determines a mathematical parameter of each sample subpicture.
  • the set dividing sub-unit 4032 uses a clustering algorithm to divide the sample sub-picture of the same attribute into a plurality of picture sets, wherein each picture set includes one or more sample sub-pictures .
  • the target determining sub-unit 4033 determines the target sub-picture in the picture set that includes the most sample sub-pictures.
  • the parameter determining subunit 4031 generates an RGB vector for the sample sub-picture as a mathematical parameter of the sample sub-picture according to the RGB information of each pixel point in the sample sub-picture.
  • the target determining sub-unit 4033 determines, in the group of pictures that includes the most sample sub-pictures, the sample sub-picture corresponding to the center point in the picture set after the clustering as the target sub-picture.
  • the target synthesizing unit 404 combines multiple target sub-pictures of different attributes into a target picture according to position coordinates of each pixel point in the target sub-picture.
  • since the device embodiment basically corresponds to the method embodiment, reference may be made to the description of the method embodiment for relevant parts.
  • the device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the present application. Those of ordinary skill in the art can understand and implement them without creative effort.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)

Abstract

The present application provides an image processing method and apparatus. The method includes: removing an interference factor from an original picture in a plurality of different ways to obtain a plurality of sample pictures; dividing each sample picture into a plurality of sample sub-pictures according to a preset segmentation rule; determining a target sub-picture among sample sub-pictures of the same attribute; and merging a plurality of target sub-pictures of different attributes into a target picture. The present application can determine, among a plurality of sample sub-pictures of the same attribute, the target sub-picture closest to the real image, and merge the target sub-pictures of different attributes into the target picture; the resulting target picture restores the real image to a high degree, thereby improving the accuracy of subsequent image recognition.

Description

Image processing method and apparatus
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
With the rapid development of Internet technologies, more and more online services involve picture recognition, for example, recognition of face pictures and recognition of certificate pictures. However, many pictures are now overlaid with interference factors such as mesh textures and watermarks, which reduces the efficiency of picture recognition and increases its difficulty.
Summary
In view of this, the present application provides an image processing method and apparatus.
Specifically, the present application is implemented through the following technical solutions:
An image processing method, the method including:
removing an interference factor from an original picture in a plurality of different ways to obtain a plurality of sample pictures;
dividing each sample picture into a plurality of sample sub-pictures according to a preset segmentation rule;
determining a target sub-picture among sample sub-pictures of the same attribute; and
merging a plurality of target sub-pictures of different attributes into a target picture.
Optionally, the determining a target sub-picture among sample sub-pictures of the same attribute includes:
determining a mathematical parameter of each sample sub-picture;
dividing, according to the mathematical parameters and by using a clustering algorithm, the sample sub-pictures of the same attribute into a plurality of picture sets, where each picture set includes one or more sample sub-pictures; and
determining the target sub-picture in the picture set that includes the most sample sub-pictures.
Optionally, the determining a mathematical parameter of each sample sub-picture includes:
generating, according to RGB information of each pixel in the sample sub-picture, an RGB vector for the sample sub-picture as the mathematical parameter of the sample sub-picture.
Optionally, the determining the target sub-picture in the picture set that includes the most sample sub-pictures includes:
determining, in the picture set that includes the most sample sub-pictures, the sample sub-picture corresponding to the center point of the picture set after clustering as the target sub-picture.
Optionally, the merging a plurality of target sub-pictures of different attributes into a target picture includes:
merging the plurality of target sub-pictures of different attributes into the target picture according to position coordinates of each pixel in the target sub-pictures.
An image processing apparatus, the apparatus including:
an interference removal unit that removes an interference factor from an original picture in a plurality of different ways to obtain a plurality of sample pictures;
a picture segmentation unit that divides each sample picture into a plurality of sample sub-pictures according to a preset segmentation rule;
a target determining unit that determines a target sub-picture among sample sub-pictures of the same attribute; and
a target merging unit that merges a plurality of target sub-pictures of different attributes into a target picture.
Optionally, the target determining unit includes:
a parameter determining subunit that determines a mathematical parameter of each sample sub-picture;
a set dividing subunit that divides, according to the mathematical parameters and by using a clustering algorithm, the sample sub-pictures of the same attribute into a plurality of picture sets, where each picture set includes one or more sample sub-pictures; and
a target determining subunit that determines the target sub-picture in the picture set that includes the most sample sub-pictures.
Optionally, the parameter determining subunit generates, according to RGB information of each pixel in the sample sub-picture, an RGB vector for the sample sub-picture as the mathematical parameter of the sample sub-picture.
Optionally, the target determining subunit determines, in the picture set that includes the most sample sub-pictures, the sample sub-picture corresponding to the center point of the picture set after clustering as the target sub-picture.
Optionally, the target merging unit merges the plurality of target sub-pictures of different attributes into the target picture according to position coordinates of each pixel in the target sub-pictures.
It can thus be seen that the present application can first remove the interference factor from the original picture in a plurality of different ways to obtain a plurality of sample pictures, then divide each sample picture into a plurality of sample sub-pictures according to a preset segmentation rule, and determine a target sub-picture among the sample sub-pictures of the same attribute. In this way, the target sub-picture closest to the real image can be determined among the plurality of sample sub-pictures of the same attribute, and the plurality of target sub-pictures of different attributes are merged into a target picture that restores the real image to a high degree, thereby improving the accuracy of subsequent image recognition.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an image processing method according to an exemplary embodiment of the present application.
FIG. 2 is a schematic diagram of segmenting a sample picture according to an exemplary embodiment of the present application.
FIG. 3 is a schematic flowchart of determining a target sub-picture among sample sub-pictures of the same attribute according to an exemplary embodiment of the present application.
FIG. 4 is a schematic structural diagram for an image processing apparatus according to an exemplary embodiment of the present application.
FIG. 5 is a schematic structural diagram of an image processing apparatus according to an exemplary embodiment of the present application.
Detailed Description
Exemplary embodiments will be described in detail herein, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application as detailed in the appended claims.
The terms used in the present application are for the purpose of describing particular embodiments only and are not intended to limit the present application. The singular forms "a", "the", and "said" used in the present application and the appended claims are also intended to include plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" used herein refers to and includes any or all possible combinations of one or more associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the present application to describe various pieces of information, such information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
In the related art, some image processing algorithms or image processing tools, such as Photoshop, may be used to remove the mesh texture or watermark from the original picture. However, in such implementations, the picture obtained after removing the mesh texture or watermark often cannot faithfully restore the image in the original picture, which affects the accuracy of subsequent picture recognition.
FIG. 1 is a schematic flowchart of an image processing method according to an exemplary embodiment of the present application.
Referring to FIG. 1, the image processing method may be applied to a terminal, and the terminal may include a smart device such as a smartphone, a tablet computer, a PDA (Personal Digital Assistant), or a PC. The image processing method may also be applied to a server, which is not specifically limited in the present application. The image processing method may include the following steps:
Step 101: Remove an interference factor from an original picture in a plurality of different ways to obtain a plurality of sample pictures.
In this embodiment, the original picture is usually a picture to be recognized, and the original picture includes an interference factor; the interference factor is usually an interference pattern, such as a mesh texture or a watermark, added afterwards on top of the real image.
In this embodiment, a plurality of different interference-removal methods provided in the related art may be used to remove the interference factor from the original picture, so as to obtain a plurality of sample pictures with the interference factor removed; for example, image processing software such as Photoshop may be used to remove the interference factor from the original picture.
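By way of a non-limiting sketch of step 101, the removal step could look as follows in Python. The application does not prescribe any particular removal method or tool; the function name and the four OpenCV filters used here (median blur, Gaussian blur, morphological opening, and non-local-means denoising) are assumptions chosen purely for illustration, each standing in for one "different way" of removing the interference factor.

```python
import cv2


def generate_sample_pictures(original_bgr):
    """Remove the interference factor (e.g. a mesh texture or watermark) from
    the original picture in several different ways; each variant yields one
    sample picture. The concrete filters are illustrative stand-ins only."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return [
        cv2.medianBlur(original_bgr, 3),                          # way 1
        cv2.GaussianBlur(original_bgr, (3, 3), 0),                # way 2
        cv2.morphologyEx(original_bgr, cv2.MORPH_OPEN, kernel),   # way 3
        cv2.fastNlMeansDenoisingColored(original_bgr, None,
                                        10, 10, 7, 21),           # way 4
    ]


# Usage (illustrative): sample_pictures = generate_sample_pictures(cv2.imread("original.png"))
```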
Step 102: Divide each sample picture into a plurality of sample sub-pictures according to a preset segmentation rule.
Based on the foregoing step 101, after the plurality of de-interfered sample pictures are obtained, each sample picture may be divided into a plurality of sub-pictures according to the preset segmentation rule. For ease of description, in the present application the sub-pictures obtained by the segmentation are referred to as sample sub-pictures.
In this embodiment, the preset segmentation rule may be set by a developer, and may be expressed in units of the size of a sample sub-picture or in units of the number of sample sub-pictures, which is not specifically limited in the present application. For example, the preset segmentation rule may be to divide a sample picture into 25 sample sub-pictures, for example, according to a 5-by-5 rule.
In this embodiment, assume that in the foregoing step 101 the interference factor in the original picture is removed in N different ways, so that N sample pictures are obtained. Further assume that in this step each sample picture is divided into M sample sub-pictures; then a total of N×M sample sub-pictures can be obtained, where both M and N are natural numbers greater than 1.
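A minimal sketch of such a grid segmentation is given below, assuming the sample picture is held as a NumPy array and the preset rule is a 3-by-3 grid; the function name, the grid size, and the attribute keys A11 to A33 (anticipating the position attribute introduced in step 103 below) are illustrative assumptions rather than requirements of the application.

```python
import numpy as np


def split_into_sub_pictures(sample, rows=3, cols=3):
    """Divide one sample picture (an H x W x 3 array) into rows * cols sample
    sub-pictures according to a grid segmentation rule; return a dict keyed by
    the position attribute 'A{row}{col}', e.g. A11 ... A33 for a 3-by-3 rule."""
    h, w = sample.shape[:2]
    sub_pictures = {}
    for r in range(rows):
        for c in range(cols):
            tile = sample[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            sub_pictures[f"A{r + 1}{c + 1}"] = tile
    return sub_pictures
```

Applying this function to each of the N sample pictures yields N sample sub-pictures for every grid position, i.e. N×M sample sub-pictures in total.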
Step 103: Determine a target sub-picture among sample sub-pictures of the same attribute.
In this embodiment, each sample sub-picture has a corresponding attribute, and the attribute indicates the position of the sample sub-picture within the sample picture to which it belongs. Referring to FIG. 2, assume that picture A is a sample picture obtained after the interference is removed from the original picture; according to the preset segmentation rule, the sample picture can be divided into 9 sample sub-pictures in a 3-by-3 grid, and the attributes of these 9 sample sub-pictures are A11, A12, A13, A21, ..., A33, respectively.
In this embodiment, still taking the segmentation rule shown in FIG. 2 as an example, assume that the interference factor in the original picture is removed in N different ways to obtain N sample pictures, so that a total of N×9 sample sub-pictures can be obtained, with N sample sub-pictures for each of the attributes A11 to A33. In this step, the target sub-picture with attribute A11 can be determined among the N sample sub-pictures with attribute A11, the target sub-picture with attribute A12 can be determined among the N sample sub-pictures with attribute A12, and so on, so that 9 target sub-pictures with attributes A11 to A33 can be determined.
Step 104: Merge a plurality of target sub-pictures of different attributes into a target picture.
Based on the foregoing step 103, after the target sub-picture is determined among the sample sub-pictures of each attribute, the plurality of target sub-pictures of different attributes may be merged into the target picture. For example, the plurality of target sub-pictures may be merged into the target picture according to the attribute of each target sub-picture, or according to the position coordinates of each pixel in each target sub-picture, which is not specifically limited in the present application.
For example, still taking the segmentation rule shown in FIG. 2 as an example, in this step the 9 target sub-pictures with attributes A11 to A33 can be merged into one target picture.
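A corresponding sketch of step 104, under the same 3-by-3 grid assumption as above: the target sub-pictures are placed back according to their attributes and stitched into one target picture. Merging by per-pixel position coordinates would be an equivalent alternative; this illustration simply concatenates the tiles row by row.

```python
import numpy as np


def merge_target_sub_pictures(target_sub_pictures, rows=3, cols=3):
    """Merge the target sub-pictures of different attributes into one target
    picture, placing each tile according to its attribute 'A{row}{col}'."""
    grid = [[target_sub_pictures[f"A{r + 1}{c + 1}"] for c in range(cols)]
            for r in range(rows)]
    return np.vstack([np.hstack(row) for row in grid])
```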
It can be seen from the above description that the present application can first remove the interference factor from the original picture in a plurality of different ways to obtain a plurality of sample pictures, then divide each sample picture into a plurality of sample sub-pictures according to a preset segmentation rule, and determine a target sub-picture among the sample sub-pictures of the same attribute. In this way, the target sub-picture closest to the real image can be determined among the plurality of sample sub-pictures of the same attribute, and the plurality of target sub-pictures of different attributes are merged into a target picture that restores the real image to a high degree, thereby improving the accuracy of subsequent image recognition.
Optionally, in an example of the present application, referring to FIG. 3, the process of determining a target sub-picture among sample sub-pictures of the same attribute may include the following steps:
Step 301: Determine a mathematical parameter of each sample sub-picture.
In this embodiment, after a sample picture is divided into a plurality of sample sub-pictures, the mathematical parameter of each sample sub-picture may be determined for subsequent calculation.
Optionally, in an example of the present application, an RGB vector may be generated for the sample sub-picture according to the RGB information of each pixel in the sample sub-picture, and used as the mathematical parameter of the sample sub-picture. For example, the RGB information of each pixel in the sample sub-picture, such as the RGB value, may first be acquired, and the RGB vector is then generated from the RGB information of the pixels. Assume that the sample sub-picture includes K pixels, where the RGB value of the i-th pixel is Ri and i ranges from 1 to K; the RGB vector of the sample sub-picture is then {R1, R2, ..., RK}.
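A minimal sketch of this mathematical parameter, assuming the sample sub-picture is held as a NumPy array: the RGB values of all K pixels are simply flattened into one vector. The flattening order and the floating-point type are illustrative choices, not requirements of the application.

```python
import numpy as np


def rgb_vector(sub_picture):
    """Return the RGB vector {R1, R2, ..., RK} of a sample sub-picture: the
    RGB information of every pixel, flattened into a single 1-D vector that
    serves as the mathematical parameter of the sub-picture."""
    return np.asarray(sub_picture, dtype=np.float64).reshape(-1)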
Step 302: Divide, according to the mathematical parameters and by using a clustering algorithm, the sample sub-pictures of the same attribute into a plurality of picture sets, where each picture set includes one or more sample sub-pictures.
In this embodiment, for the plurality of sample sub-pictures of the same attribute, a clustering algorithm may be used, based on the mathematical parameters of the sample sub-pictures, to divide the sample sub-pictures into a plurality of picture sets. The clustering algorithm may include a DBSCAN (Density-Based Spatial Clustering of Applications with Noise) clustering algorithm, a K-means clustering algorithm, and the like, which is not specifically limited in the present application.
For example, when the DBSCAN clustering algorithm is used, the scan radius (eps) and the minimum number of included points (minPts) may be set in advance; each sample sub-picture corresponds to one point in the clustering process, and the minimum number of included points is the minimum number of sample sub-pictures that a divided picture set can contain. During clustering, relevant calculations may be performed based on the mathematical parameters of the sample sub-pictures; for example, the distance between the RGB vectors of two sample sub-pictures may be used as the distance between the two sample sub-pictures.
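An illustrative sketch of step 302 using the DBSCAN implementation from scikit-learn (one of several possible clustering libraries). The Euclidean distance between RGB vectors is used as the distance between sub-pictures, and the eps and minPts values shown are placeholder assumptions; suitable values depend on the sub-picture size and content and are not fixed by the application.

```python
import numpy as np
from sklearn.cluster import DBSCAN


def cluster_same_attribute(sub_pictures, eps=1500.0, min_pts=2):
    """Divide the same-attribute sample sub-pictures into picture sets with
    DBSCAN; labels[i] is the picture-set index of sub-picture i (-1 = noise).
    eps / min_pts are illustrative placeholders, not prescribed values."""
    vectors = np.stack([np.asarray(p, dtype=np.float64).reshape(-1)
                        for p in sub_pictures])   # RGB vector of each sub-picture
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit(vectors).labels_
    return vectors, labels
```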
Step 303: Determine the target sub-picture in the picture set that includes the most sample sub-pictures.
Based on the foregoing step 302, after the sample sub-pictures of the same attribute are divided into a plurality of picture sets, the number of sample sub-pictures included in each picture set is determined, and the target sub-picture can then be determined in the picture set that includes the most sample sub-pictures.
Optionally, in an example of the present application, in the picture set that includes the most sample sub-pictures, the sample sub-picture corresponding to the center point of the picture set after clustering may be determined as the target sub-picture.
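A sketch of step 303 under the assumption that the "center point" of a picture set is the mean of its members' RGB vectors (the application does not prescribe how the center point is computed): the largest picture set is located first, and the member closest to its center is returned as the target sub-picture.

```python
import numpy as np


def pick_target_sub_picture(sub_pictures, vectors, labels):
    """In the picture set containing the most sample sub-pictures, determine
    as the target sub-picture the member whose RGB vector lies closest to the
    set's center point (here: the mean vector of the set). Assumes at least
    one sub-picture was assigned to a picture set (label != -1)."""
    valid = labels[labels != -1]                    # -1 marks DBSCAN noise points
    biggest = np.bincount(valid).argmax()           # picture set with most members
    members = np.where(labels == biggest)[0]
    center = vectors[members].mean(axis=0)
    distances = np.linalg.norm(vectors[members] - center, axis=1)
    return sub_pictures[members[np.argmin(distances)]]
```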
In this embodiment, a clustering algorithm may be used to determine the target sub-picture among the sample sub-pictures of the same attribute, thereby ensuring that the determined target sub-picture is closer to the real image.
Corresponding to the foregoing embodiments of the image processing method, the present application further provides embodiments of an image processing apparatus.
The embodiments of the image processing apparatus of the present application may be applied to a terminal or a server. The apparatus embodiments may be implemented by software, or by hardware, or by a combination of hardware and software. Taking software implementation as an example, an apparatus in a logical sense is formed by the processor of the terminal or server in which it is located reading corresponding computer program instructions from a non-volatile memory into memory and running them. In terms of hardware, FIG. 4 is a hardware structure diagram of the terminal or server in which the image processing apparatus of the present application is located; in addition to the processor, memory, network interface, and non-volatile memory shown in FIG. 4, the terminal or server in which the apparatus of an embodiment is located may further include other hardware according to the actual functions of the terminal or server, and details are not described herein.
FIG. 5 is a schematic structural diagram of an image processing apparatus according to an exemplary embodiment of the present application.
Referring to FIG. 5, the image processing apparatus 400 may be applied to the terminal or server shown in FIG. 4, and includes an interference removal unit 401, a picture segmentation unit 402, a target determining unit 403, and a target merging unit 404. The target determining unit 403 may further include a parameter determining subunit 4031, a set dividing subunit 4032, and a target determining subunit 4033.
The interference removal unit 401 removes an interference factor from an original picture in a plurality of different ways to obtain a plurality of sample pictures.
The picture segmentation unit 402 divides each sample picture into a plurality of sample sub-pictures according to a preset segmentation rule.
The target determining unit 403 determines a target sub-picture among sample sub-pictures of the same attribute.
The target merging unit 404 merges a plurality of target sub-pictures of different attributes into a target picture.
The parameter determining subunit 4031 determines a mathematical parameter of each sample sub-picture.
The set dividing subunit 4032 divides, according to the mathematical parameters and by using a clustering algorithm, the sample sub-pictures of the same attribute into a plurality of picture sets, where each picture set includes one or more sample sub-pictures.
The target determining subunit 4033 determines the target sub-picture in the picture set that includes the most sample sub-pictures.
Optionally, the parameter determining subunit 4031 generates, according to RGB information of each pixel in the sample sub-picture, an RGB vector for the sample sub-picture as the mathematical parameter of the sample sub-picture.
Optionally, the target determining subunit 4033 determines, in the picture set that includes the most sample sub-pictures, the sample sub-picture corresponding to the center point of the picture set after clustering as the target sub-picture.
Optionally, the target merging unit 404 merges the plurality of target sub-pictures of different attributes into the target picture according to position coordinates of each pixel in the target sub-pictures.
For details about the implementation processes of the functions of the units in the foregoing apparatus, refer to the implementation processes of the corresponding steps in the foregoing method; details are not described herein again.
Since the apparatus embodiments basically correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant parts. The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the present application. Those of ordinary skill in the art can understand and implement them without creative effort.
The above are merely preferred embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (10)

  1. An image processing method, wherein the method comprises:
    removing an interference factor from an original picture in a plurality of different ways to obtain a plurality of sample pictures;
    dividing each sample picture into a plurality of sample sub-pictures according to a preset segmentation rule;
    determining a target sub-picture among sample sub-pictures of the same attribute; and
    merging a plurality of target sub-pictures of different attributes into a target picture.
  2. The method according to claim 1, wherein the determining a target sub-picture among sample sub-pictures of the same attribute comprises:
    determining a mathematical parameter of each sample sub-picture;
    dividing, according to the mathematical parameters and by using a clustering algorithm, the sample sub-pictures of the same attribute into a plurality of picture sets, wherein each picture set comprises one or more sample sub-pictures; and
    determining the target sub-picture in the picture set that comprises the most sample sub-pictures.
  3. The method according to claim 2, wherein the determining a mathematical parameter of each sample sub-picture comprises:
    generating, according to RGB information of each pixel in the sample sub-picture, an RGB vector for the sample sub-picture as the mathematical parameter of the sample sub-picture.
  4. The method according to claim 2, wherein the determining the target sub-picture in the picture set that comprises the most sample sub-pictures comprises:
    determining, in the picture set that comprises the most sample sub-pictures, the sample sub-picture corresponding to the center point of the picture set after clustering as the target sub-picture.
  5. The method according to claim 1, wherein the merging a plurality of target sub-pictures of different attributes into a target picture comprises:
    merging the plurality of target sub-pictures of different attributes into the target picture according to position coordinates of each pixel in the target sub-pictures.
  6. An image processing apparatus, wherein the apparatus comprises:
    an interference removal unit that removes an interference factor from an original picture in a plurality of different ways to obtain a plurality of sample pictures;
    a picture segmentation unit that divides each sample picture into a plurality of sample sub-pictures according to a preset segmentation rule;
    a target determining unit that determines a target sub-picture among sample sub-pictures of the same attribute; and
    a target merging unit that merges a plurality of target sub-pictures of different attributes into a target picture.
  7. The apparatus according to claim 1, wherein the target determining unit comprises:
    a parameter determining subunit that determines a mathematical parameter of each sample sub-picture;
    a set dividing subunit that divides, according to the mathematical parameters and by using a clustering algorithm, the sample sub-pictures of the same attribute into a plurality of picture sets, wherein each picture set comprises one or more sample sub-pictures; and
    a target determining subunit that determines the target sub-picture in the picture set that comprises the most sample sub-pictures.
  8. The apparatus according to claim 7, wherein
    the parameter determining subunit generates, according to RGB information of each pixel in the sample sub-picture, an RGB vector for the sample sub-picture as the mathematical parameter of the sample sub-picture.
  9. The apparatus according to claim 7, wherein
    the target determining subunit determines, in the picture set that comprises the most sample sub-pictures, the sample sub-picture corresponding to the center point of the picture set after clustering as the target sub-picture.
  10. The apparatus according to claim 6, wherein
    the target merging unit merges the plurality of target sub-pictures of different attributes into the target picture according to position coordinates of each pixel in the target sub-pictures.
PCT/CN2017/071256 2016-01-25 2017-01-16 Image processing method and apparatus WO2017128977A1 (zh)

Priority Applications (8)

Application Number Priority Date Filing Date Title
JP2018557180A JP6937782B2 (ja) 2016-01-25 2017-01-16 画像処理方法及びデバイス
MYPI2018702589A MY192394A (en) 2016-01-25 2017-01-16 Image processing method and device
KR1020187024465A KR102239588B1 (ko) 2016-01-25 2017-01-16 이미지 처리 방법 및 장치
SG11201806345QA SG11201806345QA (en) 2016-01-25 2017-01-16 Image processing method and device
EP17743592.2A EP3410389A4 (en) 2016-01-25 2017-01-16 METHOD AND DEVICE FOR PROCESSING IMAGES
US16/043,636 US10706555B2 (en) 2016-01-25 2018-07-24 Image processing method and device
PH12018501579A PH12018501579A1 (en) 2016-01-25 2018-07-25 Image processing method and device
US16/719,474 US10769795B2 (en) 2016-01-25 2019-12-18 Image processing method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610049672.0A CN106997580B (zh) 2016-01-25 2016-01-25 图片处理方法和装置
CN201610049672.0 2016-01-25

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/043,636 Continuation US10706555B2 (en) 2016-01-25 2018-07-24 Image processing method and device

Publications (1)

Publication Number Publication Date
WO2017128977A1 true WO2017128977A1 (zh) 2017-08-03

Family

ID=59397393

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/071256 WO2017128977A1 (zh) 2016-01-25 2017-01-16 图片处理方法和装置

Country Status (10)

Country Link
US (2) US10706555B2 (zh)
EP (1) EP3410389A4 (zh)
JP (1) JP6937782B2 (zh)
KR (1) KR102239588B1 (zh)
CN (1) CN106997580B (zh)
MY (1) MY192394A (zh)
PH (1) PH12018501579A1 (zh)
SG (1) SG11201806345QA (zh)
TW (1) TWI711004B (zh)
WO (1) WO2017128977A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106997580B (zh) * 2016-01-25 2020-09-01 阿里巴巴集团控股有限公司 Image processing method and apparatus
CN113516328B (zh) * 2020-07-13 2022-09-02 阿里巴巴集团控股有限公司 Data processing method, service providing method, apparatus, device, and storage medium
KR102473506B1 (ko) * 2022-06-16 2022-12-05 김태진 Method and apparatus for providing watermark-embedded data using a neural network
CN118400205B (zh) * 2024-06-28 2024-09-10 北京国信城研科学技术研究院 Computer information security processing system and apparatus based on big data
CN118628367A (zh) * 2024-07-31 2024-09-10 深圳市一恒科电子科技有限公司 Image processing method and apparatus for thermal printing, electronic device, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100092075A1 (en) * 2005-12-07 2010-04-15 Drvision Technologies Llc Method of directed pattern enhancement for flexible recognition
CN102436589A (zh) * 2010-09-29 2012-05-02 中国科学院电子学研究所 一种基于多类基元自主学习的复杂目标自动识别方法
CN103839275A (zh) * 2014-03-27 2014-06-04 中国科学院遥感与数字地球研究所 高光谱图像的道路提取方法及装置

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6075875A (en) * 1996-09-30 2000-06-13 Microsoft Corporation Segmentation of image features using hierarchical analysis of multi-valued image data and weighted averaging of segmentation results
JPH1196297A (ja) * 1997-09-17 1999-04-09 Hitachi Ltd 帳票画像処理方法及び帳票画像処理装置
JP4077094B2 (ja) * 1998-12-11 2008-04-16 富士通株式会社 カラー文書画像認識装置
JP2002190945A (ja) * 2000-10-12 2002-07-05 Canon Inc 情報処理装置及びその制御方法及び記憶媒体
AU2003903728A0 (en) * 2003-07-21 2003-07-31 Hao Hang Zheng Method and system for performing combined image classification storage and rapid retrieval on a computer database
US7586646B2 (en) * 2006-04-12 2009-09-08 Xerox Corporation System for processing and classifying image data using halftone noise energy distribution
TW201002048A (en) * 2008-06-30 2010-01-01 Avermedia Information Inc Document camera and image processing method thereof
US20120065518A1 (en) * 2010-09-15 2012-03-15 Schepens Eye Research Institute Systems and methods for multilayer imaging and retinal injury analysis
KR20130021616A (ko) * 2011-08-23 2013-03-06 삼성전자주식회사 다중 측위를 이용한 단말의 측위 장치 및 방법
JP5830338B2 (ja) * 2011-10-07 2015-12-09 株式会社日立情報通信エンジニアリング 帳票認識方法および帳票認識装置
CN105450411B (zh) * 2014-08-14 2019-01-08 阿里巴巴集团控股有限公司 Method, apparatus and system for identity verification using card features
CN106997580B (zh) 2016-01-25 2020-09-01 阿里巴巴集团控股有限公司 Image processing method and apparatus

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100092075A1 (en) * 2005-12-07 2010-04-15 Drvision Technologies Llc Method of directed pattern enhancement for flexible recognition
CN102436589A (zh) * 2010-09-29 2012-05-02 中国科学院电子学研究所 一种基于多类基元自主学习的复杂目标自动识别方法
CN103839275A (zh) * 2014-03-27 2014-06-04 中国科学院遥感与数字地球研究所 高光谱图像的道路提取方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3410389A4 *

Also Published As

Publication number Publication date
KR20180105210A (ko) 2018-09-27
US10706555B2 (en) 2020-07-07
MY192394A (en) 2022-08-18
TWI711004B (zh) 2020-11-21
JP6937782B2 (ja) 2021-09-22
EP3410389A1 (en) 2018-12-05
JP2019504430A (ja) 2019-02-14
KR102239588B1 (ko) 2021-04-15
SG11201806345QA (en) 2018-08-30
US20190005651A1 (en) 2019-01-03
US10769795B2 (en) 2020-09-08
CN106997580A (zh) 2017-08-01
CN106997580B (zh) 2020-09-01
US20200126238A1 (en) 2020-04-23
TW201732733A (zh) 2017-09-16
EP3410389A4 (en) 2019-08-21
PH12018501579A1 (en) 2019-04-08

Similar Documents

Publication Publication Date Title
WO2017128977A1 (zh) Image processing method and apparatus
US20200356818A1 (en) Logo detection
WO2019011249A1 (zh) 一种图像中物体姿态的确定方法、装置、设备及存储介质
Liu et al. Real-time robust vision-based hand gesture recognition using stereo images
WO2021082801A1 (zh) 增强现实处理方法及装置、系统、存储介质和电子设备
WO2016155377A1 (zh) 图片展示方法和装置
US20100250588A1 (en) Image searching system and image searching method
WO2019237745A1 (zh) 人脸图像处理方法、装置、电子设备及计算机可读存储介质
JP5868816B2 (ja) 画像処理装置、画像処理方法、及びプログラム
US20130236068A1 (en) Calculating facial image similarity
CN111008935B (zh) 一种人脸图像增强方法、装置、系统及存储介质
US10198533B2 (en) Registration of multiple laser scans
US9865061B2 (en) Constructing a 3D structure
US9531952B2 (en) Expanding the field of view of photograph
WO2023098045A1 (zh) 图像对齐方法、装置、计算机设备和存储介质
US20230043154A1 (en) Restoring a video for improved watermark detection
US20150185017A1 (en) Image-based geo-hunt
CN108492284B (zh) 用于确定图像的透视形状的方法和装置
US20230298143A1 (en) Object removal during video conferencing
CN111836058A (zh) 用于实时视频播放方法、装置、设备以及存储介质
JP6202938B2 (ja) 画像認識装置および画像認識方法
WO2021051580A1 (zh) 基于分组批量的图片检测方法、装置及存储介质
KR20210120599A (ko) 아바타 서비스 제공 방법 및 시스템
CN108446737B (zh) 用于识别对象的方法和装置
JP2015219756A (ja) 画像比較方法、装置並びにプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17743592

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018557180

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 11201806345Q

Country of ref document: SG

ENP Entry into the national phase

Ref document number: 20187024465

Country of ref document: KR

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 1020187024465

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 2017743592

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2017743592

Country of ref document: EP

Effective date: 20180827