WO2021068330A1 - Intelligent image segmentation and classification method and device, and computer-readable storage medium - Google Patents


Info

Publication number
WO2021068330A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
gray
pixel
fourier transform
original
Prior art date
Application number
PCT/CN2019/117343
Other languages
English (en)
French (fr)
Inventor
赵远
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2021068330A1 publication Critical patent/WO2021068330A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/10Image enhancement or restoration using non-spatial domain filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20056Discrete and fast Fourier transform, [DFT, FFT]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to an intelligent image segmentation and classification method, device, and computer-readable storage medium.
  • Image segmentation and classification uses a computer to perform a series of analyses on an image, segmenting it and classifying it into multiple images with preset features.
  • Most existing image segmentation and classification methods are based on pre-set rules: manually designed segmentation and classification rules are set in advance, for example to divide images into those containing people and those without, so such methods cannot achieve true intelligence.
  • This application provides an intelligent image segmentation and classification method, device, and computer-readable storage medium, whose main purpose is to provide an intelligent image segmentation and classification scheme that requires no manual operation.
  • an intelligent image segmentation and classification method includes:
  • this application also provides an intelligent image segmentation and classification device, which includes a memory and a processor, and the memory stores an intelligent image segmentation and classification program that can run on the processor, When the intelligent image segmentation and classification program is executed by the processor, the following steps are implemented:
  • this application also provides a computer-readable storage medium with an intelligent image segmentation and classification program stored thereon; the program can be executed by one or more processors to implement the steps of the intelligent image segmentation and classification method described above.
  • This application performs Fourier transform and degradation-function processing on the received original image and image classification number, which improves the purity of the data.
  • It uses encoding compression and image enhancement to amplify image features, and separates multiple types of image features based on region detection processing and threshold segmentation, improving the utilization of image features.
  • The final classification goal is then achieved through a classification probability model. Therefore, the intelligent image segmentation and classification method, device, and computer-readable storage medium proposed in this application can achieve highly accurate image segmentation and classification.
  • FIG. 1 is a schematic flowchart of an intelligent image segmentation and classification method provided by an embodiment of this application;
  • FIG. 2 is a schematic diagram of the internal structure of an intelligent image segmentation and classification device provided by an embodiment of the application;
  • FIG. 3 is a schematic diagram of modules of an intelligent image segmentation and classification program in an intelligent image segmentation and classification device provided by an embodiment of the application.
  • This application provides an intelligent image segmentation and classification method.
  • FIG. 1 is a schematic flowchart of an intelligent image segmentation and classification method provided by an embodiment of this application.
  • the method can be executed by a device, and the device can be implemented by software and/or hardware.
  • the intelligent image segmentation and classification method includes:
  • S1 Receive an original image input by a user, perform Fourier transform on the original image to obtain a transformed image, and perform noise reduction processing on the transformed image based on a degradation function to obtain a denoised image.
  • the original image is composed of several pixels, and the content of the original image includes various scenes, such as landscapes, pets playing, NBA sports competitions and so on.
  • the Fourier transform includes: traversing the original pixels of the original image, calculating the two-dimensional discrete Fourier transform function of the original pixels, solving the inverse Fourier transform function of the original image according to the two-dimensional discrete Fourier transform function, and replacing the original pixels with the function values of the inverse Fourier transform function to obtain the transformed image.
  • the two-dimensional discrete Fourier transform function is: F(u,v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x,y)·e^{-j2π(ux/M + vy/N)}
  • the inverse Fourier transform function is: f(x,y) = (1/(MN)) Σ_{u=0}^{M-1} Σ_{v=0}^{N-1} F(u,v)·e^{j2π(ux/M + vy/N)}
  • F(u,v) is the two-dimensional discrete Fourier transform function
  • f(x,y) is the inverse Fourier transform function
  • (x, y) are the coordinates of the original pixel
  • (u, v) are the pixel coordinates after the Fourier transform; e^{-j2π(ux/M + vy/N)} and e^{j2π(ux/M + vy/N)} are called the transformation kernel and the inverse transformation kernel, respectively
  • j is the imaginary unit
  • M and N are the dimensions (width and height) of the original image.
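The transform and its inverse above can be sketched directly from their definitions. This is an illustrative NumPy implementation (NumPy is an assumed dependency, not named in the application):

```python
import numpy as np

def dft2(f):
    """Two-dimensional discrete Fourier transform of an M x N image f:
    F(u,v) = sum_x sum_y f(x,y) * e^{-j2pi(ux/M + vy/N)}."""
    M, N = f.shape
    # Transformation kernels for the row and column directions.
    kx = np.exp(-2j * np.pi * np.outer(np.arange(M), np.arange(M)) / M)
    ky = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
    return kx @ f @ ky.T

def idft2(F):
    """Inverse transform: f(x,y) = (1/MN) sum_u sum_v F(u,v) * e^{+j2pi(...)}."""
    M, N = F.shape
    kx = np.exp(2j * np.pi * np.outer(np.arange(M), np.arange(M)) / M)
    ky = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
    return (kx @ F @ ky.T) / (M * N)

img = np.arange(12, dtype=float).reshape(3, 4)
F = dft2(img)
assert np.allclose(F, np.fft.fft2(img))   # agrees with NumPy's FFT
assert np.allclose(idft2(F).real, img)    # round trip recovers the image
```

In practice `np.fft.fft2`/`np.fft.ifft2` compute the same sums far faster; the explicit kernels are written out only to mirror the formulas.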
  • the noise reduction process removes noise from the original image (such as Gaussian noise and salt-and-pepper noise) while preserving image detail as much as possible.
  • performing noise reduction processing on the transformed image based on a degradation function to obtain a noise-reduced image includes: solving the pixel variance and noise variance of the transformed image, and solving the noise-reduced image from the degradation function, the pixel variance, and the noise variance, where:
  • t(x′,y′) is the noise-reduced image
  • (x′,y′) are the pixel coordinates of the noise-reduced image
  • f(x,y) is the inverse Fourier transform function
  • σ² is the pixel variance
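The application does not reproduce the degradation-function formula itself. The sketch below uses a Wiener-style rule — pulling each pixel toward the mean in proportion to the ratio of noise variance to pixel variance — as one plausible reading; the mean-based form and NumPy are assumptions, not the patent's stated filter:

```python
import numpy as np

def adaptive_denoise(f, noise_var):
    """Wiener-style noise reduction: an illustrative stand-in for the
    degradation-function filter described in the application."""
    mean = f.mean()
    pixel_var = f.var()  # sigma^2, the pixel variance
    # The larger the share of noise in the total variance, the harder
    # each pixel is pulled toward the mean (capped at full smoothing).
    ratio = min(noise_var / pixel_var, 1.0) if pixel_var > 0 else 1.0
    return f - ratio * (f - mean)

rng = np.random.default_rng(0)
clean = np.full((8, 8), 100.0)
noisy = clean + rng.normal(0, 5, clean.shape)
out = adaptive_denoise(noisy, noise_var=25.0)
assert out.var() <= noisy.var()  # variance shrinks, detail stays bounded
```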
  • calculating the gray-scale probability of the noise-reduced image includes: traversing the gray value of each pixel of the noise-reduced image to obtain a gray value set, counting the number of appearances of each gray value in the set to obtain a correspondence table between gray values and appearance counts, and dividing each appearance count in the table by the number of pixels of the noise-reduced image to obtain the gray-scale probability table.
  • for example, if the noise-reduced image has 6 pixels A, B, C, D, E, and F with corresponding gray values 2, 3, 6, 7, 2, and 2, then the gray-scale probability of gray value 2 is 3/6 = 0.5.
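The worked example above can be reproduced in a few lines (Python is used for illustration only):

```python
from collections import Counter

def gray_probability(gray_values):
    """Gray-scale probability table: appearances of each gray value
    divided by the total number of pixels."""
    counts = Counter(gray_values)
    n = len(gray_values)
    return {g: c / n for g, c in counts.items()}

# The six-pixel example from the text: gray values 2, 3, 6, 7, 2, 2.
table = gray_probability([2, 3, 6, 7, 2, 2])
assert table[2] == 3 / 6  # gray value 2 appears 3 times out of 6
```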
  • encoding and compressing the noise-reduced image according to the gray-scale probability to obtain a compressed image includes: sorting the gray-scale probabilities in the gray-scale probability table from large to small, adding the two smallest gray-scale probabilities together to obtain a new gray-scale probability, and repeating this until the number of gray-scale probabilities in the table reaches a specified threshold; the gray values of the noise-reduced image are then redistributed according to the gray-scale probability table to obtain the compressed image.
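The merge-the-two-smallest loop described above (the same merging step used in Huffman coding) can be sketched as follows; only the probability merging is shown, not the subsequent redistribution of gray values:

```python
def merge_probabilities(probs, target_count):
    """Repeatedly merge the two smallest gray-scale probabilities until
    only `target_count` entries remain (Huffman-style merging)."""
    probs = sorted(probs, reverse=True)  # sort from large to small
    while len(probs) > target_count:
        a = probs.pop()        # smallest probability
        b = probs.pop()        # second smallest
        probs.append(a + b)    # their sum becomes a new probability
        probs.sort(reverse=True)
    return probs

merged = merge_probabilities([0.5, 0.2, 0.15, 0.1, 0.05], target_count=3)
assert len(merged) == 3
assert abs(sum(merged) - 1.0) < 1e-9  # total probability is preserved
```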
  • the linear stretching expands the contrast of the compressed image
  • the linear stretching method is: D₁ = a·Dₙ + b, where
  • Dₙ is the compressed image
  • D₁ is the linearly stretched image
  • a is the slope of the linear stretch
  • b is the intercept of the linear stretch
  • when a > 1, the contrast of the linearly stretched image is stronger than that of the compressed image
  • when a < 1, the contrast of the linearly stretched image is weaker than that of the compressed image.
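A minimal sketch of the stretch D₁ = a·Dₙ + b; the clipping to an 8-bit range is an assumption added for practicality, not stated in the application:

```python
import numpy as np

def linear_stretch(image, a, b):
    """D1 = a * Dn + b, clipped to the 8-bit range [0, 255].
    a > 1 raises contrast; a < 1 lowers it."""
    return np.clip(a * image.astype(float) + b, 0, 255)

img = np.array([[50.0, 100.0], [150.0, 200.0]])
stretched = linear_stretch(img, a=1.5, b=-30)
# The value range (a proxy for contrast) widens when a > 1.
assert stretched.max() - stretched.min() > img.max() - img.min()
```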
  • the image enhancement in the preferred embodiment of the present application adopts a multi-threshold brightness enhancement method.
  • the image enhancement includes: presetting one or more brightness threshold segments, traversing each brightness point of the compressed image, determining the threshold segment to which each brightness point belongs, and performing enhancement processing according to the preset enhancement method of that segment until the traversal is completed, obtaining an enhanced image.
  • for example, if the brightness threshold segments are [0,20], [20,40], [40,80], [80,120], and [120, ∞), a compressed-image brightness point of 27 falls in the [20,40] segment; if the enhancement method for the [20,40] segment is to multiply by 2, the brightness point 27 becomes 54.
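The per-segment lookup can be sketched as below. The text only specifies the ×2 gain for the [20,40] segment; the other gains here are hypothetical placeholders, and half-open intervals are assumed to resolve the boundary ambiguity:

```python
# Hypothetical per-segment gains; only the x2 for [20,40) comes from the text.
SEGMENTS = [(0, 20, 1.0), (20, 40, 2.0), (40, 80, 1.5),
            (80, 120, 1.2), (120, float("inf"), 1.0)]

def enhance(brightness):
    """Map a brightness point through the gain of its threshold segment."""
    for lo, hi, gain in SEGMENTS:
        if lo <= brightness < hi:
            return brightness * gain
    return brightness

assert enhance(27) == 54  # the worked example: 27 lies in [20,40), doubled
```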
  • the region detection processing includes: randomly dividing the enhanced image into s small image blocks of the same size, sequentially calculating the center-point pixel values of the s small image blocks to obtain a center pixel value set, and calculating the similarity of the center-point pixel value set based on color features and Euclidean distance to obtain the image similarity set.
  • the threshold segmentation includes: removing, based on a preset similarity threshold, similarity values greater than that threshold from the similarity set, and extracting the small image blocks corresponding to the pruned similarity set to obtain the original segmented image set.
  • the calculation of the center-point pixel values of the s small image blocks may adopt an average value method, a center-point expansion method, or the like.
  • the average value method adds all the pixel values in the small image block and takes the mean, which serves as the center-point pixel value; the center-point expansion method selects the positional center of the small image block and assigns larger weights to pixels closer to that center (with the maximum weight at the center itself), and the weighted average is taken to obtain the center-point pixel value.
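Both center-point methods can be sketched as follows; the inverse-distance weighting in the second function is one plausible choice, since the application does not specify the exact weights:

```python
import numpy as np

def center_average(block):
    """Average-value method: mean of all pixels in the small block."""
    return float(block.mean())

def center_weighted(block):
    """Center-point expansion: weights grow toward the block's center
    (simple inverse-distance weighting; the exact weights are assumed)."""
    h, w = block.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    yy, xx = np.mgrid[0:h, 0:w]
    weights = 1.0 / (1.0 + np.hypot(yy - cy, xx - cx))  # max weight at center
    return float((block * weights).sum() / weights.sum())

block = np.array([[1.0, 1.0, 1.0],
                  [1.0, 9.0, 1.0],
                  [1.0, 1.0, 1.0]])
assert center_average(block) == block.mean()
assert center_weighted(block) > center_average(block)  # center pixel dominates
```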
  • the similarity of the center-point pixel value set based on color features is calculated as follows, where:
  • p_i and p_j denote the center-point pixel values of small block i and small block j
  • d(p_i, p_j) is the similarity between small block i and small block j
  • c is a similarity adjustment parameter weighting the Euclidean distance between small block i and small block j
  • d_color(p_i, p_j) denotes the color distance between small block i and small block j
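The similarity formula itself is not reproduced in the text. The sketch below combines a color (Euclidean) distance with the adjustment parameter c in one plausible way; both the combination and the use of NumPy are assumptions:

```python
import numpy as np

def block_similarity(p_i, p_j, c=1.0):
    """Color-distance-based similarity between two block center values,
    scaled by the adjustment parameter c (assumed combination; the
    patent's exact formula image is not available)."""
    d_color = float(np.linalg.norm(np.asarray(p_i) - np.asarray(p_j)))
    return d_color / (1.0 + c)

sim_close = block_similarity([10, 10, 10], [12, 10, 10])
sim_far = block_similarity([10, 10, 10], [200, 50, 0])
assert sim_close < sim_far  # closer colors give a smaller distance value
```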
  • mapping the original segmented image set into an undirected atlas includes: traversing each original segmented image in the original segmented image set, randomly selecting two adjacent pixels of the original segmented image, constructing the connecting line between the two adjacent pixels, and obtaining divided images based on the connecting lines; the divided images are called undirected graphs.
  • the pre-built objective function is the graph-cut energy E(A) = λ·R(A) + B(A), where
  • E(A) is the objective (energy) function over the segmented image set
  • A represents the binary vector of the pixel set
  • λ is the adjustment parameter
  • R(A) is the region term over the pixel labels, R(A) = Σ_p R_p(A_p)
  • R_p(A_p) represents the cost of assigning pixel p the label given by the binary vector A
  • B(A) represents the boundary term of the undirected graph
  • B_{p,q} is the discontinuity value between neighboring pixels p and q, and A_p is the label (area term) of pixel p
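The energy above can be evaluated directly for a candidate labeling; this is a sketch of the standard graph-cut objective the symbol list describes (the toy costs below are illustrative, not from the application):

```python
def energy(labels, region_cost, boundary_cost, lam=1.0):
    """Graph-cut objective E(A) = lam * R(A) + B(A): a region term
    summing per-pixel assignment costs R_p(A_p), plus a boundary term
    charging B_{p,q} where neighboring labels disagree."""
    R = sum(region_cost[p][labels[p]] for p in labels)
    B = sum(w for (p, q), w in boundary_cost.items()
            if labels[p] != labels[q])
    return lam * R + B

# Three pixels in a row; pixel 2 is labeled differently from its neighbor.
labels = {0: 1, 1: 1, 2: 0}
region_cost = {0: [5.0, 1.0], 1: [4.0, 2.0], 2: [1.0, 6.0]}
boundary_cost = {(0, 1): 3.0, (1, 2): 3.0}
# R = 1 + 2 + 1 = 4; only edge (1,2) crosses labels, so B = 3.
assert energy(labels, region_cost, boundary_cost) == 7.0
```

Minimizing E(A) over all binary labelings is what the max-flow/min-cut optimization performs; evaluating it, as here, is the easy part.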
  • extracting the boundary features of each segmented image in the segmented image set to obtain a feature set includes: sequentially traversing the segmented images in the segmented image set; for each, randomly selecting a point at the image edge as the coordinate origin, dividing the horizontal and vertical axes into equally spaced grids starting from that origin, calculating the average pixel value within each grid, and connecting the averages in turn according to a preset rule (such as counterclockwise or clockwise) to obtain the boundary feature; after the traversal of the segmented image set is completed, the feature set corresponding to the segmented image set is obtained.
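The grid-averaging step can be sketched as below; the row-major reading order stands in for the clockwise/counterclockwise rule, and the fixed origin and grid size are simplifying assumptions:

```python
import numpy as np

def boundary_feature(image, grid=2):
    """Split the image into equally spaced grid cells (origin fixed at
    the top-left corner for simplicity) and take the mean pixel value of
    each cell; the means, read in a fixed order, form the feature."""
    h, w = image.shape
    means = []
    for i in range(0, h, grid):
        for j in range(0, w, grid):
            means.append(float(image[i:i + grid, j:j + grid].mean()))
    return means

img = np.arange(16, dtype=float).reshape(4, 4)
feat = boundary_feature(img, grid=2)
assert len(feat) == 4                  # a 4x4 image yields four 2x2 cells
assert feat[0] == (0 + 1 + 4 + 5) / 4  # mean of the top-left cell
```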
  • the classification probability model is P(w_i | x), the probability that the boundary feature x belongs to class w_i, where:
  • P(w_i | x) is the classification probability model
  • w_i is the i-th image class (classification number)
  • i is the image class label
  • x is the boundary feature of segmented image k
  • d is the number of features (dimension) of the feature set.
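The model's formula image is not reproduced in the text; the sketch below assumes a simple Bayes-style posterior (Gaussian likelihood times class prior, normalized over classes) as one common form such a P(w_i | x) could take. The means and priors are illustrative:

```python
import numpy as np

def classify(x, class_means, priors):
    """Bayes-style sketch: P(w_i | x) proportional to a Gaussian
    likelihood of the d-dimensional boundary feature x times the class
    prior (assumed form, not the patent's stated model)."""
    x = np.asarray(x, dtype=float)
    scores = np.array([p * np.exp(-0.5 * np.sum((x - np.asarray(m)) ** 2))
                       for m, p in zip(class_means, priors)])
    probs = scores / scores.sum()  # normalized posteriors P(w_i | x)
    return int(np.argmax(probs)), probs

label, probs = classify([1.0, 0.0],
                        class_means=[[1.0, 0.0], [5.0, 5.0]],
                        priors=[0.5, 0.5])
assert label == 0                      # x sits exactly on class 0's mean
assert abs(probs.sum() - 1.0) < 1e-9   # posteriors sum to one
```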
  • this application also provides an intelligent image segmentation and classification device.
  • FIG. 2 is a schematic diagram of the internal structure of an intelligent image segmentation and classification device provided by an embodiment of this application.
  • the smart image segmentation and classification device 1 may be a PC (personal computer), a terminal device such as a smartphone, tablet computer, or portable computer, or a server.
  • the intelligent image segmentation and classification device 1 at least includes a memory 11, a processor 12, a communication bus 13, and a network interface 14.
  • the memory 11 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, and the like.
  • the memory 11 may be an internal storage unit of the intelligent image segmentation and classification device 1, for example, the hard disk of the intelligent image segmentation and classification device 1.
  • the memory 11 may also be an external storage device of the smart image segmentation and classification device 1, such as a plug-in hard disk, a smart memory card (Smart Media Card, SMC), a Secure Digital (SD) card, or a flash card (Flash Card) equipped on the device 1.
  • the memory 11 may also include both the internal storage unit of the intelligent image segmentation and classification device 1 and an external storage device.
  • the memory 11 can be used not only to store application software installed in the intelligent image segmentation and classification device 1 and various kinds of data, such as the code of the intelligent image segmentation and classification program 01, but also to temporarily store data that has been or will be output.
  • the processor 12 may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip, and is used to run program code or process data stored in the memory 11, for example to execute the intelligent image segmentation and classification program 01.
  • the communication bus 13 is used to realize the connection and communication between these components.
  • the network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is usually used to establish a communication connection between the apparatus 1 and other electronic devices.
  • the device 1 may also include a user interface.
  • the user interface may include a display (Display) and an input unit such as a keyboard (Keyboard).
  • the optional user interface may also include a standard wired interface and a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like.
  • the display can also be appropriately called a display screen or a display unit, which is used to display the information processed in the intelligent image segmentation and classification device 1 and to display a visualized user interface.
  • FIG. 2 only shows the smart image segmentation and classification device 1 with components 11-14 and the smart image segmentation and classification program 01.
  • the structure shown in FIG. 2 does not constitute a limitation on the smart image segmentation and classification device 1, which may include fewer or more components than shown, a combination of certain components, or a different arrangement of components.
  • the smart image segmentation and classification program 01 is stored in the memory 11; when the processor 12 executes the smart image segmentation and classification program 01 stored in the memory 11, the following steps are implemented:
  • Step 1: Receive an original image input by a user, perform Fourier transform on the original image to obtain a transformed image, and perform noise reduction processing on the transformed image based on a degradation function to obtain a noise-reduced image.
  • Step 2: Calculate the gray-scale probability of the noise-reduced image, and encode and compress the noise-reduced image according to the gray-scale probability to obtain a compressed image.
  • Step 3: Perform linear stretching, image enhancement, and region detection processing on the compressed image to obtain an image similarity set.
  • Step 4: Perform threshold segmentation on the image similarity set to obtain an original segmented image set, map the original segmented image set into an undirected atlas, and optimize the undirected atlas according to a pre-built objective function to obtain a segmented image set.
  • Step 5: Extract the boundary features of each segmented image in the segmented image set to obtain a feature set, establish a classification probability model based on the feature set and the number of image classes, and complete the classification of the segmented image set according to the classification probability model.
  • The specific implementation of Steps 1 to 5 is the same as that of S1 to S5 in the method embodiment described above and is not repeated here.
  • the intelligent image segmentation and classification program can also be divided into one or more modules, and the one or more modules are stored in the memory 11 and executed by one or more processors (in this embodiment, the processor 12) to complete this application.
  • the module referred to in this application is a series of computer program instruction segments that can complete specific functions, and is used to describe the execution process of the intelligent image segmentation and classification program in the intelligent image segmentation and classification device.
  • FIG. 3 is a schematic diagram of the program modules of the smart image segmentation and classification program in an embodiment of the smart image segmentation and classification device of this application.
  • as an example, the smart image segmentation and classification program can be divided into an image data receiving and processing module 10, an image enhancement module 20, a threshold segmentation module 30, and an image classification result output module 40:
  • the image data receiving and processing module 10 is used to: receive an original image and an image classification number input by a user, perform Fourier transform on the original image to obtain a transformed image, and perform noise reduction processing on the transformed image based on a degradation function to obtain a noise-reduced image.
  • the image enhancement module 20 is used to: calculate the gray-scale probability of the noise-reduced image, encode and compress the noise-reduced image according to the gray-scale probability to obtain a compressed image, and perform linear stretching, image enhancement, and region detection processing on the compressed image to obtain the image similarity set.
  • the threshold segmentation module 30 is used to: perform threshold segmentation on the image similarity set to obtain an original segmented image set, map the original segmented image set into an undirected atlas, and optimize the undirected atlas according to a pre-built objective function to obtain a segmented image set.
  • the image classification result output module 40 is used to: extract the boundary features of each segmented image in the segmented image set to obtain a feature set, establish a classification probability model based on the feature set and the image classification number, classify the segmented image set according to the classification probability model, and output the classification result of the original image.
  • the functions or operation steps implemented by the image data receiving and processing module 10, the image enhancement module 20, the threshold segmentation module 30, the image classification result output module 40, and the other program modules when executed are substantially the same as those in the above-mentioned embodiment and will not be repeated here.
  • an embodiment of the present application also proposes a computer-readable storage medium on which an intelligent image segmentation and classification program is stored; the intelligent image segmentation and classification program can be executed by one or more processors to achieve the following operations:
  • receive the original image and the image classification number input by the user, perform Fourier transform on the original image to obtain a transformed image, and perform noise reduction processing on the transformed image based on the degradation function to obtain a noise-reduced image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

一种智能图像分割及分类方法,装置以及计算机可读存储介质,其中方法包括:接收原始图像和图像分类数,对所述原始图像进行傅里叶变换、降噪处理、编码压缩、线性拉伸、图像增强及区域检测处理得到图像相似度集合,将所述图像相似度集合进行阈值分割得到原始分割图像集,将所述原始分割图像集映射成无向图集,根据预先构建的目标函数优化所述无向图集得到分割图像集(S4),提取所述分割图像集的边界特征得到特征集,基于所述特征集和所述图像分类数建立分类概率模型,根据所述分类概率模型对所述分割图像集进行分类并输出分类结果(S5)。该方法可以实现精准的智能图像分割及分类。

Description

智能图像分割及分类方法、装置及计算机可读存储介质
本申请基于巴黎公约申明享有2019年10月12日递交的申请号为CN 201910972271.6、名称为“智能图像分割及分类方法、装置及计算机可读存储介质”的中国专利申请的优先权,该中国专利申请的整体内容以参考的方式结合在本申请中。
技术领域
本申请涉及人工智能技术领域,尤其涉及一种智能图像分割及分类的方法、装置及计算机可读存储介质。
背景技术
图像分割及分类是采用计算机对图像进行一系列分析后,得到将图片分割并以预设特征分类为多个图像的效果,现有的图像分割及分类方法多以预先设定的规则为前提,如预先设定以人为分割分类规则将图像分割分类为含有人的图像和不含人的图像,因此无法达到智能化的目的。
发明内容
本申请提供一种智能图像分割及分类方法、装置及计算机可读存储介质,其主要目的在于提供一种不需人为操作的、智能化的智能图像分割及分类方案。
为实现上述目的,本申请提供的一种智能图像分割及分类方法,包括:
接收用户输入的原始图像和图像分类数,对所述原始图像进行傅里叶变换得到变换图像,并基于退化函数对所述变换图像进行降噪处理得到降噪图像;
计算所述降噪图像的灰度概率,根据所述灰度概率将所述降噪图像进行编码压缩得到压缩图像;
将所述压缩图像进行线性拉伸、图像增强及区域检测处理得到图像相似度集合;
将所述图像相似度集合进行阈值分割得到原始分割图像集，将所述原始分割图像集映射成无向图集，根据预先构建的目标函数优化所述无向图集得到分割图像集；
提取所述分割图像集内各分割图像的边界特征得到特征集,基于所述特征集和所述图像分类数建立分类概率模型,根据所述分类概率模型对所述分割图像集进行分类并输出所述原始图像的分类结果。
此外,为实现上述目的,本申请还提供一种智能图像分割及分类装置,该装置包括存储器和处理器,所述存储器中存储有可在所述处理器上运行的智能图像分割及分类程序,所述智能图像分割及分类程序被所述处理器执行时实现如下步骤:
接收用户输入的原始图像和图像分类数,对所述原始图像进行傅里叶变换得到变换图像,并基于退化函数对所述变换图像进行降噪处理得到降噪图像;
计算所述降噪图像的灰度概率,根据所述灰度概率将所述降噪图像进行编码压缩得到压缩图像;
将所述压缩图像进行线性拉伸、图像增强及区域检测处理得到图像相似度集合;
将所述图像相似度集合进行阈值分割得到原始分割图像集,将所述原始分割图像集映射成无向图集,根据预先构建的目标函数优化所述无向图集得到分割图像集;
提取所述分割图像集内各分割图像的边界特征得到特征集,基于所述特征集和所述图像分类数建立分类概率模型,根据所述分类概率模型对所述分割图像集进行分类并输出所述原始图像的分类结果。
此外,为实现上述目的,本申请还提供一种计算机可读存储介质,所述计算机可读存储介质上存储有智能图像分割及分类程序,所述智能图像分割及分类程序可被一个或者多个处理器执行,以实现如上所述的智能图像分割及分类方法的步骤。
本申请将接收的原始图像和图像分类数进行傅里叶变换和退化函数处理，提高了数据的纯净性，同时通过编码压缩及图像增强放大图像特征，并基于区域检测处理和阈值分割分离多种图像特征，提高对图像特征的利用率，再根据分类概率模型达到最终分类目的。因此本申请提出的智能图像分割及分类方法、装置及计算机可读存储介质，可以实现高准确率的图像分割及分类。
附图说明
图1为本申请一实施例提供的智能图像分割及分类方法的流程示意图;
图2为本申请一实施例提供的智能图像分割及分类装置的内部结构示意图;
图3为本申请一实施例提供的智能图像分割及分类装置中智能图像分割及分类程序的模块示意图。
本申请目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。
具体实施方式
应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请提供一种智能图像分割及分类方法。参照图1所示,为本申请一实施例提供的智能图像分割及分类方法的流程示意图。该方法可以由一个装置执行,该装置可以由软件和/或硬件实现。
在本实施例中,智能图像分割及分类方法包括:
S1、接收用户输入的原始图像和图像分类数，对所述原始图像进行傅里叶变换得到变换图像，并基于退化函数对所述变换图像进行降噪处理得到降噪图像。
较佳地,所述原始图像由若干个像素点组成,所述原始图像的内容包括各个场景,如山水、宠物嬉戏、NBA体育竞赛等。
本申请较佳实施例中,所述傅里叶变换包括:遍历所述原始图像的原始像素点,计算所述原始像素点的二维离散傅里叶变换函数,根据所述二维离散傅里叶变换函数求解所述原始图像的傅里叶逆变换函数,将所述傅里叶逆变换函数的函数值替换所述原始像素点得到所述变换图像。
进一步地,所述二维离散傅里叶变换函数包括:
F(u,v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x,y)·e^{-j2π(ux/M + vy/N)}
所述傅里叶逆变换函数包括:
f(x,y) = (1/(MN)) Σ_{u=0}^{M-1} Σ_{v=0}^{N-1} F(u,v)·e^{j2π(ux/M + vy/N)}
其中,F(u,v)为所述二维离散傅里叶变换函数,f(x,y)为所述傅里叶逆变换函数,(u,v)为所述原始像素点的坐标,(x,y)为所述傅里叶变换后的像素点坐标,
e^{-j2π(ux/M+vy/N)}与e^{j2π(ux/M+vy/N)}分别称为变换核和逆变换核，j为虚数单位，M、N为所述原始图像的图像规格。
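上述二维离散傅里叶变换及其逆变换可用如下 Python 草图按定义直接实现（仅为示意性示例，函数名与实现细节均为本文示例假设，非本申请的正式实现）：

```python
import numpy as np

def dft2(f):
    """二维离散傅里叶变换 F(u,v)，按定义直接求和（O(M^2·N^2)，仅作演示）。"""
    M, N = f.shape
    x = np.arange(M).reshape(M, 1)
    y = np.arange(N).reshape(1, N)
    F = np.zeros((M, N), dtype=complex)
    for u in range(M):
        for v in range(N):
            kernel = np.exp(-2j * np.pi * (u * x / M + v * y / N))  # 变换核
            F[u, v] = np.sum(f * kernel)
    return F

def idft2(F):
    """二维离散傅里叶逆变换 f(x,y)，带 1/(MN) 归一化因子。"""
    M, N = F.shape
    u = np.arange(M).reshape(M, 1)
    v = np.arange(N).reshape(1, N)
    f = np.zeros((M, N), dtype=complex)
    for x in range(M):
        for y in range(N):
            kernel = np.exp(2j * np.pi * (x * u / M + y * v / N))  # 逆变换核
            f[x, y] = np.sum(F * kernel) / (M * N)
    return f
```

实际应用中可直接调用 numpy.fft.fft2 / numpy.fft.ifft2，其约定与上述定义一致且效率更高。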
所述降噪处理是为了去除原始图像的噪点(如高斯噪点,椒盐噪点等),最大可能的保护图像细节。
优选地,基于退化函数对所述变换图像进行降噪处理得到降噪图像,包括:求解所述变换图像的像素方差和噪声方差,根据所述退化函数、所述像素方差和所述噪声方差,利用下述方法求解得到降噪图像。
t(x′,y′) = f(x,y)/H(x,y) − (δ_η²/δ²)·[f(x,y) − f̄]
其中，t(x′,y′)为所述降噪图像，(x′,y′)为所述降噪图像的像素点，f(x,y)为所述傅里叶逆变换函数，H(x,y)为所述退化函数，δ²为所述像素方差，δ_η²为所述噪声方差，f̄为所述傅里叶变换后的原始图像的像素灰度均值。
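降噪步骤的一种可能实现思路可示意如下（此处采用基于像素方差与噪声方差的自适应校正作为草图，权重形式为合理假设，并非对本申请退化函数公式的逐项复原）：

```python
import numpy as np

def adaptive_denoise(f, noise_var):
    """基于像素方差与噪声方差的自适应降噪草图（思路示意，非正式实现）。

    噪声越强（noise_var / 像素方差 越大），像素值越向灰度均值收缩。"""
    pixel_var = np.var(f)               # 像素方差 δ²
    mean_gray = np.mean(f)              # 像素灰度均值 f̄
    ratio = min(noise_var / pixel_var, 1.0)  # 限幅，防止过度校正
    return f - ratio * (f - mean_gray)
```

校正后图像是原图向均值的凸组合，故其方差不会超过原图方差。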
S2、计算所述降噪图像的灰度概率,根据所述灰度概率将所述降噪图像进行编码压缩得到压缩图像。
优选地,计算所述降噪图像的灰度概率,包括:遍历所述降噪图像每个像素点的灰度值得到灰度值集,遍历所述灰度值集中每个灰度值出现的次数得到灰度值与出现次数的对应表,将所述对应表中每个出现次数除以所述降噪图像的像素点个数得到灰度概率表。比如:所述降噪图像有6个像素点A、B、C、D、E、F,所述像素点A、B、C、D、E、F对应的灰度值分别为2、3、6、7、2、2,则灰度值为2的灰度概率为:
3/6 = 0.5。
较佳地,本申请所述根据所述灰度概率将所述降噪图像进行编码压缩得到压缩图像,包括:将所述灰度概率表内的灰度概率由大到小进行排序,将灰度概率最小的两个值相加得到新的灰度概率,依次类推直到所述灰度概率表的灰度概率达到指定数量阈值时,根据所述灰度概率表重新分配所述降噪图像的灰度值得到所述压缩图像。
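灰度概率的统计与基于概率的合并压缩步骤可示意如下（霍夫曼式的最小概率两两合并仅为草图，函数名为示例假设）：

```python
from collections import Counter

def gray_probability_table(pixels):
    """遍历像素灰度值，统计每个灰度值出现次数并除以像素总数，得到灰度概率表。"""
    counts = Counter(pixels)
    n = len(pixels)
    return {g: c / n for g, c in counts.items()}

def merge_probabilities(prob_table, target_size):
    """将概率最小的两项相加得到新概率，依次类推，
    直到表中概率个数达到指定数量阈值 target_size（合并过程示意）。"""
    probs = sorted(prob_table.values(), reverse=True)
    while len(probs) > target_size:
        a = probs.pop()          # 最小概率
        b = probs.pop()          # 次小概率
        probs.append(a + b)      # 合并为新的灰度概率
        probs.sort(reverse=True)
    return probs
```

以正文示例的 6 个像素点（灰度值 2、3、6、7、2、2）验证，灰度值 2 的概率即为 3/6 = 0.5。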
S3、将所述压缩图像进行线性拉伸、图像增强及区域检测处理得到图像相似度集合。
较佳地,所述线性拉伸是为了扩大所述压缩图像的对比度,所述线性拉伸的方法包括:
D_l = a·D_n + b
其中，D_n为所述压缩图像，D_l为所述线性拉伸后的图像，a为所述线性拉伸的斜率，b为所述线性拉伸的截距。当a>1时，所述线性拉伸后的图像对比度相比所述压缩图像是增强的；当a<1时，对比度是削弱的；当a=1时，对比度保持不变。
优选地,本申请较佳实施例中所述图像增强采用多阈值亮度增强法。所述图像增强包括:预设一个或多个亮度阈值段,遍历所述压缩图像的每个亮度点,判断所述每个亮度点属于的阈值段,并根据所述属于的阈值段预设的增强处理方法进行增强处理,直至所述遍历完成得到增强图像。
如所述亮度阈值段分为[0,20)、[20,40)、[40,80)、[80,120)、[120,+∞)，若所述压缩图像的某亮度点为27，则其属于[20,40)段；若[20,40)段的增强处理方法为放大2倍，则该亮度点由27变为54。
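线性拉伸与多阈值亮度增强两步可示意如下（阈值段与放大倍数采用正文示例中的假设参数）：

```python
def linear_stretch(pixel, a, b):
    """线性拉伸 D_l = a*D_n + b；a>1 时对比度增强，a<1 时削弱。"""
    return a * pixel + b

def enhance_brightness(point, segments):
    """多阈值亮度增强：判断亮度点所属阈值段，按该段预设倍数放大。

    segments 为 [(下限, 上限, 倍数), ...]，区间左闭右开；参数为示例假设。"""
    for lo, hi, factor in segments:
        if lo <= point < hi:
            return point * factor
    return point  # 不在任何阈值段内则保持不变
```

按正文示例，亮度点 27 落入 [20,40) 段并放大 2 倍，得到 54。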
优选地,所述区域检测处理包括:将所述增强图像随机分成s个尺寸相同的小图像块,依次计算所述s个小图像块的中心点像素值得到中心像素值集合,基于颜色特征和欧式距离计算所述中心点像素值集合的相似度得到图像相似度集合。
S4、将所述图像相似度集合进行阈值分割得到原始分割图像集,将所述原始分割图像集映射成无向图集,根据预先构建的目标函数优化所述无向图集得到分割图像集。
优选地，所述阈值分割包括：基于预设相似度阈值从所述相似度集合中去除大于所述预设相似度阈值的相似度数值，根据去除后的相似度集合提取出对应的小图像块得到分割图像集。
进一步地，本申请所述计算所述s个小图像块的中心点像素值可采用平均值法、中心点扩大法等。所述平均值法即将小图像块中所有的像素值相加后取平均值，所述平均值即为中心点像素值；所述中心点扩大法即选取小图像块的位置中心点，距离所述中心点越近的像素点权重越大，中心点自身权重最大，再根据各像素点的权重加权平均得到中心点像素值。
较佳地,所述基于颜色特征计算所述中心点像素值集合的相似度根据如下方法:
d(p_i, p_j) = d_color(p_i, p_j) + c·d_position(p_i, p_j)
其中，p_i、p_j分别表示i小图像块和j小图像块的中心点像素值，d(p_i,p_j)为所述i小图像块和所述j小图像块的相似度，c为相似度调节参数，d_position(p_i,p_j)表示所述i小图像块和所述j小图像块的欧式距离，d_color(p_i,p_j)表示所述i小图像块颜色特征和所述j小图像块颜色特征的差值，所述颜色特征即为小图像块的RGB像素值经过预设处理方法得到。
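基于颜色特征与欧式距离的相似度计算可示意如下（颜色差与位置距离的线性组合方式为合理假设，非本申请公式的逐项复原）：

```python
import math

def similarity(p_i, p_j, pos_i, pos_j, c=1.0):
    """小图像块相似度草图：颜色特征差值 + c·位置欧式距离。

    p_i/p_j 为中心点像素的颜色特征（RGB 三元组），
    pos_i/pos_j 为图像块位置坐标，c 为相似度调节参数（示例假设）。"""
    d_color = math.sqrt(sum((a - b) ** 2 for a, b in zip(p_i, p_j)))  # 颜色差值
    d_position = math.dist(pos_i, pos_j)                              # 欧式距离
    return d_color + c * d_position
```

取值越小表示两个小图像块越相似；两块完全相同且同位时相似度为 0。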
优选地,所述原始分割图像集映射成无向图集包括:遍历所述原始分割图像集内每个原始分割图像,随机选取所述原始分割图像的两个相邻像素,构建所述两个相邻像素的连接线,基于所述连接线得到分割后的图像,所述分割后的图像称为无向图。
更进一步地,所述目标函数为:
E(A)=ρR(A)+B(A)
R(A) = Σ_{p∈P} R_p(A_p)
B(A) = Σ_{{p,q}∈N} B_{p,q}·δ(A_p, A_q)
其中，E(A)为分割图像集，A表示像素集合的二进制向量，ρ为调节参数，R(A)为像素标签，R_p(A_p)表示像素p分配给像素二进制向量A的代价，B(A)表示所述无向图的边界项，B_{p,q}为像素p、q之间的不连续代价值，A_p表示像素p的区域项，δ(A_p,A_q)为像素p的区域项与像素q的区域项相同的概率函数。
S5、提取所述分割图像集内各分割图像的边界特征得到特征集,基于所述特征集和所述图像分类数建立分类概率模型,根据所述分类概率模型完成所述分割图像集的分类。
较佳地,所述提取所述分割图像集内各分割图像的边界特征得到特征集,包括:依次遍历所述分割图像集内的分割图像,从所述分割图像的图像边缘位置中随机选择一点作为坐标原点,从所述坐标原点开始,将水平方向坐标和垂直方向坐标分成等间隔的网格,然后计算每个网格中的像素值均值,按照预设规则(如逆时针或顺时针方向)依次将所述像素值均值连接起来得到所述边界特征,当遍历所述分割图像集完成后得到与所述分割图像集对应的特征集。
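边界特征的网格均值提取可示意如下（网格数与连接顺序为示例假设，实际可按预设的顺时针或逆时针规则连接）：

```python
import numpy as np

def boundary_features(image, grid=4):
    """将图像按水平/垂直方向划分为 grid×grid 的等间隔网格，
    计算每个网格中的像素值均值，再依次连接成边界特征向量（示意）。"""
    h, w = image.shape
    gh, gw = h // grid, w // grid
    means = [
        [image[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw].mean() for c in range(grid)]
        for r in range(grid)
    ]
    # 简化起见按行展开；实际按预设顺/逆时针规则依次连接各均值
    return np.array(means).ravel()
```

对分割图像集中每幅分割图像调用一次，即得到与分割图像集对应的特征集。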
本申请较佳实施例中,所述分类概率模型为:
P(w_i|x) = P(w_i)·∏_{k=1}^{d} P(x_k|w_i) / Σ_j P(w_j)·∏_{k=1}^{d} P(x_k|w_j)
其中,P(w i|x)为所述分类概率模型,w i为所述图像分类数,i为图像分类数标号,x为分割图像k的边界特征,d为所述特征集的特征个数。
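分类概率模型的一种可能形式是高斯朴素贝叶斯式的后验概率（以下草图为合理假设，非本申请公式的逐项复原；各类的均值、方差与先验均为示例参数）：

```python
import numpy as np

def classify_posterior(x, class_means, class_vars, priors):
    """计算 d 维边界特征 x 在各类别下的后验概率。

    P(w_i|x) ∝ P(w_i)·∏_k N(x_k; μ_ik, σ_ik²)，对数域计算以保证数值稳定。"""
    x = np.asarray(x, dtype=float)
    scores = []
    for mu, var, prior in zip(class_means, class_vars, priors):
        mu = np.asarray(mu, dtype=float)
        var = np.asarray(var, dtype=float)
        # 各维独立的高斯对数似然
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        scores.append(np.log(prior) + log_lik)
    scores = np.array(scores)
    scores -= scores.max()          # 平移防止 exp 溢出
    post = np.exp(scores)
    return post / post.sum()        # 归一化后验概率
```

取后验概率最大的类别即为该分割图像的分类结果。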
本申请还提供一种智能图像分割及分类装置。参照图2所示，为本申请一实施例提供的智能图像分割及分类装置的内部结构示意图。
在本实施例中,所述智能图像分割及分类装置1可以是PC(Personal Computer,个人电脑),或者是智能手机、平板电脑、便携计算机等终端设备,也可以是一种服务器等。该智能图像分割及分类装置1至少包括存储器11、处理器12,通信总线13,以及网络接口14。
其中,存储器11至少包括一种类型的可读存储介质,所述可读存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等)、磁性存储器、磁盘、光盘等。存储器11在一些实施例中可以是智能图像分割及分类装置1的内部存储单元,例如该智能图像分割及分类装置1的硬盘。存储器11在另一些实施例中也可以是智能图像分割及分类装置1的外部存储设备,例如智能图像分割及分类装置1上配备的插接式硬盘,智能存储卡(Smart  Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。进一步地,存储器11还可以既包括智能图像分割及分类装置1的内部存储单元也包括外部存储设备。存储器11不仅可以用于存储安装于智能图像分割及分类装置1的应用软件及各类数据,例如智能图像分割及分类程序01的代码等,还可以用于暂时地存储已经输出或者将要输出的数据。
处理器12在一些实施例中可以是一中央处理器(Central Processing Unit,CPU)、控制器、微控制器、微处理器或其他数据处理芯片,用于运行存储器11中存储的程序代码或处理数据,例如执行智能图像分割及分类程序01等。
通信总线13用于实现这些组件之间的连接通信。
网络接口14可选的可以包括标准的有线接口、无线接口(如WI-FI接口),通常用于在该装置1与其他电子设备之间建立通信连接。
可选地,该装置1还可以包括用户接口,用户接口可以包括显示器(Display)、输入单元比如键盘(Keyboard),可选的用户接口还可以包括标准的有线接口、无线接口。可选地,在一些实施例中,显示器可以是LED显示器、液晶显示器、触控式液晶显示器以及OLED(Organic Light-Emitting Diode,有机发光二极管)触摸器等。其中,显示器也可以适当的称为显示屏或显示单元,用于显示在智能图像分割及分类装置1中处理的信息以及用于显示可视化的用户界面。
图2仅示出了具有组件11-14以及智能图像分割及分类程序01的智能图像分割及分类装置1，本领域技术人员可以理解的是，图2示出的结构并不构成对智能图像分割及分类装置1的限定，可以包括比图示更少或者更多的部件，或者组合某些部件，或者不同的部件布置。
在图2所示的装置1实施例中,存储器11中存储有智能图像分割及分类程序01;处理器12执行存储器11中存储的智能图像分割及分类程序01时实现如下步骤:
步骤一、接收用户输入的原始图像和图像分类数，对所述原始图像进行傅里叶变换得到变换图像，并基于退化函数对所述变换图像进行降噪处理得到降噪图像。
较佳地,所述原始图像由若干个像素点组成,所述原始图像的内容包括各个场景,如山水、宠物嬉戏、NBA体育竞赛等。
本申请较佳实施例中，所述傅里叶变换包括：遍历所述原始图像的原始像素点，计算所述原始像素点的二维离散傅里叶变换函数，根据所述二维离散傅里叶变换函数求解所述原始图像的傅里叶逆变换函数，将所述傅里叶逆变换函数的函数值替换所述原始像素点得到所述变换图像。
进一步地,所述二维离散傅里叶变换函数包括:
F(u,v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x,y)·e^{-j2π(ux/M + vy/N)}
所述傅里叶逆变换函数包括:
f(x,y) = (1/(MN)) Σ_{u=0}^{M-1} Σ_{v=0}^{N-1} F(u,v)·e^{j2π(ux/M + vy/N)}
其中,F(u,v)为所述二维离散傅里叶变换函数,f(x,y)为所述傅里叶逆变换函数,(u,v)为所述原始像素点的坐标,(x,y)为所述傅里叶变换后的像素点坐标,
e^{-j2π(ux/M+vy/N)}与e^{j2π(ux/M+vy/N)}分别称为变换核和逆变换核，j为虚数单位，M、N为所述原始图像的图像规格。
所述降噪处理是为了去除原始图像的噪点(如高斯噪点,椒盐噪点等),最大可能的保护图像细节。
优选地,基于退化函数对所述变换图像进行降噪处理得到降噪图像,包括:求解所述变换图像的像素方差和噪声方差,根据所述退化函数、所述像素方差和所述噪声方差,利用下述方法求解得到降噪图像。
t(x′,y′) = f(x,y)/H(x,y) − (δ_η²/δ²)·[f(x,y) − f̄]
其中，t(x′,y′)为所述降噪图像，(x′,y′)为所述降噪图像的像素点，f(x,y)为所述傅里叶逆变换函数，H(x,y)为所述退化函数，δ²为所述像素方差，δ_η²为所述噪声方差，f̄为所述傅里叶变换后的原始图像的像素灰度均值。
步骤二、计算所述降噪图像的灰度概率,根据所述灰度概率将所述降噪图像进行编码压缩得到压缩图像。
优选地，计算所述降噪图像的灰度概率，包括：遍历所述降噪图像每个像素点的灰度值得到灰度值集，遍历所述灰度值集中每个灰度值出现的次数得到灰度值与出现次数的对应表，将所述对应表中每个出现次数除以所述降噪图像的像素点个数得到灰度概率表。比如：所述降噪图像有6个像素点A、B、C、D、E、F，所述像素点A、B、C、D、E、F对应的灰度值分别为2、3、6、7、2、2，则灰度值为2的灰度概率为:
3/6 = 0.5。
较佳地,本申请所述根据所述灰度概率将所述降噪图像进行编码压缩得到压缩图像,包括:将所述灰度概率表内的灰度概率由大到小进行排序,将灰度概率最小的两个值相加得到新的灰度概率,依次类推直到所述灰度概率表的灰度概率达到指定数量阈值时,根据所述灰度概率表重新分配所述降噪图像的灰度值得到所述压缩图像。
步骤三、将所述压缩图像进行线性拉伸、图像增强及区域检测处理得到图像相似度集合。
较佳地,所述线性拉伸是为了扩大所述压缩图像的对比度,所述线性拉伸的方法包括:
D_l = a·D_n + b
其中，D_n为所述压缩图像，D_l为所述线性拉伸后的图像，a为所述线性拉伸的斜率，b为所述线性拉伸的截距。当a>1时，所述线性拉伸后的图像对比度相比所述压缩图像是增强的；当a<1时，对比度是削弱的；当a=1时，对比度保持不变。
优选地,本申请较佳实施例中所述图像增强采用多阈值亮度增强法。所述图像增强包括:预设一个或多个亮度阈值段,遍历所述压缩图像的每个亮度点,判断所述每个亮度点属于的阈值段,并根据所述属于的阈值段预设的增强处理方法进行增强处理,直至所述遍历完成得到增强图像。
如所述亮度阈值段分为[0,20)、[20,40)、[40,80)、[80,120)、[120,+∞)，若所述压缩图像的某亮度点为27，则其属于[20,40)段；若[20,40)段的增强处理方法为放大2倍，则该亮度点由27变为54。
优选地,所述区域检测处理包括:将所述增强图像随机分成s个尺寸相同的小图像块,依次计算所述s个小图像块的中心点像素值得到中心像素值集合,基于颜色特征和欧式距离计算所述中心点像素值集合的相似度得到图像相似度集合。
步骤四、将所述图像相似度集合进行阈值分割得到原始分割图像集，将所述原始分割图像集映射成无向图集，根据预先构建的目标函数优化所述无向图集得到分割图像集。
优选地,所述阈值分割包括:基于预设相似度阈值从所述相似度集合中去除大于所述预设相似度阈值的相似度数值,根据去除后的相似度集合提取出对应的小图像块得到分割图像集。
进一步地，本申请所述计算所述s个小图像块的中心点像素值可采用平均值法、中心点扩大法等。所述平均值法即将小图像块中所有的像素值相加后取平均值，所述平均值即为中心点像素值；所述中心点扩大法即选取小图像块的位置中心点，距离所述中心点越近的像素点权重越大，中心点自身权重最大，再根据各像素点的权重加权平均得到中心点像素值。
较佳地,所述基于颜色特征计算所述中心点像素值集合的相似度根据如下方法:
d(p_i, p_j) = d_color(p_i, p_j) + c·d_position(p_i, p_j)
其中，p_i、p_j分别表示i小图像块和j小图像块的中心点像素值，d(p_i,p_j)为所述i小图像块和所述j小图像块的相似度，c为相似度调节参数，d_position(p_i,p_j)表示所述i小图像块和所述j小图像块的欧式距离，d_color(p_i,p_j)表示所述i小图像块颜色特征和所述j小图像块颜色特征的差值，所述颜色特征即为小图像块的RGB像素值经过预设处理方法得到。
优选地,所述原始分割图像集映射成无向图集包括:遍历所述原始分割图像集内每个原始分割图像,随机选取所述原始分割图像的两个相邻像素,构建所述两个相邻像素的连接线,基于所述连接线得到分割后的图像,所述分割后的图像称为无向图。
更进一步地,所述目标函数为:
E(A)=ρR(A)+B(A)
R(A) = Σ_{p∈P} R_p(A_p)
B(A) = Σ_{{p,q}∈N} B_{p,q}·δ(A_p, A_q)
其中，E(A)为分割图像集，A表示像素集合的二进制向量，ρ为调节参数，R(A)为像素标签，R_p(A_p)表示像素p分配给像素二进制向量A的代价，B(A)表示所述无向图的边界项，B_{p,q}为像素p、q之间的不连续代价值，A_p表示像素p的区域项，δ(A_p,A_q)为像素p的区域项与像素q的区域项相同的概率函数。
步骤五、提取所述分割图像集内各分割图像的边界特征得到特征集,基于所述特征集和所述图像分类数建立分类概率模型,根据所述分类概率模型完成所述分割图像集的分类。
较佳地,所述提取所述分割图像集内各分割图像的边界特征得到特征集,包括:依次遍历所述分割图像集内的分割图像,从所述分割图像的图像边缘位置中随机选择一点作为坐标原点,从所述坐标原点开始,将水平方向坐标和垂直方向坐标分成等间隔的网格,然后计算每个网格中的像素值均值,按照预设规则(如逆时针或顺时针方向)依次将所述像素值均值连接起来得到所述边界特征,当遍历所述分割图像集完成后得到与所述分割图像集对应的特征集。
本申请较佳实施例中,所述分类概率模型为:
P(w_i|x) = P(w_i)·∏_{k=1}^{d} P(x_k|w_i) / Σ_j P(w_j)·∏_{k=1}^{d} P(x_k|w_j)
其中,P(w i|x)为所述分类概率模型,w i为所述图像分类数,i为图像分类数标号,x为分割图像k的边界特征,d为所述特征集的特征个数。
可选地，在其他实施例中，智能图像分割及分类程序还可以被分割为一个或者多个模块，一个或者多个模块被存储于存储器11中，并由一个或多个处理器(本实施例为处理器12)所执行以完成本申请，本申请所称的模块是指能够完成特定功能的一系列计算机程序指令段，用于描述智能图像分割及分类程序在智能图像分割及分类装置中的执行过程。
例如，参照图3所示，为本申请智能图像分割及分类装置一实施例中的智能图像分割及分类程序的程序模块示意图，该实施例中，所述智能图像分割及分类程序可以被分割为图像数据接收及处理模块10、图像增强模块20、阈值分割模块30、图像分类结果输出模块40，示例性地：
所述图像数据接收及处理模块10用于:接收用户输入的原始图像和图像分类数,对所述原始图像进行傅里叶变换得到变换图像,并基于退化函数对所述变换图像进行降噪处理得到降噪图像。
所述图像增强模块20用于：计算所述降噪图像的灰度概率，根据所述灰度概率将所述降噪图像进行编码压缩得到压缩图像，将所述压缩图像进行线性拉伸、图像增强及区域检测处理得到图像相似度集合。
所述阈值分割模块30用于:将所述图像相似度集合进行阈值分割得到原始分割图像集,将所述原始分割图像集映射成无向图集,根据预先构建的目标函数优化所述无向图集得到分割图像集。
所述图像分类结果输出模块40用于：提取所述分割图像集内各分割图像的边界特征得到特征集，基于所述特征集和所述图像分类数建立分类概率模型，根据所述分类概率模型对所述分割图像集进行分类并输出所述原始图像的分类结果。
上述图像数据接收及处理模块10、图像增强模块20、阈值分割模块30、图像的分类结果输出模块40等程序模块被执行时所实现的功能或操作步骤与上述实施例大体相同,在此不再赘述。
此外,本申请实施例还提出一种计算机可读存储介质,所述计算机可读存储介质上存储有智能图像分割及分类程序,所述智能图像分割及分类程序可被一个或多个处理器执行,以实现如下操作:
接收用户输入的原始图像和图像分类数,对所述原始图像进行傅里叶变换得到变换图像,并基于退化函数对所述变换图像进行降噪处理得到降噪图像。
计算所述降噪图像的灰度概率,根据所述灰度概率将所述降噪图像进行编码压缩得到压缩图像,将所述压缩图像进行线性拉伸、图像增强及区域检测处理得到图像相似度集合。
将所述图像相似度集合进行阈值分割得到原始分割图像集，将所述原始分割图像集映射成无向图集，根据预先构建的目标函数优化所述无向图集得到分割图像集。
提取所述分割图像集内各分割图像的边界特征得到特征集,基于所述特征集和所述图像分类数建立分类概率模型,根据所述分类概率模型对所述分割图像集进行分类并输出所述原始图像的分类结果。
需要说明的是,上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。并且本文中的术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、装置、物品或者方法不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、装置、物品或者方法所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、装置、物品或者方法中还存在另外的相同要素。
通过以上的实施方式的描述，本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现，当然也可以通过硬件，但很多情况下前者是更佳的实施方式。基于这样的理解，本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来，该计算机软件产品存储在如上所述的一个存储介质(如ROM/RAM、磁碟、光盘)中，包括若干指令用以使得一台终端设备(可以是手机，计算机，服务器，或者网络设备等)执行本申请各个实施例所述的方法。
以上仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。

Claims (20)

  1. 一种智能图像分割及分类方法,其特征在于,所述方法包括:
    接收用户输入的原始图像和图像分类数,对所述原始图像进行傅里叶变换得到变换图像,并基于退化函数对所述变换图像进行降噪处理得到降噪图像;
    计算所述降噪图像的灰度概率,根据所述灰度概率将所述降噪图像进行编码压缩得到压缩图像;
    将所述压缩图像进行线性拉伸、图像增强及区域检测处理得到图像相似度集合;
    将所述图像相似度集合进行阈值分割得到原始分割图像集,将所述原始分割图像集映射成无向图集,根据预先构建的目标函数优化所述无向图集得到分割图像集;
    提取所述分割图像集内各分割图像的边界特征得到特征集,基于所述特征集和所述图像分类数建立分类概率模型,根据所述分类概率模型对所述分割图像集进行分类并输出所述原始图像的分类结果。
  2. 如权利要求1所述的智能图像分割及分类方法,其特征在于,所述傅里叶变换包括:
    遍历所述原始图像的原始像素点,计算所述原始像素点的二维离散傅里叶变换函数;
    根据所述二维离散傅里叶变换函数求解所述原始图像的傅里叶逆变换函数;
    将所述傅里叶逆变换函数的函数值替换所述原始像素点得到所述变换图像。
  3. 如权利要求2所述的智能图像分割及分类方法,其特征在于,所述二维离散傅里叶变换函数包括:
    F(u,v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x,y)·e^{-j2π(ux/M + vy/N)}
    所述傅里叶逆变换函数包括:
    f(x,y) = (1/(MN)) Σ_{u=0}^{M-1} Σ_{v=0}^{N-1} F(u,v)·e^{j2π(ux/M + vy/N)}
    其中,F(u,v)为所述二维离散傅里叶变换函数,f(x,y)为所述傅里叶逆变换函数,(u,v)为所述原始像素点的坐标,(x,y)为所述傅里叶变换后的像素点坐标,
    e^{-j2π(ux/M+vy/N)}与e^{j2π(ux/M+vy/N)}分别为变换核和逆变换核，j为虚数单位，M、N为所述原始图像的图像规格。
  4. 如权利要求1所述的智能图像分割及分类方法,其特征在于,所述基于退化函数对所述变换图像进行降噪处理得到降噪图像,包括:
    求解所述变换图像的像素方差和噪声方差;
    根据所述退化函数、所述像素方差和所述噪声方差利用下述方法得到降噪图像:
    t(x′,y′) = f(x,y)/H(x,y) − (δ_η²/δ²)·[f(x,y) − f̄]
    其中，t(x′,y′)为所述降噪图像，(x′,y′)为所述降噪图像的像素点，f(x,y)为所述傅里叶逆变换函数，H(x,y)为所述退化函数，δ²为所述像素方差，δ_η²为所述噪声方差，f̄为所述傅里叶变换后的原始图像的像素灰度均值。
  5. 如权利要求1所述的智能图像分割及分类方法,其特征在于,所述预先构建的目标函数为:
    E(A)=ρR(A)+B(A)
    R(A) = Σ_{p∈P} R_p(A_p)
    B(A) = Σ_{{p,q}∈N} B_{p,q}·δ(A_p, A_q)
    其中，E(A)为分割图像集，A表示像素集合的二进制向量，ρ为调节参数，R(A)为像素标签，R_p(A_p)表示像素p分配给像素二进制向量A的代价，B(A)表示所述无向图的边界项，B_{p,q}为像素p、q之间的不连续代价值，A_p表示像素p的区域项，δ(A_p,A_q)为像素p的区域项与像素q的区域项相同的概率函数。
  6. 如权利要求1所述的智能图像分割及分类方法,其特征在于,所述计算所述降噪图像的灰度概率包括:
    遍历所述降噪图像每个像素点的灰度值得到灰度值集，遍历所述灰度值集中每个灰度值出现的次数得到灰度值与出现次数的对应表，将所述对应表中每个出现次数除以所述降噪图像的像素点个数得到灰度概率表。
  7. 如权利要求1所述的智能图像分割及分类方法,其特征在于,所述根据所述灰度概率将所述降噪图像进行编码压缩得到压缩图像包括:
    将所述灰度概率表内的灰度概率由大到小进行排序,将灰度概率最小的两个值相加得到新的灰度概率,依次类推直到所述灰度概率表的灰度概率达到指定数量阈值时,根据所述灰度概率表重新分配所述降噪图像的灰度值得到所述压缩图像。
  8. 一种智能图像分割及分类装置,其特征在于,所述装置包括存储器和处理器,所述存储器上存储有可在所述处理器上运行的智能图像分割及分类程序,所述智能图像分割及分类程序被所述处理器执行时实现如下步骤:
    接收用户输入的原始图像和图像分类数,对所述原始图像进行傅里叶变换得到变换图像,并基于退化函数对所述变换图像进行降噪处理得到降噪图像;
    计算所述降噪图像的灰度概率,根据所述灰度概率将所述降噪图像进行编码压缩得到压缩图像;
    将所述压缩图像进行线性拉伸、图像增强及区域检测处理得到图像相似度集合;
    将所述图像相似度集合进行阈值分割得到原始分割图像集,将所述原始分割图像集映射成无向图集,根据预先构建的目标函数优化所述无向图集得到分割图像集;
    提取所述分割图像集内各分割图像的边界特征得到特征集,基于所述特征集和所述图像分类数建立分类概率模型,根据所述分类概率模型对所述分割图像集进行分类并输出所述原始图像的分类结果。
  9. 如权利要求8所述的智能图像分割及分类装置,其特征在于,所述傅里叶变换包括:
    遍历所述原始图像的原始像素点,计算所述原始像素点的二维离散傅里叶变换函数;
    根据所述二维离散傅里叶变换函数求解所述原始图像的傅里叶逆变换函数;
    将所述傅里叶逆变换函数的函数值替换所述原始像素点得到所述变换图像。
  10. 如权利要求9所述的智能图像分割及分类装置,其特征在于,所述二维离散傅里叶变换函数包括:
    F(u,v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x,y)·e^{-j2π(ux/M + vy/N)}
    所述傅里叶逆变换函数包括:
    f(x,y) = (1/(MN)) Σ_{u=0}^{M-1} Σ_{v=0}^{N-1} F(u,v)·e^{j2π(ux/M + vy/N)}
    其中,F(u,v)为所述二维离散傅里叶变换函数,f(x,y)为所述傅里叶逆变换函数,(u,v)为所述原始像素点的坐标,(x,y)为所述傅里叶变换后的像素点坐标,
    e^{-j2π(ux/M+vy/N)}与e^{j2π(ux/M+vy/N)}分别为变换核和逆变换核，j为虚数单位，M、N为所述原始图像的图像规格。
  11. 如权利要求10所述的智能图像分割及分类装置,其特征在于,所述基于退化函数对所述变换图像进行降噪处理得到降噪图像,包括:
    求解所述变换图像的像素方差和噪声方差;
    根据所述退化函数、所述像素方差和所述噪声方差利用下述方法得到降噪图像:
    t(x′,y′) = f(x,y)/H(x,y) − (δ_η²/δ²)·[f(x,y) − f̄]
    其中，t(x′,y′)为所述降噪图像，(x′,y′)为所述降噪图像的像素点，f(x,y)为所述傅里叶逆变换函数，H(x,y)为所述退化函数，δ²为所述像素方差，δ_η²为所述噪声方差，f̄为所述傅里叶变换后的原始图像的像素灰度均值。
  12. 如权利要求8所述的智能图像分割及分类装置,其特征在于,所述预先构建的目标函数为:
    E(A)=ρR(A)+B(A)
    R(A) = Σ_{p∈P} R_p(A_p)
    B(A) = Σ_{{p,q}∈N} B_{p,q}·δ(A_p, A_q)
    其中，E(A)为分割图像集，A表示像素集合的二进制向量，ρ为调节参数，R(A)为像素标签，R_p(A_p)表示像素p分配给像素二进制向量A的代价，B(A)表示所述无向图的边界项，B_{p,q}为像素p、q之间的不连续代价值，A_p表示像素p的区域项，δ(A_p,A_q)为像素p的区域项与像素q的区域项相同的概率函数。
  13. 如权利要求8所述的智能图像分割及分类装置,其特征在于,所述计算所述降噪图像的灰度概率包括:
    遍历所述降噪图像每个像素点的灰度值得到灰度值集,遍历所述灰度值集中每个灰度值出现的次数得到灰度值与出现次数的对应表,将所述对应表中每个出现次数除以所述降噪图像的像素点个数得到灰度概率表。
  14. 如权利要求8所述的智能图像分割及分类装置,其特征在于,所述根据所述灰度概率将所述降噪图像进行编码压缩得到压缩图像包括:
    将所述灰度概率表内的灰度概率由大到小进行排序,将灰度概率最小的两个值相加得到新的灰度概率,依次类推直到所述灰度概率表的灰度概率达到指定数量阈值时,根据所述灰度概率表重新分配所述降噪图像的灰度值得到所述压缩图像。
  15. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有智能图像分割及分类程序,所述智能图像分割及分类程序可被一个或者多个处理器执行时,实现如下步骤:
    接收用户输入的原始图像和图像分类数,对所述原始图像进行傅里叶变换得到变换图像,并基于退化函数对所述变换图像进行降噪处理得到降噪图像;
    计算所述降噪图像的灰度概率,根据所述灰度概率将所述降噪图像进行编码压缩得到压缩图像;
    将所述压缩图像进行线性拉伸、图像增强及区域检测处理得到图像相似度集合;
    将所述图像相似度集合进行阈值分割得到原始分割图像集,将所述原始分割图像集映射成无向图集,根据预先构建的目标函数优化所述无向图集得到分割图像集;
    提取所述分割图像集内各分割图像的边界特征得到特征集,基于所述特征集和所述图像分类数建立分类概率模型,根据所述分类概率模型对所述分割图像集进行分类并输出所述原始图像的分类结果。
  16. 如权利要求15所述的计算机可读存储介质,其特征在于,所述傅里叶变换包括:
    遍历所述原始图像的原始像素点,计算所述原始像素点的二维离散傅里叶变换函数;
    根据所述二维离散傅里叶变换函数求解所述原始图像的傅里叶逆变换函数;
    将所述傅里叶逆变换函数的函数值替换所述原始像素点得到所述变换图像。
  17. 如权利要求16所述的计算机可读存储介质,其特征在于,所述二维离散傅里叶变换函数包括:
    F(u,v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x,y)·e^{-j2π(ux/M + vy/N)}
    所述傅里叶逆变换函数包括:
    f(x,y) = (1/(MN)) Σ_{u=0}^{M-1} Σ_{v=0}^{N-1} F(u,v)·e^{j2π(ux/M + vy/N)}
    其中,F(u,v)为所述二维离散傅里叶变换函数,f(x,y)为所述傅里叶逆变换函数,(u,v)为所述原始像素点的坐标,(x,y)为所述傅里叶变换后的像素点坐标,
    e^{-j2π(ux/M+vy/N)}与e^{j2π(ux/M+vy/N)}分别为变换核和逆变换核，j为虚数单位，M、N为所述原始图像的图像规格。
  18. 如权利要求15所述的计算机可读存储介质,其特征在于,所述基于退化函数对所述变换图像进行降噪处理得到降噪图像,包括:
    求解所述变换图像的像素方差和噪声方差;
    根据所述退化函数、所述像素方差和所述噪声方差利用下述方法得到降噪图像:
    t(x′,y′) = f(x,y)/H(x,y) − (δ_η²/δ²)·[f(x,y) − f̄]
    其中，t(x′,y′)为所述降噪图像，(x′,y′)为所述降噪图像的像素点，f(x,y)为所述傅里叶逆变换函数，H(x,y)为所述退化函数，δ²为所述像素方差，δ_η²为所述噪声方差，f̄为所述傅里叶变换后的原始图像的像素灰度均值。
  19. 如权利要求15所述的计算机可读存储介质,其特征在于,所述计算所述降噪图像的灰度概率包括:
    遍历所述降噪图像每个像素点的灰度值得到灰度值集,遍历所述灰度值集中每个灰度值出现的次数得到灰度值与出现次数的对应表,将所述对应表中每个出现次数除以所述降噪图像的像素点个数得到灰度概率表。
  20. 如权利要求15所述的计算机可读存储介质,其特征在于,所述根据所述灰度概率将所述降噪图像进行编码压缩得到压缩图像包括:
    将所述灰度概率表内的灰度概率由大到小进行排序,将灰度概率最小的两个值相加得到新的灰度概率,依次类推直到所述灰度概率表的灰度概率达到指定数量阈值时,根据所述灰度概率表重新分配所述降噪图像的灰度值得到所述压缩图像。
PCT/CN2019/117343 2019-10-12 2019-11-12 智能图像分割及分类方法、装置及计算机可读存储介质 WO2021068330A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910972271.6 2019-10-12
CN201910972271.6A CN110853047B (zh) 2019-10-12 2019-10-12 智能图像分割及分类方法、装置及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2021068330A1 true WO2021068330A1 (zh) 2021-04-15

Family

ID=69596254

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/117343 WO2021068330A1 (zh) 2019-10-12 2019-11-12 智能图像分割及分类方法、装置及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN110853047B (zh)
WO (1) WO2021068330A1 (zh)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113221919A (zh) * 2021-05-19 2021-08-06 神华神东煤炭集团有限责任公司 一种煤泥颗粒图像的特征提取方法及电子设备
CN114283155A (zh) * 2021-11-23 2022-04-05 赣州好朋友科技有限公司 矿石图像的分割方法、装置及计算机可读存储介质
CN114298985A (zh) * 2021-12-16 2022-04-08 苏州凌云视界智能设备有限责任公司 缺陷检测方法、装置、设备及存储介质
CN114581345A (zh) * 2022-05-07 2022-06-03 广州骏天科技有限公司 一种基于自适应线性灰度化的图像增强方法及系统
CN115908458A (zh) * 2023-03-09 2023-04-04 国家海洋局南海标准计量中心 一种深海区域干涉条纹提取方法、装置及存储介质
CN116612138A (zh) * 2023-07-14 2023-08-18 威海职业学院(威海市技术学院) 基于图像处理的电气设备在线监测方法
CN117237383A (zh) * 2023-11-15 2023-12-15 山东智赢门窗科技有限公司 一种基于室内环境的智能门窗控制方法及系统
CN117314940A (zh) * 2023-11-30 2023-12-29 诺伯特智能装备(山东)有限公司 基于人工智能的激光切割零件轮廓快速分割方法
CN117392465A (zh) * 2023-12-08 2024-01-12 聚真宝(山东)技术有限公司 一种基于视觉的垃圾分类数字化管理方法

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476760B (zh) * 2020-03-17 2024-05-10 平安科技(深圳)有限公司 医学图像的生成方法、装置、电子设备及介质
CN111815535B (zh) * 2020-07-14 2023-11-10 抖音视界有限公司 图像处理方法、装置、电子设备和计算机可读介质
CN113177592B (zh) * 2021-04-28 2022-07-08 上海硕恩网络科技股份有限公司 一种图像分割方法、装置、计算机设备及存储介质
CN113553938B (zh) * 2021-07-19 2024-05-14 黑芝麻智能科技(上海)有限公司 安全带检测方法、装置、计算机设备和存储介质
CN113887737B (zh) * 2021-09-23 2024-05-17 北京工商大学 一种基于机器学习的样本集自动生成方法
CN113689435B (zh) * 2021-09-28 2023-06-20 平安科技(深圳)有限公司 图像分割方法、装置、电子设备及存储介质
CN116760952B (zh) * 2023-08-17 2023-10-20 山东欣晖电力科技有限公司 基于无人机的电力铁塔维护巡检方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5751850A (en) * 1993-06-30 1998-05-12 International Business Machines Corporation Method for image segmentation and classification of image elements for documents processing
US20100296709A1 (en) * 2009-05-19 2010-11-25 Algotec Systems Ltd. Method and system for blood vessel segmentation and classification
CN103985114A (zh) * 2014-03-21 2014-08-13 南京大学 一种监控视频人物前景分割与分类的方法
CN108288265A (zh) * 2018-01-09 2018-07-17 东北大学 一种面向hcc病理图像细胞核的分割与分类方法
CN109214428A (zh) * 2018-08-13 2019-01-15 平安科技(深圳)有限公司 图像分割方法、装置、计算机设备及计算机存储介质
CN109886273A (zh) * 2019-02-26 2019-06-14 四川大学华西医院 一种cmr图像分割分类系统

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6751354B2 (en) * 1999-03-11 2004-06-15 Fuji Xerox Co., Ltd Methods and apparatuses for video segmentation, classification, and retrieval using image class statistical models
JP2005141453A (ja) * 2003-11-06 2005-06-02 Nippon Telegr & Teleph Corp <Ntt> 指紋画像処理方法,指紋画像処理装置,指紋画像処理プログラム記録媒体および指紋画像処理プログラム
JP4434868B2 (ja) * 2004-07-15 2010-03-17 日立ソフトウエアエンジニアリング株式会社 画像分割処理システム
JP4376145B2 (ja) * 2004-07-22 2009-12-02 日立ソフトウエアエンジニアリング株式会社 画像分類学習処理システム及び画像識別処理システム
US8503801B2 (en) * 2010-09-21 2013-08-06 Adobe Systems Incorporated System and method for classifying the blur state of digital image pixels
CN110222571B (zh) * 2019-05-06 2023-04-07 平安科技(深圳)有限公司 黑眼圈智能判断方法、装置及计算机可读存储介质
CN110309709A (zh) * 2019-05-20 2019-10-08 平安科技(深圳)有限公司 人脸识别方法、装置及计算机可读存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5751850A (en) * 1993-06-30 1998-05-12 International Business Machines Corporation Method for image segmentation and classification of image elements for documents processing
US20100296709A1 (en) * 2009-05-19 2010-11-25 Algotec Systems Ltd. Method and system for blood vessel segmentation and classification
CN103985114A (zh) * 2014-03-21 2014-08-13 南京大学 一种监控视频人物前景分割与分类的方法
CN108288265A (zh) * 2018-01-09 2018-07-17 东北大学 一种面向hcc病理图像细胞核的分割与分类方法
CN109214428A (zh) * 2018-08-13 2019-01-15 平安科技(深圳)有限公司 图像分割方法、装置、计算机设备及计算机存储介质
CN109886273A (zh) * 2019-02-26 2019-06-14 四川大学华西医院 一种cmr图像分割分类系统

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113221919B (zh) * 2021-05-19 2023-12-26 神华神东煤炭集团有限责任公司 一种煤泥颗粒图像的特征提取方法及电子设备
CN113221919A (zh) * 2021-05-19 2021-08-06 神华神东煤炭集团有限责任公司 一种煤泥颗粒图像的特征提取方法及电子设备
CN114283155B (zh) * 2021-11-23 2023-07-04 赣州好朋友科技有限公司 矿石图像的分割方法、装置及计算机可读存储介质
CN114283155A (zh) * 2021-11-23 2022-04-05 赣州好朋友科技有限公司 矿石图像的分割方法、装置及计算机可读存储介质
CN114298985A (zh) * 2021-12-16 2022-04-08 苏州凌云视界智能设备有限责任公司 缺陷检测方法、装置、设备及存储介质
CN114298985B (zh) * 2021-12-16 2023-12-22 苏州凌云光工业智能技术有限公司 缺陷检测方法、装置、设备及存储介质
CN114581345B (zh) * 2022-05-07 2022-07-05 广州骏天科技有限公司 一种基于自适应线性灰度化的图像增强方法及系统
CN114581345A (zh) * 2022-05-07 2022-06-03 广州骏天科技有限公司 一种基于自适应线性灰度化的图像增强方法及系统
CN115908458B (zh) * 2023-03-09 2023-05-12 国家海洋局南海标准计量中心 一种深海区域干涉条纹提取方法、装置及存储介质
CN115908458A (zh) * 2023-03-09 2023-04-04 国家海洋局南海标准计量中心 一种深海区域干涉条纹提取方法、装置及存储介质
CN116612138A (zh) * 2023-07-14 2023-08-18 威海职业学院(威海市技术学院) 基于图像处理的电气设备在线监测方法
CN116612138B (zh) * 2023-07-14 2023-09-19 威海职业学院(威海市技术学院) 基于图像处理的电气设备在线监测方法
CN117237383A (zh) * 2023-11-15 2023-12-15 山东智赢门窗科技有限公司 一种基于室内环境的智能门窗控制方法及系统
CN117237383B (zh) * 2023-11-15 2024-02-02 山东智赢门窗科技有限公司 一种基于室内环境的智能门窗控制方法及系统
CN117314940A (zh) * 2023-11-30 2023-12-29 诺伯特智能装备(山东)有限公司 基于人工智能的激光切割零件轮廓快速分割方法
CN117314940B (zh) * 2023-11-30 2024-02-02 诺伯特智能装备(山东)有限公司 基于人工智能的激光切割零件轮廓快速分割方法
CN117392465A (zh) * 2023-12-08 2024-01-12 聚真宝(山东)技术有限公司 一种基于视觉的垃圾分类数字化管理方法
CN117392465B (zh) * 2023-12-08 2024-03-22 聚真宝(山东)技术有限公司 一种基于视觉的垃圾分类数字化管理方法

Also Published As

Publication number Publication date
CN110853047A (zh) 2020-02-28
CN110853047B (zh) 2023-09-15

Similar Documents

Publication Publication Date Title
WO2021068330A1 (zh) 智能图像分割及分类方法、装置及计算机可读存储介质
WO2019109526A1 (zh) 人脸图像的年龄识别方法、装置及存储介质
WO2021008019A1 (zh) 姿态跟踪方法、装置及计算机可读存储介质
WO2016082277A1 (zh) 一种视频认证方法及装置
CN113283446B (zh) 图像中目标物识别方法、装置、电子设备及存储介质
WO2020253508A1 (zh) 异常细胞检测方法、装置及计算机可读存储介质
WO2018090937A1 (zh) 图像处理方法、终端及存储介质
CN110717497B (zh) 图像相似度匹配方法、装置及计算机可读存储介质
CN110765860A (zh) 摔倒判定方法、装置、计算机设备及存储介质
US20210174135A1 (en) Method of matching image and apparatus thereof, device, medium and program product
KR101912748B1 (ko) 확장성을 고려한 특징 기술자 생성 및 특징 기술자를 이용한 정합 장치 및 방법
JP2020515983A (ja) 対象人物の検索方法および装置、機器、プログラム製品ならびに媒体
WO2023082784A1 (zh) 一种基于局部特征注意力的行人重识别方法和装置
WO2019033570A1 (zh) 嘴唇动作分析方法、装置及存储介质
WO2020248848A1 (zh) 智能化异常细胞判断方法、装置及计算机可读存储介质
CN112001302B (zh) 基于人脸感兴趣区域分割的人脸识别方法
CN106503112B (zh) 视频检索方法和装置
CN111935487B (zh) 一种基于视频流检测的图像压缩方法及系统
CN112651953A (zh) 图片相似度计算方法、装置、计算机设备及存储介质
US9311523B1 (en) Method and apparatus for supporting object recognition
CN114494775A (zh) 视频切分方法、装置、设备及存储介质
WO2021115130A1 (zh) 脑出血点智能检测方法、装置、电子设备及存储介质
CN110705547B (zh) 图像内文字识别方法、装置及计算机可读存储介质
CN112348008A (zh) 证件信息的识别方法、装置、终端设备及存储介质
CN113228105A (zh) 一种图像处理方法、装置和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19948342

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19948342

Country of ref document: EP

Kind code of ref document: A1