TWI786330B - Image processing method, electronic device, and storage medium - Google Patents

Image processing method, electronic device, and storage medium

Info

Publication number
TWI786330B
Authority
TW
Taiwan
Prior art keywords
instance
image
instance segmentation
data
segmentation model
Prior art date
Application number
TW108133166A
Other languages
Chinese (zh)
Other versions
TW202013311A (en)
Inventor
李嘉輝
胡志强
Original Assignee
大陸商北京市商湯科技開發有限公司
Priority date
Filing date
Publication date
Application filed by 大陸商北京市商湯科技開發有限公司
Publication of TW202013311A
Application granted granted Critical
Publication of TWI786330B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Holography (AREA)

Abstract

Embodiments of the present application disclose an image processing method, an electronic device, and a storage medium. The method includes: acquiring N sets of instance segmentation output data, where the N sets of instance segmentation output data are instance segmentation output results obtained by processing an image with N instance segmentation models respectively, the N sets of instance segmentation output data have different data structures, and N is an integer greater than 1; obtaining integrated semantic data and integrated center region data of the image based on the N sets of instance segmentation output data, where the integrated semantic data indicates pixels of the image located in instance regions, and the integrated center region data indicates pixels of the image located in instance center regions; and obtaining an instance segmentation result of the image based on the integrated semantic data and the integrated center region data. In this way, the advantages of the individual instance segmentation models complement one another, and higher accuracy is achieved on the instance segmentation problem.

Description

Image processing method, electronic device, and storage medium

The present application relates to the field of computer vision, and in particular to an image processing method, an electronic device, and a storage medium.

Image processing is the technique of analyzing images with a computer to achieve a desired result. Image processing generally refers to digital image processing. A digital image is a large two-dimensional array captured with devices such as industrial cameras, video cameras, and scanners; the elements of the array are called pixels, and their values are called gray values. Image processing plays a very important role in many fields.

Embodiments of the present application provide an image processing method, an electronic device, and a storage medium.

A first aspect of the embodiments of the present application provides an image processing method, including: acquiring N sets of instance segmentation output data, where the N sets of instance segmentation output data are instance segmentation output results obtained by processing an image with N instance segmentation models respectively, the N sets of instance segmentation output data have different data structures, and N is an integer greater than 1; obtaining integrated semantic data and integrated center region data of the image based on the N sets of instance segmentation output data, where the integrated semantic data indicates pixels of the image located in instance regions, and the integrated center region data indicates pixels of the image located in instance center regions; and obtaining an instance segmentation result of the image based on the integrated semantic data and the integrated center region data of the image.

In an optional implementation, obtaining the integrated semantic data and the integrated center region data of the image based on the N sets of instance segmentation output data includes: for each of the N instance segmentation models, obtaining semantic data and center region data of the instance segmentation model based on the instance segmentation output data of that model; and obtaining the integrated semantic data and the integrated center region data of the image based on the semantic data and the center region data of each of the N instance segmentation models.

In an optional implementation, obtaining the semantic data and the center region data of the instance segmentation model based on the instance segmentation output data of the instance segmentation model includes: determining, based on the instance segmentation output data of the instance segmentation model, instance identification information corresponding to each of a plurality of pixels of the image in the instance segmentation model; and obtaining a semantic prediction value of each pixel in the instance segmentation model based on the instance identification information corresponding to each of the plurality of pixels in the instance segmentation model, where the semantic data of the instance segmentation model includes the semantic prediction value of each of the plurality of pixels of the image.

In an optional implementation, obtaining the semantic data and the center region data of the instance segmentation model based on the instance segmentation output data of the instance segmentation model further includes: determining, based on the instance segmentation output data of the instance segmentation model, at least two pixels of the image located in an instance region in the instance segmentation model; determining an instance center position of the instance segmentation model based on position information of the at least two pixels located in the instance region in the instance segmentation model; and determining an instance center region of the instance segmentation model based on the instance center position of the instance segmentation model and the position information of the at least two pixels.

In an optional implementation, before determining, based on the instance segmentation output data of the instance segmentation model, the at least two pixels of the image located in the instance region in the instance segmentation model, the method further includes: performing erosion processing on the instance segmentation output data of the instance segmentation model to obtain erosion data of the instance segmentation model. In this case, determining, based on the instance segmentation output data of the instance segmentation model, the at least two pixels of the image located in the instance region in the instance segmentation model includes: determining, based on the erosion data of the instance segmentation model, the at least two pixels of the image located in the instance region in the instance segmentation model.

In an optional implementation, determining the instance center position of the instance segmentation model based on the position information of the at least two pixels located in the instance region in the instance segmentation model includes: taking the average of the positions of the at least two pixels located in the instance region as the instance center position of the instance segmentation model.

In an optional implementation, determining the instance center region of the instance segmentation model based on the instance center position of the instance segmentation model and the position information of the at least two pixels includes: determining a maximum distance between the at least two pixels and the instance center position based on the instance center position of the instance segmentation model and the position information of the at least two pixels; determining a first threshold based on the maximum distance; and determining, among the at least two pixels, the pixels whose distance from the instance center position is less than or equal to the first threshold as pixels of the instance center region.

In an optional implementation, obtaining the integrated semantic data and the integrated center region data of the image based on the semantic data and the center region data of each of the N instance segmentation models includes: determining a semantic voting value of each of a plurality of pixels of the image based on the semantic data of each of the N instance segmentation models; and performing binarization processing on the semantic voting value of each of the plurality of pixels to obtain an integrated semantic value of each pixel of the image, where the integrated semantic data of the image includes the integrated semantic value of each of the plurality of pixels.

In an optional implementation, performing binarization processing on the semantic voting value of each of the plurality of pixels to obtain the integrated semantic value of each pixel of the image includes: determining a second threshold based on the number N of the plurality of instance segmentation models; and performing binarization processing on the semantic voting value of each of the plurality of pixels based on the second threshold to obtain the integrated semantic value of each pixel of the image.

In an optional implementation, the second threshold is N/2 rounded up.

In an optional implementation, obtaining the instance segmentation result of the image based on the integrated semantic data and the integrated center region data of the image includes: obtaining at least one instance center region of the image based on the integrated center region data of the image; and determining the instance to which each of a plurality of pixels of the image belongs based on the at least one instance center region and the integrated semantic data of the image.

In an optional implementation, determining the instance to which each of the plurality of pixels of the image belongs based on the at least one instance center region and the integrated semantic data of the image includes: performing a random walk based on the integrated semantic value of each of the plurality of pixels of the image and the at least one instance center region to obtain the instance to which each pixel belongs.

A second aspect of the embodiments of the present application provides an electronic device, including an acquisition module, a conversion module, and a segmentation module, where: the acquisition module is configured to acquire N sets of instance segmentation output data, where the N sets of instance segmentation output data are instance segmentation output results obtained by processing an image with N instance segmentation models respectively, the N sets of instance segmentation output data have different data structures, and N is an integer greater than 1; the conversion module is configured to obtain integrated semantic data and integrated center region data of the image based on the N sets of instance segmentation output data, where the integrated semantic data indicates pixels of the image located in instance regions, and the integrated center region data indicates pixels of the image located in instance center regions; and the segmentation module is configured to obtain an instance segmentation result of the image based on the integrated semantic data and the integrated center region data of the image.

In an optional implementation, the conversion module includes a first conversion unit and a second conversion unit, where: the first conversion unit is configured to, for each of the N instance segmentation models, obtain semantic data and center region data of the instance segmentation model based on the instance segmentation output data of that model; and the second conversion unit is configured to obtain the integrated semantic data and the integrated center region data of the image based on the semantic data and the center region data of each of the N instance segmentation models.

In an optional implementation, the first conversion unit is specifically configured to: determine, based on the instance segmentation output data of the instance segmentation model, instance identification information corresponding to each of a plurality of pixels of the image in the instance segmentation model; and obtain a semantic prediction value of each pixel in the instance segmentation model based on the instance identification information corresponding to each of the plurality of pixels in the instance segmentation model, where the semantic data of the instance segmentation model includes the semantic prediction value of each of the plurality of pixels of the image.

In an optional implementation, the first conversion unit is further specifically configured to: determine, based on the instance segmentation output data of the instance segmentation model, at least two pixels of the image located in an instance region in the instance segmentation model; determine an instance center position of the instance segmentation model based on position information of the at least two pixels located in the instance region in the instance segmentation model; and determine an instance center region of the instance segmentation model based on the instance center position of the instance segmentation model and the position information of the at least two pixels.

In an optional implementation, the conversion module further includes an erosion processing unit configured to perform erosion processing on the instance segmentation output data of the instance segmentation model to obtain erosion data of the instance segmentation model; and the first conversion unit is specifically configured to determine, based on the erosion data of the instance segmentation model, the at least two pixels of the image located in the instance region in the instance segmentation model.

In an optional implementation, the first conversion unit is specifically configured to take the average of the positions of the at least two pixels located in the instance region as the instance center position of the instance segmentation model.

In an optional implementation, the first conversion unit is further specifically configured to: determine a maximum distance between the at least two pixels and the instance center position based on the instance center position of the instance segmentation model and the position information of the at least two pixels; determine a first threshold based on the maximum distance; and determine, among the at least two pixels, the pixels whose distance from the instance center position is less than or equal to the first threshold as pixels of the instance center region.

In an optional implementation, the conversion module is specifically configured to: determine a semantic voting value of each of a plurality of pixels of the image based on the semantic data of the instance segmentation models; and perform binarization processing on the semantic voting value of each of the plurality of pixels to obtain an integrated semantic value of each pixel of the image, where the integrated semantic data of the image includes the integrated semantic value of each of the plurality of pixels.

In an optional implementation, the conversion module is further specifically configured to: determine a second threshold based on the number N of the plurality of instance segmentation models; and perform binarization processing on the semantic voting value of each of the plurality of pixels based on the second threshold to obtain the integrated semantic value of each pixel of the image.

In an optional implementation, the second threshold is N/2 rounded up.

A third aspect of the embodiments of the present application provides another electronic device, including a processor and a memory, where the memory is configured to store a computer program, the computer program is configured to be executed by the processor, and the processor is configured to perform some or all of the steps described in any method of the first aspect of the embodiments of the present application.

A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, where the computer-readable storage medium is configured to store a computer program, and the computer program causes a computer to perform some or all of the steps described in any method of the third aspect of the embodiments of the present application.

In embodiments of the present application, N sets of instance segmentation output data are acquired, where the N sets of instance segmentation output data are instance segmentation output results obtained by processing an image with N instance segmentation models respectively, the N sets of instance segmentation output data have different data structures, and N is an integer greater than 1. Integrated semantic data and integrated center region data of the image are then obtained based on the N sets of instance segmentation output data, where the integrated semantic data indicates pixels of the image located in instance regions, and the integrated center region data indicates pixels of the image located in instance center regions. An instance segmentation result of the image is then obtained based on the integrated semantic data and the integrated center region data of the image. In this way, for the instance segmentation problem in image processing, the advantages of the individual instance segmentation models complement one another, the models are no longer required to produce output data with the same structure or meaning, and higher accuracy is achieved on the instance segmentation problem.

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.

The terms "first", "second", and the like in the specification, claims, and drawings of the present application are used to distinguish different objects, rather than to describe a specific order. In addition, the terms "include" and "have" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product, or device.

Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.

The electronic device involved in the embodiments of the present application may allow multiple other terminal devices to access it. The electronic device includes a terminal device. In specific implementations, the terminal device includes, but is not limited to, portable devices such as a mobile phone, a laptop computer, or a tablet computer having a touch-sensitive surface (for example, a touch-screen display and/or a touchpad). It should also be understood that, in some embodiments, the terminal device may not be a portable communication device but a desktop computer having a touch-sensitive surface (for example, a touch-screen display and/or a touchpad).

Deep learning is a method in machine learning based on representation learning of data. An observation (for example, an image) can be represented in many ways, such as a vector of intensity values of each pixel, or more abstractly as a set of edges, regions of particular shapes, and so on. Using certain representations makes it easier to learn tasks (for example, face recognition or facial expression recognition) from examples. The advantage of deep learning is that efficient algorithms for unsupervised or semi-supervised feature learning and hierarchical feature extraction replace manual feature engineering. Deep learning is a new field in machine learning research; its motivation is to build and simulate neural networks that analyze and learn like the human brain, so that data such as images, sound, and text can be interpreted by imitating the mechanisms of the human brain.

Like machine learning methods in general, deep machine learning methods are also divided into supervised learning and unsupervised learning, and the learning models built under different learning frameworks differ considerably. For example, a convolutional neural network (CNN) is a machine learning model under deep supervised learning, which may also be called a deep-learning-based network structure model, while a deep belief network (Deep Belief Net, DBN) is a machine learning model under unsupervised learning.

The embodiments of the present application are described in detail below. It should be understood that the embodiments of the present disclosure may be applied to cell nucleus segmentation of an image or other types of instance segmentation, for example instance segmentation of any object having a closed structure, which is not limited by the embodiments of the present disclosure.

Please refer to FIG. 1, which is a schematic flowchart of an image processing method disclosed in an embodiment of the present application. The method may be performed by any electronic device, such as a terminal device, a server, or a processing platform, which is not limited by the embodiments of the present disclosure. As shown in FIG. 1, the image processing method includes the following steps.

101. Acquire N sets of instance segmentation output data, where the N sets of instance segmentation output data are instance segmentation output results obtained by processing an image with N instance segmentation models respectively, the N sets of instance segmentation output data have different data structures, and N is an integer greater than 1.

First, the instance segmentation problem in image processing is defined as follows: for an input image, an independent decision is made for each pixel, determining both the semantic category to which it belongs and its instance ID. For example, if an image contains three cell nuclei 1, 2, and 3, their semantic category is the same ("cell nucleus"), but the instance segmentation result treats them as different objects.

In some possible implementations, instance segmentation can rely on a convolutional neural network (CNN), mainly with the following two algorithmic variants: MaskRCNN (Mask Regions with CNN features) and object instance segmentation frameworks built on the fully convolutional network (FCN). The disadvantage of MaskRCNN is its large number of parameters: for a specific problem, practitioners need a high level of expertise to obtain good results, and the method runs slowly. FCN requires special image post-processing to separate touching objects of the same semantic category into multiple instances, which also requires a high level of expertise from practitioners.

In another possible implementation, instance segmentation may also be implemented by other instance segmentation algorithms, for example machine learning models such as a support-vector-machine-based instance segmentation algorithm; the embodiments of the present disclosure do not limit the specific implementation of the instance segmentation model.

Different instance segmentation models each have their own advantages and disadvantages. The embodiments of the present disclosure combine the advantages of different single models by integrating multiple instance segmentation models.

In embodiments of the present application, optionally, before step 101 is performed, different instance segmentation models may be used to process the image separately, for example processing the image with MaskRCNN and FCN respectively to obtain instance segmentation output results. Assuming there are N instance segmentation models, the instance segmentation results of the instance segmentation models (hereinafter referred to as instance segmentation output data) can be obtained, that is, N sets of instance segmentation output data are obtained. Alternatively, the N sets of instance segmentation output data may be acquired from other devices; the embodiments of the present disclosure do not limit the manner of acquiring the N sets of instance segmentation output data.

Optionally, before the image is processed with the instance segmentation models, the image may also be preprocessed, for example with one or any combination of contrast and/or grayscale adjustment, cropping, horizontal and vertical flipping, rotation, scaling, and noise removal, so that the preprocessed image meets the requirements of the instance segmentation models on the input image; this is not limited by the embodiments of the present disclosure.

In the embodiments of the present disclosure, the instance segmentation output data produced by the N instance segmentation models may have different data structures or meanings. For example, for an input image of dimensions [height, width, 3], the instance segmentation output data includes data of dimensions [height, width], where an instance ID of 0 denotes background and different numbers greater than 0 denote different instances. Suppose there are three instance segmentation models, each corresponding to a different algorithm or neural network structure: the instance segmentation output data of the first model is a three-class probability map of [boundary, object, background]; the instance segmentation output data of the second model is a two-class probability map of [boundary, background] together with a two-class map of dimensions [object, background]; the instance segmentation output data of the third model is a three-class probability map of [center region, whole object, background]; and so on. Different instance segmentation models produce data outputs with different meanings. In this situation, no weighted-averaging algorithm can be used to integrate the outputs of the individual instance segmentation models to obtain a more stable, higher-precision result. The method in the embodiments of the present application can perform integration across instance segmentation models on the basis of these N sets of instance segmentation output data with different data structures.
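The following is a minimal sketch, assuming NumPy arrays, of how the three hypothetical output structures described above might be collected before integration; the array names, shapes, and random contents are illustrative only, not the actual model outputs.

```python
import numpy as np

H, W = 256, 256  # assumed image height and width

# Model 1: three-class probability map [boundary, object, background]
out_model_1 = np.random.rand(H, W, 3)
out_model_1 /= out_model_1.sum(axis=-1, keepdims=True)  # normalize to probabilities

# Model 2: two-class probability map [boundary, background] plus a binary [object, background] map
out_model_2 = {
    "boundary_vs_background": np.random.rand(H, W, 2),
    "object_mask": np.random.randint(0, 2, (H, W)),
}

# Model 3: three-class probability map [center region, whole object, background]
out_model_3 = np.random.rand(H, W, 3)

# The N sets of output data have different structures and meanings, so they cannot
# simply be averaged; each is first converted to semantic data and center region data
# before integration, as described in the following steps.
outputs = [out_model_1, out_model_2, out_model_3]
```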

After the N sets of instance segmentation output data are acquired, step 102 may be performed.

102. Obtain integrated semantic data and integrated center region data of the image based on the N sets of instance segmentation output data, where the integrated semantic data indicates pixels of the image located in instance regions, and the integrated center region data indicates pixels of the image located in instance center regions.

Specifically, the electronic device may convert the N sets of instance segmentation output data to obtain the integrated semantic data and the integrated center region data of the image.

Semantic segmentation, as mentioned in the embodiments of the present application, is a basic task in computer vision. In semantic segmentation, the visual input needs to be divided into different semantically interpretable categories, that is, categories that are meaningful in the real world. An image is composed of many pixels, and semantic segmentation, as the name implies, groups/segments the pixels according to the different semantic meanings they express in the image. For example, it may be necessary to identify all pixels in an image that belong to cars and color those pixels blue.

Pixel-level semantic segmentation assigns a corresponding category to each pixel in the image, that is, it achieves pixel-level classification. A concrete object of a category is an instance, so instance segmentation in fact not only performs pixel-level classification but also distinguishes different instances within a specific category. For example, if there are three persons A, B, and C in an image, the semantic segmentation result for all of them is "person", while the instance segmentation result treats them as different objects.

The instance region can be understood as the region where the instances in the image are located, that is, the region other than the background region. The integrated semantic data then indicates the pixels of the image located in instance regions; for example, in cell nucleus segmentation, the integrated semantic data may include the decision result for pixels located in cell nucleus regions.

The integrated center region data may indicate the pixels of the image located in instance center regions.

A small region to which the instance center belongs may be defined as the instance center region; that is, the instance center region is a region inside the instance region and smaller than the instance region, and the geometric center of the instance center region overlaps with or is close to the geometric center of the instance region. For example, the center of the instance center region is the instance center. Optionally, the instance center region may be a circle, an ellipse, or another shape, and may be set as required; the embodiments of the present application do not limit the specific implementation of the instance center region.

Specifically, the semantic data and the center region data of each of the N instance segmentation models may first be obtained based on the instance segmentation output data of that model, yielding N sets of semantic data and N sets of center region data in total; integration processing is then performed based on the semantic data and the center region data of each of the N instance segmentation models to obtain the integrated semantic data and the integrated center region data of the image.

For the instance segmentation output data of each of the N instance segmentation models, the instance identification information (instance ID) corresponding to each pixel in that model may be determined, and the semantic prediction value of each pixel in the instance segmentation model is then obtained based on the instance identification information corresponding to each of the plurality of pixels in the instance segmentation model. The semantic data of the instance segmentation model includes the semantic prediction value of each of the plurality of pixels of the image.
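As a minimal sketch, assuming a model's output has been reduced to an instance ID map in which 0 denotes background, the per-pixel semantic prediction values can be derived directly from the instance identification information:

```python
import numpy as np

def semantic_from_instance_ids(instance_id_map):
    """Derive per-pixel semantic prediction values from an instance ID map.

    Pixels with an instance ID greater than 0 are predicted to lie in an
    instance region (semantic value 1); pixels with ID 0 are background (0).
    """
    return (instance_id_map > 0).astype(np.uint8)

# Example: a 4x4 instance ID map containing two instances (IDs 1 and 2)
instance_ids = np.array([
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 2, 2],
    [0, 0, 2, 2],
])
semantic = semantic_from_instance_ids(instance_ids)
```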

Binarization (thresholding), as mentioned in the embodiments of the present application, is a simple method of image segmentation. Binarization converts a grayscale image into a binary image: the gray value of pixels above a certain critical gray value is set to the maximum gray value, and the gray value of pixels below this value is set to the minimum gray value, thereby achieving binarization.

In the embodiments of the present disclosure, the binarization processing may be binarization with a fixed threshold or binarization with an adaptive threshold, for example the bimodal method, the P-parameter method, the iterative method, or the OTSU method; the embodiments of the present disclosure do not limit the specific implementation of the binarization processing.

In the embodiments of the present disclosure, the semantic prediction result of each of the plurality of pixels included in a first image may be obtained by processing the first image. In some possible implementations, the semantic prediction result of a pixel is obtained by comparing the semantic prediction value of the pixel with the first threshold. Optionally, the first threshold of the binarization processing may be preset or determined according to the actual situation, which is not limited by the embodiments of the present disclosure.
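A minimal fixed-threshold sketch of the binarization described above, assuming per-pixel semantic prediction values in [0, 1] and an illustrative threshold of 0.5 (the actual threshold may be preset or adaptive, as noted):

```python
import numpy as np

def binarize(pred_values, threshold=0.5):
    """Fixed-threshold binarization: values above the threshold become 1, the rest become 0."""
    return (pred_values > threshold).astype(np.uint8)

pred = np.array([[0.1, 0.8],
                 [0.6, 0.3]])
binary = binarize(pred)  # [[0, 1], [1, 0]]
```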

After the integrated semantic data and the integrated center region data of the image are obtained, step 103 may be performed.

103. Obtain an instance segmentation result of the image based on the integrated semantic data and the integrated center region data of the image.

In some possible implementations, at least one instance center region of the image may be obtained based on the integrated center region data of the image, and the instance to which each of the plurality of pixels of the image belongs is determined based on the at least one instance center region and the integrated semantic data of the image.

The integrated semantic data indicates at least one pixel of the image located in an instance region; for example, the integrated semantic data may include an integrated semantic value of each of the plurality of pixels of the image, where the integrated semantic value indicates whether the pixel is located in an instance region, or indicates that the pixel is located in an instance region or in the background region. The integrated center region data indicates at least one pixel of the image located in an instance center region; for example, the integrated center region data includes an integrated center region prediction value of each of the plurality of pixels of the image, where the integrated center region prediction value indicates whether the pixel is located in an instance center region.

Optionally, at least one pixel included in the instance regions of the image can be determined from the integrated semantic data, and at least one pixel included in the instance center regions of the image can be determined from the integrated center region data. Based on the integrated center region data and the integrated semantic data of the image, the instance to which each of the plurality of pixels of the image belongs can then be determined, and the instance segmentation result of the image is obtained.
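A sketch of this assignment step, under the assumption that the integrated semantic data and the integrated center region data are binary masks and that SciPy and scikit-image are available: each connected component of the integrated center region mask is used as a seed, and a random walk (as mentioned in the optional implementation above) propagates the seed labels over the foreground pixels. The `beta` value is an arbitrary illustrative choice.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import random_walker

def assign_instances(integrated_semantic, integrated_center, image):
    """Assign each foreground pixel to an instance.

    integrated_semantic: binary mask of pixels located in instance regions.
    integrated_center:   binary mask of pixels located in instance center regions.
    image:               grayscale image guiding the random walk.
    """
    # Each connected component of the integrated center region mask is one instance seed.
    seeds, num_instances = ndimage.label(integrated_center)

    # Unlabeled foreground pixels are 0; background pixels are excluded with -1.
    markers = seeds.astype(np.int32)
    markers[integrated_semantic == 0] = -1

    # The random walk propagates the seed labels over the remaining foreground pixels.
    labels = random_walker(image.astype(np.float64), markers, beta=130, mode='bf')
    labels[labels == -1] = 0  # map excluded background pixels back to 0
    return labels, num_instances
```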

The instance segmentation result obtained by the above method integrates the instance segmentation output results of the N instance segmentation models, combines the advantages of the different instance segmentation models, no longer requires the different instance segmentation models to produce data output with the same meaning, and improves instance segmentation accuracy.

In embodiments of the present application, N sets of instance segmentation output data are acquired, where the N sets of instance segmentation output data are instance segmentation output results obtained by processing an image with N instance segmentation models respectively, the N sets of instance segmentation output data have different data structures, and N is an integer greater than 1. Integrated semantic data and integrated center region data of the image are then obtained based on the N sets of instance segmentation output data, where the integrated semantic data indicates pixels of the image located in instance regions, and the integrated center region data indicates pixels of the image located in instance center regions. An instance segmentation result of the image is then obtained based on the integrated semantic data and the integrated center region data of the image. In this way, for the instance segmentation problem in image processing, the advantages of the individual instance segmentation models complement one another, the models are no longer required to produce output data with the same structure or meaning, and higher accuracy is achieved on the instance segmentation problem.

Please refer to FIG. 2, which is a schematic flowchart of another image processing method disclosed in an embodiment of the present application; FIG. 2 is obtained by further optimization on the basis of FIG. 1. The method may be performed by any electronic device, such as a terminal device, a server, or a processing platform, which is not limited by the embodiments of the present disclosure. As shown in FIG. 2, the image processing method includes the following steps.

201. Acquire N sets of instance segmentation output data, where the N sets of instance segmentation output data are instance segmentation output results obtained by processing an image with N instance segmentation models respectively, the N sets of instance segmentation output data have different data structures, and N is an integer greater than 1.

For step 201, reference may be made to the detailed description in step 101 of the embodiment shown in FIG. 1, which will not be repeated here.

202. Based on the instance segmentation output data of the instance segmentation model, determine at least two pixels of the image located in an instance region in the instance segmentation model.

A small region to which the instance center belongs may be defined as the instance center region; that is, the instance center region is a region inside the instance region and smaller than the instance region, and the geometric center of the instance center region overlaps with or is close to the geometric center of the instance region. For example, the center of the instance center region is the instance center. Optionally, the instance center region may be a circle, an ellipse, or another shape, and may be set as required; the embodiments of the present application do not limit the specific implementation of the instance center region. Optionally, the instance segmentation output data may include instance identification information corresponding to each of at least two pixels of the image located in instance regions; for example, the instance ID may be an integer greater than 0 such as 1, 2, or 3, or some other value, while the instance identification information corresponding to pixels located in the background region may be a preset value, or pixels located in the background region may not correspond to any instance identification information. In this way, at least two pixels of the image located in instance regions can be determined based on the instance identification information corresponding to each of the plurality of pixels in the instance segmentation output data.

Optionally, the instance segmentation output data may not include the instance identification information corresponding to each pixel. In this case, the at least two pixels of the image located in instance regions can be obtained by processing the instance segmentation output data, which is not limited by the embodiments of the present disclosure.

After the at least two pixels of the image located in the instance region are determined, step 203 may be performed.

203. Determine an instance center position of the instance segmentation model based on the position information of the at least two pixels located in the instance region in the instance segmentation model.

After the at least two pixels located in the instance region in the instance segmentation model are determined, position information of the at least two pixels can be obtained, where, optionally, the position information may include the coordinates of the pixels in the image, although the embodiments of the present disclosure are not limited thereto.

The instance center position of the instance segmentation model may be determined according to the position information of the at least two pixels. The instance center position is not limited to the geometric center of the instance; rather, it is a predicted center position within the instance region, used to further determine the instance center region, and can be understood as any position within the instance center region.

Optionally, the average of the positions of the at least two pixels located in the instance region may be used as the instance center position of the instance segmentation model.

Specifically, the coordinates of the at least two pixels located in the instance region may be averaged and used as the coordinates of the instance center position of the instance segmentation model, thereby determining the instance center position.
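A minimal sketch of taking the average of the coordinates of the pixels in an instance region as the instance center position, assuming a binary instance mask:

```python
import numpy as np

def instance_center(instance_mask):
    """Average the (row, col) coordinates of all pixels in the instance region."""
    coords = np.argwhere(instance_mask > 0)   # positions of pixels located in the instance region
    return coords.mean(axis=0)                # instance center position as (row, col)

mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1
center = instance_center(mask)  # array([2., 2.])
```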

204. Determine an instance center region of the instance segmentation model based on the instance center position of the instance segmentation model and the position information of the at least two pixels.

Specifically, the maximum distance between the at least two pixels and the instance center position may be determined based on the instance center position of the instance segmentation model and the position information of the at least two pixels; a first threshold is then determined based on the maximum distance; and, among the at least two pixels, the pixels whose distance from the instance center position is less than or equal to the first threshold may be determined as pixels of the instance center region.

For example, the distance from each pixel to the instance center position (the pixel distance) may be calculated based on the instance center position of the instance segmentation model and the position information of the at least two pixels. The electronic device may be preconfigured with a rule for the first threshold; for example, the first threshold may be set to 30% of the maximum of the pixel distances. After the maximum pixel distance is determined, the first threshold can be computed; on this basis, the pixels whose pixel distance is smaller than the first threshold are retained and determined as the pixels of the instance center region, that is, the instance center region is determined.
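Continuing the sketch above, the first threshold and the instance center region could be computed as follows; the 30% factor follows the example in the text and is a configurable assumption.

```python
import numpy as np

def instance_center_region(instance_mask, ratio=0.3):
    """Keep the pixels whose distance to the instance center is at most
    `ratio` times the maximum pixel-to-center distance (the first threshold)."""
    coords = np.argwhere(instance_mask > 0)
    center = coords.mean(axis=0)                       # instance center position
    dists = np.linalg.norm(coords - center, axis=1)    # pixel distances to the center
    first_threshold = ratio * dists.max()              # first threshold from the maximum distance

    center_region = np.zeros_like(instance_mask)
    kept = coords[dists <= first_threshold]
    center_region[kept[:, 0], kept[:, 1]] = 1          # binary mask of the instance center region
    return center_region
```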

Optionally, erosion processing may also be performed on a sample image to obtain an eroded sample image, and the instance center region is determined based on the eroded sample image.

The erosion operation on an image probes the image with a structuring element in order to find the regions inside the image where the structuring element fits. The image erosion processing mentioned in the embodiments of the present application may include this erosion operation, which is the process of translating the structuring element over the eroded image. After erosion, the foreground region of the image shrinks and the region boundary becomes blurred, while some relatively small isolated foreground regions are completely eroded away, achieving a filtering effect.

For example, for each instance mask, image erosion is first performed on the mask with a 5×5 kernel; then the coordinates of the pixel points included in the instance are averaged to obtain the center position of the instance, the maximum distance from all pixel points in the instance to that center position is determined, and the pixel points whose distance to the center position is less than 30% of this maximum distance are determined as the pixel points of the instance center region, yielding the center region of the instance. In this way, after the instance mask in the sample image is shrunk by one ring, image binarization is performed to obtain the binary mask of the predicted center region.
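The erosion step alone could be sketched as follows, where the use of SciPy's `binary_erosion` with a 5×5 structuring element is an assumption; any equivalent morphological erosion could be substituted before reapplying the centroid and 30%-distance selection shown above.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def erode_instance_mask(instance_mask: np.ndarray) -> np.ndarray:
    """Shrink the instance mask by one ring using a 5x5 structuring element."""
    return binary_erosion(instance_mask.astype(bool), structure=np.ones((5, 5), dtype=bool))
```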

In addition, optionally, relative position information between a pixel point and the instance center, for example a vector from the pixel point to the instance center, may be obtained based on the coordinates of the pixel points contained in an instance annotated in the sample image and the center position of that instance, and this relative position information may be used as supervision for training the neural network; however, the embodiments of the present disclosure are not limited thereto.

205. Determine a semantic voting value of each of the plurality of pixel points of the image based on the semantic data of each of the N instance segmentation models.

The electronic device may perform semantic voting on each of the plurality of pixel points based on the semantic data of each of the N instance segmentation models to determine the semantic voting value of each of the plurality of pixel points of the image. For example, sliding-window-based voting may be used to process the semantic data of the instance segmentation models and determine the semantic voting value of each pixel point, after which step 206 can be performed.
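One possible reading of this voting step is sketched below, assuming the semantic data of each model has already been reduced to a binary foreground mask of the same shape; this is an assumption for illustration and not the exact sliding-window procedure of the present application.

```python
from typing import List
import numpy as np

def semantic_votes(semantic_masks: List[np.ndarray]) -> np.ndarray:
    """Per-pixel voting value: the number of models that predict the pixel as foreground."""
    return np.sum([m.astype(np.int32) for m in semantic_masks], axis=0)
```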

206. Binarize the semantic voting value of each of the plurality of pixel points to obtain an integrated semantic value of each pixel point in the image, where the integrated semantic data of the image includes the integrated semantic value of each of the plurality of pixel points.

The semantic voting values determined above come from the N instance segmentation models. Further, the semantic voting value of each pixel point can be binarized to obtain the integrated semantic value of each pixel point in the image, which can be understood as adding up the semantic masks obtained by the different instance segmentation models to obtain an integrated semantic mask.

Specifically, a second threshold may be determined based on the number N of instance segmentation models, and the semantic voting value of each of the plurality of pixel points may be binarized based on the second threshold to obtain the integrated semantic value of each pixel point in the image.

The voting value of each of the plurality of pixel points can take as many possible values as there are instance segmentation models, and the second threshold can be determined based on the number N of instance segmentation models; for example, the second threshold may be the result of rounding N/2 up to the nearest integer.

The second threshold can serve as the decision criterion for the binarization in this step, yielding the integrated semantic value of each pixel point in the image. The method of calculating the second threshold can be stored in the electronic device; for example, the preset pixel threshold is N/2, rounded up if N/2 is not an integer. For example, with 4 sets of instance segmentation output data obtained from 4 instance segmentation models, N = 4 and 4/2 = 2, so the second threshold is 2. The semantic voting value is compared with the second threshold: voting values greater than or equal to 2 are truncated to 1 and values less than 2 are truncated to 0, which gives the integrated semantic value of each pixel point in the image. The output data at this point may specifically be an integrated semantic binary map. The integrated semantic value can be understood as the semantic segmentation result of each pixel point, on the basis of which the instance to which the pixel point belongs can be determined, realizing instance segmentation.
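Continuing the assumptions of the previous sketch, the binarization with the second threshold of ceil(N/2) could look as follows.

```python
import math
import numpy as np

def integrate_semantic(votes: np.ndarray, num_models: int) -> np.ndarray:
    """Binarize the voting values with the second threshold ceil(N/2)."""
    second_threshold = math.ceil(num_models / 2)   # e.g. N = 4 gives a threshold of 2
    return (votes >= second_threshold).astype(np.uint8)
```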

207. Perform a random walk based on the integrated semantic value of each of the plurality of pixel points of the image and the at least one instance center region to obtain the instance to which each pixel point belongs.

A random walk means that future steps and directions cannot be predicted from past behavior. Its core idea is that every conserved quantity carried by a random walker corresponds to a diffusion transport law; it is close to Brownian motion and is the idealized mathematical model of Brownian motion. The basic idea of the random walk for image processing in the embodiments of the present application is to regard the image as a connected, weighted, undirected graph composed of fixed vertices and edges, start random walks from the unlabeled vertices, and take the probability of first reaching each kind of labeled vertex as the probability that the unlabeled point belongs to that labeled class; the label of the class with the highest probability is assigned to the unlabeled vertex, completing the segmentation.

Based on the integrated semantic value of each of the plurality of pixel points of the image and the at least one instance center region, a random walk is used to decide the assignment of each pixel point according to its integrated semantic value, thereby obtaining the instance to which each pixel point belongs; for example, the instance corresponding to the instance center region closest to a pixel point may be determined as the instance to which that pixel point belongs. In the embodiments of the present application, the final integrated semantic map and integrated center region map are obtained, and a specific implementation of the connected-region search and random walk (nearest assignment) is combined with them to determine the pixel assignment of each instance and obtain the final instance segmentation result.
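A minimal sketch of the nearest-assignment variant mentioned above is given below, assuming the instance center regions are available as binary masks; a full random-walk solver would replace the per-pixel distance loop, and all names here are illustrative.

```python
from typing import List
import numpy as np

def assign_to_nearest_center(integrated_semantic: np.ndarray,
                             center_regions: List[np.ndarray]) -> np.ndarray:
    """Label each foreground pixel with the id (1..K) of the closest instance center region."""
    # centroid of each center region, assuming at least one region exists
    centers = [np.array(np.nonzero(r)).mean(axis=1) for r in center_regions]
    labels = np.zeros(integrated_semantic.shape, dtype=np.int32)
    ys, xs = np.nonzero(integrated_semantic)
    for y, x in zip(ys, xs):
        dists = [np.hypot(y - cy, x - cx) for cy, cx in centers]
        labels[y, x] = int(np.argmin(dists)) + 1       # background pixels stay 0
    return labels
```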

The instance segmentation result obtained by the above method integrates the instance segmentation output results of the N instance segmentation models and combines the advantages of these models; it no longer requires the different instance segmentation models to output continuous probability maps with the same meaning, and it improves instance segmentation accuracy.

The method in the embodiments of the present application is applicable to any instance segmentation problem, for example in clinical auxiliary diagnosis. After a doctor obtains a digitally scanned image of a patient's organ tissue slice and inputs the image into the processing steps of the embodiments of the present application, a pixel mask of each individual cell nucleus can be obtained, on the basis of which the doctor can calculate the cell density and cell morphology of the organ and reach a medical judgment. As another example, after a beekeeper obtains an image of bees flying densely around a hive, this algorithm can be used to obtain an instance pixel mask of each individual bee, enabling macroscopic bee counting, behavior pattern analysis, and so on, which has great practical value.

In a specific application of the embodiments of the present application, the UNet model may preferably be applied for the bottom-up approach. UNet was originally developed for semantic segmentation and effectively fuses information from multiple scales. For the top-down approach, the Mask R-CNN model may be applied; Mask R-CNN extends Faster R-CNN by adding a head for the segmentation task. Furthermore, Mask R-CNN aligns the extracted features with the input via bilinear interpolation, avoiding any quantization. Such alignment is very important for pixel-level tasks such as instance segmentation.

The network structure of the UNet model consists of a contracting path and an expanding path. The contracting path is used to capture context, the expanding path is used for precise localization, and the two paths are symmetric to each other. The network can be trained end to end from very few images and outperforms the previous best method (a sliding-window convolutional network) at segmenting cellular structures such as neurons in electron microscopy images. In addition, it also runs very fast.

In a specific implementation, the UNet and Mask R-CNN models may be used to perform segmentation prediction on the instances to obtain the semantic mask of each instance segmentation model, and these masks are integrated through pixel voting. The center mask of each instance segmentation model is then computed through erosion processing, and the center masks are integrated. Finally, a random walk algorithm is used to obtain the instance segmentation result from the integrated semantic mask and the integrated center mask.
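To illustrate how the integrated center mask could be split into individual instance center regions before the final assignment, the following sketch uses connected-component labeling; the voting and threshold choices mirror the semantic integration above and are assumptions rather than the only possible implementation.

```python
import math
from typing import List
import numpy as np
from scipy.ndimage import label

def integrate_and_label_centers(center_masks: List[np.ndarray], num_models: int):
    """Vote the per-model center masks, binarize with ceil(N/2), and split the result
    into connected components, one component per instance center region."""
    votes = np.sum([m.astype(np.int32) for m in center_masks], axis=0)
    integrated_centers = votes >= math.ceil(num_models / 2)
    labeled, num_instances = label(integrated_centers)   # connected-region search
    return labeled, num_instances
```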

The above results can be evaluated by cross-validation. Cross-validation is mainly used in modeling applications: most of the given modeling samples are used to build the model, a small portion is kept to make predictions with the model just built, the prediction error on this small portion is computed, and the sum of squared errors is recorded. In the embodiments of the present application, 3-fold cross-validation can be used for evaluation. Combining three UNet models with AJI(5) scores of 0.605, 0.599, and 0.589 with a Mask R-CNN model with an AJI(5) score of 0.565, the result obtained with the method of the embodiments of the present application reaches a final AJI(5) score of 0.616, which shows that the image processing method of the present application has a clear advantage.

In the embodiments of the present application, N sets of instance segmentation output data are acquired, where the N sets of instance segmentation output data are the instance segmentation output results obtained by N instance segmentation models processing an image, the N sets of instance segmentation output data have different data structures, and N is an integer greater than 1. Based on the instance segmentation output data of an instance segmentation model, at least two pixel points located in an instance region of the image are determined for that instance segmentation model; based on the position information of the at least two pixel points located in the instance region, the instance center position of the instance segmentation model is determined; based on the instance center position of the instance segmentation model and the position information of the at least two pixel points, the instance center region of the instance segmentation model is determined; based on the semantic data of each of the N instance segmentation models, the semantic voting value of each of the plurality of pixel points of the image is determined; the semantic voting value of each of the plurality of pixel points is binarized to obtain the integrated semantic value of each pixel point in the image, where the integrated semantic data of the image includes the integrated semantic value of each of the plurality of pixel points; and a random walk is performed based on the integrated semantic value of each of the plurality of pixel points of the image and the at least one instance center region to obtain the instance to which each pixel point belongs. In the instance segmentation problem of image processing, this realizes the complementary advantages of the individual instance segmentation models, no longer requires the models to output data with the same structure or meaning, and achieves higher accuracy in the instance segmentation problem.

Please refer to FIG. 3, which is a schematic diagram of an image representation of cell instance segmentation disclosed in an embodiment of the present application. As shown in the figure, taking cell instance segmentation as an example, processing with the method in the embodiments of the present application yields instance segmentation results with higher accuracy. N instance segmentation models (only 4 are shown in the figure) each produce an instance prediction mask for the input image (different colors in the figure represent different cell instances). The instance prediction masks are converted into semantic masks obtained by semantic prediction segmentation and center region masks obtained by center prediction segmentation, pixel voting is performed on each, and the results are then integrated to finally obtain the instance segmentation result. It can be seen that in this process the error of method 1, which missed two of the three cells on the right, is corrected; the error of method 2, in which the two cells in the middle were merged, is corrected; and the case missed by all four methods, in which the lower-left corner actually contains three cells with a small cell in between, is also recovered. This integration method can be applied on top of arbitrary instance segmentation models and combines the advantages of the different methods. The above example makes the specific process of the foregoing embodiments and its advantages clearer.

The foregoing has introduced the solutions of the embodiments of the present application mainly from the perspective of the execution process on the method side. It can be understood that, in order to realize the above functions, the electronic device includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art should readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is performed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present application.

The embodiments of the present application may divide the electronic device into functional units according to the above method examples; for example, each functional unit may correspond to one function, or two or more functions may be integrated into one processing unit. The integrated unit can be implemented either in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of the present application is schematic and is only a division by logical function; there may be other division manners in actual implementation.

Please refer to FIG. 4, which is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application. As shown in FIG. 4, the electronic device 400 includes an acquisition module 410, a conversion module 420, and a segmentation module 430, where: the acquisition module 410 is configured to acquire N sets of instance segmentation output data, where the N sets of instance segmentation output data are the instance segmentation output results obtained by N instance segmentation models processing an image, the N sets of instance segmentation output data have different data structures, and N is an integer greater than 1; the conversion module 420 is configured to obtain integrated semantic data and integrated center region data of the image based on the N sets of instance segmentation output data, where the integrated semantic data indicates the pixel points located in instance regions in the image and the integrated center region data indicates the pixel points located in instance center regions in the image; and the segmentation module 430 is configured to obtain the instance segmentation result of the image based on the integrated semantic data and the integrated center region data of the image.

Optionally, the conversion module 420 includes a first conversion unit 421 and a second conversion unit 422, where: the first conversion unit 421 is configured to obtain the semantic data and the center region data of each instance segmentation model based on the instance segmentation output data of each of the N instance segmentation models; and the second conversion unit 422 is configured to obtain the integrated semantic data and the integrated center region data of the image based on the semantic data and the center region data of each of the N instance segmentation models.

Optionally, the first conversion unit 421 is specifically configured to: determine, based on the instance segmentation output data of an instance segmentation model, the instance identification information corresponding to each of the plurality of pixel points of the image in that instance segmentation model; and obtain, based on the instance identification information corresponding to each of the plurality of pixel points in the instance segmentation model, the semantic prediction value of each pixel point in the instance segmentation model, where the semantic data of the instance segmentation model includes the semantic prediction value of each of the plurality of pixel points of the image.

Optionally, the first conversion unit 421 is further specifically configured to: determine, based on the instance segmentation output data of the instance segmentation model, at least two pixel points of the image located in an instance region in the instance segmentation model; determine the instance center position of the instance segmentation model based on the position information of the at least two pixel points located in the instance region; and determine the instance center region of the instance segmentation model based on the instance center position of the instance segmentation model and the position information of the at least two pixel points.

Optionally, the conversion module 420 further includes an erosion processing unit 423, configured to perform erosion processing on the instance segmentation output data of the instance segmentation model to obtain erosion data of the instance segmentation model; the first conversion unit 421 is specifically configured to determine, based on the erosion data of the instance segmentation model, at least two pixel points of the image located in an instance region in the instance segmentation model.

Optionally, the first conversion unit 421 is specifically configured to take the average of the positions of the at least two pixel points located in the instance region as the instance center position of the instance segmentation model.

Optionally, the first conversion unit 421 is further specifically configured to: determine, based on the instance center position of the instance segmentation model and the position information of the at least two pixel points, the maximum distance between the at least two pixel points and the instance center position; determine a first threshold based on the maximum distance; and determine, among the at least two pixel points, the pixel points whose distance to the instance center position is less than or equal to the first threshold as the pixel points of the instance center region.

Optionally, the conversion module 420 is specifically configured to: determine the semantic voting value of each of the plurality of pixel points of the image based on the semantic data of each of the N instance segmentation models; and binarize the semantic voting value of each of the plurality of pixel points to obtain the integrated semantic value of each pixel point in the image, where the integrated semantic data of the image includes the integrated semantic value of each of the plurality of pixel points.

Optionally, the conversion module 420 is further specifically configured to: determine a second threshold based on the number N of instance segmentation models; and binarize the semantic voting value of each of the plurality of pixel points based on the second threshold to obtain the integrated semantic value of each pixel point in the image.

Optionally, the second threshold is the result of rounding N/2 up to the nearest integer.

Optionally, the segmentation module 430 includes a center region unit 431 and a determination unit 432, where: the center region unit 431 is configured to obtain at least one instance center region of the image based on the integrated center region data of the image; and the determination unit 432 is configured to determine, based on the at least one instance center region and the integrated semantic data of the image, the instance to which each of the plurality of pixel points of the image belongs.

Optionally, the determination unit 432 is specifically configured to perform a random walk based on the integrated semantic value of each of the plurality of pixel points of the image and the at least one instance center region to obtain the instance to which each pixel point belongs.

By implementing the electronic device 400 shown in FIG. 4, the electronic device 400 can acquire N sets of instance segmentation output data, where the N sets of instance segmentation output data are the instance segmentation output results obtained by N instance segmentation models processing an image, the N sets of instance segmentation output data have different data structures, and N is an integer greater than 1; obtain, based on the N sets of instance segmentation output data, integrated semantic data and integrated center region data of the image, where the integrated semantic data indicates the pixel points located in instance regions in the image and the integrated center region data indicates the pixel points located in instance center regions in the image; and then obtain the instance segmentation result of the image based on the integrated semantic data and the integrated center region data of the image. In the instance segmentation problem of image processing, this realizes the complementary advantages of the individual instance segmentation models, no longer requires the models to output data with the same structure or meaning, and achieves higher accuracy in the instance segmentation problem.

Please refer to FIG. 5, which is a schematic structural diagram of another electronic device disclosed in an embodiment of the present application. As shown in FIG. 5, the electronic device 500 includes a processor 501 and a memory 502. The electronic device 500 may further include a bus 503, through which the processor 501 and the memory 502 may be connected to each other; the bus 503 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 503 can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in FIG. 5, but this does not mean that there is only one bus or one type of bus. The electronic device 500 may further include an input/output device 504, which may include a display screen such as a liquid crystal display. The memory 502 is used to store a computer program, and the processor 501 is used to invoke the computer program stored in the memory 502 to execute some or all of the method steps mentioned in the embodiments of FIG. 1 and FIG. 2.

By implementing the electronic device 500 shown in FIG. 5, the electronic device 500 can acquire N sets of instance segmentation output data, where the N sets of instance segmentation output data are the instance segmentation output results obtained by N instance segmentation models processing an image, the N sets of instance segmentation output data have different data structures, and N is an integer greater than 1; obtain, based on the N sets of instance segmentation output data, integrated semantic data and integrated center region data of the image, where the integrated semantic data indicates the pixel points located in instance regions in the image and the integrated center region data indicates the pixel points located in instance center regions in the image; and then obtain the instance segmentation result of the image based on the integrated semantic data and the integrated center region data of the image. In the instance segmentation problem of image processing, this realizes the complementary advantages of the individual instance segmentation models, no longer requires the models to output data with the same structure or meaning, and achieves higher accuracy in the instance segmentation problem.

The embodiments of the present application further provide a computer-readable storage medium for storing a computer program, where the computer program causes a computer to execute some or all of the steps of any of the image processing methods described in the above method embodiments.

It should be noted that, for the sake of simple description, the foregoing method embodiments are all expressed as a series of action combinations, but those skilled in the art should know that the present application is not limited by the described order of actions, because according to the present application some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.

In the above embodiments, the description of each embodiment has its own emphasis; for a part that is not described in detail in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.

In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are only illustrative; for instance, the division of the units is only a division by logical function, and there may be other division manners in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical or of other forms.

The units (modules) described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit can be implemented either in the form of hardware or in the form of a software functional unit.

If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.

Those of ordinary skill in the art can understand that all or part of the steps of the various methods in the above embodiments can be completed by instructing related hardware through a program, and the program can be stored in a computer-readable memory, which may include a flash drive, a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like.

The embodiments of the present application have been described in detail above, and specific examples have been used herein to explain the principles and implementations of the present application; the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, those of ordinary skill in the art will make changes to the specific implementation and application scope according to the idea of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

101, 102, 103, 201, 202, 203, 204, 205, 206, 207: steps
400, 500: electronic device
410: acquisition module
420: conversion module
421: first conversion unit
422: second conversion unit
423: erosion processing unit
430: segmentation module
431: center region unit
432: determination unit
501: processor
502: memory
503: bus
504: input/output device

In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below.
FIG. 1 is a schematic flowchart of an image processing method disclosed in an embodiment of the present application.
FIG. 2 is a schematic flowchart of another image processing method disclosed in an embodiment of the present application.
FIG. 3 is a schematic diagram of an image representation of cell instance segmentation disclosed in an embodiment of the present application.
FIG. 4 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application.
FIG. 5 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present application.

101, 102, 103: steps

Claims (10)

1. An image processing method, comprising: acquiring N sets of instance segmentation output data, wherein the N sets of instance segmentation output data are respectively the instance segmentation output results obtained by N different instance segmentation models processing an image, the N sets of instance segmentation output data have data structures representing different meanings, N is an integer greater than 1, and the N different instance segmentation models correspond to different algorithms or neural network structures; obtaining integrated semantic data and integrated center region data of the image based on the N sets of instance segmentation output data, wherein the integrated semantic data indicates pixel points located in instance regions in the image and the integrated center region data indicates pixel points located in instance center regions in the image; obtaining at least one instance center region of the image based on the integrated center region data of the image; and performing a random walk based on the integrated semantic value of each of a plurality of pixel points of the image and the at least one instance center region to obtain the instance to which each pixel point belongs, wherein the integrated semantic data comprises the integrated semantic value of each of the plurality of pixel points of the image.

2. The image processing method according to claim 1, wherein obtaining the integrated semantic data and the integrated center region data of the image based on the N sets of instance segmentation output data comprises: for each instance segmentation model of the N different instance segmentation models, obtaining semantic data and center region data of the instance segmentation model based on the instance segmentation output data of the instance segmentation model; and obtaining the integrated semantic data and the integrated center region data of the image based on the semantic data and the center region data of each of the N different instance segmentation models.
3. The image processing method according to claim 2, wherein obtaining the semantic data and the center region data of the instance segmentation model based on the instance segmentation output data of the instance segmentation model comprises: determining, based on the instance segmentation output data of the instance segmentation model, instance identification information corresponding to each of the plurality of pixel points of the image in the instance segmentation model; and obtaining, based on the instance identification information corresponding to each of the plurality of pixel points in the instance segmentation model, a semantic prediction value of each pixel point in the instance segmentation model, wherein the semantic data of the instance segmentation model comprises the semantic prediction value of each of the plurality of pixel points of the image.

4. The image processing method according to claim 2 or 3, wherein obtaining the semantic data and the center region data of the instance segmentation model based on the instance segmentation output data of the instance segmentation model further comprises: determining, based on the instance segmentation output data of the instance segmentation model, at least two pixel points of the image located in an instance region in the instance segmentation model; determining an instance center position of the instance segmentation model based on position information of the at least two pixel points located in the instance region; and determining an instance center region of the instance segmentation model based on the instance center position of the instance segmentation model and the position information of the at least two pixel points.

5. The image processing method according to claim 4, wherein before determining, based on the instance segmentation output data of the instance segmentation model, the at least two pixel points of the image located in the instance region in the instance segmentation model, the image processing method further comprises: performing erosion processing on the instance segmentation output data of the instance segmentation model to obtain erosion data of the instance segmentation model; and determining, based on the instance segmentation output data of the instance segmentation model, the at least two pixel points of the image located in the instance region in the instance segmentation model comprises: determining, based on the erosion data of the instance segmentation model, the at least two pixel points of the image located in the instance region in the instance segmentation model.
6. The image processing method according to claim 4, wherein determining the instance center position of the instance segmentation model based on the position information of the at least two pixel points located in the instance region in the instance segmentation model comprises: taking the average of the positions of the at least two pixel points located in the instance region as the instance center position of the instance segmentation model.

7. The image processing method according to claim 4, wherein determining the instance center region of the instance segmentation model based on the instance center position of the instance segmentation model and the position information of the at least two pixel points comprises: determining, based on the instance center position of the instance segmentation model and the position information of the at least two pixel points, a maximum distance between the at least two pixel points and the instance center position; determining a first threshold based on the maximum distance; and determining, among the at least two pixel points, the pixel points whose distance to the instance center position is less than or equal to the first threshold as pixel points of the instance center region.

8. An electronic device, comprising: an acquisition module, configured to acquire N sets of instance segmentation output data, wherein the N sets of instance segmentation output data are respectively the instance segmentation output results obtained by N different instance segmentation models processing an image, the N sets of instance segmentation output data have data structures representing different meanings, N is an integer greater than 1, and the N different instance segmentation models correspond to different algorithms or neural network structures; a conversion module, configured to obtain integrated semantic data and integrated center region data of the image based on the N sets of instance segmentation output data, wherein the integrated semantic data indicates pixel points located in instance regions in the image and the integrated center region data indicates pixel points located in instance center regions in the image; and a segmentation module, configured to obtain at least one instance center region of the image based on the integrated center region data of the image, and to perform a random walk based on the integrated semantic value of each of a plurality of pixel points of the image and the at least one instance center region to obtain the instance to which each pixel point belongs, wherein the integrated semantic data comprises the integrated semantic value of each of the plurality of pixel points of the image.
9. An electronic device, comprising a processor and a memory, wherein the memory is configured to store a computer program, the computer program is configured to be executed by the processor, and the processor is configured to execute the image processing method according to any one of claims 1 to 7.

10. A computer-readable storage medium, configured to store a computer program, wherein the computer program causes a computer to execute the image processing method according to any one of claims 1 to 7.
TW108133166A 2018-09-15 2019-09-16 Image processing method, electronic device, and storage medium TWI786330B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811077358.9 2018-09-15
CN201811077358.9A CN109345540B (en) 2018-09-15 2018-09-15 Image processing method, electronic device and storage medium

Publications (2)

Publication Number Publication Date
TW202013311A TW202013311A (en) 2020-04-01
TWI786330B true TWI786330B (en) 2022-12-11

Family

ID=65305764

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108133166A TWI786330B (en) 2018-09-15 2019-09-16 Image processing method, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN109345540B (en)
TW (1) TWI786330B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020052668A1 (en) * 2018-09-15 2020-03-19 北京市商汤科技开发有限公司 Image processing method, electronic device, and storage medium
CN109886272B (en) * 2019-02-25 2020-10-30 腾讯科技(深圳)有限公司 Point cloud segmentation method, point cloud segmentation device, computer-readable storage medium and computer equipment
CN110008956B (en) * 2019-04-01 2023-07-07 深圳华付技术股份有限公司 Invoice key information positioning method, invoice key information positioning device, computer equipment and storage medium
CN110111340B (en) * 2019-04-28 2021-05-14 南开大学 Weak supervision example segmentation method based on multi-path segmentation
CN111681183A (en) * 2020-06-05 2020-09-18 兰州理工大学 Mural image color restoration method and device
CN113792738B (en) * 2021-08-05 2024-09-06 北京旷视科技有限公司 Instance segmentation method, device, electronic equipment and computer readable storage medium
CN114419067B (en) * 2022-01-19 2024-10-18 支付宝(杭州)信息技术有限公司 Image processing method and device based on privacy protection
CN114445954B (en) * 2022-04-08 2022-06-21 深圳市润璟元信息科技有限公司 Entrance guard's device with sound and facial dual discernment
CN116912923B (en) * 2023-09-12 2024-01-05 深圳须弥云图空间科技有限公司 Image recognition model training method and device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201120816A (en) * 2009-07-30 2011-06-16 Sony Corp Apparatus and method for image processing, and program
CN102324092A (en) * 2011-09-09 2012-01-18 华南理工大学 Method for automatically cutting granular object in digital image
TWI455039B (en) * 2012-09-27 2014-10-01 China Steel Corp Calculation method of average particle size distribution of batch coke
EP2871613A1 (en) * 2012-07-05 2015-05-13 Olympus Corporation Cell division process tracing device and method, and storage medium which stores computer-processable cell division process tracing program
CN106372390A (en) * 2016-08-25 2017-02-01 姹ゅ钩 Deep convolutional neural network-based lung cancer preventing self-service health cloud service system
US20180025749A1 (en) * 2016-07-22 2018-01-25 Microsoft Technology Licensing, Llc Automatic generation of semantic-based cinemagraphs
CN107967688A (en) * 2017-12-21 2018-04-27 联想(北京)有限公司 The method and system split to the object in image
CN108447062A (en) * 2018-02-01 2018-08-24 浙江大学 A kind of dividing method of the unconventional cell of pathological section based on multiple dimensioned mixing parted pattern

Also Published As

Publication number Publication date
CN109345540B (en) 2021-07-13
CN109345540A (en) 2019-02-15
TW202013311A (en) 2020-04-01

Similar Documents

Publication Publication Date Title
TWI786330B (en) Image processing method, electronic device, and storage medium
US20210118144A1 (en) Image processing method, electronic device, and storage medium
CN107506761B (en) Brain image segmentation method and system based on significance learning convolutional neural network
JP7540127B2 (en) Artificial intelligence-based image processing method, image processing device, computer program, and computer device
TWI777092B (en) Image processing method, electronic device, and storage medium
CN107609541B (en) Human body posture estimation method based on deformable convolution neural network
CN108399386A (en) Information extracting method in pie chart and device
WO2020048396A1 (en) Target detection method, apparatus and device for continuous images, and storage medium
CN111931764B (en) Target detection method, target detection frame and related equipment
CN110555481A (en) Portrait style identification method and device and computer readable storage medium
WO2021203865A9 (en) Molecular binding site detection method and apparatus, electronic device and storage medium
JP7013489B2 (en) Learning device, live-action image classification device generation system, live-action image classification device generation device, learning method and program
CN114445670B (en) Training method, device and equipment of image processing model and storage medium
CN110046574A (en) Safety cap based on deep learning wears recognition methods and equipment
EP4404148A1 (en) Image processing method and apparatus, and computer-readable storage medium
CN111428664A (en) Real-time multi-person posture estimation method based on artificial intelligence deep learning technology for computer vision
CN115018999A (en) Multi-robot-cooperation dense point cloud map construction method and device
Ding et al. Rethinking click embedding for deep interactive image segmentation
CN113554656B (en) Optical remote sensing image example segmentation method and device based on graph neural network
Wu et al. Context-based local-global fusion network for 3D point cloud classification and segmentation
Tong et al. Cell image instance segmentation based on PolarMask using weak labels
CN115170599A (en) Method and device for vessel segmentation through link prediction of graph neural network
CN113111879B (en) Cell detection method and system
CN113889233A (en) Cell positioning and counting method based on manifold regression network and application
CN116137914A (en) Method, device, equipment and storage medium for detecting association degree between human face and human hand

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees