CN110659683A - Image processing method and device and electronic equipment - Google Patents

Image processing method and device and electronic equipment

Info

Publication number
CN110659683A
Authority
CN
China
Prior art keywords: region, foreground, area, background, image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910896473.7A
Other languages
Chinese (zh)
Inventor
李挺
刘炳宪
谢菊元
张继新
桂坤
操家庆
龙希
Current Assignee
Hangzhou Zhituan Information Technology Co Ltd
Original Assignee
Hangzhou Zhituan Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Zhituan Information Technology Co Ltd
Priority: CN201910896473.7A
Publication: CN110659683A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition; G06F 18/20 Analysing; G06F 18/24 Classification techniques
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/267 Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour

Abstract

The invention provides an image processing method, an image processing apparatus, and an electronic device, and relates to the technical field of image processing. The method mainly comprises the following steps: acquiring an original image to be processed; identifying the original image to obtain a first background region, a first foreground region, and an uncertain region; identifying the uncertain region to obtain a second background region and a second foreground region; determining the background region of the original image according to the first background region and the second background region; determining the foreground region of the original image according to the first foreground region and the second foreground region; and generating a target image based on the background region and the foreground region. By processing the image in this way, the invention effectively resolves blurred regions in the image.

Description

Image processing method and device and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and an electronic device.
Background
As a visual basis of how humans perceive the world, an image generally contains a large amount of information. To extract the required information, different regions of the image need to be identified and extracted; however, blurred contour boundaries often exist between regions, which makes the image inconvenient to analyze and process. For example, a tissue region in a medical tissue image usually has a shadowed contour and is difficult to separate cleanly from the image background, which hinders subsequent study of the tissue image.
Disclosure of Invention
The invention aims to provide an image processing method, an image processing apparatus, and an electronic device that, by processing an image, effectively resolve blurred regions in it.
In a first aspect, the present invention provides an image processing method, comprising: acquiring an original image to be processed; identifying an original image to obtain a first background area, a first foreground area and an uncertain area; identifying the uncertain area to obtain a second background area and a second foreground area; determining a background area of the original image according to the first background area and the second background area; determining a foreground region of the original image according to the first foreground region and the second foreground region; a target image is generated based on the background region and the foreground region.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the step of identifying an original image to obtain a first background area, a first foreground area, and an uncertain area includes: classifying and identifying the original image through a classifier obtained by pre-training to obtain a first background area and a first foreground area; the classifier comprises a background classifier and a foreground classifier; and setting other areas except the first background area and the first foreground area in the original image as uncertain areas.
With reference to the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where the training step of the classifier includes: acquiring a training image; each training image carries a positive sample label and a negative sample label of a foreground area, and carries a positive sample label and a negative sample label of a background area; training the classifier through the training image until the training times reach the preset times and/or the fitting error of the classifier is lower than the preset error, and finishing the training.
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where the step of identifying the uncertain region to obtain a second background region and a second foreground region includes: identifying the uncertain region through the GrabCut algorithm to obtain the second background region and the second foreground region.
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the step of identifying the uncertain region to obtain a second background region and a second foreground region further includes: acquiring the boundary of a first foreground area and the boundary of a first background area; for each pixel point in the uncertain region, respectively calculating a first closest distance from the pixel point to the boundary of the first foreground region and a second closest distance from the pixel point to the boundary of the first background region; determining a set of pixel points with the first closest distance within a first preset distance range as a second foreground area; and determining the set of the pixel points with the second closest distance within the second preset distance range as a second background area.
With reference to the first aspect, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where the step of determining a foreground region of the original image according to the first foreground region and the second foreground region includes: acquiring an edge line of a first foreground region and an edge line of a second foreground region; correcting the edge lines of the first foreground area and the edge lines of the second foreground area to obtain a first foreground area after line correction processing and a second foreground area after line correction processing; and merging the first foreground area after the line correction processing and the second foreground area after the line correction processing to obtain the foreground area of the original image.
With reference to the first aspect and the first possibility to the fifth possibility of the first aspect, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, where an original image includes a tissue region with a blurred boundary; the method further comprises the following steps: and determining the foreground area of the original image as the tissue area without the fuzzy boundary in the target image.
With reference to the first aspect, an embodiment of the present invention provides a seventh possible implementation manner of the first aspect, where the original image is obtained through white balance processing and/or brightness normalization processing.
With reference to the first possible implementation manner of the first aspect, the embodiment of the present invention provides an eighth possible implementation manner of the first aspect, where if the original image is an RGB image, the original image is converted into an HSV image.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including: the image acquisition module is used for acquiring an original image to be processed; the first identification module is used for identifying the original image to obtain a first background area, a first foreground area and an uncertain area; the second identification module is used for identifying the uncertain area to obtain a second background area and a second foreground area; the area determining module is used for determining a background area of the original image according to the first background area and the second background area; determining a foreground region of the original image according to the first foreground region and the second foreground region; and the target image generation module is used for generating a target image based on the background area and the foreground area.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a storage device; the storage means has stored thereon a computer program which, when executed by the processor, performs the method of any one of the aspects as provided in the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the method in any one of the first aspect.
The invention provides an image processing method, an image processing apparatus, and an electronic device. An acquired original image is first identified to obtain a first background region, a first foreground region, and an uncertain region; the uncertain region is then identified to obtain a second background region and a second foreground region. The background region of the original image is determined according to the first and second background regions, the foreground region according to the first and second foreground regions, and a target image is finally generated based on the background region and the foreground region. In this manner, the uncertain region in the original image is further identified and subdivided into the second background region and the second foreground region; combining these with the first background region and first foreground region determined in the original image yields a target image with clearly divided background and foreground regions, which facilitates subsequent analysis and research.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an original image according to an embodiment of the present invention;
fig. 3 is a schematic flow chart illustrating dividing an uncertain region according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating another image processing method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Although the human eye can distinguish a large amount of image information, it is difficult to recognize the extremely fine boundaries of different areas in the image. Therefore, different areas in the image are processed and identified through the computer, and people can obtain information in the image more objectively and accurately.
In the prior art, region identification of an image is often inaccurate because the background region and the foreground region are not clearly divided. In view of this problem, the embodiment of the invention provides an image processing method that further identifies the uncertain region in the original image and subdivides it into a second background region and a second foreground region; combining these with the first background region and first foreground region determined in the original image finally yields a target image with clearly divided background and foreground regions, which facilitates subsequent analysis and research.
To facilitate understanding of the present embodiment, first, a detailed description is given of an image processing method disclosed in the present embodiment, referring to a flowchart of the image processing method shown in fig. 1, where the method mainly includes the following steps S102 to S110:
step S102: and acquiring an original image to be processed.
The original image to be processed is generally an image that requires region division and identification. It may be captured by equipment with an image pickup device, such as a camera, a smartphone, or a medical device, or it may be uploaded by a user.
Step S104: and identifying the original image to obtain a first background area, a first foreground area and an uncertain area.
The first foreground region may be a region of a target image to be recognized in an original image, and the first background region may be a region including an image background, for example, when a tissue image is recognized, a region determined to be a tissue definitely may be the first foreground region, a region determined to be not a tissue definitely may be the first background region, and a region having a blurred boundary and not determined to be a tissue may be an uncertain region. In some embodiments, the RGB image of the original image may be used for feature extraction and recognition. In other embodiments, the RGB image of the original image may be converted into other image forms, such as HSV image, for feature extraction and recognition.
And S106, identifying the uncertain area to obtain a second background area and a second foreground area.
In a specific embodiment, because the uncertain region has a blurred boundary contour, it is further divided: subdividing it yields a second background region whose pixel values resemble those of the first background region and a second foreground region whose pixel values resemble those of the first foreground region.
Step S108: and determining a background area of the original image according to the first background area and the second background area, and determining a foreground area of the original image according to the first foreground area and the second foreground area.
For example, the first background region and the second background region may be merged to determine the background region of the original image, and the first foreground region and the second foreground region may be merged to determine the foreground region of the original image, so that the uncertain region having the blurred boundary in the original image may be relatively accurately subdivided to obtain a more accurate region-divided image.
Step S110: a target image is generated based on the background region and the foreground region.
The foreground region is the target region to be identified in an image: in the medical field it may be a pathological tissue region on a tissue image, and in the field of face recognition it may be a target face. Generating the target image by segmenting the background region and the foreground region yields an image in which the two are clearly divided, which is convenient for subsequent analysis and research.
The image processing method provided by the embodiment of the invention first identifies the acquired original image to obtain a first background region, a first foreground region, and an uncertain region, and then identifies the uncertain region to obtain a second background region and a second foreground region. The background region of the original image is determined according to the first and second background regions, the foreground region according to the first and second foreground regions, and a target image is finally generated based on the background region and the foreground region. In this manner, the uncertain region in the original image is further identified and subdivided into the second background region and the second foreground region; combining these with the first background region and first foreground region determined in the original image yields a target image with clearly divided background and foreground regions, which facilitates subsequent analysis and research.
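The flow of steps S102 to S110 can be sketched as follows. This is a minimal illustration, not the claimed implementation: `identify_initial` and `identify_uncertain` are hypothetical stand-ins for the classifier pass and the uncertain-region pass, and regions are modeled as sets of pixel coordinates.

```python
def process_image(img, identify_initial, identify_uncertain):
    # S104: first pass yields the first background/foreground regions
    # plus the uncertain region
    bg1, fg1, uncertain = identify_initial(img)
    # S106: second pass subdivides the uncertain region
    bg2, fg2 = identify_uncertain(img, uncertain)
    # S108: merge the first and second regions
    background = bg1 | bg2
    foreground = fg1 | fg2
    # S110: the target image is generated from these two regions
    return background, foreground
```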
In consideration of the interference of light sources and brightness on the original image identification, the original image can be preprocessed before the region identification is carried out, the interference of different light sources on the tissue color is reduced by carrying out white balance processing on the original image, and the brightness normalization processing is carried out on the original image so as to reduce the interference of different brightness on the original tissue image in the identification process.
In one embodiment, the white balance processing on the original image may employ the following steps (1) to (6):
(1) separate the three RGB channels of the original tissue image into r, g, and b;
(2) compute the mean of each channel, $mean_r$, $mean_g$, $mean_b$:
$$mean_C = \frac{1}{N}\sum_{i=1}^{N} C_i, \quad C \in \{r, g, b\}$$
where N is the number of image pixels;
(3) compute the average of the three channel means:
$$mean = \frac{mean_r + mean_g + mean_b}{3}$$
(4) compute the weight of each channel, $w_r$, $w_g$, $w_b$:
$$w_C = \frac{mean}{mean_C}, \quad C \in \{r, g, b\}$$
(5) apply the weight to every pixel of each channel to obtain the white-balanced values r', g', b':
$$C'_i = w_C \times C_i, \quad C \in \{r, g, b\},\ i = 1, 2, \ldots, N$$
(6) recombine r', g', and b' into a new RGB image, i.e. the white-balanced image.
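Steps (1) to (6) amount to gray-world white balance, and can be sketched in pure Python, assuming the channels are given as flat lists of pixel values:

```python
def gray_world_white_balance(r, g, b):
    """Gray-world white balance: scale each channel so its mean matches
    the average of the three channel means (steps (2)-(5) above)."""
    n = len(r)
    mean_r, mean_g, mean_b = sum(r) / n, sum(g) / n, sum(b) / n  # step (2)
    mean = (mean_r + mean_g + mean_b) / 3.0                      # step (3)
    w_r, w_g, w_b = mean / mean_r, mean / mean_g, mean / mean_b  # step (4)
    # step (5): apply the per-channel weight to every pixel
    return ([w_r * v for v in r],
            [w_g * v for v in g],
            [w_b * v for v in b])
```

After balancing, every channel of a flat-colored patch ends up with the same mean, which is what removes the color cast of the light source.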
In one embodiment, the original image is brightness-normalized. First, the mean value of each channel of a clean, tissue-free background image captured by the imaging system is set to $r_{mean}$, $g_{mean}$, $b_{mean}$. The brightness normalization may then employ the following steps (7) to (10):
(7) count the histogram of each channel of the tissue image;
(8) from each histogram, obtain the pixel value with the largest number of pixels, denoted $r_{max}$, $g_{max}$, $b_{max}$;
(9) compute the deviation of each channel, $r_{diff}$, $g_{diff}$, $b_{diff}$:
$$C_{diff} = C_{mean} - C_{max}, \quad C \in \{r, g, b\}$$
(10) add the deviation of the corresponding channel to channels r, g, and b of each pixel in the tissue image to obtain the normalized pixel values r'', g'', b'':
$$C''_i = C_i + C_{diff}, \quad C \in \{r, g, b\},\ i = 1, 2, \ldots, N$$
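Steps (7) to (10), sketched for a single channel in pure Python; the channel is a flat list of pixel values, and `target_mean` plays the role of the corresponding clean-background channel mean:

```python
from collections import Counter

def normalize_channel(channel, target_mean):
    # steps (7)-(8): histogram peak = the pixel value occurring most often
    peak = Counter(channel).most_common(1)[0][0]
    # step (9): deviation between the clean-background mean and the peak
    diff = target_mean - peak
    # step (10): shift every pixel by the deviation
    return [v + diff for v in channel]
```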
To facilitate understanding of step S104 in the foregoing embodiment, an embodiment of the present invention provides a specific implementation that divides the original image to be processed into regions based on color features: image features are extracted from the pixels of the original image, and a classifier then classifies the image based on those pixel features to obtain the first foreground region, the first background region, and the uncertain region. The specific implementation of step S104 can be seen in the following steps 1 to 2:
step 1, converting an RGB image of an original image into an HSV image;
the RGB image is composed of Red, Green and Blue primary colors, the HSV image defines colors through hue, brightness value and saturation, the RGB image is converted into the HSV image, the HSV image is adjusted according to the recognition characteristics of human eyes, the perception of different colors by the human eyes is closer, and therefore the expected display effect of the human eyes is achieved. The following formula is specifically referred to for converting the RGB image into the HSV image:
Tmax=max(r,g,b)
Tmin=min(r,g,b)
Δ=Tmax-Tmin
Figure BDA0002209393540000091
Figure BDA0002209393540000092
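Applied per pixel, the RGB-to-HSV conversion can be written directly; a sketch with r, g, b in [0, 1] and H returned in degrees:

```python
def rgb_to_hsv(r, g, b):
    t_max, t_min = max(r, g, b), min(r, g, b)
    delta = t_max - t_min
    if delta == 0:                      # achromatic: hue undefined, use 0
        h = 0.0
    elif t_max == r:
        h = 60.0 * (((g - b) / delta) % 6)
    elif t_max == g:
        h = 60.0 * ((b - r) / delta + 2)
    else:                               # t_max == b
        h = 60.0 * ((r - g) / delta + 4)
    s = 0.0 if t_max == 0 else delta / t_max
    v = t_max
    return h, s, v
```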
and 2, classifying and identifying the original image through a classifier obtained through pre-training to obtain a first background area and a first foreground area, and setting other areas except the first background area and the first foreground area in the original image as uncertain areas.
For the step 2, the embodiment of the present invention provides a method for performing region identification on an original image by using a classifier, which is as follows, in step 2.1 to step 2.2:
and 2.1, training the classifier. The classifier may include a background classifier and a foreground classifier that are trained separately. The process of training the classifier can be performed according to the following steps a to b:
step a, acquiring training images, wherein each training image carries a positive sample label and a negative sample label of a foreground area, and carries a positive sample label and a negative sample label of a background area. The number of training images is multiple.
And b, training the classifier through the training image. The classifier can comprise a background classifier and a foreground classifier, and when the background classifier is trained, a plurality of HSV pixel points in a background area on each image are used as positive samples, and a plurality of HSV pixel points in other areas except the background area are used as negative samples to be trained. Similarly, when the foreground classifier is trained, a plurality of HSV pixel points in the foreground region on each image are used as positive samples, and a plurality of HSV pixel points in other regions except the foreground region are used as negative samples to be trained. And stopping training when the training times reach the preset times and/or the fitting error of the classifier is lower than the preset error.
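The stopping rule in step b can be sketched generically; `fit_epoch` is a hypothetical callable that runs one training pass over the samples and returns the current fitting error (assumes at least one epoch is run):

```python
def train_until(fit_epoch, max_epochs, max_error):
    """Run training epochs until the preset count is reached or the
    fitting error drops below the preset error, whichever comes first."""
    error = float("inf")
    for epoch in range(1, max_epochs + 1):
        error = fit_epoch()
        if error < max_error:
            break
    return epoch, error
```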
And 2.2, identifying the original image to be processed by using the trained classifier.
In a specific embodiment, for an original image to be processed, the background classifier first predicts the class of each pixel, yielding an image with the background identified. In practical applications, pixels of the positive class are set to 0 (the image area is black) and pixels of the negative class to 175 (the image area is gray). Morphological operations such as erosion or dilation may then be applied: eroding the highlighted portion of the image yields a larger background area, while dilating the highlighted portion yields a larger foreground area. The foreground classifier then predicts the class of each gray pixel from its HSV value, and positive-class pixels output by the classifier are set to 255 (the image area is white). The areas other than the black area (pixel value 0) and the white area (pixel value 255) are determined to be the uncertain region.
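The two-pass labeling can be sketched as follows; `bg_clf` and `fg_clf` are hypothetical predicates standing in for the trained background and foreground classifiers, and the pixel-value convention is the one above (0 = background, 175 = uncertain, 255 = foreground):

```python
BLACK, GRAY, WHITE = 0, 175, 255   # background, uncertain, foreground

def label_pixels(pixels, bg_clf, fg_clf):
    # pass 1: the background classifier marks positives black, the rest gray
    labels = [BLACK if bg_clf(p) else GRAY for p in pixels]
    # pass 2: the foreground classifier runs only on the remaining gray pixels
    for i, p in enumerate(pixels):
        if labels[i] == GRAY and fg_clf(p):
            labels[i] = WHITE
    # anything still gray is the uncertain region
    return labels
```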
In an embodiment, the uncertain region can be identified with the GrabCut algorithm. Because GrabCut segments the foreground region of an image well and requires an input of at least 3 channels, the first background region, the first foreground region, and the uncertain region together serve as the algorithm's input for identifying the original image.
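If OpenCV's GrabCut were used for this step, the three-way labeling would map onto its mask codes. A sketch of building that mask (an assumption about the wiring, not the patent's stated implementation; the numeric codes follow OpenCV's `GC_BGD`/`GC_FGD`/`GC_PR_FGD` constants, and `cv2.grabCut` would then be called with `GC_INIT_WITH_MASK`):

```python
# OpenCV GrabCut mask codes (cv2.GC_BGD == 0, cv2.GC_FGD == 1,
# cv2.GC_PR_BGD == 2, cv2.GC_PR_FGD == 3); cv2 itself not needed here
GC_BGD, GC_FGD, GC_PR_BGD, GC_PR_FGD = 0, 1, 2, 3

def build_grabcut_mask(labels):
    """Map the three-way labels (0 = definite background, 255 = definite
    foreground, 175 = uncertain) onto GrabCut mask codes; uncertain pixels
    become 'probable foreground' so GrabCut is free to refine them."""
    table = {0: GC_BGD, 255: GC_FGD, 175: GC_PR_FGD}
    return [table[v] for v in labels]
```

Treating uncertain pixels as probable foreground (rather than probable background) is a design choice of this sketch.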
For easy understanding, fig. 2 shows a schematic diagram of an original image, in which a background region, a foreground region and an uncertain region are illustrated, and since the edge contours of the background region and the foreground region may appear blurry, the blurred region which is uncertain as the background region and the foreground region is set as the uncertain region, thereby facilitating further dividing the uncertain region.
Preferably, in view of the situation that the edge contours of the first background area and the first foreground area may be blurred, an embodiment of the present invention provides a method for dividing an uncertain area, referring to a flowchart for dividing an uncertain area as shown in fig. 3, where the method mainly includes the following steps S302 to S310:
step S302, a boundary of the first foreground region and a boundary of the first background region are acquired.
Region identification is first performed on the original image with the classifier described above to obtain the first foreground region and the first background region; the image is then further processed using the boundaries of these two regions as reference.
Step S304, for each pixel point in the uncertain region, respectively calculating a first closest distance from the pixel point to a boundary of the first foreground region and a second closest distance from the pixel point to a boundary of the first background region.
The first closest distance is the distance from a pixel in the uncertain region to the nearest point on the boundary of the first foreground region; the second closest distance is the distance from that pixel to the nearest point on the boundary of the first background region. By computing and comparing the two distances, pixels of the uncertain region closest to the first background region can be assigned to the second background region, and pixels closer to the first foreground region to the second foreground region, yielding the final background and foreground regions.
Step S306, determining a set of pixel points with the first closest distance within a first preset distance range as a second foreground region.
Step S308, determining a set of pixel points with the second closest distance within a second preset distance range as a second background area.
The first preset range covers pixels of the uncertain region whose distance to the boundary of the first foreground region is below a certain value, which can be set as required; for example, the range of pixels within 10 micrometers of the first foreground region boundary can be taken as the first preset range. The second preset range likewise covers pixels whose distance to the boundary of the first background region is below a certain value, set according to specific requirements. In a specific embodiment, the closest distance from each pixel of the uncertain region to the boundary of the first background region is recorded as $dist_{background}$, and the closest distance to the boundary of the first foreground region as $dist_{foreground}$. Whether $dist_{foreground}$ is less than $dist_{background}$ is then judged: if so, the pixel is assigned to the second foreground region; otherwise, to the second background region.
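Steps S304 to S308 can be sketched for regions represented as point sets, using a brute-force nearest-distance comparison (a real implementation would more likely use a distance transform):

```python
def split_uncertain(uncertain, fg_boundary, bg_boundary):
    """Assign each uncertain pixel to the second foreground region if it is
    nearer the first foreground boundary than the first background boundary,
    and to the second background region otherwise."""
    def nearest(p, boundary):
        # Euclidean distance to the closest boundary point
        return min(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
                   for q in boundary)
    fg2, bg2 = set(), set()
    for p in uncertain:
        if nearest(p, fg_boundary) < nearest(p, bg_boundary):
            fg2.add(p)
        else:
            bg2.add(p)
    return fg2, bg2
```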
Further, in order to improve the effect of identifying the region, step S108 may further perform region correction on the obtained first foreground region and second foreground region, and the method for correcting the region edge mainly includes the following steps a to C:
step A: and acquiring the edge lines of the first foreground region and the edge lines of the second foreground region.
And B: and correcting the edge lines of the first foreground area and the edge lines of the second foreground area to obtain a first foreground area after line correction processing and a second foreground area after line correction processing.
In one implementation, the obtained edge lines of the first foreground area and the second foreground area can be corrected manually, which compensates for inaccuracies of algorithmic identification and further improves the accuracy of region identification.
And C: and merging the first foreground area after the line correction processing and the second foreground area after the line correction processing to obtain the foreground area of the original image.
The above method for correcting the first foreground region and the second foreground region is also applicable to the correction of the first background region and the second background region, and is not described herein again.
In summary, in the embodiment of the present invention, the original image to be processed is first obtained in the above manner; to eliminate the interference of color and brightness, white balance processing and brightness normalization processing may be performed on the original image. The original image is then identified to obtain a first background region, a first foreground region and an uncertain region (features may be extracted based on color features and classified by a trained classifier), and the uncertain region is further identified to obtain a second background region and a second foreground region, which improves the effect of region identification. Finally, the background region of the original image is determined according to the first background region and the second background region, the foreground region of the original image is determined according to the first foreground region and the second foreground region, and the target image is generated based on the background region and the foreground region. Because the uncertain region is further identified when the original image is identified, the accuracy of region identification is improved.
An embodiment of the present invention provides a specific example of applying the foregoing image processing method, described here with a cell tissue image as the original image to be processed. Referring to the flowchart of another image processing method shown in fig. 4, the method mainly includes the following steps S402 to S412:
Step S402: acquire an original tissue image.
The tissue image to be identified is obtained by an imaging system. In practical applications, the imaging system may be a device comprising a graduated flat plate and a photographic device, such as a high-speed camera or another camera-equipped device.
Step S404: preprocess the original tissue image by white balance processing and brightness normalization processing.
Preferably, before processing the tissue image, the influence of the different colors and brightness introduced when the imaging system acquires the image should be reduced. White balance processing may first be applied to reduce the interference of color on the tissue image, and brightness normalization processing may then be applied to reduce the interference of brightness, so as to achieve a better identification effect.
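The preprocessing of step S404 can be sketched as follows; gray-world white balancing and scaling to a mean intensity of 128 are assumptions, since the text names the two stages but not the exact algorithms:

```python
import numpy as np

def preprocess(rgb):
    """White balance followed by brightness normalization (a sketch;
    gray-world balancing and the target mean of 128 are assumptions)."""
    img = rgb.astype(np.float64)
    # Gray-world white balance: scale each channel so its mean matches
    # the mean over all three channels, removing a global colour cast.
    channel_means = img.reshape(-1, 3).mean(axis=0)
    img = img * (channel_means.mean() / channel_means)
    # Brightness normalization: rescale so the overall mean intensity is 128.
    img = img * (128.0 / img.mean())
    return np.clip(np.rint(img), 0, 255).astype(np.uint8)

# A flat test image whose channels have different means (a colour cast).
rgb = np.tile(np.array([100, 120, 140], dtype=np.uint8), (4, 4, 1))
out = preprocess(rgb)
```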
Step S406: perform feature extraction on the preprocessed tissue image based on color features.
The RGB tissue image subjected to white balance processing and brightness normalization processing is converted into an HSV image, and features are extracted based on color features.
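The RGB-to-HSV conversion for a single pixel can be done with Python's standard colorsys module; the pixel values below are illustrative:

```python
import colorsys

# A hypothetical tissue pixel after white balance and brightness
# normalization (RGB values in 0-255; the numbers are illustrative).
r, g, b = 180, 60, 120
h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
# Hue and saturation carry the colour information used as features;
# value (brightness) was already normalized in the preprocessing step.
hue_degrees = h * 360.0
```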
Step S408: perform region division on the HSV image to obtain a background region, a foreground region and a suspicious region.
In an embodiment, the HSV image is fed into a trained background classifier and a trained foreground classifier for classification and identification (based on the extracted color features). The training method of the background classifier and the foreground classifier is the same as in the above embodiment and is not repeated here. The classifiers predict the HSV image to obtain a background region, a foreground region and a suspicious region (the background region is the first background region in the above embodiment, the foreground region is the first foreground region, and the suspicious region is the uncertain region).
Step S410: further divide the obtained suspicious region to obtain a background region and a foreground region.
In one embodiment, the suspicious region is identified by the GrabCut algorithm. Since GrabCut accepts at least three classes of input, the background region, the foreground region and the suspicious region are used as the GrabCut inputs for identifying the tissue region, and the algorithm automatically assigns each pixel of the suspicious region to the background region or the foreground region according to the input information.
In another embodiment, the suspicious region is first subdivided: the minimum distances from each point in the suspicious region to the boundary of the background region and to the boundary of the foreground region are calculated, the set of pixel points in the suspicious region close to the background region is determined as the background suspicious region (i.e., the second background region in the foregoing embodiment), and the set of pixel points close to the foreground region is determined as the foreground suspicious region (i.e., the second foreground region). The background region, foreground region, background suspicious region and foreground suspicious region are then used together as the inputs of GrabCut to obtain the final background region and foreground region.
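The four-region input described above maps directly onto OpenCV's GrabCut mask labels; a sketch in which the region masks are illustrative:

```python
import numpy as np

# OpenCV's GrabCut label codes: 0 = certain background (cv2.GC_BGD),
# 1 = certain foreground (cv2.GC_FGD), 2 = probable background
# (cv2.GC_PR_BGD), 3 = probable foreground (cv2.GC_PR_FGD).
GC_BGD, GC_FGD, GC_PR_BGD, GC_PR_FGD = 0, 1, 2, 3

def build_grabcut_mask(bg, fg, bg_suspicious, fg_suspicious):
    # The four boolean masks are assumed disjoint and to cover the image.
    mask = np.empty(bg.shape, dtype=np.uint8)
    mask[bg] = GC_BGD
    mask[fg] = GC_FGD
    mask[bg_suspicious] = GC_PR_BGD
    mask[fg_suspicious] = GC_PR_FGD
    # This mask would then be passed to
    # cv2.grabCut(img, mask, None, bgModel, fgModel, 5, cv2.GC_INIT_WITH_MASK),
    # after which (mask == GC_FGD) | (mask == GC_PR_FGD) is the final foreground.
    return mask

# Illustrative 2x4 image: columns are background, background-suspicious,
# foreground-suspicious, foreground from left to right.
bg = np.zeros((2, 4), dtype=bool); bg[:, 0] = True
fg = np.zeros((2, 4), dtype=bool); fg[:, 3] = True
bg_susp = np.zeros((2, 4), dtype=bool); bg_susp[:, 1] = True
fg_susp = np.zeros((2, 4), dtype=bool); fg_susp[:, 2] = True
mask = build_grabcut_mask(bg, fg, bg_susp, fg_susp)
```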
Step S412: finely adjust the boundary of the background region and the boundary of the foreground region to obtain the final tissue image.
In a specific embodiment, to obtain a more accurate recognition result, the boundaries of the background region and the foreground region of the tissue image can be further adjusted manually through a human-computer interaction interface. After the background region and the foreground region are obtained, the user may choose to fine-tune the tissue image further; the fine-tuning modes may include an erasing mode and a region adding mode.
If the user selects the erasing mode, when the system detects that the user erases a redundant area with the mouse, the erased area is determined as a background region. According to the sliding track of the user's mouse, the erased area is automatically expanded into a background suspicious region; the expansion radius may be, for example, 5 micrometers. The background suspicious region is then further identified with the GrabCut algorithm to obtain the final background region and foreground region.
If the user selects the region adding mode, the area drawn by the user is determined as a foreground region, and according to the sliding track of the user's mouse, the drawn area is automatically expanded into a foreground suspicious region for further identification, which is not repeated here.
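The automatic expansion of a user stroke into a suspicious band can be sketched as a dilation. The radius here is in pixels; the 5-micrometer radius mentioned above would first be converted to pixels using the image scale. The square (Chebyshev) neighbourhood is an assumption:

```python
import numpy as np

def expand_stroke(stroke, radius):
    """Expand a boolean stroke mask by `radius` pixels in every direction
    (square dilation), producing the suspicious band around the stroke."""
    out = np.zeros_like(stroke)
    ys, xs = np.nonzero(stroke)
    for y, x in zip(ys, xs):
        out[max(0, y - radius):y + radius + 1,
            max(0, x - radius):x + radius + 1] = True
    return out

stroke = np.zeros((7, 7), dtype=bool)
stroke[3, 3] = True               # a single erased pixel
band = expand_stroke(stroke, 2)   # expanded region around the stroke
# Pixels in the band but outside the stroke form the suspicious area
# handed to GrabCut for further identification.
suspicious = band & ~stroke
```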
In the image processing method provided by the embodiment of the present invention, an original tissue image to be processed is first acquired, and white balance processing and brightness normalization processing are performed on it to eliminate the interference of color and brightness. The image is then identified to obtain a background region, a foreground region and a suspicious region, and the suspicious region is further identified to obtain a background suspicious region and a foreground suspicious region. Merging the background suspicious region with the background region and the foreground suspicious region with the foreground region yields the background region and foreground region of the final tissue image, and the target tissue image is generated on that basis. Because the suspicious region is further identified when the original tissue image is identified, the accuracy of region identification is improved.
An embodiment of the present invention further provides an image processing apparatus, and referring to a schematic structural diagram of an image processing apparatus shown in fig. 5, the apparatus includes the following modules:
an image obtaining module 502, configured to obtain an original image to be processed;
a first identification module 504, configured to identify an original image to obtain a first background area, a first foreground area, and an uncertain area;
a second identifying module 506, configured to identify the uncertain region to obtain a second background region and a second foreground region;
a region determining module 508, configured to determine a background region of the original image according to the first background region and the second background region; determining a foreground region of the original image according to the first foreground region and the second foreground region;
a target image generation module 510 for generating a target image based on the background region and the foreground region.
The image processing device provided by the embodiment of the invention can further identify the uncertain area in the original image, subdivide the uncertain area into the second background area and the second foreground area, and finally obtain the target image with clearly divided background/foreground areas by combining the first background area and the first foreground area which are determined in the original image, thereby facilitating the follow-up analysis and research of the target image.
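The data flow through the modules of fig. 5 can be sketched with hypothetical stand-ins for the two identification modules; the thresholds, names, and 1-D "image" are illustrative assumptions:

```python
def first_identify(image):
    # Stand-in for the first identification module 504: pixels darker than
    # 100 are background, brighter than 150 are foreground, the rest uncertain.
    bg = {i for i, v in enumerate(image) if v < 100}
    fg = {i for i, v in enumerate(image) if v > 150}
    uncertain = set(range(len(image))) - bg - fg
    return bg, fg, uncertain

def second_identify(uncertain, image):
    # Stand-in for the second identification module 506: resolve the
    # uncertain pixels with a tighter threshold.
    bg2 = {i for i in uncertain if image[i] < 125}
    return bg2, uncertain - bg2

image = [40, 110, 140, 200]          # toy 1-D "image" of pixel intensities
bg1, fg1, uncertain = first_identify(image)
bg2, fg2 = second_identify(uncertain, image)
background = bg1 | bg2               # region determining module 508
foreground = fg1 | fg2               # input to target image generation module 510
```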
In an embodiment, the first identifying module 504 is further configured to identify the original image based on color features, so as to obtain a first background area, a first foreground area, and an uncertain area.
In an embodiment, the second identifying module 506 is further configured to identify the uncertain region by using a Grabcut algorithm, so as to obtain a second background region and a second foreground region.
In an implementation manner, the second identifying module 506 may further be configured to obtain the boundary of the first foreground region and the boundary of the first background region, calculate the minimum distances from each pixel point in the uncertain region to the two boundaries, determine the pixel points in the uncertain region that are close to the boundary of the first foreground region as the second foreground region, and determine the pixel points that are close to the boundary of the first background region as the second background region.
In one embodiment, the above apparatus further comprises:
and the classifier training module is used for training the classifier.
And the area correction module is used for correcting the edge lines of the first foreground area and the edge lines of the second foreground area to obtain a first foreground area after line correction processing and a second foreground area after line correction processing.
The implementation principle and technical effects of the device provided by the embodiment of the present invention are the same as those of the foregoing method embodiments; for the sake of brevity, where the device embodiment is not mentioned, reference may be made to the corresponding contents of the method embodiments.
For convenience of understanding, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, where the electronic device 100 includes: the processor 60, the memory 61, the bus 62 and the communication interface 63, wherein the processor 60, the communication interface 63 and the memory 61 are connected through the bus 62; the processor 60 is arranged to execute executable modules, such as computer programs, stored in the memory 61.
The memory 61 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. The communication connection between the network elements of the system and at least one other network element is realized through at least one communication interface 63 (which may be wired or wireless), using the internet, a wide area network, a local area network, a metropolitan area network, or the like.
The bus 62 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in fig. 6, but this does not mean that there is only one bus or one type of bus.
The memory 61 is used for storing a program; the processor 60 executes the program after receiving an execution instruction. The method executed by the apparatus defined by the flow disclosed in any of the foregoing embodiments of the present invention may be applied to, or implemented by, the processor 60.
The processor 60 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 60. The processor 60 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which can implement or perform the various methods, steps and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or registers. The storage medium is located in the memory 61, and the processor 60 reads the information in the memory 61 and, in combination with its hardware, performs the steps of the above method.
The computer program product of the image processing method, the image processing apparatus and the electronic device provided in the embodiments of the present invention includes a computer-readable storage medium storing program code, and the instructions included in the program code may be used to execute the methods described in the foregoing method embodiments; for specific implementation, reference may be made to the method embodiments, which are not repeated here.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or electrical connection; as a direct connection or an indirect connection through an intervening medium; or as internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific circumstances.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (12)

1. An image processing method, comprising:
acquiring an original image to be processed;
identifying the original image to obtain a first background area, a first foreground area and an uncertain area;
identifying the uncertain area to obtain a second background area and a second foreground area;
determining a background area of the original image according to the first background area and the second background area;
determining a foreground region of the original image according to the first foreground region and the second foreground region;
generating a target image based on the background region and the foreground region.
2. The method of claim 1, wherein the step of identifying the original image to obtain a first background region, a first foreground region and an uncertain region comprises:
classifying and identifying the original image through a classifier obtained through pre-training to obtain a first background area and a first foreground area; wherein the classifier comprises a background classifier and a foreground classifier;
setting a region other than the first background region and the first foreground region in the original image as an uncertain region.
3. The method of claim 2, wherein the training step of the classifier comprises:
acquiring a training image; each training image carries a positive sample label and a negative sample label of a foreground area, and carries a positive sample label and a negative sample label of a background area;
and training the classifier through the training image until the training times reach preset times and/or the fitting error of the classifier is lower than a preset error, and finishing the training.
4. The method of claim 1, wherein the step of identifying the uncertain region to obtain a second background region and a second foreground region comprises:
identifying the uncertain region through a Grabcut algorithm to obtain a second background region and a second foreground region.
5. The method of claim 1, wherein the step of identifying the uncertain region to obtain a second foreground region and a second background region further comprises:
acquiring the boundary of the first foreground area and the boundary of the first background area;
for each pixel point in the uncertain region, respectively calculating a first closest distance from the pixel point to the boundary of the first foreground region and a second closest distance from the pixel point to the boundary of the first background region;
determining the set of the pixel points with the first closest distance within a first preset distance range as a second foreground area;
and determining the set of the pixel points with the second closest distance within a second preset distance range as a second background area.
6. The method of claim 1, wherein the step of determining the foreground region of the original image from the first foreground region and the second foreground region comprises:
acquiring an edge line of the first foreground region and an edge line of the second foreground region;
correcting the edge lines of the first foreground area and the edge lines of the second foreground area to obtain a first foreground area after line correction processing and a second foreground area after line correction processing;
and combining the first foreground area after the line correction processing and the second foreground area after the line correction processing to obtain the foreground area of the original image.
7. The method according to any one of claims 1 to 6, wherein the original image contains a tissue region with a blurred boundary;
the method further comprises the following steps:
and determining a foreground region of the original image as a tissue region with the fuzzy boundary removed in the target image.
8. The method according to claim 1, wherein the original image is obtained by white balance processing and/or brightness normalization processing.
9. The method of claim 2, further comprising:
and if the original image is an RGB image, converting the original image into an HSV image.
10. An image processing apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring an original image to be processed;
the first identification module is used for identifying the original image to obtain a first background area, a first foreground area and an uncertain area;
the second identification module is used for identifying the uncertain region to obtain a second background region and a second foreground region;
the area determining module is used for determining a background area of the original image according to the first background area and the second background area; determining a foreground region of the original image according to the first foreground region and the second foreground region;
and the target image generation module is used for generating a target image based on the background area and the foreground area.
11. An electronic device comprising a processor and a memory device;
the storage device has stored thereon a computer program which, when executed by the processor, performs the method of any of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 9.
CN201910896473.7A 2019-09-20 2019-09-20 Image processing method and device and electronic equipment Pending CN110659683A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910896473.7A CN110659683A (en) 2019-09-20 2019-09-20 Image processing method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN110659683A true CN110659683A (en) 2020-01-07

Family

ID=69037457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910896473.7A Pending CN110659683A (en) 2019-09-20 2019-09-20 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110659683A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489129A (en) * 2020-12-18 2021-03-12 深圳市优必选科技股份有限公司 Pose recognition model training method and device, pose recognition method and terminal equipment
CN112822476A (en) * 2021-02-26 2021-05-18 广东以诺通讯有限公司 Automatic white balance method, system and terminal for color cast of large number of monochrome scenes

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101357067A (en) * 2007-05-01 2009-02-04 韦伯斯特生物官能公司 Edge detection in ultrasound images
US20140147037A1 (en) * 2012-11-26 2014-05-29 Samsung Electronics Co., Ltd. Image processing apparatus and method
CN104809465A (en) * 2014-01-23 2015-07-29 北京三星通信技术研究有限公司 Classifier training method, target detection, segmentation or classification method and target detection, segmentation or classification device
US20160092746A1 (en) * 2014-09-25 2016-03-31 Aricent Technologies Luxembourg s.a.r.l. Intelligent background selection and image segmentation
CN109978890A (en) * 2019-02-25 2019-07-05 平安科技(深圳)有限公司 Target extraction method, device and terminal device based on image procossing
CN110111342A (en) * 2019-04-30 2019-08-09 贵州民族大学 A kind of optimum option method and device of stingy nomography


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHAO SHUAI: "An Image Recognition Method Based on Three-Way Decision SVM Classification", Modern Computer (《现代计算机》) *


Similar Documents

Publication Publication Date Title
CN108197546B (en) Illumination processing method and device in face recognition, computer equipment and storage medium
CN109978890B (en) Target extraction method and device based on image processing and terminal equipment
US20190130169A1 (en) Image processing method and device, readable storage medium and electronic device
US8363933B2 (en) Image identification method and imaging apparatus
US9530045B2 (en) Method, system and non-transitory computer storage medium for face detection
CN111881913A (en) Image recognition method and device, storage medium and processor
WO2015070723A1 (en) Eye image processing method and apparatus
CN103098078A (en) Smile detection systems and methods
CN107911625A (en) Light measuring method, device, readable storage medium storing program for executing and computer equipment
CN105139404A (en) Identification camera capable of detecting photographing quality and photographing quality detecting method
CN109712177B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN104636754B (en) Intelligent image sorting technique based on tongue body subregion color characteristic
CN101983507A (en) Automatic redeye detection
CN107993209A (en) Image processing method, device, computer-readable recording medium and electronic equipment
Percannella et al. A classification-based approach to segment HEp-2 cells
CN111091571A (en) Nucleus segmentation method and device, electronic equipment and computer-readable storage medium
CN110659683A (en) Image processing method and device and electronic equipment
CN110881103A (en) Focusing control method and device, electronic equipment and computer readable storage medium
CN113743378B (en) Fire monitoring method and device based on video
CN107909542A (en) Image processing method, device, computer-readable recording medium and electronic equipment
CN107578372B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
US20230177699A1 (en) Image processing method, image processing apparatus, and image processing system
CN107770446B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
KR100488014B1 (en) YCrCb color based human face location detection method
WO2022126923A1 (en) Asc-us diagnosis result identification method and apparatus, computer device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200107)