CN110827300A - Image segmentation method and corresponding separation device thereof - Google Patents

Image segmentation method and corresponding separation device thereof

Info

Publication number
CN110827300A
Authority
CN
China
Prior art keywords
picture
training
photo
silhouette
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911090049.XA
Other languages
Chinese (zh)
Other versions
CN110827300B (en)
Inventor
李小波
卞峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hengxin Oriental Culture Ltd By Share Ltd
Original Assignee
Hengxin Oriental Culture Ltd By Share Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hengxin Oriental Culture Ltd By Share Ltd
Priority to CN201911090049.XA
Publication of CN110827300A
Application granted
Publication of CN110827300B
Active legal-status: Current
Anticipated expiration legal-status

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image segmentation method and a corresponding image segmentation device. The method comprises the following steps: obtaining a fully exposed photo one; obtaining a silhouette photo of photo one, wherein the exposure level of the silhouette photo is lower than that of photo one; processing photo one to obtain a photo of interest; processing the silhouette photo to obtain edge information of the silhouette photo; inputting the photo of interest and the edge information into a pre-trained deep learning network to obtain photo two; and fusing photo one with a background picture through photo two to obtain a display picture. By segmenting pictures with a deep learning network trained to convergence, the method strengthens the segmentation effect; by introducing a silhouette photo that serves as a reference for the input photo, it keeps the segmentation result accurate.

Description

Image segmentation method and corresponding separation device thereof
Technical Field
The present invention relates to the field of image processing, and in particular, to an image segmentation method and a corresponding separation device.
Background
When taking photographs, the subject's colors are often close to the background color, which hinders extraction of the subject. The usual remedy is to change the background color manually, but with many subjects to photograph the background cannot be changed for each one; moreover, repeatedly changing the background prolongs the shooting process and increases the photographer's workload.
Disclosure of Invention
In order to solve the above problems, the present application provides an image segmentation method and an image segmentation device that make image segmentation more accurate by using a pre-trained deep learning network, thereby completing the replacement of the image background automatically.
The application seeks protection for an image segmentation method comprising the following steps: obtaining a fully exposed photo one; obtaining a silhouette photo of photo one, wherein the exposure level of the silhouette photo is lower than that of photo one; processing photo one to obtain a photo of interest; processing the silhouette photo to obtain edge information of the silhouette photo; inputting the photo of interest and the edge information into a pre-trained deep learning network to obtain photo two; and fusing photo one with a background picture through photo two to obtain a display picture.
Preferably, pre-training the deep learning network comprises the following sub-steps: constructing a training library; and training a deep learning model with the training library to obtain the deep learning network.
Preferably, the training library includes three sub-libraries: the first sub-library stores training pictures; the second sub-library stores, for each training picture in the first sub-library, a corresponding training silhouette picture whose exposure level is lower than that of the training picture; and the third sub-library stores region-of-interest masks.
Preferably, processing photo one to obtain the photo of interest comprises the following sub-steps: graying photo one; obtaining a prefabricated region-of-interest mask from a database; and multiplying the region-of-interest mask by the grayed photo one to obtain the photo of interest.
Preferably, photo two has a transparency channel.
The present application also provides an image segmentation apparatus, including the following components: a camera for taking pictures;
a processor performing the following processing steps: receiving photo one and a silhouette photo of photo one taken by the camera, wherein the exposure level of the silhouette photo is lower than that of photo one; processing photo one to obtain a photo of interest; processing the silhouette photo to obtain edge information of the silhouette photo; inputting the photo of interest and the edge information into a pre-trained deep learning network to obtain photo two; and fusing photo one with a background picture through photo two to obtain a display picture.
Preferably, the processor pre-trains the deep learning network through the following sub-steps: constructing a training library; and training a deep learning model with the training library to obtain the deep learning network.
Preferably, the training library includes three sub-libraries: the first sub-library stores training pictures; the second sub-library stores, for each training picture in the first sub-library, a corresponding training silhouette picture whose exposure level is lower than that of the training picture; and the third sub-library stores region-of-interest masks.
Preferably, processing photo one to obtain the photo of interest comprises the following sub-steps: graying photo one; obtaining a prefabricated region-of-interest mask from a database; and multiplying the region-of-interest mask by the grayed photo one to obtain the photo of interest.
Preferably, photo two has a transparency channel.
The application seeks protection for an image segmentation method and a corresponding segmentation device. The method applies a series of processing steps to the photo to be segmented, obtaining a segmented picture with a clear boundary; fusing this segmented picture with a new background completes the segmentation of the input photo.
Drawings
FIG. 1 is a block diagram of an image segmentation apparatus;
FIG. 2 is a flow chart of an image segmentation method;
FIG. 3 is a fully exposed photograph;
FIG. 4 is a silhouette photograph of the fully exposed photograph of FIG. 3;
FIG. 5 is the cut-out photograph;
FIG. 6 is the composited picture.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 shows a configuration diagram of the image segmentation device of the present application. The device comprises a camera 1 and a processor 2: the camera 1 takes photos and transmits them to the processor 2, which processes them to obtain a display picture. The processor 2 performs the segmentation method shown in fig. 2.
Fig. 2 shows a flow chart of an image segmentation method comprising the steps of:
step S210, obtaining a fully exposed photo;
When photographing, a plurality of groups of lights illuminate the subject; to obtain a fully exposed photo, all of these lights are turned on, yielding the fully exposed photo 301 shown in fig. 3;
step S220, obtaining a silhouette photo of the photo 301, wherein the exposure level of the silhouette photo is lower than that of the photo 301;
The lighting on the subject is then adjusted, for example by turning off the front lights and leaving only the background light, to obtain the silhouette photo 402 shown in fig. 4, whose exposure level is lower than that of the photo 301. The amount by which the exposure is reduced may be set manually in advance or given automatically by the processor 2, and the processor 2 controls the camera 1 to take the photos. The lights may be switched manually, or the processor 2 may control their switching.
Step S230, processing the photo 301 to obtain a photo of interest, which comprises the following sub-steps:
Step S2301, graying the photo 301;
The photo 301 has three color channels, R, G and B. When R = G = B, the pixel shows a gray color, and the common value of R, G and B is called the gray value; a grayscale image therefore needs only one byte per pixel to store the gray value, which ranges from 0 to 255.
Graying methods include the component method, the maximum-value method, the average method, the weighted-average method and the like; any of these conventional methods completes the graying of the picture.
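As a concrete illustration, the weighted-average method can be sketched in a few lines of NumPy. The weights below are the common ITU-R BT.601 luma coefficients, an assumption chosen for illustration only; the patent does not prescribe specific values.

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Weighted-average graying: gray = wr*R + wg*G + wb*B.
    The BT.601 weights used here are an illustrative assumption."""
    weights = np.array([0.299, 0.587, 0.114])
    gray = rgb[..., :3].astype(np.float64) @ weights  # (H, W, 3) -> (H, W)
    return gray.astype(np.uint8)  # one byte per pixel, values 0-255
```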
Step S2302, obtaining a prefabricated region-of-interest mask from a database;
The region-of-interest mask is a prefabricated two-dimensional matrix stored in a database.
Step S2303, multiplying the region-of-interest mask by the grayed photo 301 to obtain the photo of interest.
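A minimal sketch of steps S2302 and S2303, assuming the mask is stored as a 0/1 matrix of the same size as the photo (the patent specifies a two-dimensional matrix but not its storage format):

```python
import numpy as np

def apply_roi_mask(gray: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Element-wise multiplication of the prefabricated region-of-interest
    mask with the grayed photo; pixels outside the region become 0."""
    assert gray.shape == mask.shape, "mask must match the photo dimensions"
    return gray * mask

# hypothetical usage, with to_gray from the sketch above and roi_mask
# loaded from wherever the database stores it:
# photo_of_interest = apply_roi_mask(to_gray(photo_301), roi_mask)
```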
Step S240, processing the silhouette photo 402 to obtain edge information of the silhouette photo, which comprises the following sub-steps:
Step S2401, processing the pixel points of the silhouette photo 402 to obtain a silhouette pixel-value image;
All pixel points of the silhouette photo 402 obtained by the camera 1 are processed: the R, G and B channel values of the silhouette photo 402 are read, and the pixel value of the silhouette pixel-value image is calculated with the following formula:
s1 = S1 × R2 + S2 × G2 + S3 × B2 (formula one)
where s1 is the pixel value of the silhouette pixel-value image, R2, G2 and B2 are the R, G and B channel values of the silhouette photo 402, and S1, S2 and S3 are preset coefficients for the respective channels, satisfying S1 + S2 + S3 = 1;
step S2402, comparing the pixel point value of the training silhouette pixel point image with a threshold value to obtain a pixel point matrix of the training silhouette pixel point image;
and comparing the pixel point values of the obtained training silhouette pixel point image with a threshold value respectively, wherein the threshold value is a preset value, the position of the pixel point value which is greater than the threshold value is recorded as 1, and the position of the pixel point value which is less than the threshold value is recorded as 0, so that a pixel point value matrix of the training silhouette pixel point image is formed.
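Formula one and the thresholding of step S2402 translate directly into array operations. The coefficient and threshold values below are placeholder assumptions; the patent only requires S1 + S2 + S3 = 1 and a preset threshold.

```python
import numpy as np

S1, S2, S3 = 0.3, 0.4, 0.3   # preset coefficients, S1 + S2 + S3 = 1
THRESHOLD = 40               # preset threshold, illustrative value only

def silhouette_pixel_matrix(silhouette_rgb: np.ndarray) -> np.ndarray:
    """Apply formula one per pixel, then threshold: positions whose
    weighted value exceeds the threshold become 1, all others 0."""
    r2 = silhouette_rgb[..., 0].astype(np.float64)
    g2 = silhouette_rgb[..., 1].astype(np.float64)
    b2 = silhouette_rgb[..., 2].astype(np.float64)
    s1 = S1 * r2 + S2 * g2 + S3 * b2  # formula one
    return (s1 > THRESHOLD).astype(np.uint8)
```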
Step S2403, obtaining the edge information of the silhouette photo from the pixel matrix of the silhouette pixel-value image;
In the pixel-value matrix, a value of 1 marks a contour pixel of the region to be extracted from the original photo. The contour pixels whose matrix value is 1 are automatically marked and connected on the silhouette photo 402, yielding the edge information of the silhouette photo 402 from its pixel points. Since the silhouette photo differs from the photo 301 only in its lower exposure level, the edge information of the silhouette photo 402 is also the edge information of the photo 301.
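The "mark and connect" step is not spelled out in the patent; one plausible reading, sketched below under that assumption, keeps a 1-pixel only when its 4-neighbourhood contains a 0, which leaves exactly the outline of the silhouette.

```python
import numpy as np

def edges_from_matrix(matrix: np.ndarray) -> np.ndarray:
    """Keep only boundary pixels of the binary silhouette matrix:
    a 1-pixel with at least one 0 among its 4 neighbours is an edge."""
    m = matrix.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &   # up, down
                padded[1:-1, :-2] & padded[1:-1, 2:])    # left, right
    return (m & ~interior).astype(np.uint8)
```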
Step S250, inputting the photo of interest and the edge information into a pre-trained deep learning network to obtain a foreground image;
The photo of interest is smoothed and denoised, and the edge information is used to obtain a foreground image with a clear boundary.
The transparency channel value of the processed foreground image may be set automatically by the processor or preset in advance, yielding a transparency map: the foreground image is a transparent image, shown in fig. 5, that serves as a matte of the region of interest in the image.
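At inference time the two inputs are stacked and fed to the trained network; a sketch under the same assumptions as the training sketch in Example 2 below (the two-channel layout and sigmoid output are illustrative choices, not prescribed by the patent):

```python
import torch

def segment(model, interest, edges):
    """Run the pre-trained network on the photo of interest plus edge map
    (each a 1xHxW tensor) and return a 0-1 alpha matte for the foreground."""
    with torch.no_grad():
        x = torch.cat([interest, edges], dim=0).unsqueeze(0)  # 1x2xHxW
        return torch.sigmoid(model(x))[0, 0]  # HxW values in [0, 1]
```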
Step S260, fusing the photo 301 with the background picture through the foreground image to obtain a display picture.
The foreground image is overlaid on the photo 301 to extract the desired region of interest, and this region is then fused with the background picture to obtain the display picture shown in fig. 6.
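The patent does not name the fusion operator; standard alpha compositing through the foreground image's transparency channel is a natural fit and is sketched below as an assumption.

```python
import numpy as np

def compose_display(photo_301: np.ndarray, foreground_rgba: np.ndarray,
                    background: np.ndarray) -> np.ndarray:
    """Fuse photo 301 with a new background through the foreground image:
    where alpha is 1 the region of interest from photo 301 is kept,
    where alpha is 0 the new background shows through."""
    alpha = foreground_rgba[..., 3:4].astype(np.float64) / 255.0
    out = alpha * photo_301 + (1.0 - alpha) * background
    return out.astype(np.uint8)
```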
Example 2
Before the deep learning network is used, it is pre-trained through the following sub-steps:
Step P110: constructing a training library;
The training library comprises three sub-libraries: the first sub-library stores training pictures; the second sub-library stores, for each training picture in the first sub-library, a corresponding training silhouette picture whose exposure level is lower than that of the training picture; and the third sub-library stores region-of-interest masks.
Step P120: training the deep learning model with the training library to obtain the deep learning network, which comprises the following sub-steps:
step P1201, obtaining a training picture from the first sub-library;
step P1202, obtaining from the second sub-library the training silhouette picture that corresponds to the training picture and has a lower exposure level than the training picture;
step P1203, processing the training picture to obtain a photo of interest;
step P1204, processing the training silhouette picture to obtain edge information of the silhouette picture;
and step P1205, inputting the photo of interest and the edge information into the deep learning model for training; when the model converges, the deep learning network is obtained.
An existing deep learning model, such as a convolutional neural network, can be selected and trained until it converges, at which point the deep learning network is obtained.
Steps P1203 and P1204 correspond to the foregoing steps S230 and S240; for their specific implementation, refer to those steps.
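A minimal PyTorch sketch of step P120, under stated assumptions: the toy network, loss, and data-loader interface below are illustrative, since the patent requires only that an existing deep learning model be trained on the photo of interest and the edge information until it converges.

```python
import torch
import torch.nn as nn

# Toy segmentation network: 2 input channels (photo of interest + edge
# map), 1 output channel (logits for the region-of-interest mask).
model = nn.Sequential(
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # target mask as float in {0, 1}

def train_epoch(loader):
    """loader yields (photo_of_interest, edge_map, roi_mask) batches built
    from the three sub-libraries; the loader itself is hypothetical."""
    total = 0.0
    for interest, edges, target in loader:
        x = torch.cat([interest, edges], dim=1)  # stack the two inputs
        loss = loss_fn(model(x), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total += loss.item()
    return total / len(loader)  # track this until the loss converges
```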
The description and applications of the invention herein are illustrative and are not intended to limit the scope of the invention to the embodiments described above. Variations and modifications of the embodiments disclosed herein are possible, and alternative and equivalent various components of the embodiments will be apparent to those skilled in the art. It will be clear to those skilled in the art that the present invention may be embodied in other forms, structures, arrangements, proportions, and with other components, materials, and parts, without departing from the spirit or essential characteristics thereof. Other variations and modifications of the embodiments disclosed herein may be made without departing from the scope and spirit of the invention.

Claims (10)

1. An image segmentation method comprising the steps of:
obtaining a fully exposed photo one;
obtaining a silhouette photo of photo one, wherein the exposure level of the silhouette photo is lower than that of photo one;
processing photo one to obtain a photo of interest;
processing the silhouette photo to obtain edge information of the silhouette photo;
inputting the photo of interest and the edge information into a pre-trained deep learning network to obtain photo two;
and fusing photo one with a background picture through photo two to obtain a display picture.
2. The image segmentation method of claim 1, wherein the deep learning network is pre-trained through the following sub-steps:
constructing a training library;
and training a deep learning model with the training library to obtain the deep learning network.
3. The image segmentation method according to claim 2, wherein the training library includes three sub-libraries: a first sub-library storing training pictures; a second sub-library storing, for each training picture in the first sub-library, a corresponding training silhouette picture whose exposure level is lower than that of the training picture; and a third sub-library storing region-of-interest masks.
4. The image segmentation method as set forth in claim 3, wherein processing photo one to obtain the photo of interest comprises the sub-steps of:
graying photo one;
obtaining a prefabricated region-of-interest mask from a database;
and multiplying the region-of-interest mask by the grayed photo one to obtain the photo of interest.
5. The image segmentation method of claim 4, wherein photo two has a transparency channel.
6. An image segmentation apparatus comprising the following components:
a camera for taking pictures;
a processor performing the following processing steps:
receiving photo one and a silhouette photo of photo one taken by the camera, wherein the exposure level of the silhouette photo is lower than that of photo one;
processing photo one to obtain a photo of interest;
processing the silhouette photo to obtain edge information of the silhouette photo;
inputting the photo of interest and the edge information into a pre-trained deep learning network to obtain photo two;
and fusing photo one with a background picture through photo two to obtain a display picture.
7. The image segmentation apparatus of claim 6, wherein the processor pre-trains the deep learning network through the following sub-steps:
constructing a training library;
and training a deep learning model with the training library to obtain the deep learning network.
8. The image segmentation apparatus according to claim 7, wherein the training library includes three sub-libraries: a first sub-library storing training pictures; a second sub-library storing, for each training picture in the first sub-library, a corresponding training silhouette picture whose exposure level is lower than that of the training picture; and a third sub-library storing region-of-interest masks.
9. The image segmentation apparatus as set forth in claim 8, wherein processing photo one to obtain the photo of interest comprises the sub-steps of:
graying photo one;
obtaining a prefabricated region-of-interest mask from a database;
and multiplying the region-of-interest mask by the grayed photo one to obtain the photo of interest.
10. The image segmentation apparatus of claim 6, wherein photo two has a transparency channel.
CN201911090049.XA 2019-11-08 2019-11-08 Image segmentation method and corresponding separation device thereof Active CN110827300B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911090049.XA CN110827300B (en) 2019-11-08 2019-11-08 Image segmentation method and corresponding separation device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911090049.XA CN110827300B (en) 2019-11-08 2019-11-08 Image segmentation method and corresponding separation device thereof

Publications (2)

Publication Number Publication Date
CN110827300A (en) 2020-02-21
CN110827300B (en) 2022-08-26

Family

ID=69553862

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911090049.XA Active CN110827300B (en) 2019-11-08 2019-11-08 Image segmentation method and corresponding separation device thereof

Country Status (1)

Country Link
CN (1) CN110827300B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006033178A1 (en) * 2004-09-22 2006-03-30 Polygon Magic, Inc. Image processing device, method, and program
US20160005182A1 (en) * 2013-02-25 2016-01-07 Agent Video Intelligence Ltd. Method, system and software module for foreground extraction
CN107690048A (en) * 2016-08-04 2018-02-13 韦拉 A kind of method for obtaining 360 degree of images of object and the system for realizing this method
CN109788215A (en) * 2017-11-15 2019-05-21 佳能株式会社 Image processing apparatus, computer readable storage medium and image processing method
CN110099209A (en) * 2018-01-30 2019-08-06 佳能株式会社 Image processing apparatus, image processing method and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113674298A (en) * 2020-05-14 2021-11-19 Beijing Kingsoft Cloud Network Technology Co., Ltd. Image segmentation method and device and server

Also Published As

Publication number Publication date
CN110827300B (en) 2022-08-26

Similar Documents

Publication Publication Date Title
CN109479098B (en) Multi-view scene segmentation and propagation
US11017586B2 (en) 3D motion effect from a 2D image
CN110188760B (en) Image processing model training method, image processing method and electronic equipment
JP6027159B2 (en) Image blurring method and apparatus, and electronic apparatus
US5745668A (en) Example-based image analysis and synthesis using pixelwise correspondence
US7961970B1 (en) Method and apparatus for using a virtual camera to dynamically refocus a digital image
JP6044134B2 (en) Image area dividing apparatus, method, and program according to optimum image size
WO2018053952A1 (en) Video image depth extraction method based on scene sample library
CN110166684B (en) Image processing method, image processing device, computer readable medium and electronic equipment
CN107833193A (en) A kind of simple lens global image restored method based on refinement network deep learning models
CN110827300B (en) Image segmentation method and corresponding separation device thereof
CN113724282A (en) Image processing method and related product
CN104700384B (en) Display systems and methods of exhibiting based on augmented reality
CN106780558B (en) Method for generating unmanned aerial vehicle target initial tracking frame based on computer vision point
CN110717913B (en) Image segmentation method and device
KR101913623B1 (en) A Method of Generating 3-Dimensional Advertisement Using Photographed Images
CN114418897B (en) Eye spot image restoration method and device, terminal equipment and storage medium
CN113240573B (en) High-resolution image style transformation method and system for local and global parallel learning
CN117689894A (en) Image processing method and device, electronic equipment and storage medium
CN109191396B (en) Portrait processing method and device, electronic equipment and computer readable storage medium
CN111724300B (en) Single picture background blurring method, device and equipment
CN110490877B (en) Target segmentation method for binocular stereo image based on Graph Cuts
Liang et al. The" Vertigo Effect" on Your Smartphone: Dolly Zoom via Single Shot View Synthesis
CN112508801A (en) Image processing method and computing device
CN112669337A (en) Self-iterative local green curtain image matting method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant