CN109191486A - A kind of pet image partition method and electronic equipment - Google Patents
- Publication number
- CN109191486A CN109191486A CN201811060032.5A CN201811060032A CN109191486A CN 109191486 A CN109191486 A CN 109191486A CN 201811060032 A CN201811060032 A CN 201811060032A CN 109191486 A CN109191486 A CN 109191486A
- Authority
- CN
- China
- Prior art keywords
- pet
- region
- area
- background
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a pet image segmentation method and an electronic device, comprising: obtaining a first pet image, which may be, for example, a photo taken by the user or a picture obtained through any channel; performing pet recognition on the first pet image to identify the facial features and torso features of the pet; performing pet image segmentation on the first pet image according to the identified facial and torso features, so as to determine a first region containing the pet and a second region serving as the background; determining a foreground seed region and a background seed region according to the identified pet first region and the background second region; and performing pet recognition on the first pet image according to the foreground seed region and the background seed region, so as to segment out the complete pet region. With the present invention, even when the pet shares a color or a similar pattern with the background, the segmentation remains clean, so that the boundary between pet and background is sharp.
Description
Technical field
The present invention relates to the field of intelligent pet image processing, and in particular to a pet image segmentation method and an electronic device.
Background technique
In daily life and work, people often cut a pet out of a pet image and composite it with other images to generate a new pet image. However, this matting is done manually, and manually processing a picture with a background is a very cumbersome process. Moreover, when the prior art performs pet segmentation on an image containing a pet, the results are poor and the user experience suffers.
Summary of the invention
The problem to be solved by this invention is to provide a pet image processing method that can intelligently and efficiently separate the pet from the background in a pet image, so that the boundary between pet and background is sharp.
To solve the above problems, the present invention provides a pet image segmentation method, comprising:
obtaining a first pet image, which may be, for example, a photo taken by the user or a picture obtained through any channel;
performing pet recognition on the first pet image to identify the facial features and torso features of the pet;
performing pet image segmentation on the first pet image according to the identified facial and torso features, so as to determine a first region containing the pet and a second region serving as the background;
determining a foreground seed region and a background seed region according to the identified pet first region and the background second region;
performing pet recognition on the first pet image according to the foreground seed region and the background seed region, so as to segment out the complete pet region.
Further, performing pet image segmentation on the first pet image according to the identified facial and torso features is specifically: performing pet image segmentation on the first pet image based on the facial features, the torso features, and prior data.
Further, determining the foreground seed region and the background seed region according to the identified pet first region and the background second region is specifically: shrinking the identified face and torso regions by a certain ratio and using the result as the foreground seed region; and determining, according to the determined pet region and prior data, a background seed region that is certain to lie within the background area.
Further, performing pet recognition on the first pet image according to the foreground seed region and the background seed region and segmenting out the complete pet region is specifically: further segmenting the first region and the second region using Graph Cuts, so as to segment out the complete pet region.
Further, the method further includes compositing the segmented pet region into another image to generate a third image; or compositing pet clothes onto the pet to obtain the effect of the pet wearing clothes.
Further, the invention also discloses an electronic device, the electronic device comprising:
an acquisition device for obtaining the first pet image;
a recognition device for performing pet recognition on the first pet image to identify the facial features and torso features of the pet;
a first segmentation device for performing pet image segmentation on the first pet image according to the identified facial and torso features, so as to determine a first region containing the pet and a second region serving as the background;
a first determining device for determining a foreground seed region and a background seed region according to the identified pet first region and the background second region;
a second segmentation device for performing pet recognition on the first pet image according to the foreground seed region and the background seed region, and segmenting out the complete pet region.
Further, the first segmentation device performs pet image segmentation on the first pet image based on the facial features, the torso features, and prior data.
Further, the first determining device shrinks the identified face and torso regions by a certain ratio and uses the result as the foreground seed region, and determines, according to the determined pet region and prior data, a background seed region that is certain to lie within the background area.
Further, the second segmentation device further segments the first region and the second region using Graph Cuts, so as to segment out the complete pet region.
Further, the electronic device further includes a determining module for compositing the segmented pet region into another image to generate a third image, or compositing pet clothes onto the pet to obtain the effect of the pet wearing clothes.
The beneficial effect of the pet image processing method and electronic device of the invention is that, by applying the pet image processing method of the invention, an electronic device can segment cleanly even when the pet shares a color or a similar pattern with the background, so that the boundary between pet and background is sharp.
Detailed description of the invention
Fig. 1 is the flow chart of the pet image processing method in the embodiment of the present invention.
Fig. 2 is the structural block diagram of electronic equipment in the embodiment of the present invention.
The realization of the object of the present invention, its functions, and its advantages will be further described in the following embodiments with reference to the accompanying drawings.
Specific embodiment
The present invention is described in detail below in conjunction with the accompanying drawings.
It should be understood that various modifications can be made to the embodiments disclosed herein. Therefore, the following description should not be regarded as limiting, but merely as exemplification of embodiments. Those skilled in the art will envision other modifications within the scope and spirit of the present disclosure.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and, together with the general description given above and the detailed description of the embodiments given below, serve to explain the principles of the disclosure.
These and other characteristics of the invention will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the accompanying drawings.
It is also to be understood that, although the invention has been described with reference to some specific examples, those skilled in the art can certainly realize many other equivalent forms of the invention, which have the features set forth in the claims and are therefore all within the scope of protection defined thereby.
The above and other aspects, features and advantages of the present disclosure will become more readily apparent in view of the following detailed description when read in conjunction with the accompanying drawings.
Specific embodiments of the disclosure are described hereinafter with reference to the accompanying drawings; it is to be understood, however, that the disclosed embodiments are merely examples of the disclosure, which may be implemented in various ways. Well-known and/or repeated functions and structures are not described in detail, to avoid obscuring the disclosure with unnecessary or redundant detail. Therefore, the specific structural and functional details disclosed herein are not intended to be limiting, but merely serve as a basis for the claims and as a representative basis for teaching those skilled in the art to variously employ the disclosure in virtually any appropriately detailed structure.
This specification may use the phrases "in one embodiment", "in another embodiment", "in yet another embodiment" or "in other embodiments", each of which may refer to one or more of the same or different embodiments in accordance with the disclosure.
As shown in Fig. 1, an embodiment of the present invention provides a pet image processing method, comprising:
S1, obtaining a first pet image, which may be, for example, a photo taken by the user or a picture obtained through any channel;
S2, performing pet recognition on the first pet image to identify the facial features and torso features of the pet;
S3, performing pet image segmentation on the first pet image according to the identified facial and torso features, so as to determine a first region containing the pet and a second region serving as the background;
S4, determining a foreground seed region and a background seed region according to the identified pet first region and the background second region;
S5, performing pet recognition on the first pet image according to the foreground seed region and the background seed region, so as to segment out the complete pet region.
Performing pet image segmentation on the first pet image according to the identified facial and torso features is specifically: performing pet image segmentation on the first pet image based on the facial features, the torso features, and prior data. Specifically, after the system performs pet recognition on the first pet image, it obtains the facial features and torso features of the pet, and then determines the regions belonging to the pet's face and torso according to the facial features, the torso features, and prior data. Once determined, the face region and torso region are defined as the foreground seed region, i.e., a region that is certain to lie within the pet region. To determine the foreground seed region more accurately, the identified torso region can be shrunk by a certain ratio, such as 0.9, to guarantee that the retained region really does consist of pet (including hair) pixels.
Then, according to the determined pet region and prior data, a background seed region that is certain to lie within the background area is determined, for example a region of the pet image adjacent to its border.
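The seed construction just described can be sketched as follows. This is an illustrative sketch, not code from the patent: the (x0, y0, x1, y1) box format, the 0.9 shrink ratio, the 10-pixel border width, and all function names are assumptions made for the example.

```python
import numpy as np

def shrink_box(box, ratio=0.9):
    """Shrink an (x0, y0, x1, y1) box about its center by `ratio`,
    so the foreground seed stays strictly inside the pet region."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    hw, hh = (x1 - x0) / 2.0 * ratio, (y1 - y0) / 2.0 * ratio
    return (int(cx - hw), int(cy - hh), int(cx + hw), int(cy + hh))

def seed_masks(shape, face_box, torso_box, ratio=0.9, border=10):
    """Boolean seed masks: foreground from the shrunken face/torso
    boxes, background from a thin strip adjacent to the image border."""
    h, w = shape
    fg = np.zeros((h, w), dtype=bool)
    for box in (face_box, torso_box):
        x0, y0, x1, y1 = shrink_box(box, ratio)
        fg[y0:y1, x0:x1] = True
    bg = np.zeros((h, w), dtype=bool)
    bg[:border, :] = bg[-border:, :] = True
    bg[:, :border] = bg[:, -border:] = True
    bg &= ~fg  # never let the two seed sets overlap
    return fg, bg
```

The shrink guarantees the foreground seed contains only pet pixels, and the border strip is where the prior data says background is almost certain, exactly mirroring the two guarantees stated above.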
Performing pet recognition on the first pet image according to the foreground seed region and the background seed region, and segmenting out the complete pet region, is done in particular by further segmenting the first region and the second region using Graph Cuts, so as to segment out the complete pet region.
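The refinement step can be illustrated with a deliberately simplified stand-in: each pixel is labeled by which seed set's mean color it is closer to, with the seeds themselves kept as hard constraints. A real Graph Cuts implementation additionally optimizes a pairwise smoothness term over neighboring pixels (OpenCV's cv2.grabCut is a practical off-the-shelf choice); everything below, including the function name, is an assumption for illustration only.

```python
import numpy as np

def refine_by_seed_color(image, fg_seed, bg_seed):
    """Simplified stand-in for the Graph Cuts refinement: label a
    pixel as pet if its color is closer to the mean foreground-seed
    color than to the mean background-seed color. A true Graph Cuts
    solver would add the smoothness term that keeps the boundary
    clean along the pet's hair."""
    img = image.astype(np.float64)
    fg_mean = img[fg_seed].mean(axis=0)
    bg_mean = img[bg_seed].mean(axis=0)
    d_fg = ((img - fg_mean) ** 2).sum(axis=-1)
    d_bg = ((img - bg_mean) ** 2).sum(axis=-1)
    mask = d_fg < d_bg
    # Seeds act as hard constraints, as in seeded Graph Cuts.
    mask[fg_seed] = True
    mask[bg_seed] = False
    return mask
```

Note how the seed regions propagate their labels outward: pixels in the first region that were missed by the coarse face/torso boxes are still recovered as long as their color matches the foreground seeds.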
The pet image processing method in this embodiment can not only quickly determine the pet region and the background region, but also handles pet images in which the color of the pet's hair is close to that of the background, for example red hair against a red background. Without the pet recognition step described above, directly performing pet image segmentation would easily cause the system to assign the pet's hair to the background, producing a segmented pet with large deviations and seriously reducing the quality of the pet image segmentation.
Further, the user can apply various kinds of processing to the complete pet region; the processing includes adjusting picture brightness, and performing further pet image segmentation to determine one or more of the head region and the neck region.
Further, the user can composite the segmented pet region into another image to generate a third image, or composite other pet clothes onto the pet to obtain the effect of the pet wearing clothes.
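Generating the "third image" from the segmentation result can be sketched as a masked copy. This is an assumed minimal implementation using a hard binary mask; a production compositor would blend a soft alpha channel along the hair boundary instead of cutting hard.

```python
import numpy as np

def composite(pet_image, mask, background_image):
    """Paste the segmented pet region onto another image, forming the
    'third image' described above. `mask` is the boolean pet mask
    produced by the segmentation; hard compositing only (no alpha)."""
    out = background_image.copy()
    out[mask] = pet_image[mask]
    return out
```

The same routine serves the clothes use case with the roles reversed: a segmented clothes layer composited over the pet image.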
As shown in Fig. 2, an embodiment of the present invention also provides an electronic device, comprising:
an acquisition device for obtaining the first pet image;
a recognition device for performing pet recognition on the first pet image to identify the facial features and torso features of the pet;
a first segmentation device for performing pet image segmentation on the first pet image according to the identified facial and torso features, so as to determine a first region containing the pet and a second region serving as the background;
a first determining device for determining a foreground seed region and a background seed region according to the identified pet first region and the background second region;
a second segmentation device for performing pet recognition on the first pet image according to the foreground seed region and the background seed region, and segmenting out the complete pet region.
Further, when the first segmentation device performs pet image segmentation on the first pet image according to the identified facial and torso features, it does so based on the facial features, the torso features, and prior data. Specifically, after the system performs pet recognition on the first pet image, it obtains the facial features and torso features of the pet, and then determines the regions belonging to the pet's face and torso according to the facial features, the torso features, and prior data. Once determined, the face region and torso region are defined as the foreground seed region, i.e., a region that is certain to lie within the pet region. To determine the foreground seed region more accurately, the identified face and torso regions can be shrunk by a certain ratio, such as 0.9, to guarantee that the retained regions really do consist of pet (including hair) pixels.
Then, according to the determined pet region and prior data, a background seed region that is certain to lie within the background area is determined, for example a region of the pet image adjacent to its border.
The second segmentation device performs pet recognition on the first pet image according to the foreground seed region and the background seed region, and segments out the complete pet region; in particular, it further segments the first region and the second region using Graph Cuts, so as to segment out the complete pet region.
The pet image processing apparatus in the embodiment of the present invention performs pet recognition on the first pet image and uses the data from that recognition to segment the first pet image, so that segmenting a first pet image containing a pet achieves better results and higher precision.
Further, the electronic processing device further includes a processing unit with which the user can apply various kinds of processing to the complete pet region; the processing includes adjusting picture brightness, and performing further pet image segmentation to determine one or more of the head region and the neck region.
Further, the user can composite the segmented pet region into another image to generate a third image, or composite other pet clothes onto the pet to obtain the effect of the pet wearing clothes.
The above embodiments are merely exemplary embodiments of the present invention and are not intended to limit it; the protection scope of the present invention is defined by the claims. Those skilled in the art can make various modifications or equivalent replacements to the present invention within its spirit and scope, and such modifications or equivalent replacements shall also be regarded as falling within the protection scope of the present invention.
Claims (10)
1. A pet image segmentation method, the method comprising:
S1, obtaining a first pet image;
S2, performing pet recognition on the first pet image to identify the facial features and torso features of the pet;
S3, performing pet image segmentation on the first pet image according to the identified facial and torso features, so as to determine a first region containing the pet and a second region serving as the background;
S4, determining a foreground seed region and a background seed region according to the identified pet first region and the background second region;
S5, performing pet recognition on the first pet image according to the foreground seed region and the background seed region, so as to segment out the complete pet region.
2. The method according to claim 1, wherein performing pet image segmentation on the first pet image according to the identified facial and torso features is specifically: performing pet image segmentation on the first pet image based on the facial features, the torso features, and prior data.
3. The method according to claim 1, wherein determining the foreground seed region and the background seed region according to the identified pet first region and the background second region is specifically: shrinking the identified face and torso regions by a certain ratio and using the result as the foreground seed region; and determining, according to the determined pet region and prior data, a background seed region that is certain to lie within the background area.
4. The method according to claim 1, wherein performing pet recognition on the first pet image according to the foreground seed region and the background seed region and segmenting out the complete pet region comprises: further segmenting the first region and the second region using Graph Cuts, so as to segment out the complete pet region.
5. The method according to claim 1, further comprising compositing the segmented pet region into another image to generate a third image; or compositing pet clothes onto the pet to obtain the effect of the pet wearing clothes.
6. An electronic device, the electronic device comprising:
an acquisition device for obtaining the first pet image;
a recognition device for performing pet recognition on the first pet image to identify the facial features and torso features of the pet;
a first segmentation device for performing pet image segmentation on the first pet image according to the identified facial and torso features, so as to determine a first region containing the pet and a second region serving as the background;
a first determining device for determining a foreground seed region and a background seed region according to the identified pet first region and the background second region;
a second segmentation device for performing pet recognition on the first pet image according to the foreground seed region and the background seed region, and segmenting out the complete pet region.
7. The electronic device according to claim 6, wherein the first segmentation device is configured to perform pet image segmentation on the first pet image based on the facial features, the torso features, and prior data.
8. The electronic device according to claim 6, wherein the first determining device is configured to shrink the identified face and torso regions by a certain ratio and use the result as the foreground seed region, and to determine, according to the determined pet region and prior data, a background seed region that is certain to lie within the background area.
9. The electronic device according to claim 6, wherein the second segmentation device is configured to further segment the first region and the second region using Graph Cuts, so as to segment out the complete pet region.
10. The electronic device according to claim 6, further comprising a determining module for compositing the segmented pet region into another image to generate a third image, or compositing pet clothes onto the pet to obtain the effect of the pet wearing clothes.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811060032.5A CN109191486A (en) | 2018-09-12 | 2018-09-12 | A kind of pet image partition method and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109191486A true CN109191486A (en) | 2019-01-11 |
Family
ID=64910246
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811060032.5A Pending CN109191486A (en) | 2018-09-12 | 2018-09-12 | A kind of pet image partition method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109191486A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11051493B2 (en) * | 2019-09-09 | 2021-07-06 | Council Of Agriculture | Method and system for distinguishing identities based on nose prints of animals |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102239687A (en) * | 2009-10-07 | 2011-11-09 | 松下电器产业株式会社 | Device, method, program, and circuit for selecting subject to be tracked |
CN104504745A (en) * | 2015-01-16 | 2015-04-08 | 成都品果科技有限公司 | Identification photo generation method based on image segmentation and image matting |
CN108053366A (en) * | 2018-01-02 | 2018-05-18 | 联想(北京)有限公司 | A kind of image processing method and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105787878B (en) | A kind of U.S. face processing method and processing device | |
CN107771336A (en) | Feature detection and mask in image based on distribution of color | |
US10304166B2 (en) | Eye beautification under inaccurate localization | |
CN107123083B (en) | Face edit methods | |
US9007480B2 (en) | Automatic face and skin beautification using face detection | |
US11636639B2 (en) | Mobile application for object recognition, style transfer and image synthesis, and related systems, methods, and apparatuses | |
CN105404846B (en) | A kind of image processing method and device | |
CN108053366A (en) | A kind of image processing method and electronic equipment | |
CN104992402A (en) | Facial beautification processing method and device | |
CN104346136B (en) | A kind of method and device of picture processing | |
ES2205262T3 (en) | SYSTEM AND METHOD FOR IMAGE PROCESSING. | |
CN110969631B (en) | Method and system for dyeing hair by refined photos | |
CN105046227A (en) | Key frame acquisition method for human image video system | |
JP7420971B2 (en) | Human body image processing method and electronic equipment | |
CN105187721B (en) | A kind of the license camera and method of rapid extraction portrait feature | |
CN108629200A (en) | A kind of image processing method and device | |
CN108805838A (en) | A kind of image processing method, mobile terminal and computer readable storage medium | |
CN110288715A (en) | Virtual necklace try-in method, device, electronic equipment and storage medium | |
CN104952093B (en) | Virtual hair colouring methods and device | |
CN110706187B (en) | Image adjusting method for uniform skin color | |
CN114187166A (en) | Image processing method, intelligent terminal and storage medium | |
CN109191486A (en) | A kind of pet image partition method and electronic equipment | |
CN113747640B (en) | Intelligent central control method and system for digital exhibition hall lamplight | |
CN109427038A (en) | A kind of cell phone pictures display methods and system | |
CN114049262A (en) | Image processing method, image processing device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190111 |