CN109274891A - Image processing method, device and storage medium - Google Patents
Image processing method, device and storage medium
- Publication number
- CN109274891A (application number CN201811323983.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- pet face
- image processing
- processed
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
Abstract
The present invention provides an image processing method, a device and a storage medium, relating to the technical field of image processing. The image processing method includes: obtaining an image to be processed, the image to be processed being an image containing a pet face; locating the pet face based on a target detection network, and locating the main parts of the pet face using a feature point detection algorithm, the main parts including at least one of the ears, eyes, nose, mouth and face contour. The image processing method first identifies and locates the pet face based on the target detection network, and then locates the main parts of the pet face with the feature point detection algorithm, improving the positioning accuracy for the pet face in the image.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to an image processing method, a device and a storage medium.
Background technique
With the rapid development of computer equipment, networks and image processing technology, traditional naked-eye image identification has gradually been replaced by automatic image recognition performed by computers, which greatly improves the efficiency and accuracy of image recognition. Automatically recognizing faces by computer is a common task in the field of computer vision with many applications: for example, automatic recognition software on computers, smartphones and other mobile terminals can identify and beautify human faces, and such functions appear more and more in daily life and social entertainment, such as adding light and smoothing effects to the face region, enhancing the features of individual facial parts, or replacing the background of an image.
Nowadays people pay more and more attention to their pets and are willing to share the daily life of their beloved dogs so that more people can see it. However, there is currently no software that can perform accurate pet face identification on pet images, and existing recognition and positioning methods cannot accurately identify and locate pet faces.
Summary of the invention
In view of this, the embodiments of the present invention aim to provide an image processing method, a device and a storage medium, to solve the above problem that existing recognition and positioning methods cannot accurately identify and locate pet faces.
In a first aspect, an embodiment of the present invention provides an image processing method. The image processing method includes: obtaining an image to be processed, the image to be processed being an image containing a pet face; locating the pet face based on a target detection network, and locating the main parts of the pet face using a feature point detection algorithm, the main parts including at least one of the ears, eyes, nose, mouth and face contour.
With reference to the first aspect, the obtaining of the image to be processed includes: collecting the image to be processed by a camera; or reading the image to be processed from local storage; or obtaining the image to be processed over a network via a uniform resource locator.
With reference to the first aspect, the collecting of the image to be processed by a camera includes: obtaining a preview video stream collected by the camera, and identifying whether a pet face exists in the preview video stream based on the target detection network; if so, taking an image frame of the preview video stream in which the pet face exists as the image to be processed.
With reference to the first aspect, the image processing method further includes: obtaining, based on the positioning result of the target detection network for the pet face, an identification box indicating the position of the pet face in the image to be processed. The locating of the main parts of the pet face using the feature point detection algorithm includes: locating the main parts of the pet face within the identification box using the feature point detection algorithm.
With reference to the first aspect, the image processing method further includes: in response to a prop fitting instruction, performing translation, rotation and scaling operations on a decorative image selected by the user, based on the size of the identification box and the positioning result of the main parts, and dynamically attaching the translated, rotated and scaled decorative image to the corresponding position of the pet face, following the movement of the characteristic parts.
With reference to the first aspect, the image processing method further includes: saving the image to be processed and the image after the decorative image is added, layer by layer, in a cache using a multi-layer image saving technique.
With reference to the first aspect, the image processing method further includes: determining the position coordinates of the pet face in the image to be processed based on the positioning result of the target detection network, and determining the feature point coordinates of the pet face based on the positioning result of the main parts; calculating the angle of the pet face based on the feature point coordinates; and collecting and saving the current image when the light of the image to be processed, the position coordinates and the angle meet predetermined conditions.
With reference to the first aspect, the image processing method further includes: performing background segmentation on the pet face based on the main parts to obtain a pet face image of the pet face; and fusing an updated background image with the pet face image to complete background replacement and obtain a target image, wherein the updated background image serves as the background of the target image and the pet face image serves as the foreground of the target image.
In a second aspect, an embodiment of the present invention provides an image processing device. The image processing device includes: an obtaining module, for obtaining an image to be processed, the image to be processed being an image containing a pet face; and a locating module, for locating the pet face based on a target detection network and locating the main parts of the pet face using a feature point detection algorithm, the main parts including at least one of the ears, eyes, nose, mouth and face contour.
In a third aspect, an embodiment of the present invention further provides a computer-readable storage medium. Computer program instructions are stored in the computer-readable storage medium, and when the computer program instructions are read and run by a processor, the steps of the method in any of the above aspects are executed.
The beneficial effects provided by the present invention are as follows:
The present invention provides an image processing method, a device and a storage medium. The image processing method identifies the pet face using a trained target detection network, and then locates the main parts of the pet face using a feature point detection algorithm, which improves the positioning accuracy for the pet face and its main parts while providing a lower background false detection rate and good generality, so that subsequent pet face beautification of the image to be processed, such as adding decorative images or replacing the background, achieves higher accuracy.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by implementing the embodiments of the present invention. The objectives and other advantages of the invention can be realized and obtained by the structures particularly pointed out in the description, the claims and the drawings.
Brief description of the drawings
In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and should therefore not be regarded as limiting its scope; those of ordinary skill in the art can also obtain other relevant drawings from these drawings without creative effort.
Fig. 1 is a schematic flow chart of an image processing method provided by the first embodiment of the invention;
Fig. 2 is a schematic flow chart of a preview video stream pet face identification step provided by the first embodiment of the invention;
Fig. 3 is a schematic flow chart of a main parts positioning step provided by the first embodiment of the invention;
Fig. 4 is a schematic flow chart of a decorative image adding step provided by the first embodiment of the invention;
Fig. 5 is a schematic flow chart of a background replacement step provided by the first embodiment of the invention;
Fig. 6 is a schematic flow chart of a pet face image confirmation step provided by the first embodiment of the invention;
Fig. 7 is a module diagram of an image processing device 100 provided by the second embodiment of the invention;
Fig. 8 is a structural block diagram of an electronic device 200 applicable to the embodiments of the present application, provided by the third embodiment of the invention.
Reference numerals: 100 - image processing device; 110 - obtaining module; 120 - locating module; 200 - electronic device; 201 - memory; 202 - storage controller; 203 - processor; 204 - peripheral interface; 205 - input-output unit; 206 - audio unit; 207 - display unit.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, rather than all of them. The components of the embodiments of the present invention, as generally described and illustrated in the drawings herein, can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present invention provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should also be noted that similar labels and letters indicate similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined and explained in subsequent drawings. Meanwhile, in the description of the present invention, the terms "first", "second" and the like are only used to distinguish descriptions and are not to be understood as indicating or implying relative importance.
First embodiment
Through the applicant's study, it was found that more and more mobile terminal applications based on face recognition and face beautification appear in daily life and social entertainment, but there is not yet a mature pet face recognition method or application that can accurately identify and locate pet faces. Existing methods cannot meet the demand for pet face recognition, cannot overcome the difficulty that moving, active subjects are hard to capture and snapshot during pet image shooting, cannot precisely determine the position of the pet face in the image, and cannot accurately fit custom props to the corresponding positions of the pet face. To solve the above problems, the first embodiment of the present invention provides an image processing method applied to a computer or other processing equipment.
Please refer to Fig. 1, which is a schematic flow chart of an image processing method provided by the first embodiment of the invention. The specific steps of the image processing method can be as follows:
Step S20: obtain an image to be processed, the image to be processed being an image containing a pet face.
The image to be processed can be a picture, a video or an image in another format. Meanwhile, considering the demand for standardization and uniformity in image processing, it may be necessary to process only images of specified types, while the user may specify images in formats that cannot be processed. Therefore the images specified by the user can be filtered, and an image specified by the user is taken as the image to be processed only when it is in a processable preset format.
Step S40: locate the pet face based on a target detection network, and locate the main parts of the pet face using a feature point detection algorithm, the main parts including at least one of the ears, eyes, nose, mouth and face contour.
The target detection network in this embodiment is a feature extraction model established based on a convolutional neural network. The target detection network can identify a target object in a picture and locate the target object after identifying it; the target object in this embodiment is the pet face.
Further, the target detection network in this embodiment can be obtained based on RCNN (Regions with CNN features), Fast-RCNN, Faster-RCNN, SPP-Net (Spatial Pyramid Pooling Network), YOLO or other object detection algorithms. Considering that the pet face needs to be detected in real time, the target detection network in this embodiment can select the YOLOv3 model: its identification and locating speed is fast, its background false detection rate is low, and it is versatile, which enhances the efficiency and accuracy of pet face identification and positioning.
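As an illustration of the locating step just described, the following sketch shows how the raw output of a YOLO-style detector might be filtered down to a single pet-face box. The detection record format, class id and confidence threshold are assumptions for illustration only, not details taken from the patent or from any particular YOLOv3 implementation:

```python
def locate_pet_face(detections, conf_threshold=0.5, pet_face_class=0):
    """Return the highest-confidence pet-face box (x, y, w, h), or None.

    `detections` is assumed to be a list of dicts with keys
    "class_id", "confidence" and "box", as a stand-in for a real
    detector's post-processed output.
    """
    candidates = [
        d for d in detections
        if d["class_id"] == pet_face_class and d["confidence"] >= conf_threshold
    ]
    if not candidates:
        return None
    best = max(candidates, key=lambda d: d["confidence"])
    return best["box"]


detections = [
    {"class_id": 0, "confidence": 0.92, "box": (40, 30, 120, 110)},
    {"class_id": 1, "confidence": 0.80, "box": (0, 0, 50, 50)},    # not a pet face
    {"class_id": 0, "confidence": 0.40, "box": (10, 10, 20, 20)},  # below threshold
]
print(locate_pet_face(detections))  # (40, 30, 120, 110)
```

In a real pipeline the candidate list would come from the trained network itself, typically after non-maximum suppression.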
In image processing, feature points refer to points where the image gray value changes sharply, or points with large curvature on image edges (i.e., intersections of two edges). Image feature points play a particularly important role in feature-point-based image matching algorithms: they reflect the essential characteristics of an image, can identify the target object in the image, and matching of feature points can complete the matching of images. Feature point detection is the most effective way to simplify the representation of high-dimensional image data: from the data matrix of an image we cannot directly see any information, so the key information in the image, some primary elements and their relations, must be extracted from these data in order to locate the main parts of the pet face more accurately. For example, if the obtained feature points are the left and right eye corners and the upper and lower eye socket vertices, the center point of these four points can be determined as the eye part of the pet face.
The feature point detection algorithm in this embodiment can use the Laplacian of Gaussian detection method (LoG), the method based on the Hessian matrix of pixels (second-order differential) and its determinant (DoH), the scale-invariant feature transform algorithm (SIFT), speeded-up robust features (SURF) or other feature point detection algorithms.
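Whichever detector (LoG, DoH, SIFT, SURF) supplies the raw feature points, the eye example above reduces part positioning to a centroid of those points. A minimal sketch of that reduction, with made-up coordinates; the detection itself is not shown:

```python
def part_center(points):
    """Centroid of a set of (x, y) feature points, used as the part position."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))


# Illustrative points: left/right eye corners plus upper/lower socket vertices.
eye_points = [(10.0, 20.0), (30.0, 20.0), (20.0, 14.0), (20.0, 26.0)]
print(part_center(eye_points))  # (20.0, 20.0)
```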
The image processing method provided by the embodiment of the present invention first uses the target detection network to identify and locate the pet face, and then accurately locates the main parts of the pet face through the feature point detection algorithm, completing accurate identification and positioning of the pet face and improving the accuracy of pet face positioning. At the same time, the parts most likely to be modified or decorated and with the most obvious features, such as the ears, eyes, nose, mouth and face contour, are chosen as the main parts, further improving the positioning accuracy for the pet face.
As an optional implementation, before step S20 is executed, the method starts the camera by default and shows the user the preview video stream of the camera's live view, while also showing selection prompts for local upload and network acquisition, so that the user can choose to acquire the image by immediate shooting, local reading or network acquisition.
Considering that this embodiment is based on a subject (a person) shooting/recording an object (a pet), the camera started for acquisition defaults to the photographic device connected to the processing equipment. It should be understood that the photographic device can be a camera, webcam, digital video camera or the like connected to the processing equipment by wire or wirelessly. Meanwhile, this embodiment also provides the user with a shot switching function, so that when the processing equipment is connected with multiple photographic devices, the photographic device used to obtain the image to be processed can be switched.
Further, in addition to the shooting parameters being adjusted automatically by the processing equipment and the application, this embodiment can also provide the user with a shooting adjustment function, so that when shooting images with the photographic device the user can adjust parameters such as aperture, exposure time and sensitivity, further improving the quality of the image while fully meeting the individual needs of the user.
When the user selects the network acquisition mode, the specified image is searched for and obtained from the network as the image to be processed according to the uniform resource locator entered by the user. The image corresponding to the uniform resource locator can be stored in a cloud database, a data server, or other processing equipment connected to this processing equipment via the network.
When the user needs to obtain an image by local upload, the image to be processed in this embodiment can be a stored image extracted, according to a storage address specified by the user, directly from the internal memory of the processing equipment or from an external memory connected to the processing equipment.
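The three acquisition modes described above (camera, local storage, uniform resource locator) can be sketched as a single dispatch function. The dictionary-based source description and its field names are illustrative assumptions, not an interface from the patent:

```python
import urllib.request


def load_image_bytes(source):
    """Obtain raw image bytes via one of the three acquisition modes (sketch)."""
    if source["mode"] == "camera":
        # `capture` stands in for a camera driver call returning a frame.
        return source["capture"]()
    if source["mode"] == "local":
        with open(source["path"], "rb") as f:
            return f.read()
    if source["mode"] == "url":
        with urllib.request.urlopen(source["url"]) as response:
            return response.read()
    raise ValueError("unknown acquisition mode")


print(load_image_bytes({"mode": "camera", "capture": lambda: b"frame"}))  # b'frame'
```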
As an optional implementation, when an image is obtained immediately by the camera, and considering that the user needs to shoot when the pet face appears in the camera's framing picture so that the image to be processed is guaranteed to contain a pet face, pet face identification can be performed on the camera's preview video stream when the image to be processed is obtained in step S20. Please refer to Fig. 2, which is a schematic flow chart of a preview video stream pet face identification step provided by the first embodiment of the invention. The step can specifically be as follows:
Step S21: obtain the preview video stream collected by the camera, and identify whether a pet face exists in the preview video stream based on the target detection network.
Step S22: if so, take an image frame of the preview video stream in which the pet face exists as the image to be processed.
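Steps S21 and S22 amount to scanning the preview stream until the detector fires. A hedged sketch, with the target detection network abstracted to a callable and frames represented by plain values:

```python
def first_pet_face_frame(frames, detect):
    """Return the first frame in which `detect` reports a pet face, else None.

    `frames` is any iterable of frames (a preview stream stand-in);
    `detect` stands in for the target detection network.
    """
    for frame in frames:
        if detect(frame):
            return frame
    return None


stream = ["empty park", "pet dog close-up", "grass"]
print(first_pet_face_frame(stream, lambda f: "pet" in f))  # pet dog close-up
```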
For step S40, as an optional implementation, in order to further improve the identification accuracy of the main parts, an identification box indicating the position of the pet face can be obtained through the target detection network, and the main parts can then be located by the feature point detection algorithm on the image inside the identification box. Please refer to Fig. 3, which is a schematic flow chart of a main parts positioning step provided by the first embodiment of the invention. The step can be as follows:
Step S41: obtain, based on the positioning result of the target detection network for the pet face, an identification box indicating the position of the pet face in the image to be processed.
Step S42: locate the main parts of the pet face within the identification box using the feature point detection algorithm.
It should be understood that the identification box of the pet face obtained in the above steps can be drawn and displayed, so that the user can accurately determine the position of the pet face in the image to be processed. Further, the identification box of the pet face can be displayed on the preview video stream picture of step S21, so that when shooting the user can quickly catch the right moment once the pet face is located at a suitable position of the shooting picture, improving the quality of the pet image.
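Restricting feature point detection to the identification box (steps S41 and S42) is essentially a crop of the image array. A sketch assuming an (x, y, w, h) box and a NumPy image; the box convention is an assumption for illustration:

```python
import numpy as np


def crop_identification_box(image, box):
    """Crop the region inside the identification box (x, y, w, h) so that
    the feature point detector only sees the pet face region."""
    x, y, w, h = box
    return image[y:y + h, x:x + w]


img = np.zeros((100, 100), dtype=np.uint8)
img[30:40, 20:35] = 255                      # pretend this bright patch is the pet face
roi = crop_identification_box(img, (20, 30, 15, 10))
print(roi.shape)  # (10, 15)
```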
After this embodiment completes the accurate positioning of the main parts of the pet face, the image can next be beautified based on the positioning results.
Please refer to Fig. 4, which is a schematic flow chart of a decorative image adding step provided by the first embodiment of the invention. The step can specifically be as follows:
Step S61: in response to a prop fitting instruction, perform translation, rotation and scaling operations on the decorative image selected by the user, based on the size of the identification box and the positioning result of the main parts.
Step S62: dynamically attach the translated, rotated and scaled decorative image to the corresponding position of the pet face, following the movement of the characteristic parts.
The decorative image in this embodiment can be a decorative image recommended by the system for the characteristic part, such as the ears, eyes, nose, mouth or face contour, selected by the user, or a decorative image chosen independently by the user. The decorative image in this embodiment can be a hat, glasses, earrings or the like matched to the pet face and its parts, or a scarf or the like matched to main parts near the pet face, such as the neck. It should be understood that, in order to enhance the user's independence and the richness of the decorative images, the decorative image can also be a specific decoration image drawn and saved by the user.
In the process of locating the pet face and its main parts and automatically fitting the decorative image onto the main parts, face or head of the pet, this embodiment uses a feature point detection algorithm based on the cascade network idea to locate the coordinate positions of the feature points on the pet face (including the five main parts of ears, eyes, nose, mouth and face contour as well as other pet face key points) with high precision, and adds the decorative image onto the pet face, so that high-precision decorative image addition is achieved and the matching degree between the decorative image and the pet face is higher.
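The translate/rotate/scale fitting of steps S61 and S62 can be modeled as a 2D similarity transform applied to the decorative image's anchor points. A sketch under stated assumptions: the scale-then-rotate-then-translate order and the parameter names are illustrative, not the patent's specification:

```python
import math


def place_sticker(points, scale, angle_deg, tx, ty):
    """Scale, rotate, then translate sticker anchor points so the sticker
    tracks the located part (2D similarity transform sketch)."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(scale * (c * x - s * y) + tx, scale * (s * x + c * y) + ty)
            for x, y in points]


# A unit anchor point scaled x2, rotated 90 degrees, moved to (5, 5):
placed = place_sticker([(1.0, 0.0)], scale=2.0, angle_deg=90.0, tx=5.0, ty=5.0)
print(placed)  # approximately [(5.0, 7.0)]
```

Re-running this transform each frame with the newly located part position is what makes the sticker "follow the movement of the characteristic parts".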
Please refer to Fig. 5, which is a schematic flow chart of a background replacement step provided by the first embodiment of the invention. The step can specifically be as follows:
Step S81: perform background segmentation on the pet face based on the main parts to obtain a pet face image of the pet face.
Step S82: fuse an updated background image with the pet face image to complete background replacement and obtain a target image, wherein the updated background image serves as the background of the target image and the pet face image serves as the foreground of the target image.
It should be noted that there are many methods for fusing two images, for example superimposing the RGB values of the pixels at corresponding positions of the two images.
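Superimposing pixel values at corresponding positions, as suggested above, can be sketched as mask-weighted blending of foreground and background. This is one simple assumption about how the fusion might be done; the patent does not fix a formula:

```python
import numpy as np


def replace_background(foreground, background, mask):
    """Compose the segmented pet-face foreground over a new background.
    `mask` is 1 where the pet face is, 0 elsewhere; all arrays are HxWx3
    except the HxW mask."""
    m = mask[..., None].astype(np.float32)
    return (foreground * m + background * (1.0 - m)).astype(np.uint8)


fg = np.full((2, 2, 3), 200, dtype=np.uint8)   # pet face pixels
bg = np.full((2, 2, 3), 10, dtype=np.uint8)    # updated background pixels
mask = np.array([[1, 0], [0, 1]], dtype=np.uint8)
out = replace_background(fg, bg, mask)
print(int(out[0, 0, 0]), int(out[0, 1, 0]))  # 200 10
```

A soft (fractional) mask would produce feathered edges instead of a hard cut.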
Further, before step S81, this embodiment can also confirm whether a background segmentation object, i.e., a pet face image, exists in the image to be processed. Please refer to Fig. 6, which is a schematic flow chart of a pet face image confirmation step provided by the first embodiment of the invention. The step can specifically be as follows:
Step S71: obtain the corresponding pet face parts based on the main parts, the pet face parts including the ear region, eye region, nose region, mouth region and face contour region.
Step S72: judge whether the image to be processed contains a complete pet face according to the pet face parts.
Step S73: if so, obtain the pet face image contained in the image to be processed according to the pet face parts, and take the pet face image as the object of background segmentation.
Before background segmentation, the embodiment of the present invention identifies, from the image to be processed, the pet face image on which image beautification is to be performed, and determines that the pet face image is complete, so that background segmentation can subsequently be performed on the pet face image, improving the accuracy of background segmentation.
As an optional implementation, in addition to controlling the camera to shoot according to a shooting instruction triggered by the user, the executing equipment in this embodiment can also, after judging that the angle, light and position of the pet face in the picture are suitable, control the camera in snapshot mode to take a snapshot and automatically obtain an image containing the pet face. The image acquisition step in the above snapshot mode can specifically be as follows: determine the position coordinates of the pet face in the image to be processed based on the positioning result of the target detection network, and determine the feature point coordinates of the pet face based on the positioning result of the main parts; calculate the angle of the pet face based on the feature point coordinates; and collect and save the current image when the light of the image to be processed, the position coordinates and the angle meet predetermined conditions.
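The snapshot condition (light, position and angle all within limits) and an angle computed from feature points might look like the following sketch. All thresholds, the eye-line roll angle, and the "central 60% of the frame" position test are illustrative assumptions, not values given by the patent:

```python
import math


def face_roll_angle(left_eye, right_eye):
    """Roll angle of the pet face, in degrees, from the line between the eyes."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))


def should_capture(brightness, center, angle, frame_size=(640, 480),
                   min_brightness=80, max_angle=15):
    """Snapshot-mode condition: light, position and angle all within limits."""
    w, h = frame_size
    in_frame = 0.2 * w < center[0] < 0.8 * w and 0.2 * h < center[1] < 0.8 * h
    return brightness >= min_brightness and in_frame and abs(angle) <= max_angle


print(face_roll_angle((100, 100), (200, 100)))  # 0.0
print(should_capture(120, (320, 240), 5.0))     # True
```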
Further, after the user triggers a shooting instruction, the camera can shoot multiple times in burst mode, to guarantee the acquisition of images of moving, active subjects.
The image processing method provided by this embodiment is also provided with a timing function: after the camera starts collecting the preview video stream, if no pet face is detected in the preview video stream within a preset time and the shutter is not pressed manually, a prompt signal is triggered to remind the user to perform the next operation.
As an optional implementation, when saving images, the image to be processed, the image after a decorative image is added, or the image after background segmentation can be saved layer by layer using a multi-layer image saving technique. The images of different layers can be extracted and processed individually. For example, the image to be processed is layer 1, the image after a scarf decorative image is added to it is layer 2, and the image after a hat decorative image is further added is layer 3; when the user needs to adjust the scarf decorative image, such as adding it again, only layer 2 needs to be modified, which improves the operating efficiency of adding decorative images.
It should be understood that the images saved layer by layer can also be saved in a cache, to improve the speed at which pictures are read when different decorative images are added, further improving the efficiency of decorative image addition and background segmentation.
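The layer-by-layer cache described above can be sketched as an ordered list of layers flattened on demand, so that re-editing one layer (the scarf in the example) leaves the others untouched. Representing each layer by a dictionary of "edits" is an illustrative stand-in for real image layers:

```python
def flatten_layers(layers):
    """Flatten (name, edits) layer pairs bottom-to-top into one result,
    later layers overriding earlier ones, like compositing image layers."""
    result = {}
    for _name, edits in layers:
        result.update(edits)
    return result


layers = [
    ("base",  {"background": "park"}),
    ("scarf", {"scarf": "red"}),
    ("hat",   {"hat": "blue"}),
]
layers[1] = ("scarf", {"scarf": "green"})  # adjust only the scarf layer
print(flatten_layers(layers))
# {'background': 'park', 'scarf': 'green', 'hat': 'blue'}
```

Because each layer is stored separately, changing the scarf never forces the base image or the hat layer to be re-edited.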
In this embodiment, when a local save instruction triggered by the user is received, the image in the cache corresponding to the local save instruction is saved to local storage.
Further, in addition to saving images to local storage, an operating space for picture management can also be provided, allowing the user to perform operations such as storage setting, browsing and querying on the pictures that have been shot and edited.
It should be understood that, in addition to adding decorative images to the pet face and replacing the background, the image processing method provided by this embodiment can also include other picture beautification functions such as adding filters and adding background ornaments.
The image processing method provided by the embodiment of the present invention identifies the pet face using a trained target detection network, and then locates the main parts of the pet face using a feature point detection algorithm, which improves the positioning accuracy for the pet face and its main parts while providing a lower background false detection rate and good generality, so that subsequent pet face beautification of the image to be processed, such as adding decorative images or replacing the background, achieves higher accuracy.
Second embodiment
To cooperate with the image processing method provided by the first embodiment of the invention, the second embodiment of the invention further provides an image processing device 100.
Please refer to Fig. 7, which is a module diagram of an image processing device 100 provided by the second embodiment of the invention. The image processing device 100 includes an obtaining module 110 and a locating module 120.
The obtaining module 110 is used to obtain an image to be processed, the image to be processed being an image containing a pet face. The locating module 120 is used to locate the pet face based on a target detection network and to locate the main parts of the pet face using a feature point detection algorithm, the main parts including at least one of the ears, eyes, nose, mouth and face contour.
As an optional implementation, the image processing apparatus 100 provided in this embodiment may further include a decorative image adding module. The decorative image adding module is configured to respond to a prop fitting instruction by translating, rotating and scaling a decorative image selected by the user based on the size of the identification frame and the positioning results of the main parts, and to attach the translated, rotated and scaled decorative image to the corresponding position of the pet face so that it dynamically follows the movement of the characteristic parts.
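One way to derive the translation, rotation and scaling for such a decorative image is a similarity transform computed from two located feature points, for example the eyes. The sketch below illustrates this under stated assumptions: the point names and the `sticker_eye_dist` parameter (the eye distance for which the sticker asset was designed) are hypothetical, not taken from the patent.

```python
# Sketch: derive scale, rotation and translation anchor for a decorative
# image from the two eye keypoints, so the sticker follows the pet face as
# the feature points move from frame to frame.
import math

def sticker_transform(left_eye, right_eye, sticker_eye_dist=100.0):
    """Return (scale, angle_rad, center) mapping the sticker onto the face.

    sticker_eye_dist is the inter-eye distance the sticker asset was drawn
    for; the detected inter-eye distance divided by it gives the zoom factor.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    eye_dist = math.hypot(dx, dy)
    scale = eye_dist / sticker_eye_dist        # zoom operation
    angle = math.atan2(dy, dx)                 # rotation operation
    center = ((left_eye[0] + right_eye[0]) / 2,
              (left_eye[1] + right_eye[1]) / 2)  # translation anchor
    return scale, angle, center

# usage: eyes level and 100 px apart -> identity scale, no rotation
s, a, c = sticker_transform((0.0, 0.0), (100.0, 0.0))
```

Recomputing this transform on every frame, from each frame's freshly detected keypoints, is what makes the decoration "dynamically adhere" to the moving face rather than staying fixed in image coordinates.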
As an optional implementation, the image processing apparatus 100 provided in this embodiment may further include a background replacement module, which includes:
a background segmentation unit, configured to perform background segmentation on the pet face based on the main parts, to obtain a pet face image of the pet face;
a background replacement unit, configured to fuse an updated background image with the pet face image to complete the background replacement and obtain a target image, wherein the updated background image serves as the background of the target image, and the pet face image serves as the foreground of the target image.
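The fusion performed by the background replacement unit can be illustrated as per-pixel compositing of the segmented foreground over the updated background. This is a sketch under assumptions: the hard 0/1 mask is presumed to come from the segmentation step (a soft alpha mask would blend edges more smoothly), and plain nested lists stand in for real image arrays.

```python
# Sketch of the background replacement fusion:
#   target = mask * foreground + (1 - mask) * background
# `mask` is a hypothetical hard 0/1 segmentation mask of the pet face.

def replace_background(foreground, mask, background):
    """Compose the pet face (foreground) over the updated background image."""
    return [
        [fg if m else bg
         for fg, m, bg in zip(fg_row, m_row, bg_row)]
        for fg_row, m_row, bg_row in zip(foreground, mask, background)
    ]

# usage: 2x2 toy images, foreground pixels = 1, background pixels = 9
fg = [[1, 1], [1, 1]]
mask = [[1, 0], [0, 1]]
bg = [[9, 9], [9, 9]]
target = replace_background(fg, mask, bg)  # → [[1, 9], [9, 1]]
```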
It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method, and will not be repeated here.
Third embodiment
Please refer to Fig. 8, which is a structural block diagram of an electronic device 200 applicable to the embodiments of the present application, provided by the third embodiment of the present invention. The electronic device 200 provided in this embodiment may include the image processing apparatus 100, a memory 201, a storage controller 202, a processor 203, a peripheral interface 204, an input-output unit 205, an audio unit 206 and a display unit 207.
The memory 201, the storage controller 202, the processor 203, the peripheral interface 204, the input-output unit 205, the audio unit 206 and the display unit 207 are electrically connected to one another, directly or indirectly, to realize the transmission or interaction of data. For example, these elements may be electrically connected to one another through one or more communication buses or signal lines. The image processing apparatus 100 includes at least one software function module that may be stored in the memory 201 in the form of software or firmware, or solidified in the operating system (OS) of the image processing apparatus 100. The processor 203 is configured to execute the executable modules stored in the memory 201, such as the software function modules or computer programs included in the image processing apparatus 100.
The memory 201 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), and the like. The memory 201 is configured to store a program, and the processor 203 executes the program after receiving an execution instruction. The method performed by the server defined by the flow disclosed in any of the foregoing embodiments of the present invention may be applied to the processor 203, or implemented by the processor 203.
The processor 203 may be an integrated circuit chip with signal processing capability. The processor 203 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP) and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. It may implement or execute the methods, steps and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor 203 may be any conventional processor or the like.
The peripheral interface 204 couples various input/output devices to the processor 203 and the memory 201. In some embodiments, the peripheral interface 204, the processor 203 and the storage controller 202 may be implemented in a single chip. In other examples, they may each be implemented by an independent chip.
The input-output unit 205 is configured to provide input data to the user, to realize interaction between the user and the server (or local terminal). The input-output unit 205 may be, but is not limited to, a mouse, a keyboard and the like.
The audio unit 206 provides an audio interface to the user, and may include one or more microphones, one or more loudspeakers and an audio circuit.
The display unit 207 provides an interactive interface (such as a user operation interface) between the electronic device 200 and the user, or displays image data for the user's reference. In this embodiment, the display unit 207 may be a liquid crystal display or a touch display. If it is a touch display, it may be a capacitive touch screen or a resistive touch screen supporting single-point and multi-point touch operations. Supporting single-point and multi-point touch operations means that the touch display can sense touch operations generated simultaneously at one or more positions on the touch display, and hand the sensed touch operations over to the processor 203 for calculation and processing.
It can be understood that the structure shown in Fig. 8 is merely illustrative; the electronic device 200 may include more or fewer components than those shown in Fig. 8, or have a configuration different from that shown in Fig. 8. Each component shown in Fig. 8 may be implemented in hardware, software or a combination thereof.
It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working process of the device described above may refer to the corresponding process in the foregoing method, and will not be repeated here.
In conclusion, the embodiments of the present invention provide an image processing method, an apparatus and a storage medium thereof. The image processing method identifies the pet face using a trained target detection network, and then positions the main parts of the pet face using a feature point detection algorithm. This improves the positioning accuracy of the pet face and its main parts while maintaining a low background false-detection rate and good versatility, so that subsequent pet face beautification operations on the image to be processed, such as decorative image addition and background replacement, achieve higher accuracy.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may also be implemented in other ways. The apparatus embodiments described above are merely exemplary. For example, the flowcharts and block diagrams in the accompanying drawings show the possible architectures, functions and operations of the apparatus, method and computer program product according to multiple embodiments of the present invention. In this regard, each box in a flowchart or block diagram may represent a module, a program segment or a part of code, and the module, program segment or part of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the accompanying drawings. For example, two consecutive boxes may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified function or action, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist independently, or two or more modules may be integrated to form an independent part.
If the function is realized in the form of a software function module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the steps of the method described in the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention. It should also be noted that similar labels and letters indicate similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined and explained in subsequent drawings.
The above description is merely a specific embodiment, but the protection scope of the present invention is not limited thereto. Any change or replacement that can easily be conceived by those familiar with the technical field, within the technical scope disclosed by the present invention, shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements, but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of more restrictions, an element defined by the sentence "including a ..." does not exclude the existence of other identical elements in the process, method, article or device including that element.
Claims (10)
1. An image processing method, characterized in that the image processing method includes:
obtaining an image to be processed, the image to be processed being an image containing a pet face;
positioning the pet face based on a target detection network, and positioning the main parts of the pet face using a feature point detection algorithm, the main parts including at least one of the ears, eyes, nose, mouth and face contour.
2. The image processing method according to claim 1, characterized in that obtaining the image to be processed includes:
acquiring the image to be processed through a camera; or
reading the image to be processed from local storage; or
obtaining the image to be processed over a network through a uniform resource locator.
3. The image processing method according to claim 2, characterized in that acquiring the image to be processed through a camera includes:
obtaining a preview video stream acquired by the camera, and recognizing, based on the target detection network, whether a pet face exists in the preview video stream;
if so, taking an image frame in the preview video stream in which the pet face exists as the image to be processed.
4. The image processing method according to claim 1, characterized in that the image processing method further includes:
obtaining, based on the positioning result of the target detection network for the pet face, an identification frame indicating the position of the pet face in the image to be processed;
and in that positioning the main parts of the pet face using a feature point detection algorithm includes:
positioning the main parts of the pet face within the identification frame using the feature point detection algorithm.
5. The image processing method according to claim 4, characterized in that the image processing method further includes:
responding to a prop fitting instruction by translating, rotating and scaling a decorative image selected by the user based on the size of the identification frame and the positioning results of the main parts, and attaching the translated, rotated and scaled decorative image to the corresponding position of the pet face so that it dynamically follows the movement of the characteristic parts.
6. The image processing method according to any one of claims 1-5, characterized in that the image processing method further includes:
storing the image to be processed and the image component layer after decorative image addition in a cache using a multi-layer image saving technique.
7. The image processing method according to claim 2, characterized in that the image processing method further includes:
determining the position coordinates of the pet face in the image to be processed based on the positioning result of the target detection network, and determining the feature point coordinates of the pet face based on the positioning results of the main parts;
calculating the angle of the pet face based on the feature point coordinates;
acquiring and saving the current image when the lighting of the image to be processed, the position coordinates and the angle meet predetermined conditions.
8. The image processing method according to claim 1, characterized in that the image processing method further includes:
performing background segmentation on the pet face based on the main parts, to obtain a pet face image of the pet face;
fusing an updated background image with the pet face image to complete the background replacement and obtain a target image, wherein the updated background image serves as the background of the target image, and the pet face image serves as the foreground of the target image.
9. An image processing apparatus, characterized in that the image processing apparatus includes:
an obtaining module, configured to obtain an image to be processed, the image to be processed being an image containing a pet face;
a locating module, configured to position the pet face based on a target detection network, and to position the main parts of the pet face using a feature point detection algorithm, the main parts including at least one of the ears, eyes, nose, mouth and face contour.
10. A computer-readable storage medium, characterized in that computer program instructions are stored in the computer-readable storage medium, and when the computer program instructions are read and run by a processor, the steps of the method according to any one of claims 1-8 are performed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811323983.7A CN109274891B (en) | 2018-11-07 | 2018-11-07 | Image processing method, device and storage medium thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109274891A true CN109274891A (en) | 2019-01-25 |
CN109274891B CN109274891B (en) | 2021-06-22 |
Family
ID=65191641
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811323983.7A Active CN109274891B (en) | 2018-11-07 | 2018-11-07 | Image processing method, device and storage medium thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109274891B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109886145A (en) * | 2019-01-29 | 2019-06-14 | 浙江泽曦科技有限公司 | Pet recognition algorithms and system |
CN111325132A (en) * | 2020-02-17 | 2020-06-23 | 深圳龙安电力科技有限公司 | Intelligent monitoring system |
CN111589132A (en) * | 2020-04-26 | 2020-08-28 | 腾讯科技(深圳)有限公司 | Virtual item display method, computer equipment and storage medium |
CN111767914A (en) * | 2019-04-01 | 2020-10-13 | 佳能株式会社 | Target object detection device and method, image processing system, and storage medium |
CN113469041A (en) * | 2021-06-30 | 2021-10-01 | 北京市商汤科技开发有限公司 | Image processing method and device, computer equipment and storage medium |
CN113469914A (en) * | 2021-07-08 | 2021-10-01 | 网易(杭州)网络有限公司 | Animal face beautifying method and device, storage medium and electronic equipment |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101377815A (en) * | 2007-08-31 | 2009-03-04 | 卡西欧计算机株式会社 | Image pick-up apparatus having a function of recognizing a face and method of controlling the apparatus |
JP2011019013A (en) * | 2009-07-07 | 2011-01-27 | Ricoh Co Ltd | Imaging apparatus, area detection method, and program |
US20110090359A1 (en) * | 2009-10-20 | 2011-04-21 | Canon Kabushiki Kaisha | Image recognition apparatus, processing method thereof, and computer-readable storage medium |
CN102413282A (en) * | 2011-10-26 | 2012-04-11 | 惠州Tcl移动通信有限公司 | Self-shooting guidance method and equipment |
CN104081757A (en) * | 2012-02-06 | 2014-10-01 | 索尼公司 | Image processing apparatus, image processing method, program, and recording medium |
CN106577350A (en) * | 2016-11-22 | 2017-04-26 | 深圳市沃特沃德股份有限公司 | Method and device for recognizing pet type |
CN107437051A (en) * | 2016-05-26 | 2017-12-05 | 上海市公安局刑事侦查总队 | Image processing method and device |
CN107800964A (en) * | 2017-10-26 | 2018-03-13 | 武汉大学 | It is a kind of that method of the face automatic detection with capturing is realized based on dual camera |
Also Published As
Publication number | Publication date |
---|---|
CN109274891B (en) | 2021-06-22 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |