CN104573721A - Image processing method

Image processing method

Info

Publication number
CN104573721A
Authority
CN
China
Prior art keywords
image
target person
image processing
processing section
portrait
Prior art date
Legal status
Pending
Application number
CN201410855305.0A
Other languages
Chinese (zh)
Inventor
Qu Ying (瞿颖)
Current Assignee
Shenzhen Jinli Communication Equipment Co Ltd
Original Assignee
Shenzhen Jinli Communication Equipment Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Jinli Communication Equipment Co Ltd filed Critical Shenzhen Jinli Communication Equipment Co Ltd
Priority to CN201410855305.0A
Publication of CN104573721A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06T 3/04
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/048 Indexing scheme relating to G06F3/048
    • G06F 2203/04806 Zoom, i.e. interaction techniques or interactors for controlling the zooming operation

Abstract

The embodiment of the invention discloses an image processing method. The method comprises: acquiring a target person image of a user in real time; extracting portrait feature information corresponding to a target processing part of the target person image; determining an image processing material that matches the portrait feature information; and displaying the image processing material at the target processing part of the target person image. The image processing method can improve the efficiency and intelligence of image processing.

Description

Image processing method
Technical field
The present invention relates to the field of image processing, and in particular to an image processing method.
Background art
With the development of terminal technology, cameras have become standard equipment on most terminal devices, providing users with video recording, photographing and other functions and bringing great enjoyment and convenience to people's life and work. Camera resolutions keep increasing, with the aim of letting users capture higher-quality, more attractive photos, patterns or videos.
To make captured photos or videos achieve a certain effect, many image processing techniques can now post-process them. However, the post-processing cycle is often long, and when the result is unsatisfactory it is usually difficult to return to the original time and place to reshoot, which causes inconvenience and regret for the user and results in a poor user experience.
Summary of the invention
The embodiments of the present invention provide an image processing method, so that image processing materials can be recommended to the user during shooting according to the portrait feature information in the target person image, improving the efficiency and intelligence of image processing.
An embodiment of the present invention provides an image processing method, and the method may comprise:
acquiring a target person image of a user in real time;
extracting portrait feature information corresponding to a target processing part in the target person image;
determining an image processing material that matches the portrait feature information;
displaying the image processing material at the target processing part in the target person image.
The embodiments of the present invention extract the portrait feature information corresponding to the target processing part in the target person image acquired in real time, determine the image processing material that matches the portrait feature information, and then display the image processing material at the target processing part in the target person image. In this way, image processing materials are recommended to the user during shooting according to the portrait feature information in the target person image, which improves the efficiency and intelligence of image processing.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of an image processing method in another embodiment of the present invention;
Fig. 3 is a schematic flowchart of an image processing method in a further embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a terminal provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an embodiment of the material determining unit in the embodiment shown in Fig. 4;
Fig. 6 is a schematic structural diagram of another terminal provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Application scenarios of the embodiments of the present invention include, but are not limited to, image processing performed when a user takes photos with a terminal device in order to achieve a certain photographic effect, or, when the user uses the front-facing camera of the terminal as a makeup aid, recommending makeup for the person image acquired by the camera in real time and displaying the recommended makeup at the corresponding position of the person image.
Referring to Fig. 1, which is a schematic flowchart of an image processing method provided by an embodiment of the present invention, the image processing method described in this embodiment comprises:
S101: acquire a target person image of a user in real time.
Specifically, the terminal may receive an image processing request triggered by the user and identify, in real time, the person images within the camera's capture range. If there is only one person image within the capture range, that person image is determined as the target person image; if there are at least two, the person image selected by the user is determined as the target person image. Optionally, after identifying the target person image within the capture range in real time, the terminal may obtain a zoom-in instruction from the user for the target person image and enlarge it. Specifically, when there are at least two person images within the capture range, the terminal receives the user's selection information for the chosen person image and determines it as the target person image. The terminal enlarges the target person image so that the enlarged image occupies a larger region of the display interface, and acquires and displays the target person image in real time throughout the image processing process.
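As a rough illustration of step S101, the sketch below detects person (face) regions in a camera frame, picks a target and enlarges it. The use of OpenCV's Haar cascade detector, the helper names and the index-based user selection are assumptions made only for illustration and are not prescribed by the patent.

```python
import cv2

# Hypothetical sketch of S101: detect person (face) regions in a live frame
# and pick the target one. The detector and the selection rule are
# illustrative assumptions only.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def select_target_face(faces, user_choice=None):
    """One face found: it becomes the target. Several faces: the user's
    choice (an index supplied by the UI) decides."""
    if len(faces) == 0:
        return None
    if len(faces) == 1 or user_choice is None:
        return faces[0]
    return faces[user_choice]

def zoom_to_target(frame_bgr, face, out_size=(480, 640)):
    # Enlarge the selected region so it occupies most of the display interface.
    x, y, w, h = face
    crop = frame_bgr[y:y + h, x:x + w]
    return cv2.resize(crop, out_size, interpolation=cv2.INTER_LINEAR)
```

In a real terminal the selection index would come from a tap on the preview, and the enlarged crop would be rendered back to the live display interface frame by frame.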
S102: extract portrait feature information corresponding to a target processing part in the target person image.
The portrait feature information includes, but is not limited to, skin color information, face shape information, hairstyle information or clothing color information. Specifically, the terminal may determine, according to a preset correspondence between processing parts and portrait feature information types, the target portrait feature information type corresponding to the target processing part, and extract from the target person image the portrait feature information of the target feature information type. For example, if the target processing part is the eyebrow area, the terminal queries the preset correspondence between processing parts and portrait feature information types, finds that the portrait feature information type corresponding to the eyebrow area is the angle between lines connecting fixed points of the face shape, and extracts the value of that angle from the target person image as the portrait feature information corresponding to the target feature information type.
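A minimal sketch of how such a preset correspondence and the face-shape angle extraction might look in code; the table contents, the landmark coordinates and the helper names are illustrative assumptions, not the patent's specification.

```python
import math

# Hypothetical preset correspondence between processing parts and the portrait
# feature type that should be extracted for each of them (S102).
PART_TO_FEATURE_TYPE = {
    "eyebrow": "face_shape_angle",
    "eye_shadow": "clothing_color",
    "blush": "skin_color",
    "bangs": "face_shape_angle",
}

def face_shape_angle(chin, left_eye_corner, right_eye_corner):
    """Angle (degrees) at the chin between the lines to the two outer eye
    corners -- one possible 'angle between fixed points of the face shape'."""
    def vec(a, b):
        return (b[0] - a[0], b[1] - a[1])
    v1, v2 = vec(chin, left_eye_corner), vec(chin, right_eye_corner)
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

# Example: landmark coordinates would normally come from a face landmark detector.
print(face_shape_angle((100, 200), (60, 120), (140, 120)))  # ~53 degrees
```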
The target processing part includes, but is not limited to, the eyebrow area, blush area, eye shadow area, lip area, bangs area or eyelash area. Optionally, before step S102, the method may further comprise receiving a local processing request for the target processing part in the target person image, determining the portrait feature information type corresponding to the target processing part according to the local processing request, and then extracting from the target person image the portrait feature information of the target feature information type. Optionally, the processing parts may also be processed one by one in a preset processing order.
S103: determine an image processing material that matches the portrait feature information.
The image processing material includes, but is not limited to, eyebrow material, blush material, eye shadow material, lipstick material, bangs material or eyelash material. In a specific implementation, the terminal may determine the portrait feature classification that the portrait feature information matches and obtain the image processing material corresponding to that portrait feature classification. For example, if the target processing part in step S102 is the eye shadow area and the extracted portrait feature information is the RGB value of the clothing color, where R=153, G=510, B=0, the matching portrait feature classification is red-family clothing color, and the image processing material corresponding to red-family clothing color is an earth-tone eye shadow.
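The classification-and-lookup step can be sketched as below, assuming a simple dominant-channel rule for the clothing color classification and a hypothetical classification-to-material table; both are illustrative assumptions rather than the patent's rules.

```python
# Hypothetical sketch of S103: classify the extracted feature into a portrait
# feature classification, then look up the matching materials.
MATERIALS_BY_CLASSIFICATION = {
    "red_family_clothing": ["earth_tone_eye_shadow"],
    "blue_family_clothing": ["cool_tone_eye_shadow"],
}

def classify_clothing_color(r, g, b):
    # Very rough dominant-channel rule, just to make the data flow concrete.
    if r >= g and r >= b:
        return "red_family_clothing"
    if b >= r and b >= g:
        return "blue_family_clothing"
    return "neutral_clothing"

def matching_materials(r, g, b):
    classification = classify_clothing_color(r, g, b)
    return MATERIALS_BY_CLASSIFICATION.get(classification, [])

print(matching_materials(200, 40, 30))  # ['earth_tone_eye_shadow']
```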
Optionally, if only one image processing material corresponds to the portrait feature classification, the terminal obtains that image processing material; if at least two image processing materials correspond to the portrait feature classification, the terminal obtains the image processing material with the highest priority according to the preset priority levels of the image processing materials corresponding to the portrait feature classification.
S104: display the image processing material at the target processing part in the target person image.
Specifically, the terminal displays the image processing material obtained in step S103 at the position of the target processing part in the target person image. For example, if the image processing material is an eyebrow material, the eyebrow material obtained in step S103 is displayed at the position corresponding to the eyebrow area in the target person image.
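A minimal sketch of displaying a material at the target processing part, assuming the material is an RGBA image pasted onto the person image with alpha blending; the data layout and the coordinates are assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch of S104: paste a material image (with an alpha channel)
# onto the target processing part of the person image.
def overlay_material(person_rgb, material_rgba, top_left):
    out = person_rgb.copy()
    h, w = material_rgba.shape[:2]
    y, x = top_left
    alpha = material_rgba[:, :, 3:4].astype(np.float32) / 255.0
    region = out[y:y + h, x:x + w].astype(np.float32)
    blended = alpha * material_rgba[:, :, :3].astype(np.float32) + (1 - alpha) * region
    out[y:y + h, x:x + w] = blended.astype(np.uint8)
    return out

# Usage sketch: a 20x60 eyebrow material placed at an assumed eyebrow position.
person = np.zeros((480, 360, 3), dtype=np.uint8)
eyebrow = np.full((20, 60, 4), 128, dtype=np.uint8)
result = overlay_material(person, eyebrow, top_left=(150, 120))
```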
Optionally, after step S104, the method may further comprise:
receiving a size change request for the image processing material in the target person image, and changing the size of the image processing material according to the size change request.
Specifically, the size change request may carry size ratio change information for the image processing material, and the terminal may change the display size of the image processing material at the target processing part in the target person image according to the size ratio change information.
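A small sketch of this optional resizing step, assuming the size change request is delivered as a dictionary carrying a scale ratio; the request format is an illustrative assumption.

```python
import cv2

# Hypothetical sketch: scale the displayed material by the ratio carried in
# the size change request.
def resize_material(material_rgba, size_change_request):
    ratio = float(size_change_request.get("scale_ratio", 1.0))
    h, w = material_rgba.shape[:2]
    new_size = (max(1, int(w * ratio)), max(1, int(h * ratio)))
    return cv2.resize(material_rgba, new_size, interpolation=cv2.INTER_LINEAR)

# e.g. a pinch gesture reported as a 1.3x enlargement:
# bigger = resize_material(material, {"scale_ratio": 1.3})
```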
The embodiments of the present invention extract the portrait feature information corresponding to the target processing part in the target person image acquired in real time, determine the image processing material that matches the portrait feature information, and then display the image processing material at the target processing part in the target person image. In this way, image processing materials are recommended to the user during shooting according to the portrait feature information in the target person image, which improves the efficiency and intelligence of image processing.
Referring to Fig. 2, which is a schematic flowchart of an image processing method in another embodiment of the present invention, the image processing method described in this embodiment comprises:
S201: acquire a target person image of a user in real time.
Specifically, the terminal receives an image processing request triggered by the user and identifies, in real time, the person images within the camera's capture range. If there is only one person image within the capture range, that person image is determined as the target person image; if there are at least two, the person image selected by the user is determined as the target person image. Optionally, after the person image selected by the user has been determined as the target person image, the terminal may enlarge the target person image. Specifically, when there are at least two person images within the capture range, the terminal receives the user's selection information for the chosen person image and determines it as the target person image. The terminal enlarges the target person image so that the enlarged image occupies a larger region of the display interface, and acquires and displays the target person image in real time throughout the image processing process.
S202: extract portrait feature information corresponding to a target processing part in the target person image.
The portrait feature information includes, but is not limited to, skin color information, face shape information, hairstyle information or clothing color information. Specifically, according to a preset correspondence between processing parts and portrait feature information types, the terminal determines the target portrait feature information type corresponding to the target processing part and extracts from the target person image the portrait feature information of the target feature information type. For example, if the target processing part is the eyebrow area, the terminal queries the preset correspondence between processing parts and portrait feature information types, finds that the portrait feature information type corresponding to the eyebrow area is the angle between lines connecting fixed points of the face shape, and extracts the value of that angle from the target person image as the portrait feature information corresponding to the target feature information type.
The target processing part includes, but is not limited to, the eyebrow area, blush area, eye shadow area, lip area or eyelash area. Optionally, before step S202, the method may further comprise receiving a local processing request for the target processing part in the target person image, determining the portrait feature information type corresponding to the target processing part according to the local processing request, and then extracting from the target person image the portrait feature information of the target feature information type. Optionally, the processing parts may also be processed one by one in a preset processing order.
S203: determine an image processing material that matches the portrait feature information.
The image processing material includes, but is not limited to, eyebrow material, blush material, eye shadow material, lipstick material or eyelash material. In a specific implementation, the terminal may determine the portrait feature classification that the portrait feature information matches and obtain the image processing material corresponding to that portrait feature classification. For example, if the target processing part in step S202 is the eye shadow area and the extracted portrait feature information is the RGB value of the clothing color, where R=153, G=510, B=0, the matching portrait feature classification is red-family clothing color, and the image processing material corresponding to red-family clothing color is an earth-tone eye shadow.
S204: if at least two image processing materials correspond to the portrait feature classification, display the at least two image processing materials in descending order of priority in a region outside the target person image, according to the preset priority levels of the image processing materials corresponding to the portrait feature classification.
Specifically, if at least two image processing materials correspond to the portrait feature classification obtained in step S203, the terminal displays the at least two image processing materials, in descending order of priority, in a region outside the target person image on the display interface showing the target person image, according to the priority levels established in advance for the image processing materials corresponding to the portrait feature classification. The priority level of an image processing material corresponding to the portrait feature classification indicates how strongly that material is recommended for the classification. For example, if the target processing part is the bangs area and the portrait feature classification is a round face, the image processing materials corresponding to a round face may include long side-swept bangs, wispy side-swept bangs, blunt bangs, wispy blunt bangs, heart-shaped bangs and ultra-short bangs; according to the preset priority, the terminal displays these image processing materials as a list in a region outside the target person image, in the order long side-swept bangs, wispy side-swept bangs, heart-shaped bangs, wispy blunt bangs, ultra-short bangs, blunt bangs.
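A minimal sketch of the priority-ordered listing in S204; the priority table and the material names are illustrative assumptions chosen to mirror the bangs example above.

```python
# Hypothetical preset priority levels for the materials matching a round face.
PRIORITY = {
    "long_side_swept_bangs": 6,
    "wispy_side_swept_bangs": 5,
    "heart_shaped_bangs": 4,
    "wispy_blunt_bangs": 3,
    "ultra_short_bangs": 2,
    "blunt_bangs": 1,
}

def recommended_order(candidates):
    # Sort candidates from highest to lowest priority for display outside
    # the target person image.
    return sorted(candidates, key=lambda name: PRIORITY.get(name, 0), reverse=True)

round_face_candidates = [
    "blunt_bangs", "long_side_swept_bangs", "heart_shaped_bangs",
    "ultra_short_bangs", "wispy_blunt_bangs", "wispy_side_swept_bangs",
]
print(recommended_order(round_face_candidates))
# ['long_side_swept_bangs', 'wispy_side_swept_bangs', 'heart_shaped_bangs',
#  'wispy_blunt_bangs', 'ultra_short_bangs', 'blunt_bangs']
```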
S205: display, at the target processing part in the target person image, the image processing material selected by the user.
Specifically, the user may select a target image processing material from the at least two image processing materials displayed in step S204. The terminal receives the user's selection information for the target image processing material among the at least two image processing materials, deletes the image processing material currently displayed at the target processing part in the person image, and displays the selected target image processing material at the target processing part in the person image.
The embodiments of the present invention extract the portrait feature information corresponding to the target processing part in the target person image acquired in real time, determine the image processing material that matches the portrait feature information, and then display the image processing material at the target processing part in the target person image. In this way, image processing materials are recommended to the user during shooting according to the portrait feature information in the target person image, which improves the efficiency and intelligence of image processing.
Referring to Fig. 3, which is a schematic flowchart of an image processing method in a further embodiment of the present invention, the image processing method described in this embodiment comprises:
S301: acquire a target person image of a user in real time.
Specifically, the terminal may receive an image processing request triggered by the user and identify, in real time, the person images within the camera's capture range. If there is only one person image within the capture range, that person image is determined as the target person image; if there are at least two, the person image selected by the user is determined as the target person image. Optionally, after the person image selected by the user has been determined as the target person image, the target person image may be enlarged. Specifically, when there are at least two person images within the capture range, the terminal receives the user's selection information for the chosen person image and determines it as the target person image. The target person image is enlarged so that the enlarged image occupies a larger region of the display interface, and the target person image of the user is acquired and displayed in real time throughout the image processing process.
S302: extract portrait feature information corresponding to a target processing part in the target person image.
The portrait feature information includes, but is not limited to, skin color information, face shape information, hairstyle information or clothing color information. Specifically, according to a preset correspondence between processing parts and portrait feature information types, the terminal may determine the target portrait feature information type corresponding to the target processing part and extract from the target person image the portrait feature information of the target feature information type. For example, if the target processing part is the eyebrow area, the terminal queries the preset correspondence between processing parts and portrait feature information types, finds that the portrait feature information type corresponding to the eyebrow area is the angle between lines connecting fixed points of the face shape, and extracts the value of that angle from the target person image as the portrait feature information corresponding to the target feature information type.
The target processing part includes, but is not limited to, the eyebrow area, blush area, eye shadow area, lip area, bangs area or eyelash area. Optionally, before step S302, the method may further comprise receiving a local processing request for the target processing part in the target person image, determining the portrait feature information type corresponding to the target processing part according to the local processing request, and then extracting from the target person image the portrait feature information of the target feature information type. Optionally, the processing parts may also be processed one by one in a preset processing order.
S303: determine an image processing material that matches the portrait feature information.
The image processing material includes, but is not limited to, eyebrow material, blush material, eye shadow material, lipstick material, bangs material or eyelash material. In a specific implementation, the terminal determines the portrait feature classification that the portrait feature information matches and obtains the image processing material corresponding to that portrait feature classification. For example, if the target processing part in step S302 is the eye shadow area and the portrait feature information extracted by the terminal is the RGB value of the clothing color, where R=153, G=510, B=0, the matching portrait feature classification is red-family clothing color, and the image processing material corresponding to red-family clothing color is an earth-tone eye shadow.
Optionally, if only one image processing material corresponds to the portrait feature classification, that image processing material is obtained; if at least two image processing materials correspond to the portrait feature classification, the image processing material with the highest priority is obtained according to the preset priority levels of the image processing materials corresponding to the portrait feature classification.
S304: display the image processing material at the target processing part in the target person image.
Specifically, the terminal displays the image processing material obtained in step S303 at the position of the target processing part in the target person image. For example, if the image processing material is an eyebrow material, the eyebrow material obtained in step S303 is displayed at the position corresponding to the eyebrow area in the target person image.
S305: enlarge the local region to which the image processing material and the target processing part belong.
Specifically, the terminal obtains a zoom-in instruction from the user for the local region to which the image processing material and the target processing part belong, and enlarges the display of the local region to which the image processing material obtained in step S304 and the target processing part belong, so that this local region occupies a larger area on the display interface. The user may input the zoom-in instruction for this local region by voice, so that when both hands are occupied applying makeup and touch input is inconvenient, the zoom-in instruction for the local region can still be given by voice. The local region to which the target processing part belongs may be a circular region centered on the target processing part with a radius equal to half of the longest dimension of the target processing part, or the smallest rectangle that fully contains the target processing part.
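A minimal sketch of computing the local region described above (a circle whose radius is half the longest dimension of the part, or the minimal bounding rectangle) and enlarging it; representing the part as a set of landmark points is an assumption made only for illustration.

```python
import numpy as np

# Hypothetical sketch of S305: compute the local region around the target
# processing part and enlarge it for display.
def circular_region(points):
    pts = np.asarray(points, dtype=np.float32)
    center = pts.mean(axis=0)
    longest = 0.0
    for p in pts:
        for q in pts:
            longest = max(longest, float(np.linalg.norm(p - q)))
    return center, longest / 2.0  # center and radius

def bounding_rectangle(points):
    pts = np.asarray(points, dtype=np.int32)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    return x0, y0, x1, y1

def enlarge_region(image, rect, scale=2):
    x0, y0, x1, y1 = rect
    crop = image[y0:y1 + 1, x0:x1 + 1]
    # Simple nearest-neighbour enlargement via pixel repetition.
    return np.repeat(np.repeat(crop, scale, axis=0), scale, axis=1)
```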
S306: receive a material confirmation for the image processing material, display the contour material corresponding to the image processing material at the target processing part in the target person image, and display the color material corresponding to the image processing material in a region outside the target person image.
Specifically, the image processing material comprises a contour material and a color material. After the terminal receives the user's material confirmation for the image processing material, the contour material corresponding to the image processing material is displayed at the target processing part in the target person image, and the color material corresponding to the image processing material is displayed in a region of the display interface outside the target person image; the region used to display the color material may be a preset region of the display interface. For example, when the user uses the front-facing camera of the terminal as a makeup aid and confirms the recommended eyebrow material, the eyebrow contour material corresponding to the eyebrow material is shown to the user, and the eyebrow color material corresponding to the eyebrow contour is shown in a preset region of the display interface outside the user's person image, so that the user can choose an eyebrow pencil color according to the eyebrow color material and draw the eyebrows according to the eyebrow contour material.
Optionally, after the material confirmation for the image processing material is received, the contour material and the color material corresponding to the image processing material may also be displayed in a semi-transparent manner at the target processing part in the target person image.
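A minimal sketch of S306, assuming the contour material is a binary mask painted at the target processing part and the color material is a plain swatch placed in a preset region outside the person image, with an optional semi-transparent mode; the data layout, colors and coordinates are assumptions.

```python
import numpy as np

# Hypothetical sketch of S306: after confirmation, draw the contour at the
# target processing part and place the color swatch outside the person image.
def show_confirmed_material(person_rgb, contour_mask, contour_pos,
                            swatch_rgb, swatch_pos, translucent=False):
    out = person_rgb.copy()
    alpha = 0.5 if translucent else 1.0

    # Contour material: paint masked pixels at the target processing part.
    y, x = contour_pos
    h, w = contour_mask.shape
    roi = out[y:y + h, x:x + w]
    color = np.array([60, 40, 30], dtype=np.float32)  # assumed contour color
    roi[contour_mask > 0] = (alpha * color
                             + (1 - alpha) * roi[contour_mask > 0]).astype(np.uint8)

    # Color material: a plain swatch shown in a preset region outside the image.
    sy, sx = swatch_pos
    out[sy:sy + 40, sx:sx + 40] = np.asarray(swatch_rgb, dtype=np.uint8)
    return out
```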
The embodiments of the present invention extract the portrait feature information corresponding to the target processing part in the target person image acquired in real time, determine the image processing material that matches the portrait feature information, and then display the image processing material at the target processing part in the target person image. In this way, image processing materials are recommended to the user during shooting according to the portrait feature information in the target person image, which improves the efficiency and intelligence of image processing.
Referring to Fig. 4, which is a schematic structural diagram of a terminal provided by an embodiment of the present invention. The terminal mentioned in the embodiments of the present invention may include a mobile phone, a tablet computer, a personal computer (PC), a vehicle-mounted terminal, a wearable smart device or the like. The terminal provided by this embodiment corresponds to the method shown in Fig. 1 and may serve as the execution body of the image processing method shown in Fig. 1. As shown in the figure, the terminal in this embodiment comprises at least an image acquisition unit 401, an information extraction unit 402, a material determining unit 403 and a material display unit 404, wherein:
the image acquisition unit 401 is configured to acquire a target person image of a user in real time.
Specifically, the image acquisition unit 401 may receive an image processing request triggered by the user and identify, in real time, the person images within the camera's capture range. If there is only one person image within the capture range, that person image is determined as the target person image; if there are at least two, the person image selected by the user is determined as the target person image.
The information extraction unit 402 is configured to extract portrait feature information corresponding to a target processing part in the target person image.
The target processing part includes, but is not limited to, the eyebrow area, blush area, eye shadow area, lip area, bangs area or eyelash area. The portrait feature information includes, but is not limited to, skin color information, face shape information, hairstyle information or clothing color information. Specifically, according to a preset correspondence between processing parts and portrait feature information types, the information extraction unit 402 determines the target portrait feature information type corresponding to the target processing part and extracts from the target person image the portrait feature information of the target feature information type. For example, if the target processing part is the eyebrow area, the preset correspondence between processing parts and portrait feature information types is queried, the portrait feature information type corresponding to the eyebrow area is found to be the angle between lines connecting fixed points of the face shape, and the value of that angle is extracted from the target person image as the portrait feature information corresponding to the target feature information type.
Optionally, the terminal may further comprise a request receiving unit 407, configured to receive a local processing request for the target processing part in the target person image before the information extraction unit 402 extracts the portrait feature information corresponding to the target processing part in the target person image. The information extraction unit 402 determines the portrait feature information type corresponding to the target processing part according to the local processing request and then extracts from the target person image the portrait feature information of the target feature information type.
The material determining unit 403 is configured to determine an image processing material that matches the portrait feature information.
The image processing material includes, but is not limited to, eyebrow material, blush material, eye shadow material, lipstick material, bangs material or eyelash material.
The material display unit 404 is configured to display the image processing material at the target processing part in the target person image.
Specifically, the image processing material obtained by the material determining unit 403 is displayed at the position of the target processing part in the target person image. For example, if the image processing material is an eyebrow material, the eyebrow material obtained by the material determining unit 403 is displayed at the position corresponding to the eyebrow area in the target person image.
In an optional embodiment, the terminal may further comprise:
a portrait zoom-in instruction obtaining unit 405, configured to obtain a zoom-in instruction from the user for the target person image; and
a size changing unit 406, configured to enlarge the target person image after the image acquisition unit 401 identifies the target person image within the camera's capture range in real time.
Specifically, when there are at least two person images within the camera's capture range, the portrait zoom-in instruction obtaining unit 405 obtains the user's zoom-in instruction for the target person image, and the size changing unit 406 enlarges the target person image selected by the user according to that zoom-in instruction, so that the enlarged target person image occupies a larger region of the display interface; the target person image of the user is acquired in real time throughout the image processing process and displayed on the display interface in real time.
Optionally, the terminal may further comprise:
a region zoom-in instruction receiving unit 408, configured to obtain, after the material display unit displays the image processing material at the target processing part in the target person image, a zoom-in instruction from the user for the local region to which the image processing material and the target processing part belong. The user may input this zoom-in instruction by voice, so that when both hands are occupied applying makeup and touch input is inconvenient, the zoom-in instruction for the local region can still be given by voice.
The size changing unit 406 is further configured to enlarge the local region to which the image processing material and the target processing part belong after the material display unit 404 displays the image processing material at the target processing part in the target person image.
Further optionally, the material display unit 404 is also configured to, when the material determining unit 403 determines that at least two image processing materials match the portrait feature information, display the at least two image processing materials in descending order of priority in a region outside the target person image, according to the preset priority levels of the image processing materials corresponding to the portrait feature classification, and to display, at the target processing part in the target person image, the image processing material selected by the user.
Specifically, if at least two image processing materials determined by the material determining unit 403 match the portrait feature information, the at least two image processing materials are displayed, in descending order of priority, in a region outside the target person image on the display interface showing the target person image, according to the priority levels established in advance for the image processing materials corresponding to the portrait feature classification. The priority level of an image processing material corresponding to the portrait feature classification indicates how strongly that material is recommended for the classification. For example, if the target processing part is the bangs area and the portrait feature classification is a round face, the image processing materials corresponding to a round face may include long side-swept bangs, wispy side-swept bangs, blunt bangs, wispy blunt bangs, heart-shaped bangs and ultra-short bangs; according to the preset priority, these image processing materials are displayed as a list in a region outside the target person image, in the order long side-swept bangs, wispy side-swept bangs, heart-shaped bangs, wispy blunt bangs, ultra-short bangs, blunt bangs.
Specifically, the user may select a target image processing material from the at least two image processing materials displayed by the material display unit 404. The terminal receives the user's selection information for the target image processing material among the at least two image processing materials, deletes the image processing material currently displayed at the target processing part in the person image, and displays the selected target image processing material at the target processing part in the person image.
As an optional embodiment, the request receiving unit 407 is further configured to receive, after the material display unit 404 displays the image processing material at the target processing part in the target person image, a size change request for the image processing material in the target person image. Specifically, the size change request may carry size ratio change information for the image processing material.
The size changing unit 406 is further configured to change the size of the image processing material according to the size change request, that is, to change the display size of the image processing material at the target processing part in the target person image according to the size ratio change information for the image processing material.
In another optional embodiment, the request receiving unit 407 is further configured to receive a material confirmation for the image processing material after the material display unit 404 displays the image processing material at the target processing part in the target person image.
Specifically, the user may also select, as needed, another recommended image processing material displayed on the display interface, and the material display unit 404 then displays the selected target image processing material.
The material display unit 404 is further configured to display the contour material corresponding to the image processing material at the target processing part in the target person image, and to display the color material corresponding to the image processing material in a region outside the target person image.
Specifically, the image processing material comprises a contour material and a color material. The material display unit 404 displays the contour material corresponding to the image processing material at the target processing part in the target person image and displays the color material corresponding to the image processing material in a region of the display interface outside the target person image; the region used to display the color material may be a preset region of the display interface. For example, when the user uses the front-facing camera of the terminal as a makeup aid and confirms the recommended eyebrow material, the eyebrow contour material corresponding to the eyebrow material is shown to the user, and the eyebrow color material corresponding to the eyebrow contour is shown in a preset region of the display interface outside the user's person image, so that the user can choose an eyebrow pencil color according to the eyebrow color material and draw the eyebrows according to the eyebrow contour material.
Optionally, after the request receiving unit 407 receives the material confirmation for the image processing material, the material display unit 404 may also display the contour material and the color material corresponding to the image processing material in a semi-transparent manner at the target processing part in the target person image.
The embodiments of the present invention extract the portrait feature information corresponding to the target processing part in the target person image acquired in real time, determine the image processing material that matches the portrait feature information, and then display the image processing material at the target processing part in the target person image. In this way, image processing materials are recommended to the user during shooting according to the portrait feature information in the target person image, which improves the efficiency and intelligence of image processing.
Referring to Fig. 5, which is a schematic structural diagram of an embodiment of the material determining unit in the embodiment shown in Fig. 4, the material determining unit 403 may comprise a feature classification determining unit 4301 and a material obtaining unit 4302.
The feature classification determining unit 4301 is configured to determine the portrait feature classification that the portrait feature information matches.
For example, if the target processing part is the eye shadow area and the extracted portrait feature information is the RGB value of the clothing color, where R=153, G=510, B=0, the matching portrait feature classification is red-family clothing color. As another example, if the target processing part is the eyebrow area and the extracted portrait feature information is an angle of 65 degrees between the lines from the center of the chin to the left and right outer eye corners, the matching portrait feature classification is a round face.
The material obtaining unit 4302 is configured to obtain the image processing material corresponding to the portrait feature classification.
For example, if the portrait feature classification obtained by the feature classification determining unit 4301 is red-family clothing color, the image processing material corresponding to red-family clothing color is an earth-tone eye shadow. As another example, if the portrait feature classification obtained by the feature classification determining unit 4301 is a round face, the image processing material corresponding to a round face is a straight eyebrow.
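A minimal sketch of the two Fig. 5 units working together: a classification rule followed by a classification-to-material lookup. The 60 to 70 degree band used for a round face and the lookup table are illustrative assumptions, not values given by the patent.

```python
# Hypothetical sketch of the feature classification determining unit and the
# material obtaining unit.
def classify_face_shape(chin_to_eye_corner_angle_deg):
    # Assumed rule: angles around 65 degrees are treated as a round face.
    if 60.0 <= chin_to_eye_corner_angle_deg <= 70.0:
        return "round_face"
    return "other_face_shape"

MATERIAL_BY_CLASSIFICATION = {
    "red_family_clothing": "earth_tone_eye_shadow",
    "round_face": "straight_eyebrow",
}

def obtain_material(classification):
    return MATERIAL_BY_CLASSIFICATION.get(classification)

print(obtain_material(classify_face_shape(65.0)))  # straight_eyebrow
```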
Optionally, the material obtaining unit 4302 is configured to obtain, when at least two image processing materials correspond to the portrait feature classification, the image processing material with the highest priority according to the preset priority levels of the image processing materials corresponding to the portrait feature classification.
The embodiments of the present invention extract the portrait feature information corresponding to the target processing part in the target person image acquired in real time, determine the image processing material that matches the portrait feature information, and then display the image processing material at the target processing part in the target person image. In this way, image processing materials are recommended to the user during shooting according to the portrait feature information in the target person image, which improves the efficiency and intelligence of image processing.
Referring to Fig. 6, which is a schematic structural diagram of another terminal provided by an embodiment of the present invention, the terminal described in this embodiment may comprise at least one input device 601, at least one output device 602, at least one processor 603 (for example a CPU), a memory 604 and at least one bus 605.
The bus 605 is used to connect the input device 601, the output device 602, the processor 603 and the memory 604.
The first input device 601 may specifically be the camera of the terminal, configured to acquire a target person image of a user in real time. The second input device 607 may specifically be the touch panel of the terminal, including a touch screen and a microphone, and is configured to detect operation instructions on the touch panel of the terminal or operation instructions input by voice.
The output device 602 may specifically be the display screen of the terminal, configured to display the target person image and the image processing material.
The memory 604 may be a high-speed RAM memory or a non-volatile memory, such as a disk memory. The memory 604 is configured to store a set of program codes, and the first input device 601, the output device 602, the processor 603 and the second input device 607 are configured to call the program codes stored in the memory 604 and perform the following operations:
The first input device 601 is configured to acquire a target person image of a user in real time.
The processor 603 is configured to extract portrait feature information corresponding to a target processing part in the target person image.
The processor 603 is further configured to determine an image processing material that matches the portrait feature information.
The output device 602 is configured to display the image processing material at the target processing part in the target person image.
In one embodiment, the second input device 607 is configured to receive an image processing request triggered by the user;
the first input device 601 is configured to identify, in real time, the target person image within the camera's capture range.
Further, in an optional embodiment, the second input device 607 is further configured to obtain a zoom-in instruction from the user for the target person image after the first input device 601 identifies the target person image within the capture range in real time;
the processor 603 is further configured to enlarge the target person image.
Further, the second input device 607 is also configured to receive a local processing request for the target processing part in the target person image before the processor 603 extracts the portrait feature information corresponding to the target processing part in the target person image.
Further, in an optional embodiment, the processor 603 is further configured to determine the portrait feature classification that the portrait feature information matches;
the processor 603 is further configured to obtain the image processing material corresponding to the portrait feature classification.
In another embodiment, the processor 603 is further configured to obtain, when at least two image processing materials correspond to the portrait feature classification, the image processing material with the highest priority according to the preset priority levels of the image processing materials corresponding to the portrait feature classification.
In another embodiment, the output device 602 is further configured to, when at least two image processing materials correspond to the portrait feature classification, display the at least two image processing materials in descending order of priority in a region outside the target person image according to the preset priority levels of the image processing materials corresponding to the portrait feature classification, and to display, at the target processing part in the target person image, the image processing material selected by the user.
In another embodiment, the second input device is further configured to obtain, after the output device 602 displays the image processing material at the target processing part in the target person image, a zoom-in instruction from the user for the local region to which the image processing material and the target processing part belong;
the processor 603 is configured to enlarge the local region to which the image processing material and the target processing part belong.
In another embodiment, the second input device 607 is further configured to receive, after the output device 602 displays the image processing material at the target processing part in the target person image, a size change request for the image processing material in the target person image;
the processor 603 is further configured to change the size of the image processing material according to the size change request.
In a further optional embodiment, the second input device 607 is further configured to receive a material confirmation for the image processing material;
the output device 602 is further configured to display the contour material corresponding to the image processing material at the target processing part in the target person image, and to display the color material corresponding to the image processing material in a region outside the target person image.
In a specific implementation, the first input device 601, the output device 602, the processor 603 and the second input device 607 described in the embodiments of the present invention may perform the implementations described in embodiments one to five of the method of the present invention, which are not repeated here.
The modules or sub-modules in all embodiments of the present invention may be implemented by a general-purpose integrated circuit, such as a CPU (Central Processing Unit), or by an ASIC (Application Specific Integrated Circuit).
The steps in the methods of the embodiments of the present invention may be reordered, combined or deleted according to actual needs.
The units in the devices of the embodiments of the present invention may be combined, divided or deleted according to actual needs.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM) or the like.
The above disclosure describes only preferred embodiments of the present invention, which certainly cannot be used to limit the scope of rights of the present invention; therefore, equivalent changes made according to the claims of the present invention still fall within the scope covered by the present invention.

Claims (10)

1. An image processing method, characterized by comprising:
acquiring a target person image of a user in real time;
extracting portrait feature information corresponding to a target processing part in the target person image;
determining an image processing material that matches the portrait feature information;
displaying the image processing material at the target processing part in the target person image.
2. The method according to claim 1, characterized in that the acquiring a target person image of a user in real time comprises:
receiving an image processing request triggered by the user;
identifying, in real time, the target person image within the camera's capture range.
3. method according to claim 2, is characterized in that, after the target person image in described Real time identification camera image pickup scope, also comprises:
Obtain the amplification instruction of user for described target person image;
Amplify described target person image.
4. method according to claim 1, is characterized in that, in described extraction target person image target processing section position correspondence portrait characteristic information before, also comprise:
Receive the Local treatment request for target processing section position in described target person image.
5. The method according to claim 1, characterized in that determining the image processing material matching the portrait feature information comprises:
determining a portrait feature classification matching the portrait feature information; and
obtaining the image processing material corresponding to the portrait feature classification.
6. The method according to claim 5, characterized in that obtaining the image processing material corresponding to the portrait feature classification comprises:
if at least two image processing materials correspond to the portrait feature classification, obtaining the image processing material with the highest priority level according to preset priority levels of the image processing materials corresponding to the portrait feature classification.
7. The method according to claim 5, characterized in that the method further comprises:
if at least two image processing materials correspond to the portrait feature classification, displaying the at least two image processing materials in the region outside the target person image in order of priority level from high to low, according to the preset priority levels of the image processing materials corresponding to the portrait feature classification; and
displaying the image processing material chosen by the user at the target processing portion in the target person image.
8. The method according to claim 1, characterized in that, after displaying the image processing material at the target processing portion in the target person image, the method further comprises:
obtaining a zoom-in instruction from the user for the local region to which the image processing material and the target processing portion belong; and
enlarging the local region to which the image processing material and the target processing portion belong.
9. The method according to claim 1, characterized in that, after displaying the image processing material at the target processing portion in the target person image, the method further comprises:
receiving a size change request for the image processing material in the target person image; and
changing the size of the image processing material according to the size change request.
10. The method according to claim 1, characterized in that the image processing material comprises an outline material and a color material;
after displaying the image processing material at the target processing portion in the target person image, the method further comprises:
receiving a material confirmation instruction for the image processing material; and
displaying the outline material corresponding to the image processing material at the target processing portion in the target person image, and displaying the color material corresponding to the image processing material in the region outside the target person image; or displaying the outline material and the color material corresponding to the image processing material in a translucent manner at the target processing portion in the target person image.
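For readability only, the following is a minimal, non-normative sketch of the flow recited in claims 1 and 5 to 7 above: portrait feature information extracted for the target processing portion is mapped to a feature classification, the matching material is chosen by preset priority when several candidates exist, and the chosen material is rendered at the target portion. The material library, classifier and rendering helpers are hypothetical placeholders, not part of the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Material:
    name: str
    classification: str   # portrait feature classification, e.g. "round_eye"
    priority: int         # higher value = higher preset priority level

# Hypothetical material library keyed by portrait feature classification.
MATERIAL_LIBRARY = [
    Material("eyeliner_A", "round_eye", priority=2),
    Material("eyeliner_B", "round_eye", priority=1),
    Material("lipstick_A", "thin_lip", priority=3),
]

def extract_feature_info(frame, target_portion: str) -> dict:
    """Placeholder for a real extractor (e.g. facial landmark analysis)."""
    return {"eye_aspect_ratio": 0.35}

def classify_features(feature_info: dict) -> str:
    """Map raw portrait feature information to a feature classification (claim 5)."""
    return "round_eye" if feature_info.get("eye_aspect_ratio", 0.0) > 0.3 else "narrow_eye"

def select_material(classification: str) -> Optional[Material]:
    """Claim 6: when several materials match, take the highest preset priority."""
    candidates = [m for m in MATERIAL_LIBRARY if m.classification == classification]
    return max(candidates, key=lambda m: m.priority) if candidates else None

def render_material(frame, target_portion: str, material: Material) -> None:
    """Placeholder for overlaying the material on the live preview."""
    print(f"overlay {material.name} at {target_portion}")

def process_frame(frame, target_portion: str) -> Optional[Material]:
    """Claim 1: extract feature info for the target portion, match a material,
    and display it at that portion."""
    feature_info = extract_feature_info(frame, target_portion)
    material = select_material(classify_features(feature_info))
    if material is not None:
        render_material(frame, target_portion, material)
    return material

# Example: process_frame(camera_frame, "eye") would overlay "eyeliner_A".
```

In such a sketch, the per-classification priority values stand in for the preset priority levels of claim 6, and the candidate list that claim 7 displays outside the portrait would simply be the same candidates sorted in descending priority.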
CN201410855305.0A 2014-12-31 2014-12-31 Image processing method Pending CN104573721A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410855305.0A CN104573721A (en) 2014-12-31 2014-12-31 Image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410855305.0A CN104573721A (en) 2014-12-31 2014-12-31 Image processing method

Publications (1)

Publication Number Publication Date
CN104573721A true CN104573721A (en) 2015-04-29

Family

ID=53089741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410855305.0A Pending CN104573721A (en) 2014-12-31 2014-12-31 Image processing method

Country Status (1)

Country Link
CN (1) CN104573721A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509316A (en) * 2011-09-23 2012-06-20 上海华勤通讯技术有限公司 Mobile terminal and image animating method
CN103024167A (en) * 2012-12-07 2013-04-03 广东欧珀移动通信有限公司 Photographing method and system for mobile terminal
CN104063890A (en) * 2013-03-22 2014-09-24 中国移动通信集团福建有限公司 Method for cartooning human face and system thereof
CN103605975A (en) * 2013-11-28 2014-02-26 小米科技有限责任公司 Image processing method and device and terminal device

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105320929A (en) * 2015-05-21 2016-02-10 维沃移动通信有限公司 Synchronous beautification method for photographing and photographing apparatus thereof
CN105139357A (en) * 2015-08-26 2015-12-09 深圳市金立通信设备有限公司 Image editing method and terminal
CN106886752A (en) * 2017-01-06 2017-06-23 深圳市金立通信设备有限公司 The method and terminal of a kind of image procossing
CN108288248A (en) * 2018-01-02 2018-07-17 腾讯数码(天津)有限公司 A kind of eyes image fusion method and its equipment, storage medium, terminal
CN109300093A (en) * 2018-09-29 2019-02-01 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN109410308A (en) * 2018-09-29 2019-03-01 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
WO2021031147A1 (en) * 2019-08-21 2021-02-25 L'oreal Computing device, method and apparatus for recommending at least one of makeup palette or hair coloration scheme
JP2022538094A (en) * 2019-08-21 2022-08-31 ロレアル Computing device, method, and apparatus for recommending at least one of makeup palettes and hair dye color schemes

Similar Documents

Publication Publication Date Title
CN104573721A (en) Image processing method
CN109816764B (en) Image generation method and device, electronic equipment and storage medium
CN105488511B (en) The recognition methods of image and device
CN104967784B (en) Mobile terminal calls the method and mobile terminal of the substrate features pattern of camera function
US20210097651A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN111091610B (en) Image processing method and device, electronic equipment and storage medium
CN105354792A (en) Method for trying virtual glasses and mobile terminal
EP3822757A1 (en) Method and apparatus for setting background of ui control
EP4191513A1 (en) Image processing method and apparatus, device and storage medium
CN108776800B (en) Image processing method, mobile terminal and computer readable storage medium
CN105872252A (en) Image processing method and device
CN103260036B (en) Image processing apparatus, image processing method, storage medium and image processing system
CN105574834B (en) Image processing method and device
CN104574310A (en) Terminal
CN111105474A (en) Font drawing method and device, computer equipment and computer readable storage medium
CN108171716B (en) Video character decorating method and device based on self-adaptive tracking frame segmentation
CN105426904A (en) Photo processing method, apparatus and device
CN104536566A (en) Page content processing method
CN105224680A (en) A kind of method of search for application and terminal
CN104917963A (en) Image processing method and terminal
CN111652792B (en) Local processing method, live broadcasting method, device, equipment and storage medium for image
CN108010009A (en) A kind of method and device for removing interference figure picture
CN112330728A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN111640190A (en) AR effect presentation method and apparatus, electronic device and storage medium
CN108010038B (en) Live-broadcast dress decorating method and device based on self-adaptive threshold segmentation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150429