CN104574310A - Terminal - Google Patents

Terminal

Info

Publication number
CN104574310A
Authority
CN
China
Prior art keywords
image
target person
image processing
processing section
portrait
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410856536.3A
Other languages
Chinese (zh)
Inventor
瞿颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jinli Communication Equipment Co Ltd
Original Assignee
Shenzhen Jinli Communication Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jinli Communication Equipment Co Ltd filed Critical Shenzhen Jinli Communication Equipment Co Ltd
Priority to CN201410856536.3A priority Critical patent/CN104574310A/en
Publication of CN104574310A publication Critical patent/CN104574310A/en
Pending legal-status Critical Current

Abstract

An embodiment of the invention discloses a terminal. The terminal comprises an image acquisition unit for acquiring a target image of a user in real time, an information extraction unit for extracting image feature information corresponding to a target processing site in the target image, a material determining unit for determining an image processing material that matches the image feature information, and a material display unit for displaying the image processing material on the target processing site in the target image. With this terminal, the efficiency and intelligence of image processing can be improved.

Description

Terminal
Technical field
The present invention relates to the field of image processing, and in particular to a terminal.
Background art
With the development of terminal technology, a camera has become standard equipment on most terminal devices, providing users with video and photo shooting functions and bringing great enjoyment and convenience to people's life and work. Camera resolution keeps increasing, with the aim of letting users capture photos and videos of higher quality and better appearance.
To give a photo or video a particular look, many image processing techniques can now post-process captured photos or videos. However, post-processing often takes a long time, and when the result is unsatisfactory it is usually difficult to return to the original time and place to reshoot, which inconveniences users, leaves them with regrets, and results in a poor user experience.
Summary of the invention
An embodiment of the present invention provides a terminal that can recommend image processing materials to the user during shooting according to the portrait feature information in the target person image, thereby improving the efficiency and intelligence of image processing.
An embodiment of the present invention provides a terminal, the terminal comprising:
an image acquisition unit, configured to acquire the user's target person image in real time;
an information extraction unit, configured to extract portrait feature information corresponding to a target processing site in the target person image;
a material determining unit, configured to determine an image processing material that matches the portrait feature information; and
a material display unit, configured to display the image processing material on the target processing site in the target person image.
In the embodiment of the present invention, the information extraction unit extracts the portrait feature information corresponding to the target processing site in the target person image acquired in real time by the image acquisition unit, the material determining unit determines an image processing material that matches the portrait feature information, and the material display unit displays the image processing material on the target processing site in the target person image. Image processing materials are thus recommended to the user during shooting according to the portrait feature information in the target person image, improving the efficiency and intelligence of image processing.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed for the embodiments or the prior-art description are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of an image processing method according to another embodiment of the present invention;
Fig. 3 is a schematic flowchart of an image processing method according to a further embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an embodiment of the material determining unit in the embodiment shown in Fig. 4;
Fig. 6 is a schematic structural diagram of another terminal according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the embodiments of the present invention.
Application scenarios of the embodiments of the present invention include, but are not limited to, image processing performed to achieve a particular photographic effect when a user takes pictures with a terminal device, or, when the user applies makeup with the aid of the terminal's front camera, recommending makeup for the person image acquired by the camera in real time and displaying the recommended makeup on the corresponding site of the person image.
Referring to Fig. 1, which is a schematic flowchart of an image processing method according to an embodiment of the present invention, the image processing method described in this embodiment comprises the following steps.
S101: acquire the user's target person image in real time.
Specifically, the terminal may receive an image processing request triggered by the user and identify, in real time, the person images within the camera's shooting range. If there is only one person image within the shooting range, that person image is taken as the target person image; if there are at least two, the person image chosen by the user is taken as the target person image. Optionally, after identifying the target person image within the camera's shooting range in real time, the terminal may obtain the user's zoom-in instruction for the target person image and enlarge it. Specifically, when the shooting range contains at least two person images, the terminal receives the user's selection information for the chosen person image and takes that person image as the target person image. The terminal enlarges the target person image so that it occupies a larger area of the display interface, keeps acquiring the target person image in real time throughout the image processing process, and displays the acquired image on the display interface in real time.
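A minimal sketch of how such target selection might be implemented is shown below. It is illustrative only and not part of the original disclosure; the PersonRegion type, the select_target_person function, and the way the user's choice is passed in are all assumptions.

```python
# Illustrative sketch only: choosing the target person image when the camera
# frame contains one or more detected person regions. PersonRegion and
# select_target_person are hypothetical names, not from the patent.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class PersonRegion:
    person_id: int
    bbox: Tuple[int, int, int, int]   # (x, y, width, height) in frame coordinates

def select_target_person(detected: List[PersonRegion],
                         user_choice: Optional[int] = None) -> Optional[PersonRegion]:
    """If exactly one person is detected, use it; otherwise wait for the user's
    selection (e.g., a tap that supplies a person_id)."""
    if not detected:
        return None
    if len(detected) == 1:
        return detected[0]
    if user_choice is not None:
        return next((p for p in detected if p.person_id == user_choice), None)
    return None   # more than one person and no selection yet

# Example: two people in the shooting range, the user taps person 2.
frame_people = [PersonRegion(1, (40, 60, 200, 400)), PersonRegion(2, (300, 50, 210, 420))]
print(select_target_person(frame_people, user_choice=2))
```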
S102: extract portrait feature information corresponding to the target processing site in the target person image.
The portrait feature information includes, but is not limited to, skin color information, face shape information, hairstyle information, or clothing color information. Specifically, the terminal may determine, according to a preset correspondence between processing sites and kinds of portrait feature information, the kind of portrait feature information corresponding to the target processing site, and then extract the portrait feature information of that kind from the target person image. For example, if the target processing site is the eyebrow area, the terminal queries the preset correspondence between processing sites and kinds of portrait feature information and finds that the kind corresponding to the eyebrow area is the angle between lines connecting fixed points of the face shape; the value of that angle in the target person image is then extracted as the portrait feature information of the target kind.
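The correspondence table and the dispatch it drives could look like the sketch below. This is an assumed data layout for illustration only, not the patent's implementation, and the names (SITE_TO_FEATURE_KIND, extract_feature) are hypothetical.

```python
# Illustrative sketch: a preset mapping from processing site to the kind of
# portrait feature that should be extracted for it, plus a lookup that returns
# the feature value for the target site.
SITE_TO_FEATURE_KIND = {          # hypothetical preset correspondence table
    "eyebrow": "face_shape_angle",
    "eye_shadow": "clothing_color",
    "blush": "skin_color",
    "bangs": "face_shape_angle",
}

def extract_feature(site: str, portrait: dict) -> dict:
    """Return the portrait feature information for the given target site.
    `portrait` is a hypothetical dict of precomputed measurements."""
    kind = SITE_TO_FEATURE_KIND[site]
    return {"kind": kind, "value": portrait[kind]}

# Example: for the eyebrow site, the relevant feature is the angle between
# fixed facial points (here a precomputed value in degrees).
portrait_measurements = {
    "face_shape_angle": 65.0,
    "clothing_color": (153, 51, 0),
    "skin_color": (224, 189, 170),
}
print(extract_feature("eyebrow", portrait_measurements))
```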
The target processing site includes, but is not limited to, the eyebrow area, blush area, eye shadow area, lip area, bangs area, or eyelash area. Optionally, before step S102, the method further comprises receiving a local processing request for the target processing site in the target person image, determining the kind of portrait feature information corresponding to the target processing site according to the local processing request, and then extracting the portrait feature information of that kind from the target person image. Optionally, the processing sites may also be processed one by one in a preset processing order.
S103: determine an image processing material that matches the portrait feature information.
The image processing material includes, but is not limited to, eyebrow material, blush material, eye shadow material, lipstick material, bangs material, or eyelash material. In a specific implementation, the terminal may determine the portrait feature category that matches the portrait feature information and obtain the image processing material corresponding to that category. For example, if the target processing site in step S102 is the eye shadow area and the extracted portrait feature information is the RGB value of the clothing color, where R=153, G=510, B=0, then the portrait feature category matching the portrait feature information is red-family clothing color, and the image processing material corresponding to red-family clothing color is earth-tone eye shadow.
Optionally, if only one image processing material corresponds to the portrait feature category, the terminal obtains that material; if at least two correspond, the terminal obtains the material with the highest priority according to the preset priority levels of the image processing materials corresponding to the portrait feature category.
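One way to sketch this matching and priority selection is shown below. The category rules, material names, and priority values are invented for illustration; the patent only describes them abstractly.

```python
# Illustrative sketch: classify the extracted feature into a portrait feature
# category, then pick the single highest-priority material for that category.
def classify_clothing_color(rgb):
    """Map an RGB clothing color to a coarse portrait feature category."""
    r, g, b = rgb
    if r >= max(g, b):
        return "red_family_clothing"
    if g >= max(r, b):
        return "green_family_clothing"
    return "blue_family_clothing"

# Candidate materials per category, each with a preset priority (higher = recommended first).
MATERIALS = {
    "red_family_clothing": [("earth_tone_eye_shadow", 3), ("pink_eye_shadow", 2)],
    "green_family_clothing": [("brown_eye_shadow", 2)],
}

def pick_material(category):
    """Return the single highest-priority material for the category, if any."""
    candidates = MATERIALS.get(category, [])
    return max(candidates, key=lambda m: m[1])[0] if candidates else None

category = classify_clothing_color((153, 51, 0))   # illustrative RGB value
print(category, "->", pick_material(category))     # red_family_clothing -> earth_tone_eye_shadow
```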
S104: display the image processing material on the target processing site in the target person image.
Specifically, the terminal may display the image processing material obtained in step S103 at the position of the target processing site in the target person image. For example, if the image processing material is an eyebrow material, the eyebrow material obtained in step S103 is displayed at the position corresponding to the eyebrow area in the target person image.
Optionally, after step S104, the method may further comprise:
receiving a size-change request for the image processing material in the target person image, and changing the size of the image processing material according to the size-change request.
Specifically, the size-change request may carry size-ratio change information for the image processing material, and the terminal may change the displayed size of the image processing material at the target processing site in the target person image according to that information.
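A minimal sketch of applying such a size-ratio change is given below; the request format (a mapping carrying a "scale" ratio) is an assumption made for illustration.

```python
# Illustrative sketch: apply a size-change request that carries a scale ratio
# to the material displayed at the target processing site.
def resize_material(material_size, request):
    """material_size: (width, height) in pixels; request carries a 'scale' ratio."""
    scale = request["scale"]
    w, h = material_size
    return (round(w * scale), round(h * scale))

print(resize_material((120, 40), {"scale": 1.25}))  # -> (150, 50)
```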
In the embodiment of the present invention, the portrait feature information corresponding to the target processing site is extracted from the target person image acquired in real time, an image processing material matching the portrait feature information is then obtained, and the image processing material is displayed on the target processing site in the target person image. Image processing materials are thus recommended to the user during shooting according to the portrait feature information in the target person image, improving the efficiency and intelligence of image processing.
Referring to Fig. 2, which is a schematic flowchart of an image processing method according to another embodiment of the present invention, the image processing method described in this embodiment comprises the following steps.
S201: acquire the user's target person image in real time.
Specifically, the terminal receives an image processing request triggered by the user and identifies, in real time, the person images within the camera's shooting range. If there is only one person image within the shooting range, that person image is taken as the target person image; if there are at least two, the person image chosen by the user is taken as the target person image. Optionally, after taking the person image chosen by the user as the target person image, the terminal may enlarge the target person image. Specifically, when the shooting range contains at least two person images, the terminal receives the user's selection information for the chosen person image and takes that person image as the target person image. The terminal enlarges the target person image so that it occupies a larger area of the display interface, keeps acquiring the target person image in real time throughout the image processing process, and displays the acquired image on the display interface in real time.
S202: extract portrait feature information corresponding to the target processing site in the target person image.
The portrait feature information includes, but is not limited to, skin color information, face shape information, hairstyle information, or clothing color information. Specifically, according to a preset correspondence between processing sites and kinds of portrait feature information, the kind of portrait feature information corresponding to the target processing site is determined, and the portrait feature information of that kind is extracted from the target person image. For example, if the target processing site is the eyebrow area, the preset correspondence between processing sites and kinds of portrait feature information is queried, the kind corresponding to the eyebrow area is found to be the angle between lines connecting fixed points of the face shape, and the value of that angle in the target person image is extracted as the portrait feature information of the target kind.
The target processing site includes, but is not limited to, the eyebrow area, blush area, eye shadow area, lip area, or eyelash area. Optionally, before step S202, the method further comprises receiving a local processing request for the target processing site in the target person image, determining the kind of portrait feature information corresponding to the target processing site according to the local processing request, and then extracting the portrait feature information of that kind from the target person image. Optionally, the processing sites may also be processed one by one in a preset processing order.
S203: determine an image processing material that matches the portrait feature information.
The image processing material includes, but is not limited to, eyebrow material, blush material, eye shadow material, lipstick material, or eyelash material. In a specific implementation, the terminal may determine the portrait feature category that matches the portrait feature information and obtain the image processing material corresponding to that category. For example, if the target processing site in step S202 is the eye shadow area and the extracted portrait feature information is the RGB value of the clothing color, where R=153, G=510, B=0, then the portrait feature category matching the portrait feature information is red-family clothing color, and the image processing material corresponding to red-family clothing color is earth-tone eye shadow.
S204: if at least two image processing materials correspond to the portrait feature category, display the at least two image processing materials in descending order of priority, according to the preset priority levels of the image processing materials corresponding to the portrait feature category, in the region outside the target person image.
Specifically, if at least two image processing materials correspond to the portrait feature category obtained in step S203, the terminal displays them in the region outside the target person image on the display interface showing the target person image, in descending order of the priority levels established in advance for the image processing materials corresponding to the portrait feature category. The priority level of an image processing material corresponding to the portrait feature category represents the degree to which that material is recommended for the category. For example, if the target processing site is the bangs area, the portrait feature category is a round face, and the image processing materials corresponding to a round face include long side-swept bangs, wispy side-swept bangs, blunt bangs, wispy blunt bangs, heart-shaped bangs, and ultra-short bangs, the terminal lists these materials in the region outside the target person image on the display interface, according to the preset priorities, in the order long side-swept bangs, wispy side-swept bangs, heart-shaped bangs, wispy blunt bangs, ultra-short bangs, blunt bangs.
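Sorting the candidates by such a preset priority could look like the sketch below. The style names and priority values are invented placeholders that mirror the example above, not values from the patent.

```python
# Illustrative sketch: order all candidate materials for a category by preset
# priority before listing them in the region beside the target person image.
BANGS_PRIORITY = {                 # hypothetical preset priority levels
    "long_side_swept": 6, "wispy_side_swept": 5, "heart_shaped": 4,
    "wispy_blunt": 3, "ultra_short": 2, "blunt": 1,
}

def order_for_display(candidates):
    """Return candidates sorted from highest to lowest recommendation priority."""
    return sorted(candidates, key=lambda name: BANGS_PRIORITY.get(name, 0), reverse=True)

round_face_bangs = ["long_side_swept", "wispy_side_swept", "blunt",
                    "wispy_blunt", "heart_shaped", "ultra_short"]
print(order_for_display(round_face_bangs))
```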
S205: display the image processing material chosen by the user on the target processing site in the target person image.
Specifically, the user may select a target image processing material from the at least two image processing materials displayed in step S204. The terminal receives the user's material selection information for the target image processing material, deletes the image processing material currently displayed on the target processing site in the person image, and displays the target processing material chosen by the user on the target processing site in the person image.
In the embodiment of the present invention, the portrait feature information corresponding to the target processing site is extracted from the target person image acquired in real time, an image processing material matching the portrait feature information is then obtained, and the image processing material is displayed on the target processing site in the target person image. Image processing materials are thus recommended to the user during shooting according to the portrait feature information in the target person image, improving the efficiency and intelligence of image processing.
Referring to Fig. 3, which is a schematic flowchart of an image processing method according to a further embodiment of the present invention, the image processing method described in this embodiment comprises the following steps.
S301: acquire the user's target person image in real time.
Specifically, the terminal may receive an image processing request triggered by the user and identify, in real time, the person images within the camera's shooting range. If there is only one person image within the shooting range, that person image is taken as the target person image; if there are at least two, the person image chosen by the user is taken as the target person image. Optionally, after the person image chosen by the user is taken as the target person image, the target person image is enlarged. Specifically, when the shooting range contains at least two person images, the terminal receives the user's selection information for the chosen person image and takes that person image as the target person image. Enlarging the target person image makes it occupy a larger area of the display interface; the terminal keeps acquiring the target person image in real time throughout the image processing process and displays the acquired image on the display interface in real time.
S302: extract portrait feature information corresponding to the target processing site in the target person image.
The portrait feature information includes, but is not limited to, skin color information, face shape information, hairstyle information, or clothing color information. Specifically, the terminal may determine, according to a preset correspondence between processing sites and kinds of portrait feature information, the kind of portrait feature information corresponding to the target processing site, and then extract the portrait feature information of that kind from the target person image. For example, if the target processing site is the eyebrow area, the preset correspondence between processing sites and kinds of portrait feature information is queried, the kind corresponding to the eyebrow area is found to be the angle between lines connecting fixed points of the face shape, and the value of that angle in the target person image is extracted as the portrait feature information of the target kind.
The target processing site includes, but is not limited to, the eyebrow area, blush area, eye shadow area, lip area, bangs area, or eyelash area. Optionally, before step S302, the method further comprises receiving a local processing request for the target processing site in the target person image, determining the kind of portrait feature information corresponding to the target processing site according to the local processing request, and then extracting the portrait feature information of that kind from the target person image. Optionally, the processing sites may also be processed one by one in a preset processing order.
S303: determine an image processing material that matches the portrait feature information.
The image processing material includes, but is not limited to, eyebrow material, blush material, eye shadow material, lipstick material, bangs material, or eyelash material. In a specific implementation, the terminal determines the portrait feature category that matches the portrait feature information and obtains the image processing material corresponding to that category. For example, if the target processing site in step S302 is the eye shadow area and the portrait feature information extracted by the terminal is the RGB value of the clothing color, where R=153, G=510, B=0, then the portrait feature category matching the portrait feature information is red-family clothing color, and the image processing material corresponding to red-family clothing color is earth-tone eye shadow.
Optionally, if only one image processing material corresponds to the portrait feature category, that material is obtained; if at least two correspond, the material with the highest priority is obtained according to the preset priority levels of the image processing materials corresponding to the portrait feature category.
S304: display the image processing material on the target processing site in the target person image.
Specifically, the terminal displays the image processing material obtained in step S303 at the position of the target processing site in the target person image. For example, if the image processing material is an eyebrow material, the eyebrow material obtained in step S303 is displayed at the position corresponding to the eyebrow area in the target person image.
S305: enlarge the local region to which the image processing material and the target processing site belong.
Specifically, the terminal obtains the user's zoom-in instruction for the local region to which the image processing material and the target processing site belong, and enlarges the display of the local region to which the image processing material obtained in step S304 and the target processing site belong, so that the region they occupy on the display interface becomes larger. The user may issue this zoom-in instruction by voice, so that when both hands are busy applying makeup and touch input is inconvenient, the zoom-in instruction for the local region to which the image processing material and the target processing site belong can still be given by voice. The local region to which the target processing site belongs may be a circular region centered on the target processing site with a radius equal to half the length of the site's longest part, or the smallest rectangle that contains the entire target processing site.
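The two candidate region shapes described above could be computed as in the sketch below; representing the site as a list of landmark points, and taking the circle center as their centroid, are assumptions made for this sketch.

```python
# Illustrative sketch of the two region shapes mentioned above: a circle whose
# radius is half the site's longest extent, or the site's minimal bounding box.
from typing import List, Tuple

Point = Tuple[float, float]

def circular_region(site_points: List[Point]):
    """Circle centered on the site, radius = half of the site's longest extent."""
    xs = [p[0] for p in site_points]
    ys = [p[1] for p in site_points]
    center = (sum(xs) / len(xs), sum(ys) / len(ys))
    longest = max(max(xs) - min(xs), max(ys) - min(ys))
    return center, longest / 2.0

def bounding_rectangle(site_points: List[Point]):
    """Smallest axis-aligned rectangle containing every point of the site."""
    xs = [p[0] for p in site_points]
    ys = [p[1] for p in site_points]
    return (min(xs), min(ys), max(xs), max(ys))

eyebrow_points = [(100.0, 80.0), (140.0, 74.0), (180.0, 82.0)]
print(circular_region(eyebrow_points))
print(bounding_rectangle(eyebrow_points))
```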
S306: receive a material confirmation for the image processing material, display the outline material corresponding to the image processing material on the target processing site in the target person image, and display the color material corresponding to the image processing material in the region outside the target person image.
Specifically, the image processing material comprises an outline material and a color material. After receiving the user's material confirmation for the image processing material, the terminal displays the outline material corresponding to the image processing material on the target processing site in the target person image and displays the color material corresponding to the image processing material in the region outside the target person image on the display interface; the region used to display the color material may be a preset region of the display interface. For example, when the user applies makeup with the aid of the terminal's front camera and confirms the recommended eyebrow material, the eyebrow outline material corresponding to the eyebrow material is shown to the user, and the eyebrow color material corresponding to the eyebrow outline is shown in a preset region of the display interface outside the user's person image, so that the user can choose an eyebrow pencil color according to the eyebrow color material and draw the eyebrows according to the eyebrow outline material.
Optionally, after the material confirmation for the image processing material is received, the outline material and the color material corresponding to the image processing material may also be displayed in a translucent manner on the target processing site in the target person image.
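The split between outline material and color material, and the optional translucent mode, might look like the sketch below. The draw_at_site and draw_outside_portrait calls are stand-ins invented for this sketch, not a real rendering API, and the material layout is an assumption.

```python
# Illustrative sketch: after the material is confirmed, draw its outline part on
# the target site and list its color part outside the target person image; the
# optional translucent mode overlays both at the site with reduced alpha.
def show_confirmed_material(material, translucent=False):
    """material: dict with 'outline' and 'color' parts (hypothetical layout)."""
    if translucent:
        draw_at_site(material["outline"], alpha=0.5)
        draw_at_site(material["color"], alpha=0.5)
    else:
        draw_at_site(material["outline"], alpha=1.0)
        draw_outside_portrait(material["color"])

def draw_at_site(part, alpha):
    print(f"draw {part} on target processing site, alpha={alpha}")

def draw_outside_portrait(part):
    print(f"list {part} in the preset area beside the portrait")

show_confirmed_material({"outline": "eyebrow_outline", "color": "eyebrow_color_#5a3825"})
```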
In the embodiment of the present invention, the portrait feature information corresponding to the target processing site is extracted from the target person image acquired in real time, an image processing material matching the portrait feature information is then obtained, and the image processing material is displayed on the target processing site in the target person image. Image processing materials are thus recommended to the user during shooting according to the portrait feature information in the target person image, improving the efficiency and intelligence of image processing.
Referring to Fig. 4, which is a schematic structural diagram of a terminal according to an embodiment of the present invention. The terminal mentioned in the embodiments of the present invention may include a mobile phone, a tablet computer, a personal computer (PC), an in-vehicle terminal, a wearable smart device, or the like. The terminal provided in this embodiment corresponds to the method shown in Fig. 1 and may serve as the execution body of the image processing method shown in Fig. 1. As shown in the figure, the terminal in this embodiment comprises at least an image acquisition unit 401, an information extraction unit 402, a material determining unit 403, and a material display unit 404, wherein:
The image acquisition unit 401 is configured to acquire the user's target person image in real time.
Specifically, the image acquisition unit 401 may receive an image processing request triggered by the user and identify, in real time, the person images within the camera's shooting range. If there is only one person image within the shooting range, that person image is taken as the target person image; if there are at least two, the person image chosen by the user is taken as the target person image.
The information extraction unit 402 is configured to extract portrait feature information corresponding to the target processing site in the target person image.
The target processing site includes, but is not limited to, the eyebrow area, blush area, eye shadow area, lip area, bangs area, or eyelash area. The portrait feature information includes, but is not limited to, skin color information, face shape information, hairstyle information, or clothing color information. Specifically, the information extraction unit 402 determines, according to a preset correspondence between processing sites and kinds of portrait feature information, the kind of portrait feature information corresponding to the target processing site, and extracts the portrait feature information of that kind from the target person image. For example, if the target processing site is the eyebrow area, the preset correspondence between processing sites and kinds of portrait feature information is queried, the kind corresponding to the eyebrow area is found to be the angle between lines connecting fixed points of the face shape, and the value of that angle in the target person image is extracted as the portrait feature information of the target kind.
Optionally, the terminal may further comprise a request receiving unit 407, configured to receive a local processing request for the target processing site in the target person image before the information extraction unit 402 extracts the portrait feature information corresponding to the target processing site. The information extraction unit 402 determines the kind of portrait feature information corresponding to the target processing site according to the local processing request, and then extracts the portrait feature information of that kind from the target person image.
The material determining unit 403 is configured to determine an image processing material that matches the portrait feature information.
The image processing material includes, but is not limited to, eyebrow material, blush material, eye shadow material, lipstick material, bangs material, or eyelash material.
The material display unit 404 is configured to display the image processing material on the target processing site in the target person image.
Specifically, the image processing material obtained by the material determining unit 403 is displayed at the position of the target processing site in the target person image. For example, if the image processing material is an eyebrow material, the eyebrow material obtained by the material determining unit 403 is displayed at the position corresponding to the eyebrow area in the target person image.
In an optional embodiment, the terminal may further comprise:
a portrait zoom instruction acquiring unit 405, configured to obtain the user's zoom-in instruction for the target person image; and
a size changing unit 406, configured to enlarge the target person image after the image acquisition unit 401 identifies the target person image within the camera's shooting range in real time.
Specifically, when the shooting range contains at least two person images, the portrait zoom instruction acquiring unit 405 obtains the user's zoom-in instruction for the target person image, and the size changing unit 406 enlarges the target person image chosen by the user according to that instruction, so that the enlarged target person image occupies a larger area of the display interface; the terminal keeps acquiring the target person image in real time throughout the image processing process and displays the acquired image on the display interface in real time.
Optionally, the terminal may further comprise:
a region zoom instruction receiving unit 408, configured to obtain, after the material display unit displays the image processing material on the target processing site in the target person image, the user's zoom-in instruction for the local region to which the image processing material and the target processing site belong. The user may issue this zoom-in instruction by voice, so that when both hands are busy applying makeup and touch input is inconvenient, the zoom-in instruction for the local region to which the image processing material and the target processing site belong can still be given by voice.
The size changing unit 406 is further configured to enlarge the local region to which the image processing material and the target processing site belong after the material display unit 404 displays the image processing material on the target processing site in the target person image.
Further optionally, the material display unit 404 is further configured to, when the material determining unit 403 determines that at least two image processing materials match the portrait feature information, display the at least two image processing materials in descending order of priority, according to the preset priority levels of the image processing materials corresponding to the portrait feature category, in the region outside the target person image, and to display the image processing material chosen by the user on the target processing site in the target person image.
Specifically, if at least two image processing materials determined by the material determining unit 403 match the portrait feature information, they are displayed in the region outside the target person image on the display interface showing the target person image, in descending order of the priority levels established in advance for the image processing materials corresponding to the portrait feature category. The priority level of an image processing material corresponding to the portrait feature category represents the degree to which that material is recommended for the category. For example, if the target processing site is the bangs area, the portrait feature category is a round face, and the image processing materials corresponding to a round face include long side-swept bangs, wispy side-swept bangs, blunt bangs, wispy blunt bangs, heart-shaped bangs, and ultra-short bangs, these materials are listed in the region outside the target person image on the display interface, according to the preset priorities, in the order long side-swept bangs, wispy side-swept bangs, heart-shaped bangs, wispy blunt bangs, ultra-short bangs, blunt bangs.
Specifically, the user may select a target image processing material from the at least two image processing materials displayed by the material display unit 404. The terminal receives the user's material selection information for the target image processing material, deletes the image processing material currently displayed on the target processing site in the person image, and displays the target processing material chosen by the user on the target processing site in the person image.
In an optional embodiment, the request receiving unit 407 is further configured to receive, after the material display unit 404 displays the image processing material on the target processing site in the target person image, a size-change request for the image processing material in the target person image. Specifically, the size-change request may carry size-ratio change information for the image processing material.
The size changing unit 406 is further configured to change the size of the image processing material according to the size-change request, that is, to change the displayed size of the image processing material at the target processing site in the target person image according to the size-ratio change information for the image processing material.
In another optional embodiment, the request receiving unit 407 is further configured to receive a material confirmation for the image processing material after the material display unit 404 displays the image processing material on the target processing site in the target person image.
Specifically, the user may also select, as needed, another of the recommended image processing materials shown on the display interface, and the material display unit 404 then displays the chosen target image processing material.
The material display unit 404 is further configured to display the outline material corresponding to the image processing material on the target processing site in the target person image, and to display the color material corresponding to the image processing material in the region outside the target person image.
Specifically, the image processing material comprises an outline material and a color material. The material display unit 404 displays the outline material corresponding to the image processing material on the target processing site in the target person image and displays the color material corresponding to the image processing material in the region outside the target person image on the display interface; the region used to display the color material may be a preset region of the display interface. For example, when the user applies makeup with the aid of the terminal's front camera and confirms the recommended eyebrow material, the eyebrow outline material corresponding to the eyebrow material is shown to the user, and the eyebrow color material corresponding to the eyebrow outline is shown in a preset region of the display interface outside the user's person image, so that the user can choose an eyebrow pencil color according to the eyebrow color material and draw the eyebrows according to the eyebrow outline material.
Optionally, after the request receiving unit 407 receives the material confirmation for the image processing material, the material display unit 404 may also display the outline material and the color material corresponding to the image processing material in a translucent manner on the target processing site in the target person image.
In the embodiment of the present invention, the portrait feature information corresponding to the target processing site is extracted from the target person image acquired in real time, an image processing material matching the portrait feature information is then obtained, and the image processing material is displayed on the target processing site in the target person image. Image processing materials are thus recommended to the user during shooting according to the portrait feature information in the target person image, improving the efficiency and intelligence of image processing.
Referring to Fig. 5, which is a schematic structural diagram of an embodiment of the material determining unit in the embodiment shown in Fig. 4, the material determining unit 403 may comprise a feature category determining unit 4301 and a material obtaining unit 4302.
The feature category determining unit 4301 is configured to determine the portrait feature category that matches the portrait feature information.
For example, if the target processing site is the eye shadow area and the extracted portrait feature information is the RGB value of the clothing color, where R=153, G=510, B=0, then the portrait feature category matching the portrait feature information is red-family clothing color. As another example, if the target processing site is the eyebrow area and the extracted portrait feature information is that the angle between the chin center and the lines to the left and right outer eye corners is 65 degrees, then the portrait feature category matching the portrait feature information is a round face.
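The face-shape classification in the second example might be sketched as below. The 65-degree value comes from the example above, but the threshold used for the decision is an assumption made only for this sketch.

```python
# Illustrative sketch: classify face shape from the angle between the chin
# center and the lines to the left and right outer eye corners.
def classify_face_shape(chin_to_eye_corner_angle_deg: float) -> str:
    """Return a coarse face-shape category for the measured angle (degrees)."""
    if chin_to_eye_corner_angle_deg >= 60.0:   # assumed threshold, not from the patent
        return "round_face"
    return "other_face_shape"

print(classify_face_shape(65.0))   # round_face, matching the example above
```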
The material obtaining unit 4302 is configured to obtain the image processing material corresponding to the portrait feature category.
For example, if the portrait feature category obtained by the feature category determining unit 4301 is red-family clothing color, the image processing material corresponding to red-family clothing color is earth-tone eye shadow. As another example, if the portrait feature category obtained by the feature category determining unit 4301 is a round face, the image processing material corresponding to a round face is straight eyebrows.
Optionally, the material obtaining unit 4302 is configured to, when at least two image processing materials correspond to the portrait feature category, obtain the image processing material with the highest priority according to the preset priority levels of the image processing materials corresponding to the portrait feature category.
In the embodiment of the present invention, the portrait feature information corresponding to the target processing site is extracted from the target person image acquired in real time, an image processing material matching the portrait feature information is then obtained, and the image processing material is displayed on the target processing site in the target person image. Image processing materials are thus recommended to the user during shooting according to the portrait feature information in the target person image, improving the efficiency and intelligence of image processing.
Referring to Fig. 6, which is a schematic structural diagram of another terminal according to an embodiment of the present invention, the terminal described in this embodiment may comprise at least one input device 601, at least one output device 602, at least one processor 603 (for example a CPU), a memory 604, and at least one bus 605.
The bus 605 connects the input device 601, the output device 602, the processor 603, and the memory 604.
The first input device 601 may specifically be the camera of the terminal, used to acquire the user's target person image in real time. The second input device 607 may specifically be the touch panel of the terminal, including a touchscreen, a touchpad, and a microphone, and senses operation instructions on the terminal's touch panel or operation instructions given by voice.
The output device 602 may specifically be the display screen of the terminal, used to display the target person image and the image processing materials.
The memory 604 may be a high-speed RAM memory or a non-volatile memory, such as a magnetic disk memory. The memory 604 stores a set of program code, and the first input device 601, the output device 602, the processor 603, and the second input device 607 call the program code stored in the memory 604 to perform the following operations:
The first input device 601 is configured to acquire the user's target person image in real time.
The processor 603 is configured to extract portrait feature information corresponding to the target processing site in the target person image.
The processor 603 is further configured to determine an image processing material that matches the portrait feature information.
The output device 602 is configured to display the image processing material on the target processing site in the target person image.
In an embodiment, the second input device 607 is configured to receive an image processing request triggered by the user;
the first input device 601 is configured to identify, in real time, the target person image within the camera's shooting range.
Further, in an optional embodiment, the second input device 607 is further configured to obtain the user's zoom-in instruction for the target person image after the first input device 601 identifies the target person image within the shooting range in real time;
the processor 603 is further configured to enlarge the target person image.
Further, the second input device 607 is further configured to receive a local processing request for the target processing site in the target person image before the processor 603 extracts the portrait feature information corresponding to the target processing site in the target person image.
Further, in an optional embodiment, the processor 603 is further configured to determine the portrait feature category that matches the portrait feature information;
the processor 603 is further configured to obtain the image processing material corresponding to the portrait feature category.
In another embodiment, the processor 603 is further configured to, when at least two image processing materials correspond to the portrait feature category, obtain the image processing material with the highest priority according to the preset priority levels of the image processing materials corresponding to the portrait feature category.
In another embodiment, the output device 602 is further configured to, when at least two image processing materials correspond to the portrait feature category, display the at least two image processing materials in descending order of priority, according to the preset priority levels of the image processing materials corresponding to the portrait feature category, in the region outside the target person image, and to display the image processing material chosen by the user on the target processing site in the target person image.
In another embodiment, the second input device 607 is further configured to obtain, after the output device 602 displays the image processing material on the target processing site in the target person image, the user's zoom-in instruction for the local region to which the image processing material and the target processing site belong;
the processor 603 is configured to enlarge the local region to which the image processing material and the target processing site belong.
In another embodiment, the second input device 607 is further configured to receive, after the output device 602 displays the image processing material on the target processing site in the target person image, a size-change request for the image processing material in the target person image;
the processor 603 is further configured to change the size of the image processing material according to the size-change request.
In a further optional embodiment, the second input device 607 is further configured to receive a material confirmation for the image processing material;
the output device 602 is further configured to display the outline material corresponding to the image processing material on the target processing site in the target person image and to display the color material corresponding to the image processing material in the region outside the target person image.
In a specific implementation, the first input device 601, the output device 602, the processor 603, and the second input device 607 described in this embodiment of the present invention may perform the implementations described in the first to fifth embodiments of the method of the present invention; details are not repeated here.
The modules or sub-modules in all embodiments of the present invention may be implemented by a general-purpose integrated circuit, such as a CPU (Central Processing Unit), or by an ASIC (Application Specific Integrated Circuit).
The steps of the methods in the embodiments of the present invention may be reordered, combined, or deleted according to actual needs.
The units of the apparatuses in the embodiments of the present invention may be combined, divided, or deleted according to actual needs.
A person of ordinary skill in the art will understand that all or part of the processes of the methods in the above embodiments may be implemented by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
What is disclosed above is merely preferred embodiments of the present invention, which certainly cannot be used to limit the scope of the claims of the present invention. Therefore, equivalent variations made according to the claims of the present invention still fall within the scope of the present invention.

Claims (10)

1. A terminal, characterized by comprising:
an image acquisition unit, configured to acquire the user's target person image in real time;
an information extraction unit, configured to extract portrait feature information corresponding to a target processing site in the target person image;
a material determining unit, configured to determine an image processing material that matches the portrait feature information; and
a material display unit, configured to display the image processing material on the target processing site in the target person image.
2. The terminal according to claim 1, characterized in that the image acquisition unit is configured to receive an image processing request triggered by the user and identify, in real time, the target person image within the camera's shooting range.
3. The terminal according to claim 2, characterized by further comprising:
a portrait zoom instruction acquiring unit, configured to obtain the user's zoom-in instruction for the target person image after the image acquisition unit identifies the target person image within the camera's shooting range in real time; and
a size changing unit, configured to enlarge the target person image.
4. The terminal according to claim 1, characterized by further comprising:
a request receiving unit, configured to receive a local processing request for the target processing site in the target person image before the information extraction unit extracts the portrait feature information corresponding to the target processing site in the target person image.
5. The terminal according to claim 1, characterized in that the material determining unit comprises:
a feature category determining unit, configured to determine the portrait feature category that matches the portrait feature information; and
a material obtaining unit, configured to obtain the image processing material corresponding to the portrait feature category.
6. The terminal according to claim 5, characterized in that the material obtaining unit is configured to:
when at least two image processing materials correspond to the portrait feature category, obtain the image processing material with the highest priority according to the preset priority levels of the image processing materials corresponding to the portrait feature category.
7. The terminal according to claim 5, characterized in that
the material display unit is further configured to, when at least two image processing materials obtained by the material obtaining unit correspond to the portrait feature category, display the at least two image processing materials in descending order of priority, according to the preset priority levels of the image processing materials corresponding to the portrait feature category, in the region outside the target person image, and to display the image processing material chosen by the user on the target processing site in the target person image.
8. The terminal according to claim 1, characterized in that the terminal further comprises:
a region amplification instruction receiving unit, configured to acquire an amplification instruction of the user for the local region containing the image processing material and the target processing part after the material display unit displays the image processing material at the target processing part in the target person image; and
the size changing unit is further configured to enlarge the local region containing the image processing material and the target processing part.
9. The terminal according to claim 1, characterized in that
the request receiving unit is further configured to receive a size change request for the image processing material in the target person image after the material display unit displays the image processing material at the target processing part in the target person image; and
the size changing unit is further configured to change the size of the image processing material according to the size change request.
10. The terminal according to claim 1, characterized in that the image processing material comprises a profile material and a color material;
the request receiving unit is further configured to receive a material confirmation for the image processing material after the material display unit displays the image processing material at the target processing part in the target person image; and
the material display unit is further configured to display the profile material corresponding to the image processing material at the target processing part in the target person image while displaying the color material corresponding to the image processing material in a region outside the target person image, or to display the profile material and the color material corresponding to the image processing material in a translucent manner at the target processing part in the target person image.
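
For readers approaching the claims from an implementation angle, the following is a minimal, non-normative Kotlin sketch of the matching flow described in claims 1 and 5 to 7: the portrait characteristic information extracted for the target processing part is mapped to a portrait feature classification, the materials registered for that classification are ranked by their preset priority levels, and either the full ranked candidate list (for the side-panel display of claim 7) or the single highest-priority material (claim 6) is returned. All identifiers here (PortraitFeatureInfo, MaterialLibrary, and the toy classification rule keyed on the target part) are illustrative assumptions and are not defined by the patent.

```kotlin
// Illustrative sketch only; the types and the classification rule are assumptions.
data class PortraitFeatureInfo(val targetPart: String, val descriptors: List<String>)

data class ImageProcessingMaterial(val name: String, val classification: String, val priority: Int)

class MaterialLibrary(private val materials: List<ImageProcessingMaterial>) {

    // Claim 5: map the extracted characteristic information to a portrait feature classification.
    // A real classifier would also use the descriptors; here a toy rule on the target part stands in.
    fun classify(info: PortraitFeatureInfo): String = when (info.targetPart) {
        "lips" -> "thin-lips"
        "eyes" -> "single-eyelid"
        else -> "generic"
    }

    // Claim 7: all candidate materials for the classification, ordered by preset priority (highest first).
    fun candidates(classification: String): List<ImageProcessingMaterial> =
        materials.filter { it.classification == classification }
                 .sortedByDescending { it.priority }

    // Claim 6: when several materials match, pick the one with the highest priority level.
    fun bestMatch(info: PortraitFeatureInfo): ImageProcessingMaterial? =
        candidates(classify(info)).firstOrNull()
}

fun main() {
    val library = MaterialLibrary(
        listOf(
            ImageProcessingMaterial("full-lip contour", "thin-lips", priority = 2),
            ImageProcessingMaterial("gradient lip tint", "thin-lips", priority = 5),
        )
    )
    val info = PortraitFeatureInfo(targetPart = "lips", descriptors = listOf("thin", "pale"))
    println(library.candidates(library.classify(info)).map { it.name }) // ranked list for the side region
    println(library.bestMatch(info)?.name)                              // material shown on the target part
}
```

Keeping the priority level as a plain integer on each material record mirrors the "preset priority level" wording of claims 6 and 7 and reduces the ranking step to a simple sort.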
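The translucent display mode of claim 10 can be read as ordinary alpha blending of the color material over the pixels of the target processing part. The sketch below is again an assumption rather than the patented implementation: it blends one 8-bit RGB pixel of the portrait with the corresponding color-material pixel at 50% opacity.

```kotlin
// Illustrative alpha-blend sketch for the translucent display mode of claim 10; assumes 8-bit RGB pixels.
data class Rgb(val r: Int, val g: Int, val b: Int)

// Blend a color-material pixel over the original portrait pixel with the given opacity (0.0–1.0).
fun blend(base: Rgb, overlay: Rgb, alpha: Double): Rgb {
    fun mix(a: Int, b: Int) = (a * (1 - alpha) + b * alpha).toInt().coerceIn(0, 255)
    return Rgb(mix(base.r, overlay.r), mix(base.g, overlay.g), mix(base.b, overlay.b))
}

fun main() {
    val skin = Rgb(220, 180, 160)           // portrait pixel inside the target processing part
    val lipColorMaterial = Rgb(200, 40, 60) // color material sampled at the same position
    println(blend(skin, lipColorMaterial, alpha = 0.5)) // translucent preview pixel
}
```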
CN201410856536.3A 2014-12-31 2014-12-31 Terminal Pending CN104574310A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410856536.3A CN104574310A (en) 2014-12-31 2014-12-31 Terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410856536.3A CN104574310A (en) 2014-12-31 2014-12-31 Terminal

Publications (1)

Publication Number Publication Date
CN104574310A true CN104574310A (en) 2015-04-29

Family

ID=53090291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410856536.3A Pending CN104574310A (en) 2014-12-31 2014-12-31 Terminal

Country Status (1)

Country Link
CN (1) CN104574310A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101673475A (en) * 2009-09-15 2010-03-17 宇龙计算机通信科技(深圳)有限公司 Method for realizing making-up guidance at terminal and equipment and system
US20120243780A1 (en) * 2011-03-21 2012-09-27 Apple Inc. Red-Eye Removal Using Multiple Recognition Channels
CN102509316A (en) * 2011-09-23 2012-06-20 上海华勤通讯技术有限公司 Mobile terminal and image animating method
CN102708575A (en) * 2012-05-17 2012-10-03 彭强 Daily makeup design method and system based on face feature region recognition
CN103024167A (en) * 2012-12-07 2013-04-03 广东欧珀移动通信有限公司 Photographing method and system for mobile terminal
CN103605975A (en) * 2013-11-28 2014-02-26 小米科技有限责任公司 Image processing method and device and terminal device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107944093A (en) * 2017-11-02 2018-04-20 广东数相智能科技有限公司 A kind of lipstick color matching system of selection, electronic equipment and storage medium
CN109359317A (en) * 2017-11-02 2019-02-19 广东数相智能科技有限公司 A kind of lipstick is matched colors the model building method and lipstick color matching selection method of selection
WO2021031147A1 (en) * 2019-08-21 2021-02-25 L'oreal Computing device, method and apparatus for recommending at least one of makeup palette or hair coloration scheme
JP2022538094A (en) * 2019-08-21 2022-08-31 ロレアル Computing device, method, and apparatus for recommending at least one of makeup palettes and hair dye color schemes

Similar Documents

Publication Publication Date Title
CN104573721A (en) Image processing method
CN109816764B (en) Image generation method and device, electronic equipment and storage medium
CN105488511B (en) The recognition methods of image and device
CN105512605A (en) Face image processing method and device
CN104967784B (en) Mobile terminal calls the method and mobile terminal of the substrate features pattern of camera function
CN109325924B (en) Image processing method, device, terminal and storage medium
CN111091610B (en) Image processing method and device, electronic equipment and storage medium
US20210097651A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN105354792A (en) Method for trying virtual glasses and mobile terminal
CN113570052B (en) Image processing method, device, electronic equipment and storage medium
EP4191513A1 (en) Image processing method and apparatus, device and storage medium
CN108776800B (en) Image processing method, mobile terminal and computer readable storage medium
CN105872252A (en) Image processing method and device
CN105574834B (en) Image processing method and device
CN104766354A (en) Method for augmented reality drawing and mobile terminal
CN104574310A (en) Terminal
CN108171716B (en) Video character decorating method and device based on self-adaptive tracking frame segmentation
CN105094297A (en) Display content zooming method and display content zooming device
CN105426904A (en) Photo processing method, apparatus and device
CN104536566A (en) Page content processing method
CN105224680A (en) A kind of method of search for application and terminal
CN111652792B (en) Local processing method, live broadcasting method, device, equipment and storage medium for image
CN108010009A (en) A kind of method and device for removing interference figure picture
CN104917963A (en) Image processing method and terminal
CN111640190A (en) AR effect presentation method and apparatus, electronic device and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150429