WO2018076622A1 - Image processing method, apparatus, and terminal - Google Patents

Image processing method, apparatus, and terminal

Info

Publication number
WO2018076622A1
WO2018076622A1 (application PCT/CN2017/080371)
Authority
WO
WIPO (PCT)
Prior art keywords
image
facial
area
display screen
image processing
Prior art date
Application number
PCT/CN2017/080371
Other languages
English (en)
French (fr)
Inventor
郑小红
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2018076622A1 publication Critical patent/WO2018076622A1/zh


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Definitions

  • The present disclosure relates to the field of communications, and in particular to an image processing method, apparatus, and terminal.
  • FIG. 1 is a schematic diagram of the beauty menu of a terminal product in the related art when post-capture beautification is performed.
  • Beautifying a face requires the following steps: first, the beautification module is started and the face to be processed is selected; then the beauty menu at the bottom of the screen is used to choose among different beautification methods.
  • For example, clicking the "eyes" menu brings up the corresponding buttons, or a drag bar, for eye treatment, and the beautification of the eyes is completed by dragging the bar.
  • In this way, the different parts of the face are beautified separately (for example: skin, face shape, eyes, nose, mouth) until the desired effect is achieved.
  • This approach involves many selection steps, the beautification procedure is not intuitive, the ideal effect of beautifying wherever the user clicks cannot be achieved, convenience of use is poor, and the user experience suffers.
  • In the related art, the image processing mode cannot perform image processing on a designated area simply by clicking that area of the image, which results in a poor user experience, and no effective solution has yet been proposed.
  • Embodiments of the present disclosure provide an image processing method, apparatus, and terminal, so as to at least solve the problem in the related art that clicking a designated area of an image cannot trigger image processing of that area, which results in a poor user experience.
  • According to one aspect of the present disclosure, an image processing method is provided, including: detecting an operation on an image to be processed on a display screen and acquiring position information corresponding to the operation, wherein the operation includes a touch operation or a pressing operation and the image to be processed includes a facial image; when the operation is performed on the facial image, determining the facial feature corresponding to the operation according to the position information corresponding to the operation; and displaying an image processing menu corresponding to the facial feature.
  • Optionally, before the operation on the image to be processed on the display screen is detected, the method further includes: performing a recognition operation on the facial image to obtain a correspondence between the facial features and rectangular areas of the display screen, wherein the rectangular areas are divided in advance according to a specified rule.
  • Obtaining the correspondence between the facial features and the rectangular areas of the display screen includes: determining position coordinates corresponding to all the facial features, wherein the facial features include: forehead, eyebrows, eyes, nose, left cheek, right cheek, lips, and chin; and establishing a correspondence between the position coordinates corresponding to all the facial features and the rectangular areas.
  • Determining the facial feature region corresponding to the operation according to the position information corresponding to the operation includes: acquiring first image data in a circular area centered on the coordinates of the position information, with a first number of pixels as the radius, identifying the first image data, and determining the facial feature corresponding to the operation from the recognition result; optionally, the method further includes acquiring and identifying second image data in a larger circular area to determine whether the feature lies in the left or the right half of the facial image.
  • According to another aspect, an image processing apparatus is provided, comprising:
  • a detecting module configured to detect an operation on an image to be processed on a display screen and acquire position information corresponding to the operation, wherein the operation includes a touch operation or a pressing operation, and the image to be processed includes a facial image;
  • a determining module configured to determine, when the operation is performed on the facial image, the facial feature corresponding to the operation according to the position information corresponding to the operation;
  • a display module configured to display an image processing menu corresponding to the facial feature.
  • Optionally, the apparatus further comprises:
  • an identification module configured to perform a recognition operation on the facial image to obtain a correspondence between the facial features and rectangular areas of the display screen, wherein the rectangular areas are divided in advance according to a specified rule.
  • The identification module comprises:
  • a determining unit configured to determine position coordinates corresponding to all the facial features, wherein the facial features include: forehead, eyebrows, eyes, nose, left cheek, right cheek, lips, and chin;
  • an establishing unit configured to establish a correspondence between the position coordinates corresponding to all the facial features and the rectangular areas.
  • The determining module comprises:
  • a first acquiring unit configured to acquire first image data in a circular area centered on the coordinates of the position information, with a first number of pixels as the radius;
  • a first identifying unit configured to identify the first image data and determine the facial feature corresponding to the operation according to the recognition result.
  • The determining module further includes:
  • a second acquiring unit configured to acquire second image data in a circular area centered on the coordinates of the position information, with a second number of pixels as the radius, wherein the second number is greater than the first number;
  • a second identifying unit configured to identify the second image data and determine, according to the recognition result, the location area of the facial feature corresponding to the operation, wherein the location area is either the left half or the right half of the facial image.
  • According to another aspect, an image processing terminal is provided, including: a processor;
  • a memory for storing instructions executable by the processor;
  • the processor being configured to perform actions according to the instructions stored in the memory, the actions including: detecting an operation on the image to be processed on the display screen and acquiring position information corresponding to the operation; and, when the operation is performed on the facial image, determining the facial feature corresponding to the operation according to that position information;
  • a display screen for displaying an image processing menu corresponding to the facial feature.
  • Optionally, the processor is further configured to perform a recognition operation on the facial image to obtain the correspondence between the facial features and the rectangular areas of the display screen.
  • An embodiment of the present disclosure further provides a computer storage medium, which may store execution instructions for carrying out the image processing method of the above embodiments.
  • Through the present disclosure, an operation on an image to be processed is detected on the display screen and the position information corresponding to the operation is acquired, the image to be processed including a facial image; when the facial image is operated on, the facial feature corresponding to the operation is determined according to the position information corresponding to the operation; the image processing menu corresponding to that facial feature is then displayed. This solves the problem in the related art that clicking a designated area of an image cannot trigger image processing of that area, which results in a poor user experience. The scheme captures the user's real-time operation and offers a targeted image processing menu, effectively achieving the intuitive "beautify wherever you click" effect and ensuring a good user experience.
  • FIG. 1 is a schematic diagram of a beauty menu in the related art;
  • FIG. 2 is a flowchart of an image processing method according to Embodiment 1 of the present disclosure;
  • FIG. 3 is a schematic diagram (1) of dividing a facial image into rectangular regions according to Embodiment 1 of the present disclosure;
  • FIG. 4 is a schematic diagram (2) of dividing a facial image into rectangular regions according to Embodiment 1 of the present disclosure;
  • FIG. 5 is a flowchart of determining the facial feature region corresponding to an operation according to the position information corresponding to the operation, according to Embodiment 1 of the present disclosure;
  • FIG. 6 is a structural block diagram (1) of an image processing apparatus according to Embodiment 2 of the present disclosure;
  • FIG. 7 is a structural block diagram (2) of an image processing apparatus according to Embodiment 2 of the present disclosure;
  • FIG. 8 is a structural block diagram of an image processing terminal according to Embodiment 2 of the present disclosure;
  • FIG. 9 is a flowchart of a photo beautification method according to Embodiment 3 of the present disclosure;
  • FIG. 10 is a schematic diagram (1) of a beauty menu display according to Embodiment 3 of the present disclosure;
  • FIG. 11 is a schematic diagram (2) of a beauty menu display according to Embodiment 3 of the present disclosure;
  • FIG. 12 is a flowchart of a photo beautification method according to Embodiment 4 of the present disclosure;
  • FIG. 13 is a flowchart of a beautification method according to Embodiment 5 of the present disclosure.
  • An image processing method embodiment is provided. It should be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system or mobile terminal, for example as a set of computer- or mobile-terminal-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one given here.
  • FIG. 2 is a flowchart of an image processing method according to Embodiment 1 of the present disclosure. As shown in FIG. 2, the method includes: detecting an operation on the image to be processed on the display screen and acquiring the position information corresponding to the operation; when the facial image is operated on, determining the facial feature corresponding to the operation; and displaying the image processing menu corresponding to that feature.
  • Before the operation on the image to be processed on the display screen is detected, the method further includes: performing a recognition operation on the facial image to obtain the correspondence between the facial features and rectangular areas of the display screen, wherein the rectangular areas are divided in advance according to a specified rule.
  • Obtaining the correspondence between the facial features and the rectangular areas of the display screen includes: determining the position coordinates corresponding to all facial features, wherein the facial features include: forehead, eyebrows, eyes, nose, left cheek, right cheek, lips, and chin; and establishing a correspondence between the position coordinates corresponding to all facial features and the rectangular areas.
  • FIG. 3 is a schematic diagram (1) of dividing a facial image into rectangular regions according to Embodiment 1 of the present disclosure.
  • Dividing the rectangular regions according to a specified rule may mean that, based on facial-structure characteristics and the differing pixel densities of different parts of the face, the display area of the facial image on the display screen is divided into an upper-face display area, a mid-face display area, and a lower-face display area.
  • FIG. 4 is a schematic diagram (2) of dividing a facial image into rectangular regions according to Embodiment 1 of the present disclosure.
  • Dividing the rectangular regions according to a specified rule may also mean that, based on facial-structure characteristics and the differing pixel densities of different parts of the face, the display area of the facial image on the display screen is divided into several rectangular areas, each of which displays a certain part of the face.
  • Establishing the correspondence between the position coordinates of all facial features and each sub-rectangular area can be implemented as follows: as shown in FIG. 3, first determine the screen coordinates (x, y) of the brow center (the midpoint between the eyebrows) and from them the coordinate range of the rectangular area containing the brow center; identify, from facial-structure characteristics, the rectangular areas containing the other facial features; then calculate the coordinate range of each rectangular area and build a database of correspondences between the coordinates of each rectangular area and the facial feature displayed in that area.
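  • For illustration only, the following Python sketch shows one way such a coordinate database could be built and queried. The facial proportions, feature names, and helper functions are assumptions of this sketch, not values specified by the patent.

```python
# Hypothetical construction of the rectangle-coordinate / facial-feature
# database from the brow-center position, assuming fixed facial proportions.

def build_feature_rects(brow_x, brow_y, face_w, face_h):
    """Map each facial feature to an (x0, y0, x1, y1) screen rectangle."""
    u = face_h / 8.0          # one illustrative "unit" of face height
    half = face_w / 2.0
    return {
        "forehead":    (brow_x - half, brow_y - 3*u, brow_x + half, brow_y - u),
        "eyebrows":    (brow_x - half, brow_y - u,   brow_x + half, brow_y),
        "eyes":        (brow_x - half, brow_y,       brow_x + half, brow_y + u),
        "nose":        (brow_x - u,    brow_y + u,   brow_x + u,    brow_y + 3*u),
        "left_cheek":  (brow_x - half, brow_y + u,   brow_x - u,    brow_y + 3*u),
        "right_cheek": (brow_x + u,    brow_y + u,   brow_x + half, brow_y + 3*u),
        "lips":        (brow_x - 2*u,  brow_y + 3*u, brow_x + 2*u,  brow_y + 4*u),
        "chin":        (brow_x - 2*u,  brow_y + 4*u, brow_x + 2*u,  brow_y + 5*u),
    }

def feature_at(rects, x, y):
    """Return the facial feature whose rectangle contains (x, y), else None."""
    for name, (x0, y0, x1, y1) in rects.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None
```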
  • FIG. 5 is a flowchart of determining the facial feature region corresponding to an operation according to the position information corresponding to the operation, according to Embodiment 1 of the present disclosure. As shown in FIG. 5, in an optional example of this embodiment, this determination may be implemented as follows:
  • S502: acquire the first image data in a circular area centered on the coordinates of the position information corresponding to the operation, with a first number of pixels as the radius;
  • S504: identify the first image data and determine the facial feature corresponding to the operation according to the recognition result;
  • S506: acquire the second image data in a circular area centered on the coordinates of the position information corresponding to the operation, with a second number of pixels as the radius, the second number being greater than the first;
  • S508: identify the second image data and determine, from the recognition result, whether the facial feature corresponding to the operation lies in the left half or the right half of the facial image.
  • A "left-right symmetry" option may be provided in the image processing menu to synchronize the beautification effects applied to the left half and the right half of the facial image.
  • As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function.
  • Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
  • FIG. 6 is a structural block diagram (1) of an image processing apparatus according to Embodiment 2 of the present disclosure. As shown in FIG. 6, the apparatus includes:
  • a detecting module 60 configured to detect an operation on the image to be processed on the display screen and acquire position information corresponding to the operation, wherein the operation includes a touch operation or a pressing operation, and the image to be processed includes a facial image;
  • a determining module 62 configured to determine, when the facial image is operated on, the facial feature corresponding to the operation according to the position information corresponding to the operation;
  • a display module 64 configured to display the image processing menu corresponding to the facial feature.
  • Through these modules, the detecting module detects the operation on the image to be processed on the display screen and acquires the corresponding position information, the image to be processed including a facial image; when the facial image is operated on, the determining module determines the facial feature corresponding to the operation from that position information; the display module then displays the image processing menu for that feature. This solves the problem in the related art that clicking a designated area of an image cannot trigger image processing of that area, with its resulting poor user experience: the apparatus captures the user's real-time operation and offers a targeted image processing menu, effectively achieving the intuitive "beautify wherever you click" effect and ensuring a good user experience.
  • FIG. 7 is a structural block diagram (2) of an image processing apparatus according to Embodiment 2 of the present disclosure.
  • As shown in FIG. 7, the apparatus further includes: an identification module 66 configured to perform a recognition operation on the facial image to obtain the correspondence between the facial features and the rectangular areas of the display screen,
  • wherein the rectangular areas are divided in advance according to a specified rule.
  • The identification module 66 includes: a determining unit 660 configured to determine the position coordinates corresponding to all facial features, wherein the facial features include: forehead, eyebrows, eyes, nose, left cheek, right cheek, lips, and chin; and an establishing unit 662 configured to establish the correspondence between the position coordinates corresponding to all facial features and the rectangular areas.
  • Here, dividing the rectangular areas according to the specified rule may mean dividing the display area of the facial image on the display screen, based on facial-structure characteristics and the differing pixel densities of different parts of the face, into upper-face, mid-face, and lower-face display areas; or dividing it into several rectangular areas, each of which displays a certain part of the face.
  • Establishing the correspondence between the position coordinates of all facial features and each sub-rectangular area may be implemented as follows: first determine the screen coordinates (x0, y0) of the brow center and from them the coordinate range of the rectangular area containing the brow center; identify, from facial-structure characteristics, the rectangular areas containing the other facial features; then calculate the coordinate range of each rectangular area and build a database of correspondences between the coordinates of each rectangular area and the facial feature in that area.
  • The determining module 62 includes: a first acquiring unit 620 configured to acquire first image data in a circular area centered on the coordinates of the position information, with a first number of pixels as the radius; and a first identifying unit 622 configured to identify the first image data and determine the facial feature corresponding to the operation according to the recognition result.
  • The determining module 62 further includes: a second acquiring unit 624 configured to acquire second image data in a circular area centered on the coordinates of the position information, with a second number of pixels as the radius, the second number being greater than the first; and a second identifying unit 626 configured to identify the second image data and determine, from the recognition result, whether the facial feature corresponding to the operation lies in the left half or the right half of the facial image.
  • A "left-right symmetry" option can be provided in the image processing menu to synchronize the beautification effects applied to the left half and the right half of the facial image.
  • This embodiment further provides an image processing terminal, which is used to implement the image processing method of the foregoing embodiment and its optional examples.
  • The terminal includes: a processor 82; a memory 84 configured to store instructions executable by the processor; and a display screen 80.
  • The processor 82 is configured to perform actions according to the instructions stored in the memory 84, including:
  • detecting an operation on the image to be processed on the display screen and acquiring the position information corresponding to the operation, wherein the operation includes a touch operation or a pressing operation, and the image to be processed includes a facial image; and, when the facial image is operated on, determining the facial feature corresponding to the operation according to that position information.
  • The display screen 80 is configured to display the image processing menu corresponding to the facial feature.
  • In this terminal, the display screen, processor, and memory cooperate to detect the operation on the image to be processed, acquire the corresponding position information, determine the facial feature corresponding to the operation when the facial image is operated on, and display the image processing menu for that feature. This solves the problem in the related art that clicking a designated area of an image cannot trigger image processing of that area, with its resulting poor user experience; the terminal captures the user's real-time operation and offers a targeted image processing menu, effectively achieving the intuitive "beautify wherever you click" effect and ensuring a good user experience.
  • The processor 82 is further configured to: perform a recognition operation on the facial image to obtain the correspondence between the facial features and the rectangular areas of the display screen, wherein the rectangular areas are divided in advance according to a specified rule.
  • The processor 82 is further configured to: determine the position coordinates corresponding to all facial features, wherein the facial features include: forehead, eyebrows, eyes, nose, left cheek, right cheek, lips, and chin; and establish the correspondence between those position coordinates and the rectangular areas.
  • The processor 82 is further configured to: acquire first image data in a circular area centered on the coordinates of the position information, with a first number of pixels as the radius; and identify the first image data and determine the facial feature corresponding to the operation according to the recognition result.
  • The processor 82 is further configured to: acquire second image data in a circular area centered on the coordinates of the position information, with a second number of pixels as the radius, the second number being greater than the first; and identify the second image data and determine, from the recognition result, whether the facial feature corresponding to the operation lies in the left half or the right half of the facial image.
  • Taking face beautification as an example, this embodiment provides a photo beautification method.
  • FIG. 9 is a flowchart of the photo beautification method according to Embodiment 3 of the present disclosure. As shown in FIG. 9, the method includes the following steps.
  • The face recognition module is started and performs face recognition on the photo, identifying the correspondence between each part of the face and a rectangular region on the display screen.
  • The face parts recognized by the face recognition module include the eyes, eyebrows, forehead, nose, mouth, left cheek, right cheek, chin, and so on.
  • Based on facial-structure characteristics and the pixel characteristics of different regions of the face, the face recognition module divides the face display area into three parts: an upper-face display area, a mid-face display area, and a lower-face display area.
  • The face display area is further divided into several sub-rectangular areas according to the same characteristics, each sub-rectangular area displaying one part of the face.
  • The face recognition module calculates the screen coordinates (x0, y0) of the brow center and, from facial structure, calculates the coordinates of each sub-rectangular area.
  • FIG. 4 is a schematic diagram of the distribution of the sub-rectangular areas of the face parts according to Embodiment 3 of the present disclosure. As shown in FIG. 4, a database of correspondences between the coordinates of each sub-rectangular area and the face part in that area is established.
  • The first acquiring module acquires the current finger-touch parameters, which include the touch position coordinates.
  • The touch parameters may also include the pressure magnitude f and the touch duration t.
  • The first judging module determines whether the touch-click (or press) coordinates fall within the rectangular-area coordinates of some part of the face. If the position falls outside the coordinates of all face-part rectangles, the touch-click (or press) action is not responded to.
  • Optionally, a second judging module is added to determine whether the duration (and/or pressure value) of the touch reaches a preset threshold.
  • Optionally, when the second judging module determines that the touch (or click) reaches the preset threshold, the rectangular-area coordinate database of face parts is searched to find the face part corresponding to the current touch position.
  • S908: in a preset screen area, the beauty menu and beautification operating instructions corresponding to the currently touched face part are displayed, prompting the user to carry out further beautification of that part (a dispatch sketch follows below).
  • The preset screen area is preferably a non-face display area.
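  • A hypothetical sketch of this dispatch logic follows; the threshold values and function names are assumptions, and feature_at() is the rectangle-database lookup sketched under Embodiment 1 above.

```python
# Hypothetical dispatch for S906-S908: ignore touches outside every face-part
# rectangle, gate on preset duration/pressure thresholds, then show the menu.
MIN_DURATION_S = 0.3   # assumed preset duration threshold
MIN_PRESSURE = 0.2     # assumed preset pressure threshold

def show_beauty_menu(part):
    # Placeholder: render the part's beauty menu in a non-face screen area.
    print(f"beauty menu for: {part}")

def on_touch(rects, x, y, duration_s, pressure):
    part = feature_at(rects, x, y)      # rectangle-database lookup
    if part is None:
        return                          # outside the face: no response
    if duration_s < MIN_DURATION_S and pressure < MIN_PRESSURE:
        return                          # below both preset thresholds
    show_beauty_menu(part)              # S908: prompt further beautification
```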
  • FIG. 10 is a schematic diagram (1) of the beauty menu display according to Embodiment 3 of the present disclosure;
  • FIG. 11 is a schematic diagram (2) of the beauty menu display according to Embodiment 3 of the present disclosure.
  • The beauty menus for the various parts of the face are displayed as drop-down menus at the top of the screen.
  • Of the multiple menus for the parts of the face (forehead, eyes, nose, mouth, chin), only the menu corresponding to the currently clicked face part pops up in step S908 at any one time;
  • the beauty menus of the other, untouched face parts remain hidden and invisible.
  • According to the present disclosure, the preset beauty menu of the current part pops up according to the touch parameter values for that part, the user is prompted to beautify the part, and the processed effect is displayed in real time, which greatly improves the convenience and enjoyment of beautification.
  • FIG. 12 is a flowchart of a photo beautification method according to Embodiment 4 of the present disclosure. As shown in FIG. 12, the method includes the following steps.
  • S1202: select and open the photo to be processed, the photo containing one or more facial images.
  • S1204: start the face recognition module, perform face recognition on the photo, and identify the coordinate area of the face on the display screen according to facial structure and the pixel characteristics of each part of the face.
  • A rectangular frame may be displayed over the coordinate area of the facial image to indicate the current face position.
  • The first acquiring module acquires the current finger-touch parameters, which include the touch position coordinates.
  • The touch parameters also include the pressure magnitude f and the touch duration t.
  • The first judging module determines whether the touch-click (or press) coordinates fall within the rectangular-area coordinates of the face. If the position falls outside them, the touch-click (or press) action is not responded to.
  • Optionally, a second judging module is added to determine whether the duration (and/or pressure value) of the touch reaches a preset threshold.
  • S1208: taking the touch position coordinates (x1, y1) as the center, acquire the image data within a radius of M pixels; the face recognition module identifies the data in this area according to facial structure and the pixel characteristics of each part of the face, and thereby determines the face part at the current touch position.
  • Because of facial symmetry, the image data within the current M-pixel radius may correspond to either the left or the right side of the face.
  • Therefore, taking the touch center as the origin, the image data within a radius of N pixels is acquired to further determine whether the touched region lies in the left or the right half of the face.
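  • The embodiment makes this left/right decision by recognizing the larger N-pixel patch; as a simplified stand-in, once the face's bounding box is known, comparing the touch position with the face's vertical midline gives the same answer. The names and inputs below are assumptions of the sketch.

```python
# Simplified stand-in for the N-pixel-radius left/right judgement: compare
# the touch centre with the face's vertical midline.
def face_half(touch_x, face_left_x, face_right_x):
    """Return which half of the face the touch falls in."""
    midline = (face_left_x + face_right_x) / 2.0
    return "left" if touch_x < midline else "right"

# e.g. face_half(210, 100, 400) -> "left" (the midline is at 250)
```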
  • S1210: in a preset screen area, display the beauty menu and beautification operating instructions corresponding to the currently touched face part, prompting the user to carry out further beautification of that part.
  • Because of facial symmetry, a "left-right symmetry" option is added to the beauty menu to synchronize the beautification effects on the left and right sides of the face and keep them consistent.
  • According to the present disclosure, the preset beauty menu of the current part pops up according to the touch parameter values for that part, the user is prompted to beautify the part, and the processed effect is displayed in real time, which greatly improves the convenience and enjoyment of beautification.
  • FIG. 13 is a flowchart of a beautification method according to Embodiment 5 of the present disclosure. As shown in FIG. 13, the method includes:
  • Step 1302: start the face recognition module, recognize the face, and find the screen coordinates (x2, y2) of the brow center according to the eye features of the face.
  • Step 1304: according to facial structure and the pixel characteristics of the eye region, find the display area containing the eyes and establish the correspondence between the coordinates of the rectangular area containing the eyes and the eyes; likewise, according to the pixel characteristics of the other parts of the face, find the rectangular areas containing the other facial features and establish the correspondences for their rectangular-area coordinates.
  • Step 1306: acquire the current touch-click event and evaluate the current touch parameters.
  • Step 1308: when the current touch parameters are judged to reach the preset beautification threshold, search the rectangular-area database of face parts according to the current touch coordinates and find the corresponding face part.
  • Step 1310: pop up a prompt box in a preset non-face area with the preset beautification menu corresponding to the eyes, as shown in FIG. 11.
  • This embodiment provides a photo beautification method in which, following the presented eye-beautification menu and operating instructions, the eyes can be further beautified, which improves the operability and convenience of beautification.
  • Embodiments of the present disclosure also provide a storage medium.
  • The foregoing storage medium may be used to store the program code executed by the image processing method provided in Embodiment 1 above.
  • The foregoing storage medium may be located in any mobile terminal of a mobile terminal group in a computer network.
  • The storage medium is arranged to store program code for performing the following steps:
  • detecting an operation on the image to be processed on the display screen and acquiring the position information corresponding to the operation, wherein the operation includes a touch operation or a pressing operation, and the image to be processed includes a facial image; when the facial image is operated on, determining the facial feature corresponding to the operation according to that position information; and displaying the image processing menu corresponding to the facial feature.
  • The disclosed technical content may be implemented in other ways.
  • The device embodiments described above are merely illustrative.
  • The division into units is only a division by logical function; in actual implementation there may be other ways of dividing, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or take other forms.
  • The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • Each functional unit in the various embodiments of the present disclosure may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit.
  • The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • The integrated unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer-readable storage medium.
  • The part of the technical solution of the present disclosure that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • The product includes a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the various embodiments of the present disclosure.
  • The foregoing storage medium includes media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • The image processing method provided by the embodiments of the present disclosure can be applied to a mobile terminal: an operation on the image to be processed is detected on the display screen, the position information corresponding to the operation is acquired, the facial feature corresponding to the operation is determined according to that position information, and the image processing menu corresponding to the facial feature is then displayed. The method captures the user's real-time operation and offers a targeted image processing menu, effectively achieving the intuitive "beautify wherever you click" effect and ensuring a good user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An image processing method and apparatus, and a terminal. The image processing method includes: detecting an operation on an image to be processed on a display screen and acquiring position information corresponding to the operation, wherein the operation includes: a touch operation or a pressing operation, and the image to be processed includes: a facial image (S202); when the facial image is operated on, determining the facial feature corresponding to the operation according to the position information corresponding to the operation (S204); and displaying an image processing menu corresponding to the facial feature (S206). The scheme captures the user's real-time operation and offers a targeted image processing menu, effectively achieving the intuitive "beautify wherever you click" effect and ensuring a good user experience.

Description

Image processing method, apparatus, and terminal
Technical field
The present disclosure relates to the field of communications, and in particular to an image processing method, apparatus, and terminal.
Background
In the current related art, the beautification function of terminal products is split into preview-stage processing and post-capture processing; after the preview stage, users are often dissatisfied with the beautification of the photo and process it again afterwards. FIG. 1 is a schematic diagram of the beauty menu of a terminal product in the related art during post-capture beautification. As shown in FIG. 1, in the post-processing stage, beautifying a face requires the following steps: first start the beautification module and select the face to be processed, then use the beauty menu at the bottom of the screen to choose among different beautification methods. For example, clicking the "eyes" menu brings up the corresponding buttons, or a drag bar, for eye treatment, and the beautification of the eyes is completed by dragging the bar. In this way the different parts of the face are beautified separately (for example: skin, face shape, eyes, nose, mouth) until the desired effect is reached. This approach involves many selection steps, the procedure is not intuitive, the ideal effect of beautifying wherever the user clicks cannot be achieved, convenience of use is poor, and the user experience suffers.
For the problem in the related art that the image processing mode cannot perform image processing on a designated area simply by clicking that area of the image, which results in a poor user experience, no effective solution has yet been proposed.
Summary
Embodiments of the present disclosure provide an image processing method, apparatus, and terminal, so as to at least solve the problem in the related art that clicking a designated area of an image cannot trigger image processing of that area, which results in a poor user experience.
According to one aspect of the present disclosure, an image processing method is provided, including:
detecting an operation on an image to be processed on a display screen, and acquiring position information corresponding to the operation, wherein the operation includes: a touch operation or a pressing operation, and the image to be processed includes: a facial image;
when the operation is performed on the facial image, determining the facial feature corresponding to the operation according to the position information corresponding to the operation; and
displaying an image processing menu corresponding to the facial feature.
Preferably, before the operation on the image to be processed on the display screen is detected, the method further includes:
performing a recognition operation on the facial image to obtain a correspondence between the facial features and rectangular areas of the display screen, wherein the rectangular areas are divided in advance according to a specified rule.
Preferably, obtaining the correspondence between the facial features and the rectangular areas of the display screen includes:
determining position coordinates corresponding to all the facial features, wherein the facial features include: forehead, eyebrows, eyes, nose, left cheek, right cheek, lips, and chin; and
establishing a correspondence between the position coordinates corresponding to all the facial features and the rectangular areas.
Preferably, determining the facial feature region corresponding to the operation according to the position information corresponding to the operation includes:
acquiring first image data in a circular area centered on the coordinates of the position information, with a first number of pixels as the radius; and
identifying the first image data, and determining the facial feature corresponding to the operation according to the recognition result.
Preferably, after the first image data is identified and the facial feature corresponding to the operation is determined according to the recognition result, the method further includes:
acquiring second image data in a circular area centered on the coordinates of the position information, with a second number of pixels as the radius, wherein the second number is greater than the first number; and
identifying the second image data, and determining, according to the recognition result, the location area of the facial feature corresponding to the operation, wherein the location area includes: the left half of the facial image and the right half of the facial image.
According to another aspect of the present disclosure, an image processing apparatus is further provided, including:
a detecting module, configured to detect an operation on an image to be processed on a display screen and acquire position information corresponding to the operation, wherein the operation includes: a touch operation or a pressing operation, and the image to be processed includes: a facial image;
a determining module, configured to determine, when the operation is performed on the facial image, the facial feature corresponding to the operation according to the position information corresponding to the operation; and
a display module, configured to display an image processing menu corresponding to the facial feature.
Preferably, the apparatus further includes:
an identification module, configured to perform a recognition operation on the facial image to obtain a correspondence between the facial features and rectangular areas of the display screen, wherein the rectangular areas are divided in advance according to a specified rule.
Preferably, the identification module includes:
a determining unit, configured to determine position coordinates corresponding to all the facial features, wherein the facial features include: forehead, eyebrows, eyes, nose, left cheek, right cheek, lips, and chin; and
an establishing unit, configured to establish a correspondence between the position coordinates corresponding to all the facial features and the rectangular areas.
Preferably, the determining module includes:
a first acquiring unit, configured to acquire first image data in a circular area centered on the coordinates of the position information, with a first number of pixels as the radius; and
a first identifying unit, configured to identify the first image data and determine the facial feature corresponding to the operation according to the recognition result.
Preferably, the determining module further includes:
a second acquiring unit, configured to acquire second image data in a circular area centered on the coordinates of the position information, with a second number of pixels as the radius, wherein the second number is greater than the first number; and
a second identifying unit, configured to identify the second image data and determine, according to the recognition result, the location area of the facial feature corresponding to the operation, wherein the location area includes: the left half of the facial image and the right half of the facial image.
According to another aspect of the present disclosure, an image processing terminal is further provided, including:
a processor;
a memory, configured to store instructions executable by the processor;
the processor being configured to perform actions according to the instructions stored in the memory, the actions including:
detecting an operation on an image to be processed on a display screen, and acquiring position information corresponding to the operation, wherein the operation includes: a touch operation or a pressing operation, and the image to be processed includes: a facial image; and
when the operation is performed on the facial image, determining the facial feature corresponding to the operation according to the position information corresponding to the operation; and
a display screen, configured to display an image processing menu corresponding to the facial feature.
Preferably, the processor is further configured to perform the following action:
performing a recognition operation on the facial image to obtain a correspondence between the facial features and rectangular areas of the display screen, wherein the rectangular areas are divided in advance according to a specified rule.
In an embodiment of the present disclosure, a computer storage medium is further provided, which may store execution instructions for carrying out the image processing method of the above embodiments.
Through the present disclosure, the operation on the image to be processed is detected on the display screen and the position information corresponding to the operation is acquired, the image to be processed including a facial image; when the facial image is operated on, the facial feature corresponding to the operation is determined according to the position information corresponding to the operation; the image processing menu corresponding to the facial feature is then displayed. This solves the problem in the related art that clicking a designated area of an image cannot trigger image processing of that area, which results in a poor user experience; the scheme captures the user's real-time operation and offers a targeted image processing menu, effectively achieving the intuitive "beautify wherever you click" effect and ensuring a good user experience.
Brief description of the drawings
The drawings described here are provided for a further understanding of the present disclosure and form a part of this application. The exemplary embodiments of the present disclosure and their description are used to explain the present disclosure and do not unduly limit it. In the drawings:
FIG. 1 is a schematic diagram of a beauty menu in the related art;
FIG. 2 is a flowchart of an image processing method according to Embodiment 1 of the present disclosure;
FIG. 3 is a schematic diagram (1) of dividing a facial image into rectangular regions according to Embodiment 1 of the present disclosure;
FIG. 4 is a schematic diagram (2) of dividing a facial image into rectangular regions according to Embodiment 1 of the present disclosure;
FIG. 5 is a flowchart of determining the facial feature region corresponding to an operation according to the position information corresponding to the operation, according to Embodiment 1 of the present disclosure;
FIG. 6 is a structural block diagram (1) of an image processing apparatus according to Embodiment 2 of the present disclosure;
FIG. 7 is a structural block diagram (2) of an image processing apparatus according to Embodiment 2 of the present disclosure;
FIG. 8 is a structural block diagram of an image processing terminal according to Embodiment 2 of the present disclosure;
FIG. 9 is a flowchart of a photo beautification method according to Embodiment 3 of the present disclosure;
FIG. 10 is a schematic diagram (1) of a beauty menu display according to Embodiment 3 of the present disclosure;
FIG. 11 is a schematic diagram (2) of a beauty menu display according to Embodiment 3 of the present disclosure;
FIG. 12 is a flowchart of a photo beautification method according to Embodiment 4 of the present disclosure;
FIG. 13 is a flowchart of a beautification method according to Embodiment 5 of the present disclosure.
Detailed description
The present disclosure will be described in detail below with reference to the drawings and in combination with the embodiments. It should be noted that, where no conflict arises, the embodiments in this application and the features in the embodiments may be combined with each other.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present disclosure are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments of the present disclosure described here can be implemented in orders other than those illustrated or described. Moreover, the terms "including" and "having", and any variants of them, are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to the process, method, product, or device.
Embodiment 1
According to an embodiment of the present disclosure, an image processing method embodiment is provided. It should be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system or mobile terminal, for example as a set of computer- or mobile-terminal-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one given here.
FIG. 2 is a flowchart of the image processing method according to Embodiment 1 of the present disclosure. As shown in FIG. 2, the method includes:
S202: detecting an operation on the image to be processed on the display screen, and acquiring position information corresponding to the operation, wherein the operation includes: a touch operation or a pressing operation, and the image to be processed includes: a facial image;
S204: when the facial image is operated on, determining the facial feature corresponding to the operation according to the position information corresponding to the operation;
S206: displaying the image processing menu corresponding to the facial feature.
Through the above steps, the operation on the image to be processed is detected on the display screen and the position information corresponding to the operation is acquired, the image to be processed including a facial image; when the facial image is operated on, the facial feature corresponding to the operation is determined according to that position information; the image processing menu corresponding to the facial feature is then displayed. This solves the problem in the related art that clicking a designated area of an image cannot trigger image processing of that area, which results in a poor user experience; the scheme captures the user's real-time operation and offers a targeted image processing menu, effectively achieving the intuitive "beautify wherever you click" effect and ensuring a good user experience.
In an optional example of this embodiment, when the operation on the image to be processed is detected, it is first judged whether the operation falls within the display area of the facial image; if it does not, the operation is not responded to.
In an optional example of this embodiment, before the operation on the image to be processed is detected, the method further includes: performing a recognition operation on the facial image to obtain the correspondence between the facial features and rectangular areas of the display screen, wherein the rectangular areas are divided in advance according to a specified rule.
Obtaining the correspondence between the facial features and the rectangular areas of the display screen includes: determining the position coordinates corresponding to all facial features, wherein the facial features include: forehead, eyebrows, eyes, nose, left cheek, right cheek, lips, and chin; and establishing the correspondence between the position coordinates corresponding to all facial features and the rectangular areas.
FIG. 3 is a schematic diagram (1) of dividing a facial image into rectangular regions according to Embodiment 1 of the present disclosure. As shown in FIG. 3, dividing the rectangular areas according to the specified rule may mean that, based on facial-structure characteristics and the differing pixel densities of different parts of the face, the display area of the facial image on the display screen is divided into an upper-face display area, a mid-face display area, and a lower-face display area.
FIG. 4 is a schematic diagram (2) of dividing a facial image into rectangular regions according to Embodiment 1 of the present disclosure. As shown in FIG. 4, dividing the rectangular areas according to the specified rule may also mean that, based on the same characteristics, the display area of the facial image on the display screen is divided into several rectangular areas, each of which displays a certain part of the face.
In an optional example of this embodiment, establishing the correspondence between the position coordinates of all facial features and each sub-rectangular area can be implemented as follows: as shown in FIG. 3, first determine the screen coordinates (x, y) of the brow center, determine the coordinate range of the rectangular area containing the brow center, identify from facial-structure characteristics the rectangular areas containing the other facial features, then calculate the coordinate range of each rectangular area, and build a database of correspondences between the coordinates of each rectangular area and the facial feature in that area.
FIG. 5 is a flowchart of determining the facial feature region corresponding to an operation according to the position information corresponding to the operation, according to Embodiment 1 of the present disclosure. As shown in FIG. 5, in an optional example of this embodiment, this determination may be implemented as follows (a code sketch of the two-stage lookup is given after the steps):
S502: acquiring first image data in a circular area centered on the coordinates of the position information corresponding to the operation, with a first number of pixels as the radius;
S504: identifying the first image data, and determining the facial feature corresponding to the operation according to the recognition result.
In a further preferred step, the method also includes:
S506: acquiring second image data in a circular area centered on the coordinates of the position information corresponding to the operation, with a second number of pixels as the radius, wherein the second number is greater than the first number;
S508: identifying the second image data, and determining, according to the recognition result, the location area of the facial feature corresponding to the operation, wherein the location area includes: the left half of the facial image and the right half of the facial image.
In a further preferred step, a "left-right symmetry" option may be provided in the image processing menu to synchronize the beautification effects applied to the left half and the right half of the facial image.
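For illustration, a minimal Python sketch of this two-stage lookup is given below, assuming the photo is available as a NumPy array of shape (height, width, 3); classify_feature and classify_half are placeholder recognizers, not functions defined by the patent.

```python
import numpy as np

def circular_patch(image, cx, cy, radius):
    """Collect the pixels inside the circle of `radius` pixels around (cx, cy)."""
    h, w = image.shape[:2]
    ys, xs = np.ogrid[:h, :w]
    mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    return image[mask]

def classify_feature(pixels):
    # Placeholder: a real system would run face-part recognition here (S504).
    return "eyes"

def classify_half(pixels):
    # Placeholder for the left/right decision of S508.
    return "left"

def locate_feature(image, cx, cy, r1=20, r2=60):
    """Two-stage lookup: the feature from a small patch (S502/S504),
    then the face half from a larger patch (S506/S508)."""
    assert r2 > r1, "the second radius must exceed the first"
    feature = classify_feature(circular_patch(image, cx, cy, r1))
    side = classify_half(circular_patch(image, cx, cy, r2))
    return feature, side
```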
From the description of the above embodiments, those skilled in the art can clearly understand that the method of the above embodiment may be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the part of the technical solution of the present disclosure that contributes over the prior art may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc), including a number of instructions to cause a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods of the various embodiments of the present disclosure.
Embodiment 2
This embodiment further provides an image processing apparatus, which is used to implement the above embodiment and its optional examples; what has already been explained is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiment is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
FIG. 6 is a structural block diagram (1) of the image processing apparatus according to Embodiment 2 of the present disclosure. As shown in FIG. 6, the apparatus includes:
a detecting module 60, configured to detect an operation on the image to be processed on the display screen and acquire position information corresponding to the operation, wherein the operation includes: a touch operation or a pressing operation, and the image to be processed includes: a facial image;
a determining module 62, configured to determine, when the facial image is operated on, the facial feature corresponding to the operation according to the position information corresponding to the operation; and
a display module 64, configured to display the image processing menu corresponding to the facial feature.
Through the above modules, the detecting module detects the operation on the image to be processed on the display screen and acquires the corresponding position information, the image to be processed including a facial image; when the facial image is operated on, the determining module determines the facial feature corresponding to the operation from that position information; the display module then displays the image processing menu for that feature. This solves the problem in the related art that clicking a designated area of an image cannot trigger image processing of that area, which results in a poor user experience; the apparatus captures the user's real-time operation and offers a targeted image processing menu, effectively achieving the intuitive "beautify wherever you click" effect and ensuring a good user experience.
FIG. 7 is a structural block diagram (2) of the image processing apparatus according to Embodiment 2 of the present disclosure.
As shown in FIG. 7, in an optional example of this embodiment, the apparatus further includes: an identification module 66, configured to perform a recognition operation on the facial image to obtain the correspondence between the facial features and the rectangular areas of the display screen, the rectangular areas being divided in advance according to a specified rule.
The identification module 66 includes: a determining unit 660, configured to determine the position coordinates corresponding to all facial features, wherein the facial features include: forehead, eyebrows, eyes, nose, left cheek, right cheek, lips, and chin; and an establishing unit 662, configured to establish the correspondence between the position coordinates corresponding to all facial features and the rectangular areas.
Here, dividing the rectangular areas according to the specified rule may mean dividing the display area of the facial image on the display screen, based on facial-structure characteristics and the differing pixel densities of different parts of the face, into upper-face, mid-face, and lower-face display areas; or dividing it into several rectangular areas, each of which displays a certain part of the face.
In an optional example of this embodiment, establishing the correspondence between the position coordinates of all facial features and each sub-rectangular area may be implemented as follows: first determine the screen coordinates (x0, y0) of the brow center, determine the coordinate range of the rectangular area containing the brow center, identify from facial-structure characteristics the rectangular areas containing the other facial features, then calculate the coordinate range of each rectangular area, and build a database of correspondences between the coordinates of each rectangular area and the facial feature in that area.
As shown in FIG. 7, the determining module 62 includes: a first acquiring unit 620, configured to acquire first image data in a circular area centered on the coordinates of the position information, with a first number of pixels as the radius; and a first identifying unit 622, configured to identify the first image data and determine the facial feature corresponding to the operation according to the recognition result.
The determining module 62 further includes: a second acquiring unit 624, configured to acquire second image data in a circular area centered on the coordinates of the position information, with a second number of pixels as the radius, the second number being greater than the first; and a second identifying unit 626, configured to identify the second image data and determine, from the recognition result, whether the facial feature corresponding to the operation lies in the left half or the right half of the facial image. A "left-right symmetry" option can be provided in the image processing menu to synchronize the beautification effects applied to the left half and the right half of the facial image.
To better understand the above technical solution, this embodiment further provides an image processing terminal for implementing the image processing method of the above embodiment and its optional examples; what has already been explained is not repeated here. FIG. 8 is a structural block diagram of the image processing terminal according to Embodiment 2 of the present disclosure. As shown in FIG. 8, the terminal includes: a processor 82; and a memory 84, configured to store instructions executable by the processor.
The processor 82 is configured to perform actions according to the instructions stored in the memory 84, including:
detecting an operation on the image to be processed on the display screen and acquiring the position information corresponding to the operation, wherein the operation includes: a touch operation or a pressing operation, and the image to be processed includes: a facial image; and
when the facial image is operated on, determining the facial feature corresponding to the operation according to the position information corresponding to the operation.
The display screen 80 is configured to display the image processing menu corresponding to the facial feature.
Through this terminal, the display screen, processor, and memory cooperate to detect the operation on the image to be processed, acquire the corresponding position information, determine the facial feature corresponding to the operation when the facial image is operated on, and display the image processing menu for that feature; this solves the problem in the related art that clicking a designated area of an image cannot trigger image processing of that area, captures the user's real-time operation, offers a targeted image processing menu, effectively achieves the intuitive "beautify wherever you click" effect, and ensures a good user experience.
In an optional example of this embodiment, the processor 82 is further configured to: perform a recognition operation on the facial image to obtain the correspondence between the facial features and the rectangular areas of the display screen, the rectangular areas being divided in advance according to a specified rule.
In an optional example of this embodiment, the processor 82 is further configured to: determine the position coordinates corresponding to all facial features, wherein the facial features include: forehead, eyebrows, eyes, nose, left cheek, right cheek, lips, and chin; and establish the correspondence between those position coordinates and the rectangular areas.
In an optional example of this embodiment, the processor 82 is further configured to: acquire first image data in a circular area centered on the coordinates of the position information, with a first number of pixels as the radius; and identify the first image data and determine the facial feature corresponding to the operation according to the recognition result.
In an optional example of this embodiment, the processor 82 is further configured to: acquire second image data in a circular area centered on the coordinates of the position information, with a second number of pixels as the radius, the second number being greater than the first; and identify the second image data and determine, from the recognition result, whether the facial feature corresponding to the operation lies in the left half or the right half of the facial image.
Embodiment 3
To better understand the technical solution of the embodiments of the present disclosure, this embodiment takes face beautification as an example and provides a photo beautification method. FIG. 9 is a flowchart of the photo beautification method according to Embodiment 3 of the present disclosure. As shown in FIG. 9, the method includes the following steps.
S902: select and open the photo to be processed, the photo containing one or more face images.
S904: start the face recognition module, perform face recognition on the photo, and identify the correspondence between each part of the face and a rectangular region on the display screen.
The face parts recognized by the face recognition module include the eyes, eyebrows, forehead, nose, mouth, left cheek, right cheek, chin, and so on. Based on facial-structure characteristics and the pixel characteristics of different regions of the face, the face recognition module divides the face into three parts: an upper-face display area, a mid-face display area, and a lower-face display area.
The face display area is further divided into several sub-rectangular areas according to the same characteristics, each sub-rectangular area displaying one part of the face. The face recognition module calculates the screen coordinates (x0, y0) of the brow center and, from facial structure, calculates the coordinates of each sub-rectangular area. FIG. 4 is a schematic diagram of the distribution of the sub-rectangular areas of the face parts according to Embodiment 3 of the present disclosure. As shown in FIG. 4, a database of correspondences between the coordinates of each sub-rectangular area and the face part in that area is established.
S906: the first acquiring module acquires the current finger-touch parameters, which include the touch position coordinates.
The touch parameters may also include the pressure magnitude f and the touch duration t. The first judging module determines whether the touch-click (or press) coordinates fall within the rectangular-area coordinates of some part of the face; if the position falls outside the face-part rectangles, the touch-click (or press) action is not responded to.
Optionally, a second judging module is added to determine whether the duration (and/or pressure value) of the touch reaches a preset threshold.
Optionally, when the second judging module determines that the touch (or click) reaches the preset threshold, the rectangular-area coordinate database of face parts is searched to find the face part corresponding to the current touch position.
S908: in a preset screen area, display the beauty menu and beautification operating instructions corresponding to the currently touched face part, prompting the user to carry out further beautification of that part.
The preset screen area is preferably a non-face display area. When the user clicks again, the procedure restarts from the first step.
After opening the photo to be processed, the user may first zoom it in or out to a suitable scale before starting beautification, to obtain a better result.
FIG. 10 is a schematic diagram (1) of the beauty menu display according to Embodiment 3 of the present disclosure, and FIG. 11 is a schematic diagram (2) of the beauty menu display according to Embodiment 3 of the present disclosure. As shown in FIGS. 10 and 11, in an optional example of this embodiment, the beauty menus of the various face parts may be displayed as drop-down menus at the top of the screen.
In an optional example of this embodiment, of the multiple menus for the parts of the face (forehead, eyes, nose, mouth, chin), only the menu corresponding to the currently clicked face part pops up in step S908 at any one time; the beauty menus of the other, untouched face parts remain hidden and invisible.
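A minimal sketch of this one-menu-at-a-time behaviour follows; the menu model and part names are illustrative assumptions, not structures defined by the patent.

```python
class BeautyMenus:
    """Drop-down beauty menus, only one of which is visible at a time."""

    def __init__(self, parts):
        self.visible = {part: False for part in parts}  # all menus start hidden

    def show_only(self, touched_part):
        """Drop down the touched part's menu and hide every other menu."""
        for part in self.visible:
            self.visible[part] = (part == touched_part)

menus = BeautyMenus(["forehead", "eyes", "nose", "mouth", "chin"])
menus.show_only("eyes")   # only the eye menu is now visible
```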
According to the present disclosure, the preset beauty menu of the current part pops up according to the touch parameter values for that part, the user is prompted to beautify the part, and the processed effect is displayed in real time, which greatly improves the convenience and enjoyment of beautification.
Embodiment 4
To better understand the technical solution of the embodiments of the present disclosure, this embodiment takes face beautification as an example and provides a photo beautification method. FIG. 12 is a flowchart of the photo beautification method according to Embodiment 4 of the present disclosure. As shown in FIG. 12, the method includes the following steps.
S1202: select and open the photo to be processed, the photo containing one or more face images.
S1204: start the face recognition module, perform face recognition on the photo, and identify the coordinate area of the face on the display screen according to facial structure and the pixel characteristics of each part of the face.
Optionally, a rectangular frame is displayed over the coordinate area of the face image to indicate the current face position.
S1206: the first acquiring module acquires the current finger-touch parameters, which include the touch position coordinates.
Optionally, the touch parameters also include the pressure magnitude f and the touch duration t. The first judging module determines whether the touch-click (or press) coordinates fall within the rectangular-area coordinates of the face; if the position falls outside them, the touch-click (or press) action is not responded to.
Optionally, a second judging module is added to determine whether the duration (and/or pressure value) of the touch reaches a preset threshold.
S1208: taking the touch position coordinates (x1, y1) as the center, acquire the image data within a radius of M pixels; the face recognition module identifies the data in this area according to facial structure and the pixel characteristics of each part of the face, and thereby determines the face part at the current touch position.
Optionally, because of facial symmetry, the image data within the current M-pixel radius may correspond to either the left or the right side of the face. Therefore, taking the touch center as the origin, the image data within a radius of N pixels is acquired to further determine whether the touched region lies in the left or the right half of the face.
S1210: in a preset screen area, display the beauty menu and beautification operating instructions corresponding to the currently touched face part, prompting the user to carry out further beautification of that part.
Optionally, because of facial symmetry, a "left-right symmetry" option is added to the beauty menu to synchronize the beautification effects on the left and right sides of the face and keep them consistent.
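One way the "left-right symmetry" option could be realized is sketched below; mirror_rect() and apply_adjustment() are illustrative helpers, not APIs defined by the patent.

```python
def mirror_rect(rect, face_center_x):
    """Reflect an (x0, y0, x1, y1) rectangle across the face's vertical midline."""
    x0, y0, x1, y1 = rect
    return (2 * face_center_x - x1, y0, 2 * face_center_x - x0, y1)

def apply_adjustment(image, rect, params):
    # Placeholder for the actual beautification filter applied over `rect`.
    pass

def beautify(image, rect, params, symmetric, face_center_x):
    apply_adjustment(image, rect, params)             # the side being edited
    if symmetric:                                     # "left-right symmetry" on
        apply_adjustment(image, mirror_rect(rect, face_center_x), params)
```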
According to the present disclosure, the preset beauty menu of the current part pops up according to the touch parameter values for that part, the user is prompted to beautify the part, and the processed effect is displayed in real time, which greatly improves the convenience and enjoyment of beautification.
Embodiment 5
To better understand the technical solution of the embodiments of the present disclosure, this embodiment is described with an eye-beautification example. FIG. 13 is a flowchart of the beautification method according to Embodiment 5 of the present disclosure. As shown in FIG. 13, the method includes:
Step 1302: start the face recognition module, recognize the face, and find the screen coordinates (x2, y2) of the brow center according to the eye features of the face.
Step 1304: according to facial structure and the pixel characteristics of the eye region, find the display area containing the eyes and establish the correspondence between the coordinates of the rectangular area containing the eyes and the eyes; likewise, according to the pixel characteristics of the other parts of the face, find the rectangular areas containing the other facial features and establish the correspondences for their rectangular-area coordinates.
Step 1306: acquire the current touch-click event and evaluate the current touch parameters.
Step 1308: when the current touch parameters are judged to reach the preset beautification threshold, search the rectangular-area database of face parts according to the current touch coordinates and find the corresponding face part.
Step 1310: pop up a prompt box in a preset non-face area with the preset beautification menu corresponding to the eyes.
The prompt box popped up in the preset non-face area presents the preset beautification menu corresponding to the eyes, as shown in FIG. 11.
This embodiment provides a photo beautification method in which, following the presented eye-beautification menu and operating instructions, the eyes can be further beautified, improving the operability and convenience of beautification.
Embodiment 6
Embodiments of the present disclosure also provide a storage medium. Optionally, in this embodiment, the storage medium may be used to store the program code executed by the image processing method provided in Embodiment 1 above.
Optionally, in this embodiment, the storage medium may be located in any mobile terminal of a mobile terminal group in a computer network.
Optionally, in this embodiment, the storage medium is arranged to store program code for performing the following steps:
S1: detecting an operation on the image to be processed on the display screen and acquiring the position information corresponding to the operation, wherein the operation includes: a touch operation or a pressing operation, and the image to be processed includes: a facial image;
S2: when the facial image is operated on, determining the facial feature corresponding to the operation according to the position information corresponding to the operation;
S3: displaying the image processing menu corresponding to the facial feature.
The serial numbers of the above embodiments of the present disclosure are for description only and do not represent the superiority of one embodiment over another.
In the above embodiments of the present disclosure, the description of each embodiment has its own emphasis; for a part not detailed in one embodiment, reference may be made to the relevant description of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and in actual implementation there may be other ways of dividing, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, each functional unit in the various embodiments of the present disclosure may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer-readable storage medium. Based on this understanding, the part of the technical solution of the present disclosure that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the various embodiments of the present disclosure. The foregoing storage medium includes media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The above are only preferred embodiments of the present disclosure and are not intended to limit it; for those skilled in the art, the present disclosure may have various modifications and changes. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present disclosure shall be included within its scope of protection.
Industrial applicability
The image processing method provided by the embodiments of the present disclosure can be applied to a mobile terminal: the operation on the image to be processed is detected on the display screen, the position information corresponding to the operation is acquired, the facial feature corresponding to the operation is determined according to that position information, and the image processing menu corresponding to the facial feature is then displayed. The method captures the user's real-time operation and offers a targeted image processing menu, effectively achieving the intuitive "beautify wherever you click" effect and ensuring a good user experience.

Claims (12)

  1. An image processing method, comprising:
    detecting an operation on an image to be processed on a display screen, and acquiring position information corresponding to the operation, wherein the operation comprises: a touch operation or a pressing operation, and the image to be processed comprises: a facial image;
    when the operation is performed on the facial image, determining a facial feature corresponding to the operation according to the position information corresponding to the operation; and
    displaying an image processing menu corresponding to the facial feature.
  2. The method according to claim 1, wherein before the operation on the image to be processed on the display screen is detected, the method further comprises:
    performing a recognition operation on the facial image to obtain a correspondence between the facial features and rectangular areas of the display screen, wherein the rectangular areas are divided in advance according to a specified rule.
  3. The method according to claim 2, wherein obtaining the correspondence between the facial features and the rectangular areas of the display screen comprises:
    determining position coordinates corresponding to all the facial features, wherein the facial features comprise: forehead, eyebrows, eyes, nose, left cheek, right cheek, lips, and chin; and
    establishing a correspondence between the position coordinates corresponding to all the facial features and the rectangular areas.
  4. The method according to claim 1, wherein determining the facial feature region corresponding to the operation according to the position information corresponding to the operation comprises:
    acquiring first image data in a circular area centered on the coordinates of the position information, with a first number of pixels as the radius; and
    identifying the first image data, and determining the facial feature corresponding to the operation according to the recognition result.
  5. The method according to claim 4, wherein after the first image data is identified and the facial feature corresponding to the operation is determined according to the recognition result, the method further comprises:
    acquiring second image data in a circular area centered on the coordinates of the position information, with a second number of pixels as the radius, wherein the second number is greater than the first number; and
    identifying the second image data, and determining, according to the recognition result, the location area of the facial feature corresponding to the operation, wherein the location area comprises: the left half of the facial image and the right half of the facial image.
  6. An image processing apparatus, comprising:
    a detecting module, configured to detect an operation on an image to be processed on a display screen and acquire position information corresponding to the operation, wherein the operation comprises: a touch operation or a pressing operation, and the image to be processed comprises: a facial image;
    a determining module, configured to determine, when the operation is performed on the facial image, a facial feature corresponding to the operation according to the position information corresponding to the operation; and
    a display module, configured to display an image processing menu corresponding to the facial feature.
  7. The apparatus according to claim 6, further comprising:
    an identification module, configured to perform a recognition operation on the facial image to obtain a correspondence between the facial features and rectangular areas of the display screen, wherein the rectangular areas are divided in advance according to a specified rule.
  8. The apparatus according to claim 7, wherein the identification module comprises:
    a determining unit, configured to determine position coordinates corresponding to all the facial features, wherein the facial features comprise: forehead, eyebrows, eyes, nose, left cheek, right cheek, lips, and chin; and
    an establishing unit, configured to establish a correspondence between the position coordinates corresponding to all the facial features and the rectangular areas.
  9. The apparatus according to claim 6, wherein the determining module comprises:
    a first acquiring unit, configured to acquire first image data in a circular area centered on the coordinates of the position information, with a first number of pixels as the radius; and
    a first identifying unit, configured to identify the first image data and determine the facial feature corresponding to the operation according to the recognition result.
  10. The apparatus according to claim 9, wherein the determining module further comprises:
    a second acquiring unit, configured to acquire second image data in a circular area centered on the coordinates of the position information, with a second number of pixels as the radius, wherein the second number is greater than the first number; and
    a second identifying unit, configured to identify the second image data and determine, according to the recognition result, the location area of the facial feature corresponding to the operation, wherein the location area comprises: the left half of the facial image and the right half of the facial image.
  11. An image processing terminal, comprising:
    a processor;
    a memory, configured to store instructions executable by the processor;
    the processor being configured to perform actions according to the instructions stored in the memory, the actions comprising:
    detecting an operation on an image to be processed on a display screen, and acquiring position information corresponding to the operation, wherein the operation comprises: a touch operation or a pressing operation, and the image to be processed comprises: a facial image; and
    when the operation is performed on the facial image, determining a facial feature corresponding to the operation according to the position information corresponding to the operation; and
    a display screen, configured to display an image processing menu corresponding to the facial feature.
  12. The terminal according to claim 11, wherein the processor is further configured to perform the following action:
    performing a recognition operation on the facial image to obtain a correspondence between the facial features and rectangular areas of the display screen, wherein the rectangular areas are divided in advance according to a specified rule.
PCT/CN2017/080371 2016-10-28 2017-04-13 Image processing method, apparatus, and terminal WO2018076622A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610976690.3 2016-10-28
CN201610976690.3A CN108021308A (zh) 2016-10-28 Image processing method, apparatus, and terminal

Publications (1)

Publication Number Publication Date
WO2018076622A1 (zh)

Family

ID=62024545

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/080371 WO2018076622A1 (zh) 2017-04-13 Image processing method, apparatus, and terminal

Country Status (2)

Country Link
CN (1) CN108021308A (zh)
WO (1) WO2018076622A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109118427A (zh) * 2018-09-07 2019-01-01 Oppo广东移动通信有限公司 Image light effect processing method and apparatus, electronic device, and storage medium
CN111353470A (zh) * 2020-03-13 2020-06-30 北京字节跳动网络技术有限公司 Image processing method and apparatus, readable medium, and electronic device
CN111462205A (zh) * 2020-03-30 2020-07-28 广州虎牙科技有限公司 Image data deformation and live-streaming method and apparatus, electronic device, and storage medium
CN111507925A (zh) * 2020-04-29 2020-08-07 北京字节跳动网络技术有限公司 Image retouching method, apparatus, device, and storage medium
CN111840039A (zh) * 2020-07-05 2020-10-30 杜兴林 Automated face-slimming treatment system using parameter detection
CN113329252A (zh) * 2018-10-24 2021-08-31 广州虎牙科技有限公司 Live-streaming-based face processing method, apparatus, device, and storage medium

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476864A (zh) * 2019-01-23 2020-07-31 北京奇虎科技有限公司 Image processing method and apparatus, computer device, and storage medium
CN110084219B (zh) * 2019-05-07 2022-06-24 厦门美图之家科技有限公司 Interface interaction method and apparatus
CN110855887B (zh) * 2019-11-18 2021-06-08 深圳传音控股股份有限公司 Mirror-based image processing method, terminal, and computer-readable storage medium
CN114529445A (zh) * 2020-10-30 2022-05-24 北京字跳网络技术有限公司 Makeup special-effect rendering method and apparatus, electronic device, and storage medium
CN112508777A (zh) * 2020-12-18 2021-03-16 咪咕文化科技有限公司 Beautification method, electronic device, and storage medium
CN113282207B (zh) * 2021-06-15 2024-03-22 咪咕文化科技有限公司 Menu display method, apparatus, device, storage medium, and product

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104951770A (zh) * 2015-07-02 2015-09-30 广东欧珀移动通信有限公司 Face image database construction method, application method, and corresponding apparatus
CN105068748A (zh) * 2015-08-12 2015-11-18 上海影随网络科技有限公司 User interface interaction method in the real-time camera view of a touch-screen smart device
CN105250136A (zh) * 2015-10-28 2016-01-20 广东小天才科技有限公司 Method, apparatus, and device for intelligently prompting acupoint massage
CN105303523A (zh) * 2014-12-01 2016-02-03 维沃移动通信有限公司 Image processing method and mobile terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8908904B2 (en) * 2011-12-28 2014-12-09 Samsung Electrônica da Amazônia Ltda. Method and system for make-up simulation on portable devices having digital cameras

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105303523A (zh) * 2014-12-01 2016-02-03 维沃移动通信有限公司 Image processing method and mobile terminal
CN104951770A (zh) * 2015-07-02 2015-09-30 广东欧珀移动通信有限公司 Face image database construction method, application method, and corresponding apparatus
CN105068748A (zh) * 2015-08-12 2015-11-18 上海影随网络科技有限公司 User interface interaction method in the real-time camera view of a touch-screen smart device
CN105250136A (zh) * 2015-10-28 2016-01-20 广东小天才科技有限公司 Method, apparatus, and device for intelligently prompting acupoint massage

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109118427A (zh) * 2018-09-07 2019-01-01 Oppo广东移动通信有限公司 Image light effect processing method and apparatus, electronic device, and storage medium
CN109118427B (zh) * 2018-09-07 2023-05-05 Oppo广东移动通信有限公司 Image light effect processing method and apparatus, electronic device, and storage medium
CN113329252A (zh) * 2018-10-24 2021-08-31 广州虎牙科技有限公司 Live-streaming-based face processing method, apparatus, device, and storage medium
CN113329252B (zh) * 2018-10-24 2023-01-06 广州虎牙科技有限公司 Live-streaming-based face processing method, apparatus, device, and storage medium
CN111353470A (zh) * 2020-03-13 2020-06-30 北京字节跳动网络技术有限公司 Image processing method and apparatus, readable medium, and electronic device
CN111353470B (zh) * 2020-03-13 2023-08-01 北京字节跳动网络技术有限公司 Image processing method and apparatus, readable medium, and electronic device
CN111462205A (zh) * 2020-03-30 2020-07-28 广州虎牙科技有限公司 Image data deformation and live-streaming method and apparatus, electronic device, and storage medium
CN111462205B (zh) * 2020-03-30 2024-03-08 广州虎牙科技有限公司 Image data deformation and live-streaming method and apparatus, electronic device, and storage medium
CN111507925A (зh) * 2020-04-29 2020-08-07 北京字节跳动网络技术有限公司 Image retouching method, apparatus, device, and storage medium
CN111507925B (zh) * 2020-04-29 2023-05-12 抖音视界有限公司 Image retouching method, apparatus, device, and storage medium
CN111840039A (zh) * 2020-07-05 2020-10-30 杜兴林 Automated face-slimming treatment system using parameter detection

Also Published As

Publication number Publication date
CN108021308A (zh) 2018-05-11

Similar Documents

Publication Publication Date Title
WO2018076622A1 (zh) Image processing method, apparatus, and terminal
JP7052079B2 (ja) Image processing method and apparatus, computer device, and computer program
TWI751161B (zh) Terminal device, smartphone, and face-recognition-based authentication method and system
CN109242765B (zh) Face image processing method, apparatus, and storage medium
WO2016180224A1 (zh) Person image processing method and apparatus
TWI773096B (zh) Makeup processing method and apparatus, electronic device, and storage medium
US10373348B2 (en) Image processing apparatus, image processing system, and program
US10846514B2 (en) Processing images from an electronic mirror
US11308548B2 (en) Information processing methods and device for trying on clothes
JP6369246B2 (ja) Caricature generation apparatus and caricature generation method
US20220383389A1 (en) System and method for generating a product recommendation in a virtual try-on session
Szwoch FEEDB: a multimodal database of facial expressions and emotions
WO2024114470A1 (zh) Method for displaying virtual product try-on effects, and electronic device
JP2019048026A (ja) Biological information analysis apparatus and hand-skin analysis method
CN110866139A (zh) Makeup processing method, apparatus, and device
CN112190921A (zh) Game interaction method and apparatus
WO2017000217A1 (zh) Liveness detection method and device, and computer program product
WO2018059258A1 (zh) Method and apparatus for providing a virtual palm-decoration image using augmented reality
TW201447641A (zh) Method for moving an on-screen cursor to a pressable object, and computer system and computer program product implementing the method
US9501710B2 (en) Systems, methods, and media for identifying object characteristics based on fixation points
CN111354478B (zh) Cosmetic-surgery simulation information processing method, simulation terminal, and service terminal
CN110321009A (zh) AR expression processing method, apparatus, device, and storage medium
US11481940B2 (en) Structural facial modifications in images
WO2021155666A1 (zh) 用于生成图像的方法和装置
CN114913575A (zh) Liveness verification method, apparatus, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17864083

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17864083

Country of ref document: EP

Kind code of ref document: A1