Disclosure of Invention
Embodiments of the invention provide an eye processing method for a Thor image in a self-portrait video, and a related product, which allow the Thor image to completely cover a person's eye region and thereby improve the user experience.
In a first aspect, an embodiment of the present invention provides an eye processing method for a Thor image in a self-portrait video, the method including the following steps:
when a terminal determines that a self-portrait page of a self-portrait video application has been entered, determining a selected Thor image;
collecting, by the terminal, a first picture and determining a human eye region of the first picture;
extracting, by the terminal, the lightning eyes of the Thor image, adjusting the size of the lightning eyes to the size of the human eye region to obtain an adjusted Thor image, and superposing the adjusted Thor image on the first picture to obtain a second picture.
Optionally, determining the human eye region of the first picture specifically includes:
determining a face region of the first picture; constructing a rectangular frame over an upper portion of the face region, the length of the rectangular frame being equal to the length of the face region; identifying each pixel point in the rectangular frame to determine its RGB value; retaining the pixel points whose RGB values are black and the pixel points whose RGB values are white; filtering out black pixel points and white pixel points that are not adjacent to any others; constructing the continuous black pixel points into regions to obtain two black pixel point regions; constructing the continuous white pixel points into regions to obtain two white pixel point regions; connecting each black pixel point region with the white pixel point region adjoining it to obtain two eye regions; and determining the two eye regions as the human eye region.
Optionally, the method further includes:
determining a vertical center line of the face region; determining the RGB value of each pixel point of the face region; retaining the black pixel points; combining black pixel points whose spacing is within a set distance into regions to obtain a plurality of regions; searching the plurality of regions for two regions that are of a set area and symmetrical about the vertical center line, and determining them as the eyebrow regions; and, if the positions of the eyebrow regions and the eye regions in the x-axis direction are within a preset range, determining that the eye regions are accurate.
Optionally, the method further includes:
confirming the position of a mole in the face region of the second picture, and performing transparentization processing on the mole.
In a second aspect, a terminal is provided, which includes a processor, a camera and a display screen, wherein:
the display screen is used for determining a selected Thor image when a self-portrait page of a self-portrait video application is entered;
the camera is used for collecting a first picture; and
the processor is used for determining a human eye region of the first picture, extracting the lightning eyes of the Thor image, adjusting the size of the lightning eyes to the size of the human eye region to obtain an adjusted Thor image, and superposing the adjusted Thor image on the first picture to obtain a second picture.
Optionally, the processor is specifically configured to: determine a face region of the first picture; construct a rectangular frame over an upper portion of the face region, the length of the rectangular frame being equal to the length of the face region; identify each pixel point in the rectangular frame to determine its RGB value; retain the pixel points whose RGB values are black and the pixel points whose RGB values are white; filter out black pixel points and white pixel points that are not adjacent to any others; construct the continuous black pixel points into regions to obtain two black pixel point regions; construct the continuous white pixel points into regions to obtain two white pixel point regions; connect each black pixel point region with the white pixel point region adjoining it to obtain two eye regions; and determine the two eye regions as the human eye region.
Optionally, the processor is further configured to: determine a vertical center line of the face region; determine the RGB value of each pixel point in the face region; retain the black pixel points; combine black pixel points whose spacing is within a set distance into regions to obtain a plurality of regions; search the plurality of regions for two regions that are of a set area and symmetrical about the vertical center line, and determine them as the eyebrow regions; and, if the positions of the eyebrow regions and the eye regions in the x-axis direction are within a preset range, determine that the eye regions are accurate.
Optionally, the processor is further configured to determine the position of a mole in the face region of the second picture and perform transparentization processing on the mole.
Optionally, the terminal is a smart phone or a tablet computer.
In a third aspect, a computer-readable storage medium is provided, which stores a program for electronic data exchange, wherein the program causes a terminal to execute the method provided in the first aspect.
The embodiments of the invention have the following beneficial effects:
It can be seen that when entry into the self-portrait page of the self-portrait video application is determined, the selected Thor image is determined; after the first picture is collected, the human eye region is determined, the lightning eyes of the Thor image are extracted, the size of the lightning eyes is adjusted to the size of the human eye region to obtain an adjusted Thor image, and the adjusted Thor image is superposed on the first picture to obtain a second picture. In this way the lightning eyes are adjusted automatically to the size of the human eye region and cover the human eyes completely, which improves the picture effect and the user experience.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of the invention and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, result, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 provides a terminal, which may specifically be a smart phone, a tablet computer, a computer or a server, where the smart phone may be a terminal running an iOS system, an Android system or the like. The terminal may specifically include a processor, a memory, a camera and a display screen, where these components may be connected through a bus or in other ways; the present application does not limit the specific form of the connection.
Referring to fig. 2, fig. 2 provides an eye processing method for a Thor image in a self-portrait video, which, as shown in fig. 2, is executed by the terminal shown in fig. 1 and includes the following steps:
step S201, when the terminal determines to enter a self-portrait page of a self-portrait video application, determining a selected Thor image (for example, a Douyin Thor image);
step S202, the terminal collects a first picture and determines a human eye region of the first picture;
step S203, the terminal extracts the lightning eyes of the Thor image, adjusts the size of the lightning eyes to the size of the human eye region to obtain an adjusted Thor image, and superposes the adjusted Thor image on the first picture to obtain a second picture.
In the technical scheme provided by the application, when entry into the self-portrait page of the self-portrait application is determined, the selected Thor image is determined; after the first picture is collected, the human eye region is determined, the lightning eyes of the Thor image are extracted, the size of the lightning eyes is adjusted to the size of the human eye region to obtain an adjusted Thor image, and the adjusted Thor image is superposed on the first picture to obtain a second picture. In this way the lightning eyes adjust automatically to the size of the human eye region and cover the human eyes completely, which improves the picture effect and the user experience.
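For illustration only, step S203 could be sketched in Python as follows, assuming the Thor image is an RGBA sprite whose lightning-eye sub-region is already known; the names thor_rgba, thor_eye_box and human_eye_box, as well as the use of OpenCV, are assumptions of the sketch and not part of the disclosed method.

```python
# Minimal sketch of step S203: resize the lightning eyes of the Thor image to
# the detected human eye region and superpose them on the first picture.
import cv2
import numpy as np

def overlay_thor_eyes(first_pic_bgr, thor_rgba, thor_eye_box, human_eye_box):
    tx, ty, tw, th = thor_eye_box          # lightning-eye rectangle inside the Thor image (assumed known)
    hx, hy, hw, hh = human_eye_box         # detected human eye rectangle in the first picture

    eyes = thor_rgba[ty:ty + th, tx:tx + tw]                        # extract the lightning eyes
    eyes = cv2.resize(eyes, (hw, hh), interpolation=cv2.INTER_LINEAR)  # adjust to the human eye size

    alpha = eyes[:, :, 3:4].astype(np.float32) / 255.0              # per-pixel opacity of the sticker
    region = first_pic_bgr[hy:hy + hh, hx:hx + hw].astype(np.float32)
    blended = alpha * eyes[:, :, :3].astype(np.float32) + (1.0 - alpha) * region

    second_pic = first_pic_bgr.copy()
    second_pic[hy:hy + hh, hx:hx + hw] = blended.astype(np.uint8)   # second picture with eyes covered
    return second_pic
```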
Optionally, the face region of the first picture may be determined by a face recognition algorithm, including but not limited to a Baidu face recognition algorithm, a Google face recognition algorithm and the like.
Optionally, determining the human eye region of the first picture may specifically include:
determining a face region of the first picture (which may be determined by a face recognition algorithm, such as Baidu or Google face recognition); constructing a rectangular frame over an upper portion of the face region (for example, the upper half of the face region), the length of the rectangular frame being equal to the length of the face region; identifying each pixel point in the rectangular frame to determine its RGB value; retaining the pixel points whose RGB values are black and the pixel points whose RGB values are white; filtering out black pixel points and white pixel points that are not adjacent to any others; constructing the continuous black pixel points into regions to obtain two black pixel point regions; constructing the continuous white pixel points into regions to obtain two white pixel point regions; connecting each black pixel point region with the white pixel point region adjoining it to obtain two eye regions; and determining the two eye regions as the human eye region.
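A minimal sketch of this eye-location procedure is given below, assuming that "black" and "white" pixel points can be approximated by thresholds on the RGB values and that the two largest connected components stand in for the two black and two white pixel point regions; the thresholds and helper names are illustrative assumptions.

```python
# Sketch: locate the two eye regions inside the upper half of the face region.
import cv2
import numpy as np

def find_eye_regions(picture_bgr, face_box, dark_thr=60, bright_thr=200):
    fx, fy, fw, fh = face_box
    upper = picture_bgr[fy:fy + fh // 2, fx:fx + fw]          # rectangular frame over the upper face

    black_mask = (upper.max(axis=2) < dark_thr).astype(np.uint8)    # pupil / iris pixel points
    white_mask = (upper.min(axis=2) > bright_thr).astype(np.uint8)  # sclera pixel points

    def two_largest(mask):
        # connected components group the continuous pixel points into regions;
        # keeping only the two largest filters out isolated, non-adjacent pixels
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
        order = sorted(range(1, n), key=lambda i: stats[i, cv2.CC_STAT_AREA], reverse=True)[:2]
        return [stats[i, :4] for i in order]                   # (x, y, w, h) per region

    blacks = sorted(two_largest(black_mask), key=lambda s: s[0])   # left-to-right
    whites = sorted(two_largest(white_mask), key=lambda s: s[0])

    eyes = []
    for (bx, by, bw, bh), (wx, wy, ww, wh) in zip(blacks, whites):
        # connect the adjoining black and white regions into one eye region
        x0, y0 = min(bx, wx), min(by, wy)
        x1, y1 = max(bx + bw, wx + ww), max(by + bh, wy + wh)
        eyes.append((fx + x0, fy + y0, x1 - x0, y1 - y0))
    return eyes
```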
Optionally, the method may further include:
determining a vertical center line of the face region; determining the RGB value of each pixel point in the face region; retaining the black pixel points; combining black pixel points whose spacing is within a set distance (a small distance, for example 1 mm) into regions to obtain a plurality of regions; searching the plurality of regions for two regions that are of a set area (a moderate area, smaller than a hair region and larger than a mole region, for example 5 x 20 mm) and symmetrical about the vertical center line, and determining them as the eyebrow regions; and, if the positions of the eyebrow regions and the eye regions in the x-axis (that is, horizontal) direction are within a preset range, determining that the eye regions are accurate. In this way the eye regions are verified by means of the eyebrow regions.
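The eyebrow-based check could be sketched as follows, assuming the set-distance merging of black pixel points is approximated by a morphological dilation; the areas, tolerances and the pixel coordinate convention (eye boxes given in face-region coordinates) are illustrative assumptions.

```python
# Sketch: verify the detected eye regions against symmetric eyebrow regions.
import cv2
import numpy as np

def verify_eyes_with_eyebrows(face_bgr, eye_boxes, dark_thr=60,
                              merge_px=4, min_area=80, max_area=4000, x_tol=15):
    h, w = face_bgr.shape[:2]
    center_x = w / 2.0                                        # vertical center line of the face region

    dark = (face_bgr.max(axis=2) < dark_thr).astype(np.uint8)
    merged = cv2.dilate(dark, np.ones((merge_px, merge_px), np.uint8))  # combine nearby black pixel points

    n, _, stats, cents = cv2.connectedComponentsWithStats(merged, connectivity=8)
    cands = [(stats[i, :4], cents[i]) for i in range(1, n)
             if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area]  # moderate area: larger than a mole, smaller than hair

    # look for two candidate regions roughly symmetrical about the center line
    for i in range(len(cands)):
        for j in range(i + 1, len(cands)):
            if abs((cands[i][1][0] - center_x) + (cands[j][1][0] - center_x)) < x_tol:
                brows = sorted([cands[i][0], cands[j][0]], key=lambda s: s[0])
                eyes = sorted(eye_boxes, key=lambda b: b[0])   # eye boxes in face-region coordinates
                # the eye regions are taken as accurate if each eyebrow and its eye
                # line up along the x axis within the preset range
                return all(abs(b[0] - e[0]) <= x_tol for b, e in zip(brows, eyes))
    return False
```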
Optionally, the method may further include:
confirming the position of a mole in the face region, and performing transparentization processing on the mole.
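One hedged way to realize this step is sketched below, assuming a mole appears as a small dark blob and that inpainting it from the surrounding skin is an acceptable stand-in for the transparentization processing; the thresholds are illustrative assumptions.

```python
# Sketch: locate small dark blobs (moles) in the face region and fade them out.
import cv2
import numpy as np

def fade_moles(face_bgr, dark_thr=90, max_mole_area=60):
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    dark = (gray < dark_thr).astype(np.uint8)

    n, labels, stats, _ = cv2.connectedComponentsWithStats(dark, connectivity=8)
    mole_mask = np.zeros_like(dark)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] <= max_mole_area:       # only small blobs are treated as moles
            mole_mask[labels == i] = 255

    # replace the mole pixels with values reconstructed from the surrounding skin
    return cv2.inpaint(face_bgr, mole_mask, 3, cv2.INPAINT_TELEA)
```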
Optionally, the method may further include:
if the second picture includes a double chin, performing flattening processing on the double chin in the second picture to obtain a third picture.
Flattening the double chin in the second picture may specifically include:
acquiring the crease lines of the second picture; determining a region between two adjacent crease lines whose distance from each other is smaller than a set distance as a region to be determined; constructing λ equidistant lines within the region to be determined; extracting the picture strips between the equidistant lines and comparing them to determine whether they are consistent; if they are consistent (in a non-double-chin region the strips are consistent; the picture comparison may be realized by an existing comparison method), determining the region to be determined as a non-double-chin region; if they are inconsistent (the flesh of a double-chin region is not distributed uniformly, so the strips of its sub-regions differ), determining the region to be determined as the double-chin region; then constructing an upper equidistant line in the upper part of the double-chin region, extracting a picture strip γ between the upper equidistant line and the upper crease line, extending the strip γ downward λ-1 times (that is, locally replacing the strip at each equidistant line with the strip γ), and replacing the double-chin region accordingly to complete the flattening processing and obtain the third picture.
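The flattening step alone could be sketched as follows, assuming the double-chin region has already been located as a horizontal band (y_top to y_bottom) below the upper crease line; strip_h stands in for the spacing between the equidistant lines and is an assumption of the sketch.

```python
# Sketch: flatten the double-chin band by tiling the strip γ downward.
import numpy as np

def flatten_double_chin(second_pic, x0, x1, y_top, y_bottom, strip_h):
    third_pic = second_pic.copy()
    # γ: the strip between the upper crease line and the first equidistant line
    gamma = second_pic[y_top - strip_h:y_top, x0:x1]

    y = y_top
    while y < y_bottom:                                       # extend γ downward, strip by strip
        h = min(strip_h, y_bottom - y)
        third_pic[y:y + h, x0:x1] = gamma[:h]
        y += h
    return third_pic
```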
Acquiring the crease lines of the second picture may specifically include:
marking the RGB values of all pixel points of the second picture with different colors; determining the line segments of a different color within a same-color area; if the number of such different-color line segments within the same-color area exceeds 2, obtaining the distance between the different-color line segments; and, if the distance is smaller than a set distance threshold, determining the different-color line segments as crease lines.
As for the crease lines, they divide the skin into a plurality of areas. Because those areas are all skin, their RGB values are the same, so when they are marked with colors the skin receives a single color. A crease line, however, differs from the skin in RGB value because of its fold and the lighting, so it is marked with a different color; the color of a crease line is therefore inconsistent with the color of the skin area, and it forms a line segment. In addition, a double chin has at least two crease lines, so their number is also more than two. The crease lines can be distinguished in this way.
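An illustrative sketch of the crease-line detection is given below, assuming coarse color quantization stands in for marking the RGB values with different colors, and that crease lines show up as thin, elongated components of a non-skin color; the quantization step and distance threshold are assumptions of the sketch.

```python
# Sketch: find crease lines as thin non-skin-colored segments in the chin patch.
import cv2
import numpy as np

def find_crease_lines(chin_bgr, quant=32, max_gap=40):
    quantised = (chin_bgr // quant) * quant                   # mark pixels with coarse color labels
    # the dominant color of the patch is taken to be the skin color
    colours, counts = np.unique(quantised.reshape(-1, 3), axis=0, return_counts=True)
    skin = colours[counts.argmax()]

    non_skin = np.any(quantised != skin, axis=2).astype(np.uint8)
    n, _, stats, _ = cv2.connectedComponentsWithStats(non_skin, connectivity=8)
    # keep thin, elongated components: much wider than tall, like a fold under the chin
    lines = [stats[i, :4] for i in range(1, n)
             if stats[i, cv2.CC_STAT_WIDTH] > 4 * stats[i, cv2.CC_STAT_HEIGHT]]

    lines.sort(key=lambda s: s[1])                            # top-to-bottom by y
    # at least two crease lines, closer together than the set distance threshold
    if len(lines) >= 2 and lines[1][1] - lines[0][1] < max_gap:
        return lines
    return []
```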
Referring to fig. 3, fig. 3 provides a terminal including a camera, a processor and a display screen, wherein:
the display screen is used for determining a selected Thor image when a self-portrait page of a self-portrait video application is entered;
the camera is used for collecting a first picture; and
the processor is used for determining a human eye region of the first picture, extracting the lightning eyes of the Thor image, adjusting the size of the lightning eyes to the size of the human eye region to obtain an adjusted Thor image, and superposing the adjusted Thor image on the first picture to obtain a second picture.
An embodiment of the present invention also provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, the computer program causing a computer to execute part or all of the steps of any one of the eye processing methods for a Thor image in a self-portrait video as set forth in the above method embodiments.
Embodiments of the present invention also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform part or all of the steps of any one of the eye processing methods for a Thor image in a self-portrait video as recited in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules illustrated are not necessarily required to practice the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; for instance, the division of the units is only one type of division of logical functions, and there may be other divisions in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, and the memory may include: a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The above embodiments of the present invention have been described in detail, and the principle and implementation of the present invention are explained herein by applying specific examples; the above description of the embodiments is only used to help understand the method of the present invention and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific implementation and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.