CN108846807B - Light effect processing method and device, terminal and computer-readable storage medium


Info

Publication number: CN108846807B
Authority: CN (China)
Prior art keywords: image, area, processed, effect processing, brightness
Legal status: Active
Application number: CN201810501410.2A
Other languages: Chinese (zh)
Other versions: CN108846807A (en)
Inventor: 袁全
Current assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd; priority to CN201810501410.2A; published as CN108846807A; application granted and published as CN108846807B.

Classifications

    • G06T 5/90
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Landscapes

  • Image Processing (AREA)

Abstract

The embodiments of the present application relate to a light effect processing method and apparatus, a terminal, and a computer-readable storage medium. The method includes: acquiring an image to be processed and identifying a face region in the image to be processed; determining the positions of the facial features in the face region; acquiring a brightening position acting on the image to be processed; and performing light effect processing on the face region according to the relative relationship between the brightening position and the positions of the facial features. With this method, effects such as shadow and highlight can be added to the facial features while an illumination effect is added to the image to be processed, so that the facial features appear three-dimensional and the expressiveness of the image is improved.

Description

Light effect processing method and device, terminal and computer-readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular to a light effect processing method and apparatus, a terminal, and a computer-readable storage medium.
Background
With the continuous development of Internet technology, the increasing intelligence of mobile terminals has brought great convenience to users. For example, their photographing capabilities have steadily improved, with results now comparable to those of professional photographic equipment, and because mobile terminals are easy to carry and use, taking pictures with them has become an indispensable part of everyday life.
When taking a picture or processing an image, light effect processing is usually applied to the image to improve its viewing effect. With traditional processing methods, however, the facial features in the image appear flat and lack a stereoscopic look.
Disclosure of Invention
The embodiments of the present application provide a light effect processing method and apparatus, a terminal, and a computer-readable storage medium, which can adjust the brightness of the facial features in a face region according to a brightening position, so that the facial features appear three-dimensional.
A light effect processing method, comprising:
acquiring an image to be processed, and identifying a face region in the image to be processed;
determining the positions of the facial features in the face region;
acquiring a brightening position acting on the image to be processed; and
performing light effect processing on the face region according to the relative relationship between the brightening position and the positions of the facial features.
A light effect processing apparatus, comprising:
a face recognition module, configured to acquire an image to be processed and recognize a face region in the image to be processed;
a facial feature determination module, configured to determine the positions of the facial features in the face region;
a position acquisition module, configured to acquire a brightening position acting on the image to be processed; and
a light effect processing module, configured to perform light effect processing on the face region according to the relative relationship between the brightening position and the positions of the facial features.
A terminal, comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to carry out the method described above.
A computer-readable storage medium, on which a computer program is stored, the computer program, when executed by a processor, carrying out the method described above.
According to the light effect processing method and apparatus, the terminal, and the computer-readable storage medium described above, an image to be processed is acquired, the face region in the image is identified, the positions of the facial features in the face region are determined, a brightening position acting on the image is acquired, and light effect processing is performed on the face region according to the relative relationship between the brightening position and the positions of the facial features. In this way, effects such as shadow and highlight can be added to the facial features while an illumination effect is added to the image, so that the facial features appear three-dimensional and the expressiveness of the image is improved.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a diagram of an application environment of a light effect processing method according to an embodiment;
FIG. 2 is a schematic diagram of the internal structure of the terminal in one embodiment;
FIG. 3 is a flow chart illustrating a light effect processing method according to an embodiment;
FIG. 4 is a flow chart of a light effect processing method according to another embodiment;
FIG. 5 is a flow chart of a light effect processing method according to another embodiment;
FIG. 6 is a flow chart of a light effect processing method according to another embodiment;
FIG. 7 is a flow chart of a light effect processing method according to another embodiment;
FIG. 8 is a schematic view of a light effect processing model in an embodiment;
FIG. 9 is a block diagram of a light effect processing apparatus according to an embodiment;
FIG. 10 is a schematic diagram of an image processing circuit in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
FIG. 1 is a diagram of an application environment of a light effect processing method in an embodiment. Referring to FIG. 1, the terminal 110 may use its camera to perform shooting, for example scanning an object 120 in the environment in real time to obtain frame images and generating a shot image from them. Optionally, the camera includes a first camera module 112 and a second camera module 124, which perform shooting jointly. The terminal 110 provides a plurality of photographing modes, such as light effect modes; different modes can be enabled when photographing the object 120, so that images with different effects are obtained.
The terminal 110 may use a frame image or the generated image as the image to be processed, identify the face region of the shooting scene in the image to be processed, and further determine the positions of the facial features in the face region; acquire a brightening position acting on the image to be processed; and perform light effect processing on the face region according to the relative relationship between the brightening position and the positions of the facial features.
Further, the application environment may also include a user. The terminal 110 may display the image to be processed, and the user may select any area of the displayed image through a trigger instruction. The trigger instruction may be initiated by a touch operation, a physical key operation, a voice control operation, a shake operation, or the like. After detecting the trigger instruction, the terminal 110 obtains the brightening position of the image to be processed selected by the trigger instruction and determines a central pixel point according to the brightening position; acquires a light effect processing model according to the central pixel point; calculates a brightness enhancement coefficient for each pixel point in the image to be processed according to the light effect processing model; and performs light effect processing on each pixel point according to its brightness enhancement coefficient. The terminal 110 is an electronic device located at the outermost periphery of a computer network and mainly used for inputting user information and outputting processing results. It can be understood that in other embodiments provided in the present application, the application environment of the light effect processing method may include only the terminal 110.
FIG. 2 is a schematic diagram of the internal structure of the terminal in one embodiment. As shown in FIG. 2, the terminal 110 includes a processor, a memory, a display screen, and a camera connected through a system bus. The processor provides computing and control capabilities to support the operation of the entire terminal 110. The memory is used to store data, programs, and the like, and stores at least one computer program which can be executed by the processor to implement the light effect processing method applicable to the terminal 110 provided in the embodiments of the present application. The memory may include a non-volatile storage medium, such as a magnetic disk, an optical disc, or a read-only memory (ROM), as well as a random-access memory (RAM). For example, in one embodiment, the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program; the computer program can be executed by the processor to implement the light effect processing method provided in the following embodiments. The internal memory provides a cached execution environment for the operating system and the computer program in the non-volatile storage medium. The camera includes the first camera module and the second camera module, both of which can be used to generate frame images. The display screen may be a touch screen, such as a capacitive or resistive screen, used to display visual information such as frame images or shot images; it may also detect touch operations applied to it and generate corresponding instructions. The terminal 110 may be a mobile phone, a tablet computer, a personal digital assistant (PDA), a point-of-sale (POS) terminal, a vehicle-mounted computer, a wearable device, or the like.
Those skilled in the art will appreciate that the configuration shown in FIG. 2 is a block diagram of only part of the configuration relevant to the present application and does not constitute a limitation on the terminal 110 to which the present application is applied; a particular terminal 110 may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, as shown in FIG. 3, a light effect processing method is provided that can adjust the brightness of the facial features in the face region according to the brightening position, so that the facial features appear three-dimensional. The present embodiment is mainly described by applying the method to the terminal shown in FIG. 1; the method includes the following steps 302 to 308:
step 302: acquiring an image to be processed, and identifying a face area in the image to be processed.
The image to be processed is an image that requires light effect processing; it may be an image that has been captured and stored, or a frame image obtained by the camera scanning in real time in a shooting mode. The terminal may extract relevant feature data from the image to be processed and detect whether the feature data matches facial features; if so, the terminal further obtains the area occupied by the detected face in the image to be processed, which is the face region.
When the image to be processed is a frame image, the terminal receives an instruction to start the camera and may invoke the camera to scan and enter a shooting state. The camera includes a first camera module and a second camera module, either or both of which may scan objects in the shooting environment to form frame images. Optionally, frame images may be generated in real time at a corresponding frame rate, and a lighting effect may be added while the frame images are generated.
When the image to be processed is a captured, generated image, the terminal may receive a light effect processing instruction for it. The instruction may be triggered automatically after the shot image is generated, in which case the generated image is the image to be processed; or the terminal may receive a light effect processing instruction from the user for a selected image, in which case the selected image is the image to be processed. The light effect processing instruction may be triggered by a detected touch operation, a physical key press, a voice control operation, a shake of the device, or the like. The touch operation may be a tap, a long press, a slide, a multi-point touch, and so on. The terminal may provide an on button for triggering light effect processing; when a click on this button is detected, a light effect processing instruction is triggered. The terminal may also preset activation voice information for triggering the instruction: it receives voice information through a voice receiving device, analyzes it, and triggers the light effect processing instruction when the received voice information matches the preset activation voice information.
The terminal may perform face recognition on the image to be processed to judge whether it contains a face and, if so, determine the face region. The terminal may extract image features from the image to be processed and analyze them with a preset face recognition model to judge whether the image contains a face. The image features may include shape features, spatial features, edge features, and the like: shape features refer to local shapes in the image to be processed; spatial features refer to the mutual spatial positions or relative directional relationships between regions segmented from the image; and edge features refer to the boundary pixels between two regions in the image.
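The "edge features" mentioned above can be illustrated with a minimal sketch that marks boundary pixels between two regions. This toy gradient detector is an illustrative assumption, not the feature extractor actually used by the patent:

```python
def edge_map(gray, threshold=0.2):
    """Mark boundary pixels where intensity changes sharply between two
    regions. `gray` is a list of rows of floats in [0, 1]. A simple
    finite-difference gradient stands in for a real edge detector."""
    h, w = len(gray), len(gray[0])
    edges = [[False] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            gx = abs(gray[r][c] - gray[r][c - 1]) if c > 0 else 0.0  # horizontal change
            gy = abs(gray[r][c] - gray[r - 1][c]) if r > 0 else 0.0  # vertical change
            edges[r][c] = (gx + gy) > threshold
    return edges

# Toy image: dark left half, bright right half -> one vertical edge at column 4.
img = [[0.0] * 4 + [1.0] * 4 for _ in range(4)]
edges = edge_map(img)
```

In a real system these boundary pixels would feed the face recognition model together with shape and spatial features.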
Optionally, the face recognition model may be a decision model constructed in advance through machine learning. When constructing the model, a large number of sample images may be obtained, including both images containing faces and images without people. The sample images may be labeled according to whether each contains a face, and the labeled samples are then used as the input for training, yielding the face recognition model through machine learning.
Step 304: Determining the positions of the facial features in the face region.
In this embodiment, the feature points corresponding to the facial feature parts in the face region are extracted, the pixel positions of the feature points are obtained, and the positions of the facial features are determined from those pixel positions.
Specifically, the terminal may extract the feature points corresponding to the facial feature parts of the face region. Each facial feature part may be represented by a plurality of feature points, which can describe its shape, position, contour, and so on. The position of a facial feature can be described by the coordinate value of each feature point, where a coordinate value may be expressed as the pixel position corresponding to the feature point; for example, the coordinate value of a feature point is row X and column Y of the corresponding pixel position.
It can be understood that the positions of the facial features include the pixel positions of each facial feature, i.e., the pixel positions of the eyes, ears, nose, mouth, and eyebrows.
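A minimal sketch of representing feature positions by landmark pixel coordinates follows. The landmark points, their grouping into parts, and the use of a centroid to summarize each part are all illustrative assumptions, not the patent's actual landmark model:

```python
def feature_positions(landmarks):
    """landmarks: {part_name: [(row, col), ...]} mapping each facial
    feature part to its feature points, as described above.
    Returns {part_name: (row, col)} with the centroid of each part."""
    positions = {}
    for part, points in landmarks.items():
        rows = [p[0] for p in points]
        cols = [p[1] for p in points]
        # Summarize each part by the centroid of its feature points.
        positions[part] = (sum(rows) / len(rows), sum(cols) / len(cols))
    return positions

# Hypothetical landmark coordinates for two parts of a face region.
demo = {
    "left_eye": [(120, 80), (118, 95), (122, 95), (120, 110)],
    "nose": [(150, 100), (170, 95), (170, 105)],
}
centers = feature_positions(demo)
```

In practice a trained landmark detector would supply the per-part point lists; the centroid is one convenient way to derive a single position per feature.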
Step 306: Acquiring a brightening position acting on the image to be processed.
The terminal may obtain the brightening position, which may refer to the center of the brightened area of the image to be processed and can be regarded as the position where the added light intensity is highest. Taking the brightening position as the center, the intensity of the added light effect may gradually decrease toward its periphery. Optionally, the brightening position may be a fixed point preset by the terminal, for example the center point of the image to be processed. The terminal may obtain the length and width of the image and determine its center point from them: the center point lies at half the width and half the length. If the width of the image to be processed is W and the length is L, the center point can be represented by (L/2, W/2). The brightening position may also be another preset fixed point; it is not limited thereto.
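The default center-point case above reduces to a one-line computation; this sketch simply restates the (L/2, W/2) example in code:

```python
def default_highlight_position(length, width):
    """Default brightening position: the image center (L/2, W/2),
    as in the example above, with length L and width W."""
    return (length / 2, width / 2)

pos = default_highlight_position(640, 480)  # (L, W) = (640, 480)
```

Any other preset fixed point, or a user-selected touch position, could replace this default.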
Alternatively, the brightening position may be selected by the user, who can choose a desired position by touching any point of the image to be processed. The terminal receives the touch operation, obtains the touch position from it, and uses the touch position as the brightening position. Allowing the user to select the brightening position according to actual needs satisfies different users' requirements and can effectively improve the added light effect.
Alternatively, the brightening position may be the center of the portrait area in the image to be processed. After determining the face region, the terminal may obtain its center and use it as the brightening position. The brightening position may also be a specific part of the image area; for example, the forehead area of the face may serve as the brightening position. After determining the face region, the terminal may extract its feature points, which can describe the shape and position of the facial features, the face contour, and so on. The terminal may determine the forehead area from these feature points, select the center point of the forehead area, and use it as the brightening position. Selecting a specific part of the face region as the brightening position makes the light effect added to the image to be processed better.
It will be appreciated that the brightening position may be obtained in other ways and is not limited to those described above.
Step 308: Performing light effect processing on the face region according to the relative relationship between the brightening position and the positions of the facial features.
Here, an illumination effect can be added to the facial features according to the brightening position. The light effect processing on the face region may consist of adding illumination effects such as shadow and highlight to the facial features.
Specifically, the direct lighting effect of light emitted from the brightening position onto the facial feature parts is simulated, the processing areas at the facial feature parts of the face region are determined, and the brightness of those processing areas is adjusted according to a preset adjustment strategy. That is, the brightening position is taken as the illumination origin, and the illumination effect produced by light traveling from the brightening position to the facial features in the face region is simulated. The processing areas of the facial features then need to be determined; they may be preset parts of the facial features, such as the bridge of the nose, the corners of the eyes, or the corners of the mouth. Adjusting the brightness of these processing areas can add highlight, shadow, and similar effects, making the facial features in the image to be processed appear more three-dimensional.
Optionally, the orientation information of the brightening position relative to the facial feature positions can be obtained, including an angle value and a distance value. Substituting the angle value and distance value into a preset light effect processing model determines the shadow regions of the facial features, and the brightness of the shadow regions is adjusted according to preset adjustment parameters. The shadow regions of the facial features are determined according to the brightening position, and brightness enhancement is applied around the brightening position: the closer a pixel is to the brightening position, the stronger the enhancement; the farther away, the weaker. The light effect processing model determines the orientation of the facial features relative to the brightening position from the angle and distance values, selects the areas of the facial features facing away from the brightening position as shadow regions, and reduces the brightness values of those regions, so that the face region shows a shadow effect and the facial features appear more three-dimensional.
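The distance-dependent enhancement just described (stronger near the brightening position, weaker farther away) could be sketched as follows. The Gaussian falloff and the `sigma` parameter are illustrative assumptions; the patent only requires that the enhancement decrease with distance:

```python
import math

def enhancement_coefficient(pixel, highlight, sigma=50.0):
    """Brightness enhancement coefficient for one pixel: largest at the
    brightening position, decaying monotonically with distance.
    `pixel` and `highlight` are (row, col) coordinates."""
    dist = math.hypot(pixel[0] - highlight[0], pixel[1] - highlight[1])
    return math.exp(-(dist * dist) / (2 * sigma * sigma))

center = (100, 100)                                # brightening position
near = enhancement_coefficient((110, 100), center)  # close pixel
far = enhancement_coefficient((200, 100), center)   # distant pixel
```

Multiplying each pixel's luminance by its coefficient would then brighten the area around the brightening position while leaving distant areas nearly unchanged.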
Further, the light effects may also include the lighting type, lighting intensity, lighting brightness, lighting color, and so on. This embodiment performs light effect processing on the face region of the image to be processed according to the determined brightening position. Specifically, the light effect processing of the face region may further include adding different types of light effects, increasing the brightness of the image, adjusting the image's color, adding a light distribution effect, and so on.
According to the light effect processing method above, the image to be processed is acquired, the face region in it is identified, the positions of the facial features in the face region are determined, the brightening position acting on the image is acquired, and light effect processing is performed on the face region according to the relative relationship between the brightening position and the positions of the facial features. Effects such as shadow and highlight can thus be added to the facial features while an illumination effect is added to the image, so that the facial features appear three-dimensional and the expressiveness of the image is improved.
In one embodiment, as shown in FIG. 4, performing light effect processing on the face region according to the relative relationship between the brightening position and the positions of the facial features in step 308 includes:
step 402: and simulating the direct projection effect of the light rays emitted from the brightening position on the five-sense-organ parts, and determining the processing area of the human face area on the five-sense-organ parts.
The illumination effect refers to simulating, based on the positional relationship between the brightening position and the facial features, a scene in which light is emitted from the brightening position and shines onto the facial features, and determining, according to the principles of light propagation, the processing areas produced at the facial features of the face, i.e., the areas that require light effect processing.
Specifically, three-dimensional facial features corresponding to the face in the image to be processed may be acquired by emitting structured light. While generating the image to be processed, the terminal may invoke the camera to emit structured light in order to measure the distance between each pixel of the face region and the camera, and obtain the three-dimensional facial features of the face from these distances. Compared with the two-dimensional facial features presented by an ordinary picture, such as the sizes of and distances between organs, the three-dimensional facial features further include the three-dimensional position information of each preset organ of the face. For example, taking a certain reference plane as a datum, the distance between each organ point on the face and the reference plane can be obtained, and the three-dimensional position information of the organ, such as the height of the nose bridge or the depth of the eye socket, can be derived from that distance. Optionally, the emitted structured light may be infrared structured light.
Further, the areas of the face region that receive direct structured light are selected as highlight processing areas, and the areas that do not receive it are selected as shadow processing areas. According to the principles of illumination and shadow formation, when the light direction is inclined relative to a facial feature, areas blocked by that feature appear in the face region; these areas, which receive no structured light, are selected as shadow processing areas. Areas closer to the light source receive stronger illumination and appear brighter, so the facial feature areas near the brightening position, which receive direct light, are selected as highlight processing areas. For example, the imaging effect can be obtained by calculating the angular relationship between the brightening position and each facial feature position in three-dimensional space: connect the brightening position and a feature position in a three-dimensional coordinate system, compute the length of the connecting line and the angle it forms with the reference plane, select the areas with shorter connecting lines as highlight processing areas, and select the areas whose connecting lines form smaller angles with the reference plane as shadow processing areas. Light effect processing is then performed on the selected highlight and shadow processing areas.
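The connecting-line calculation above can be sketched as follows. The coordinate layout (z as height above the reference plane) and the example points are assumptions for illustration only:

```python
import math

def line_length_and_angle(highlight, point):
    """Connect the brightening position and a feature point in 3-D space,
    returning the length of the connecting line and the angle (degrees)
    it forms with the reference plane z = 0, as described above."""
    dx, dy, dz = (point[i] - highlight[i] for i in range(3))
    horizontal = math.hypot(dx, dy)            # projection onto the reference plane
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    angle = math.degrees(math.atan2(abs(dz), horizontal))
    return length, angle

light = (0.0, 0.0, 30.0)      # hypothetical brightening position above the face
nose_tip = (5.0, 0.0, 12.0)   # higher feature, close to the light
jaw = (5.0, 40.0, 2.0)        # lower feature, farther from the light
len_nose, ang_nose = line_length_and_angle(light, nose_tip)
len_jaw, ang_jaw = line_length_and_angle(light, jaw)
```

Per the selection rule described above, the nose tip (shorter line) would be a highlight candidate, while the jaw (smaller angle to the reference plane) would be a shadow candidate.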
Step 404: Adjusting the brightness of the processing areas according to a preset adjustment strategy.
Specifically, the present embodiment obtains the depths of the preset organs in the processing areas and calculates a depth threshold; the brightness of the parts whose depth is less than the depth threshold is increased, and the brightness of the parts whose depth exceeds the depth threshold is decreased.
The terminal may calculate the depth of each part of a preset organ in the area to be processed and compute a corresponding depth threshold from these depths; the depth threshold may be a weighted average of the parts' depths. The brightness of parts whose depth is below the threshold is increased, and the brightness of parts whose depth exceeds it is decreased. Optionally, the depth threshold may be subtracted from the calculated depth of each part to obtain a depth difference for that part. For parts with a depth difference less than 0, the smaller the difference, the greater the brightness increase; for parts with a depth difference greater than 0, the larger the difference, the greater the brightness decrease. The adjustment value for the Y data in the YUV data (YUV, also called YCrCb, is a color encoding adopted by European television systems) of a part's pixels can be calculated from its depth difference. The Y data represents luminance (luma), i.e., the grayscale value. Adding the adjustment value to the corresponding Y data adjusts the brightness: the smaller the depth difference, the larger the adjustment value.
Furthermore, the terminal may store a correspondence between depth differences and adjustment values. The adjustment value for a given depth difference can then be looked up in this correspondence and added to the Y data of the corresponding part, thereby adjusting the brightness that the part presents.
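The depth-based luma adjustment described above can be sketched as follows. This is an illustrative Python/NumPy sketch, not the embodiment's implementation: the function name, the `gain` parameter, and the linear mapping from depth difference to Y adjustment are assumptions.

```python
import numpy as np

def adjust_luma_by_depth(y_channel, depth_map, weights=None, gain=40.0):
    """Brighten parts closer than the depth threshold, darken parts farther.

    y_channel: 2-D array of Y (luma) values, 0-255, for the region to process.
    depth_map: 2-D array of per-pixel depths for the same region.
    weights:   optional per-pixel weights for the weighted-average threshold.
    gain:      hypothetical scale from depth difference to a Y offset.
    """
    # Depth threshold: weighted average of the depths in the region.
    threshold = np.average(depth_map, weights=weights)

    # Depth difference: negative -> closer than threshold -> brighten;
    # positive -> farther than threshold -> darken.
    diff = depth_map - threshold

    # The smaller (more negative) the depth difference, the larger the
    # adjustment value added to Y; normalized so the offset peaks at +/-gain.
    adjustment = -gain * diff / (np.abs(diff).max() + 1e-8)

    # Add the adjustment to the Y data and clamp to the valid 8-bit range.
    return np.clip(y_channel + adjustment, 0, 255)
```

A linear mapping is the simplest choice consistent with "the smaller the depth difference, the larger the adjustment value"; the embodiment's lookup-table correspondence could replace it without changing the structure.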
In the above light effect processing method, the smaller the depth difference, the more the corresponding part protrudes. By further adjusting the brightness of the preset organs, highlights are added to the higher parts of the face and shadows to the lower parts, simulating lighting cast on the face and enhancing its stereoscopic appearance.
In one embodiment, as shown in fig. 5, the light effect processing is performed on the face region according to the relative relationship between the brightness enhancement position and the position of the five sense organs, and the method further includes:
step 502: acquiring orientation information of the brightening position corresponding to the position of the five sense organs, wherein the orientation information comprises an angle value and a distance value.
When the image to be processed is an already imaged and stored image, the orientation information between the brightening position and the positions of the five sense organs is calculated on the two-dimensional plane; for example, the orientation information is determined by calculating the angle value and distance value of the brightening position relative to each area of the five sense organs. With this orientation information, the effect of light emitted from the brightening position illuminating the five sense organs can be simulated.
Step 504: substituting the angle value and the distance value into a preset light effect processing model to determine a shadow region of the five sense organs;
the light effect processing model can be a preset shadow region calculation algorithm, and the angle value and the distance value of the brightening position relative to each region in the five sense organs position are substituted into the algorithm, so that a shadow region generated in a human face region when light rays are emitted through the brightening position can be simulated. For example, the area with a longer distance value and a smaller angle value from the brightening position in the five-sense organ part can be selected as the shadow area, the size of the selected shadow area can be determined according to the brightness enhancement range of the brightening position, and the larger the brightness enhancement range is, the larger the area of the selected shadow area is.
Step 506: and adjusting the brightness of the shadow area according to preset adjustment parameters.
Specifically, the preset adjustment parameters may be preset according to the light effect processing model, or may be set according to the user's requirements. And reducing the brightness value of the shadow area according to the preset adjusting parameters so that the face area presents a shadow effect and the five sense organs are more three-dimensional in representation.
For example, if the added brightening position of the lighting effect is at the upper right position of the image to be processed, it can be determined that there is a shadow region on the left side of the nose wing in the face region, and a shadow effect is added to the left side of the nose wing in the face region, so that the portrait has a stereoscopic effect.
In one embodiment, as shown in fig. 6, the light effect processing method further includes:
step 602: and performing brightness enhancement treatment on the image to be processed according to the brightness enhancement position.
The image to be processed is brightened with the brightening position as the center, and the intensity of the added light effect decreases gradually away from the brightening position. The brightness value and the adjustment range after brightening can be preset or set according to the user's selection.
Step 604: and detecting the brightness value of a pixel in the processed human face area, and determining an area to be processed according to the magnitude relation between the brightness value of the pixel and a preset brightness value.
Each pixel has a corresponding brightness value between 0 and 255 that reflects how light or dark it appears. This distinction is absolute: pixels near 255 are bright, pixels near 0 are dark, and pixels around 128 belong to the midtones.
Region division is performed according to the relationship between pixel brightness values and preset brightness values. Specifically, areas of the face region whose pixel brightness values are smaller than a first preset brightness value are divided into shadow areas; areas whose pixel brightness values are larger than a second preset brightness value are divided into highlight areas; and areas whose pixel brightness values lie between the first and second preset brightness values are divided into transition areas. For example, areas with pixel brightness values below 50 are selected as shadow areas, areas with values between 50 and 100 as transition areas, and areas with values above 100 as highlight areas.
Alternatively, the regions may be divided according to the relative values of the pixel brightness values, for example, a relatively dark 20% region in the face region is selected as a shadow region.
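Both the absolute thresholding and the relative split described above can be sketched with NumPy masks. The function names are illustrative; the thresholds 50 and 100 and the 20% fraction are the example values from the text.

```python
import numpy as np

def partition_by_brightness(luma, low=50, high=100):
    """Split a face region into shadow / transition / highlight masks
    by absolute pixel brightness (example thresholds from the text)."""
    shadow = luma < low
    highlight = luma > high
    transition = ~shadow & ~highlight
    return shadow, transition, highlight

def relative_shadow_mask(luma, fraction=0.2):
    """Alternative relative split: the darkest `fraction` of the
    region's pixels is selected as the shadow area."""
    cutoff = np.quantile(luma, fraction)
    return luma <= cutoff
```

Step 606 then scales each mask's brightness by its preset adjustment proportion, e.g. leaving highlights unchanged and reducing the shadow mask by 20%.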
Step 606: and adjusting the brightness of the area to be processed according to a preset brightness adjustment proportion.
For example, the brightness value of the highlight region may be kept unchanged, the brightness value of the shadow region may be adjusted to be reduced by 20%, and the brightness reduction degree may be calculated in the transition region according to the brightness difference. It can be understood that the brightness of the to-be-processed area may also be adjusted according to different parameters or according to user selection, which is not limited in this embodiment.
In one embodiment, as shown in fig. 7, the brightening treatment of the image to be processed according to the brightening position includes the following steps:
step 702: and constructing a light effect processing model.
The light effect processing model is a model for performing light effect processing on an image to be processed, and can simulate the curve of light intensity emitted by a light source. Acquiring a light effect processing model according to the brightening position means obtaining a model that, with the brightening position as the light source, simulates the light intensity at each pixel position. The terminal can pre-store a light effect processing reference model, which can be a model that takes an arbitrary reference pixel in the image as the light source. After the brightening position is obtained, the displacement of the brightening position relative to the reference pixel can be computed, and the light effect processing model corresponding to the brightening position is obtained by displacing the reference model accordingly.
For example, a light effect processing reference model P(x, y) that takes the reference pixel at coordinates (0, 0) as the light source may be stored in the terminal in advance. Suppose the selected highlight position is (x0, y0); the displacement of the highlight position relative to the reference pixel is then (-x0, -y0), and the light effect processing model corresponding to the brightening position, obtained according to this displacement, is P(x - x0, y - y0). The resulting model P(x - x0, y - y0) takes the highlight position (x0, y0) as its light source.
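The translation of the pre-stored reference model can be expressed compactly if the model is represented as a callable P(x, y); this representation, and the function name, are assumptions for illustration.

```python
def shifted_model(reference_model, x0, y0):
    """Derive the light effect model for highlight position (x0, y0) by
    translating a reference model P(x, y) whose light source is at (0, 0).

    reference_model: any callable P(x, y) giving the light intensity.
    Returns the callable P(x - x0, y - y0).
    """
    return lambda x, y: reference_model(x - x0, y - y0)
```

The shifted model simply evaluates the reference model at the displaced coordinates, so its peak (the light source) moves from (0, 0) to (x0, y0).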
In addition, the light effect processing model comprises a brightness enhancement coefficient, which can be used to determine the added light intensity; the greater the coefficient, the higher the added intensity.
Step 704: and determining the distribution center of the light effect processing model according to the brightening position, and determining the distribution amplitude according to the brightness enhancement coefficient in the light effect processing model.
In this embodiment, the light effect processing model is a two-dimensional Gaussian distribution function, and the terminal may determine the distribution center of the model from the brightness enhancement position and the distribution amplitude from the brightness enhancement coefficient. The distribution center determines the position of the model: the terminal can use the brightening position as the distribution center, which is the highest point of the two-dimensional Gaussian distribution function. The distribution amplitude describes the shape of the function: the larger the brightness enhancement coefficient, the taller and narrower the model; the smaller the coefficient, the shorter and flatter the model.
In one embodiment, the two-dimensional gaussian distribution function of the light effect processing model can be represented by equation (1):
P(z) = exp(-||z - μ||^2 / (2d^2))    (1)
wherein z represents a pixel point in the image to be processed; P(z) represents the brightness enhancement amplitude applied to that pixel point; d is the standard deviation, whose size is influenced by the brightness enhancement coefficient (the larger the coefficient, the smaller d; the smaller the coefficient, the larger d); and μ denotes the distribution center of the light effect processing model, which may be the acquired highlight position. Under this model, pixel points at different positions of the image have different brightness enhancement amplitudes: pixel points closer to the distribution center μ have larger amplitudes, and those farther away have smaller amplitudes.
Step 706: and constructing a two-dimensional Gaussian distribution function according to the distribution center and the distribution amplitude.
The terminal can construct a two-dimensional Gaussian distribution function according to the determined distribution center and the distribution amplitude, and brighten the image to be processed according to the constructed two-dimensional Gaussian distribution function.
FIG. 8 is a diagram of a light effect processing model in an embodiment. As shown in fig. 8, the light effect processing model is a two-dimensional Gaussian distribution function whose marginal distributions are both one-dimensional normal distributions. The x and y axes represent the position coordinates of pixel points in the image to be processed, and the z axis represents the brightness enhancement amplitude. The distribution center 402 is the pixel point at coordinates (x0, y0); the terminal can acquire the highlight position and use it as the distribution center 402, which is the point of maximum brightness enhancement amplitude in the model. The brightness enhancement coefficient influences the distribution amplitude of the model: the larger the coefficient, the larger the brightness enhancement amplitude of the pixel points and the more their brightness is raised; the smaller the coefficient, the smaller the amplitude and the less their brightness is raised.
Step 608: and adding an illumination effect to the image to be processed according to the two-dimensional Gaussian distribution function.
The terminal can calculate the brightness enhancement amplitude of the pixel points according to the two-dimensional Gaussian distribution function, and the brightness enhancement amplitude is multiplied by the original brightness values of the pixel points, so that the brightness values after the brightness enhancement processing can be calculated. The terminal can perform brightening treatment on the pixel points according to the calculated brightness values, and adds light effects to the image to be processed.
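Putting steps 702-706 and 608 together, the Gaussian brightening can be sketched as below. This is an illustrative sketch: the function name, the peak-gain parameter `k`, and the exact form of the multiplicative factor (1 + k·amplitude) are assumptions; the embodiment states only that the amplitude is multiplied with the original brightness.

```python
import numpy as np

def gaussian_light_effect(image, center, d=50.0, k=0.5):
    """Add a Gaussian illumination effect centered on the highlight position.

    image:  2-D array of luma values, 0-255.
    center: (x0, y0) highlight position, i.e. the distribution center mu.
    d:      standard deviation; a larger brightness enhancement coefficient
            corresponds to a smaller d (a taller, narrower peak).
    k:      hypothetical peak gain; the pixel at the center is scaled by 1 + k.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x0, y0 = center
    # Brightness enhancement amplitude from the 2-D Gaussian of equation (1):
    # largest at the distribution center, falling off with distance.
    amplitude = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * d ** 2))
    # Scale the original luma by the enhancement factor and clamp to 8 bits.
    return np.clip(image * (1 + k * amplitude), 0, 255)
```

Because the amplitude decays smoothly from the center, the added light falls off naturally toward the edges of the image, which is the effect described in this embodiment.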
In this embodiment, the image to be processed can be processed by the two-dimensional gaussian distribution function to add the light effect, and the brightness enhancement amplitudes of the pixel points at different positions are different, so that the image has a better light effect, and the added light effect is more real and natural.
It should be understood that although the steps in the flowcharts of figs. 3-7 are displayed sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in figs. 3-7 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be executed at different times; their execution order is not necessarily sequential, and they may be performed in turn or in alternation with other steps or with sub-steps or stages of other steps.
As shown in fig. 9, in one embodiment, there is provided a light effect processing apparatus, the apparatus comprising: a face recognition module 910, a facial feature determination module 920, a position acquisition module 930, and a light effect processing module 940.
The face recognition module 910 is configured to acquire an image to be processed and recognize a face region in the image to be processed.
A facial features determining module 920, configured to determine the positions of facial features in the face region.
A position obtaining module 930, configured to obtain the brightness enhancement position acting on the image to be processed.
And the light effect processing module 940 is configured to perform light effect processing on the face region according to the relative relationship between the brightening position and the position of the five sense organs.
In the above light effect processing device, the face recognition module 910 acquires the image to be processed and recognizes the face region in it; the positions of the five sense organs in the face region are determined; the brightening position acting on the image to be processed is acquired; and light effect processing is performed on the face region according to the relative relationship between the brightening position and the positions of the five sense organs. Effects such as shadow and highlight can thus be added to the facial features while the illumination effect is added to the image, giving the features a three-dimensional appearance and improving the expressiveness of the image.
In one embodiment, the facial features determining module 920 is further configured to extract feature points corresponding to facial features in the face region; and acquiring the pixel position of the feature point, and determining the position of the five sense organs according to the pixel position.
In one embodiment, the light effect processing module 940 is further configured to simulate a direct effect of the light emitted from the brightening position on the five sense organs, and determine a processing area for the five sense organs in the face area; and adjusting the brightness of the processing area according to a preset adjusting strategy.
In one embodiment, the light effect processing module 940 is further configured to acquire a three-dimensional face feature corresponding to a face in the image to be processed by emitting the structured light; and selecting an area which receives the structured light in the face area as a highlight processing area, and selecting an area which does not receive the structured light in the face area as a shadow processing area.
In one embodiment, the light effect processing module 940 is further configured to obtain the depth of the preset organs in the processing area and calculate a depth threshold; and to increase the brightness of the preset organs smaller than the depth threshold and reduce the brightness of the preset organs larger than the depth threshold.
In one embodiment, the light effect processing module 940 is further configured to obtain orientation information of the brightness enhancement position corresponding to the position of the five sense organs, where the orientation information includes an angle value and a distance value; substituting the angle value and the distance value into a preset light effect processing model to determine a shadow region of the five sense organ parts; and adjusting the brightness of the shadow area according to preset adjustment parameters.
In one embodiment, the light effect processing device further comprises a brightness adjusting module, configured to perform brightness enhancement processing on the image to be processed according to the brightness enhancement position; detecting pixel brightness values in the processed human face area, and determining an area to be processed according to the magnitude relation between the pixel brightness values and preset brightness values; and adjusting the brightness of the area to be processed according to a preset brightness adjustment proportion.
In one embodiment, the brightness adjustment module is further configured to construct a light effect processing model; determining the distribution center of the light effect processing model according to the brightening position, and determining the distribution amplitude according to the brightness enhancement coefficient in the light effect processing model; constructing a two-dimensional Gaussian distribution function according to the distribution center and the distribution amplitude; and adding an illumination effect to the image to be processed according to the two-dimensional Gaussian distribution function.
The division of each module in the light effect processing apparatus is only used for illustration; in other embodiments, the light effect processing apparatus may be divided into different modules as needed to complete all or part of its functions.
For specific definition of the light effect processing device, reference may be made to the above definition of the light effect processing method, which is not described herein again. All or part of the modules in the light effect processing device can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent of a processor in the terminal, and can also be stored in a memory in the terminal in a software form, so that the processor can call and execute operations corresponding to the modules.
The implementation of each module in the light effect processing apparatus provided in the embodiments of the present application may be in the form of a computer program. The computer program may be run on a terminal or a server. The program modules constituted by the computer program may be stored on the memory of the terminal or the server. The computer program, when being executed by a processor, realizes the steps of the light effect processing method described in the embodiments of the present application.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform light effect processing methods as described in the embodiments above.
The embodiment of the application also provides a computer program product. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the light effect processing method described in the embodiments above.
The embodiment of the application also provides the terminal equipment. The terminal device includes therein an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 10 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 10, for convenience of explanation, only aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in fig. 10, the image processing circuit includes an ISP processor 1040 and control logic 1050. The image data captured by the imaging device 1010 is first processed by the ISP processor 1040, and the ISP processor 1040 analyzes the image data to capture image statistics that may be used to determine and/or control one or more parameters of the imaging device 1010. The imaging device 1010 may include a camera having one or more lenses 1012 and an image sensor 1014. The image sensor 1014 may include an array of color filters (e.g., Bayer filters), and the image sensor 1014 may acquire light intensity and wavelength information captured with each imaging pixel of the image sensor 1014 and provide a set of raw image data that may be processed by the ISP processor 1040. The sensor 1020 (e.g., a gyroscope) may provide parameters of the acquired image processing (e.g., anti-shake parameters) to the ISP processor 1040 based on the type of sensor 1020 interface. The sensor 1020 interface may utilize an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
In addition, the image sensor 1014 may also send raw image data to the sensor 1020, the sensor 1020 may provide the raw image data to the ISP processor 1040 based on the type of interface of the sensor 1020, or the sensor 1020 may store the raw image data in the image memory 1030.
The ISP processor 1040 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and ISP processor 1040 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
ISP processor 1040 may also receive image data from image memory 1030. For example, the sensor 1020 interface sends raw image data to the image memory 1030, and the raw image data in the image memory 1030 is then provided to the ISP processor 1040 for processing. The image Memory 1030 may be part of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from image sensor 1014 interface or from sensor 1020 interface or from image memory 1030, ISP processor 1040 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 1030 for additional processing before being displayed. ISP processor 1040 may also receive processed data from image memory 1030 for image data processing in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 1080 for viewing by a user and/or for further Processing by a Graphics Processing Unit (GPU). Further, the output of ISP processor 1040 can also be sent to image memory 1030, and display 1080 can read image data from image memory 1030. In one embodiment, image memory 1030 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 1040 may be transmitted to the encoder/decoder 1070 in order to encode/decode image data. The encoded image data may be saved and decompressed before being displayed on the display 1080 device.
The steps of the ISP processor 1040 processing the image data include: performing VFE (Video Front End) processing and CPP (Camera Post Processing) on the image data. VFE processing of the image data may include modifying the contrast or brightness of the image data, modifying digitally recorded lighting status data, performing compensation processing (e.g., white balance, automatic gain control, gamma correction, etc.) on the image data, performing filter processing on the image data, etc. CPP processing of image data may include scaling an image and providing a preview frame and a record frame to each path; the CPP may use different codecs to process the preview and record frames. The image data processed by the ISP processor 1040 may be sent to the light effect processing module 1060 for light effect processing of the image before being displayed. The light effect processing performed by the light effect processing module 1060 on the image data may include: clear lighting, cloudy lighting, natural light, studio light, stage light, contour light, and the like. The light effect processing module 1060 can be a Central Processing Unit (CPU), a GPU, a coprocessor, or the like in the mobile terminal. The data processed by the light effect processing module 1060 may be transmitted to the encoder/decoder 1070 in order to encode/decode the image data. The encoded image data may be saved and decompressed before being displayed on the display 1080 device. The light effect processing module 1060 can also be located between the encoder/decoder 1070 and the display 1080, that is, the light effect processing module 1060 performs light effect processing on the imaged image. The encoder/decoder 1070 may be a CPU, GPU, coprocessor, or the like in a mobile terminal.
The statistics determined by the ISP processor 1040 may be sent to the control logic 1050 unit. For example, the statistical data may include image sensor 1014 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 1012 shading correction, and the like. Control logic 1050 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of imaging device 1010 and ISP processor 1040 based on the received statistical data. For example, the control parameters of the imaging device 1010 may include sensor 1020 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 1012 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), and lens 1012 shading correction parameters.
The light effect processing method as described above can be implemented using the image processing technique in fig. 10. By the light effect processing method, the effects of shadow, highlight and the like can be added to the facial features while the illumination effect is added to the image to be processed, so that the facial features have a three-dimensional effect, and the expressive force of the image is improved.
Any reference to memory, storage, database, or other medium used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The above-mentioned embodiments express only several implementations of the present application, and their descriptions are specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (14)

1. A light effect processing method, comprising:
acquiring an image to be processed, and identifying a face area in the image to be processed;
determining the position of five sense organs in the face region;
acquiring a brightening position acting on the image to be processed; the brightening position is a center position for the brightening processing;
carrying out light effect processing on the face region according to the relative relation between the brightening position and the position of the five sense organs;
the lighting effect processing is carried out on the human face region according to the relative relation between the brightening position and the position of the five sense organs, and the lighting effect processing comprises the following steps: simulating the direct projection effect of the light rays emitted from the brightening position on the five sense organs, and determining the processing area of the human face area on the five sense organs; adjusting the brightness of the processing area according to a preset adjusting strategy;
or, alternatively,
the lighting effect processing is carried out on the human face region according to the relative relation between the brightening position and the position of the five sense organs, and the lighting effect processing comprises the following steps: acquiring orientation information of the brightening position corresponding to the position of the five sense organs, wherein the orientation information comprises an angle value and a distance value; substituting the angle value and the distance value into a preset light effect processing model to determine a shadow region of the five sense organ parts; and adjusting the brightness of the shadow area according to preset adjustment parameters.
2. The method of claim 1, wherein the determining positions of facial features in the face region comprises:
extracting feature points corresponding to the facial features in the face region;
and acquiring pixel positions of the feature points, and determining the positions of the facial features according to the pixel positions.
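As a sketch of claim 2, the detected feature points of one facial feature can be reduced to a single position. The centroid used here is one plausible reduction; the claim does not fix the rule.

```python
def feature_position(feature_points):
    """Reduce the pixel positions of one facial feature's feature
    points to a single representative position (the centroid)."""
    n = len(feature_points)
    x = sum(p[0] for p in feature_points) / n
    y = sum(p[1] for p in feature_points) / n
    return (x, y)
```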
3. The method of claim 1, wherein the simulating the direct-illumination effect of the light rays emitted from the brightening position on the facial features to determine the processing area of the face region for the facial features comprises:
acquiring, by emitting structured light, three-dimensional face features corresponding to the face in the image to be processed;
and selecting an area of the face region that receives the structured light as a highlight processing area, and selecting an area of the face region that does not receive the structured light as a shadow processing area.
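Claim 3's region split can be sketched as a partition over a boolean reception map. The `lit_mask` below, standing in for whatever the structured-light hardware reports per pixel, is an assumption, not part of the patent.

```python
def split_processing_areas(face_pixels, lit_mask):
    """Partition the face region: pixels that received the structured
    light form the highlight processing area, the rest the shadow
    processing area. lit_mask maps a pixel to a received-light flag."""
    highlight = [p for p in face_pixels if lit_mask.get(p, False)]
    shadow = [p for p in face_pixels if not lit_mask.get(p, False)]
    return highlight, shadow
```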
4. The method of claim 1, wherein the adjusting the brightness of the processing area according to a preset adjustment strategy comprises:
acquiring depths of preset facial features in the processing area, and calculating a depth threshold;
and increasing the brightness of preset facial features whose depth is less than the depth threshold, and decreasing the brightness of preset facial features whose depth is greater than the depth threshold.
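Claim 4 leaves both the threshold rule and the adjustment amounts open. The sketch below assumes the mean depth as the threshold and fixed ±20% brightness changes; both choices are illustrative.

```python
def adjust_by_depth(features, up=1.2, down=0.8):
    """features maps a facial-feature name to (depth, brightness).
    Features nearer than the depth threshold (mean depth, an assumed
    rule) are brightened; farther ones are dimmed. Brightness is
    clipped to the 8-bit ceiling of 255."""
    threshold = sum(d for d, _ in features.values()) / len(features)
    return {name: (min(255.0, lum * up) if depth < threshold else lum * down)
            for name, (depth, lum) in features.items()}
```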
5. The method of claim 1, further comprising:
performing brightening processing on the image to be processed according to the brightening position;
detecting pixel brightness values in the processed face region, and determining an area to be processed according to the magnitude relationship between the pixel brightness values and a preset brightness value;
and adjusting the brightness of the area to be processed according to a preset brightness adjustment ratio.
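A minimal sketch of claim 5's follow-up pass, assuming the area to be processed is the set of pixels brighter than the preset value (the claim would equally admit the opposite comparison) and assuming a fixed adjustment ratio:

```python
def tone_down_overbright(pixels, preset_brightness=200.0, ratio=0.9):
    """After the brightening pass, pixels whose brightness exceeds the
    preset value form the area to be processed; scale them by the
    preset brightness adjustment ratio, leaving the rest untouched."""
    return [lum * ratio if lum > preset_brightness else lum
            for lum in pixels]
```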
6. The method according to claim 5, wherein the performing brightening processing on the image to be processed according to the brightening position comprises:
constructing a light effect processing model;
determining a distribution center of the light effect processing model according to the brightening position, and determining a distribution amplitude according to a brightness enhancement coefficient in the light effect processing model;
constructing a two-dimensional Gaussian distribution function according to the distribution center and the distribution amplitude;
and adding an illumination effect to the image to be processed according to the two-dimensional Gaussian distribution function.
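The steps of claim 6 map directly onto a two-dimensional Gaussian gain map: the distribution center is the brightening position and the amplitude comes from the brightness enhancement coefficient. The `gain` and `sigma` values below are illustrative, not from the patent.

```python
import math

def gaussian_light_map(width, height, center, gain, sigma):
    """Per-pixel brightness gain gain * exp(-r^2 / (2*sigma^2)),
    where r is the distance from the pixel to the distribution
    center (the brightening position)."""
    cx, cy = center
    return [[gain * math.exp(-((x - cx) ** 2 + (y - cy) ** 2)
                             / (2.0 * sigma ** 2))
             for x in range(width)]
            for y in range(height)]
```

Applying the map is then a per-pixel multiply or add on image brightness, clipped to the valid range; the gain peaks at the brightening position and falls off radially.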
7. A light effect processing apparatus, comprising:
a face recognition module, used to acquire an image to be processed and identify a face region in the image to be processed;
a facial feature determination module, used to determine positions of facial features in the face region;
a position acquisition module, used to acquire a brightening position acting on the image to be processed, the brightening position being the center position of the brightening processing;
a light effect processing module, used to simulate the direct-illumination effect, on the facial features, of light rays emitted from the brightening position, determine a processing area of the face region for the facial features, and adjust the brightness of the processing area according to a preset adjustment strategy;
or, alternatively,
the light effect processing module is used to acquire orientation information of the brightening position relative to the positions of the facial features, the orientation information comprising an angle value and a distance value; substitute the angle value and the distance value into a preset light effect processing model to determine a shadow region of the facial features; and adjust the brightness of the shadow region according to preset adjustment parameters.
8. The apparatus of claim 7, wherein
the facial feature determination module is used to extract feature points corresponding to the facial features in the face region, acquire pixel positions of the feature points, and determine the positions of the facial features according to the pixel positions.
9. The apparatus of claim 7, wherein
the light effect processing module is further used to acquire, by emitting structured light, three-dimensional face features corresponding to the face in the image to be processed; and to select an area of the face region that receives the structured light as a highlight processing area, and an area of the face region that does not receive the structured light as a shadow processing area.
10. The apparatus of claim 7, wherein
the light effect processing module is further used to acquire depths of preset facial features in the processing area and calculate a depth threshold; and to increase the brightness of preset facial features whose depth is less than the depth threshold, and decrease the brightness of preset facial features whose depth is greater than the depth threshold.
11. The apparatus of claim 7, further comprising:
a brightness adjustment module, used to perform brightening processing on the image to be processed according to the brightening position; detect pixel brightness values in the processed face region, and determine an area to be processed according to the magnitude relationship between the pixel brightness values and a preset brightness value; and adjust the brightness of the area to be processed according to a preset brightness adjustment ratio.
12. The apparatus of claim 11, wherein
the brightness adjustment module is further used to construct a light effect processing model; determine a distribution center of the light effect processing model according to the brightening position, and determine a distribution amplitude according to a brightness enhancement coefficient in the light effect processing model; construct a two-dimensional Gaussian distribution function according to the distribution center and the distribution amplitude; and add an illumination effect to the image to be processed according to the two-dimensional Gaussian distribution function.
13. A terminal comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 5.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
CN201810501410.2A 2018-05-23 2018-05-23 Light effect processing method and device, terminal and computer-readable storage medium Active CN108846807B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810501410.2A CN108846807B (en) 2018-05-23 2018-05-23 Light effect processing method and device, terminal and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810501410.2A CN108846807B (en) 2018-05-23 2018-05-23 Light effect processing method and device, terminal and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN108846807A CN108846807A (en) 2018-11-20
CN108846807B true CN108846807B (en) 2021-03-02

Family

ID=64213368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810501410.2A Active CN108846807B (en) 2018-05-23 2018-05-23 Light effect processing method and device, terminal and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN108846807B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111327814A (en) * 2018-12-17 2020-06-23 华为技术有限公司 Image processing method and electronic equipment
CN110223237A (en) * 2019-04-23 2019-09-10 维沃移动通信有限公司 Adjust the method and terminal device of image parameter
CN111563850B (en) * 2020-03-20 2023-12-05 维沃移动通信有限公司 Image processing method and electronic equipment
CN111598813B (en) * 2020-05-25 2023-05-19 抖音视界有限公司 Face image processing method and device, electronic equipment and computer readable medium
CN111951176A (en) * 2020-06-30 2020-11-17 重庆灵翎互娱科技有限公司 Method and equipment for removing local shadows of face image
CN111914775B (en) * 2020-08-06 2023-07-28 平安科技(深圳)有限公司 Living body detection method, living body detection device, electronic equipment and storage medium
CN113762212A (en) * 2021-09-27 2021-12-07 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN114119614B (en) * 2022-01-27 2022-07-26 天津风霖物联网科技有限公司 Method for remotely detecting cracks of building
CN116456199B (en) * 2023-06-16 2023-10-03 Tcl通讯科技(成都)有限公司 Shooting light supplementing method, shooting light supplementing device, electronic equipment and computer readable storage medium
CN116993133B (en) * 2023-09-27 2024-01-26 尚云(广州)信息科技有限公司 Intelligent work order system based on face recognition

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107635124A * 2017-10-31 2018-01-26 广东欧珀移动通信有限公司 White balance processing method, device and equipment for face photographing
CN107730448A * 2017-10-31 2018-02-23 北京小米移动软件有限公司 Face beautification method and device based on image processing

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101308572B (en) * 2008-06-24 2011-07-13 北京中星微电子有限公司 Luminous effect processing method and apparatus
JP2010191592A (en) * 2009-02-17 2010-09-02 Seiko Epson Corp Image processing apparatus for detecting coordinate position of characteristic portion of face
CN103605975B * 2013-11-28 2018-10-19 小米科技有限责任公司 Image processing method, apparatus and terminal device
CN104537612A * 2014-08-05 2015-04-22 华南理工大学 Method for automatically beautifying skin of facial image
CN105719234B * 2016-01-26 2018-12-11 厦门美图之家科技有限公司 Method, system and camera terminal for automatically removing gloss from a human face region
JP6685827B2 (en) * 2016-05-09 2020-04-22 キヤノン株式会社 Image processing apparatus, image processing method and program
CN107580209B (en) * 2017-10-24 2020-04-21 维沃移动通信有限公司 Photographing imaging method and device of mobile terminal
CN107730445B (en) * 2017-10-31 2022-02-18 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, storage medium, and electronic device
CN107833178A * 2017-11-24 2018-03-23 维沃移动通信有限公司 Image processing method and device, and mobile terminal

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107635124A * 2017-10-31 2018-01-26 广东欧珀移动通信有限公司 White balance processing method, device and equipment for face photographing
CN107730448A * 2017-10-31 2018-02-23 北京小米移动软件有限公司 Face beautification method and device based on image processing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Illumination normalization for robust face recognition against varying lighting conditions; Shiguang Shan et al.; 2003 IEEE International SOI Conference Proceedings (Cat. No. 03CH37443); 2003-10-27; pp. 1-8 *
Automatic dimming system based on image processing (基于图像处理的自动调光系统); 楚广生; 《应用天地》; 2015-12-31; Vol. 34, No. 12; pp. 69-72 *

Also Published As

Publication number Publication date
CN108846807A (en) 2018-11-20

Similar Documents

Publication Publication Date Title
CN108846807B (en) Light effect processing method and device, terminal and computer-readable storage medium
CN107730445B (en) Image processing method, image processing apparatus, storage medium, and electronic device
US11948282B2 (en) Image processing apparatus, image processing method, and storage medium for lighting processing on image using model data
CN108537155B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN108537749B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
WO2019085792A1 (en) Image processing method and device, readable storage medium and electronic device
CN108810406B (en) Portrait light effect processing method, device, terminal and computer readable storage medium
CN107886484B (en) Beautifying method, beautifying device, computer-readable storage medium and electronic equipment
CN107945135B (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN107451969B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108734676B (en) Image processing method and device, electronic equipment and computer readable storage medium
US10304164B2 (en) Image processing apparatus, image processing method, and storage medium for performing lighting processing for image data
JP4519708B2 (en) Imaging apparatus and method, and program
CN107730444B (en) Image processing method, image processing device, readable storage medium and computer equipment
CN108419028B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN113766125B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN107730446B (en) Image processing method, image processing device, computer equipment and computer readable storage medium
KR20200044093A (en) Image processing methods and devices, electronic devices and computer-readable storage media
CN107993209B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
JP6576083B2 (en) Image processing apparatus, image processing method, and program
KR20200023651A (en) Preview photo blurring method and apparatus and storage medium
CN108717530B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
WO2019105305A1 (en) Image brightness processing method, computer readable storage medium and electronic device
CN109242794B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN107945106B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant