CN109242794B - Image processing method, image processing device, electronic equipment and computer readable storage medium - Google Patents


Publication number
CN109242794B
CN109242794B (application CN201810997702.XA)
Authority
CN
China
Prior art keywords
light effect
brightness
image
processed
enhancement
Prior art date
Legal status
Expired - Fee Related
Application number
CN201810997702.XA
Other languages
Chinese (zh)
Other versions
CN109242794A (en)
Inventor
罗玲玲
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810997702.XA
Publication of CN109242794A
Application granted
Publication of CN109242794B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/94Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/71Circuitry for evaluating the brightness variation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face


Abstract

The embodiments of the application relate to an image processing method, an image processing device, electronic equipment and a computer-readable storage medium. The method comprises the following steps: performing face recognition on an image to be processed and determining a face region of the image; acquiring brightness information and depth information of the face region; determining a light effect enhancement coefficient in a light effect model according to the brightness information and the depth information; and performing light effect enhancement processing on the face region according to the light effect enhancement coefficient. The image processing method, image processing device, electronic equipment and computer-readable storage medium can dynamically adjust the light effect intensity, giving the portrait image a better lighting effect with simple and fast operation.

Description

Image processing method, image processing device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of electronic technology, more and more electronic devices have a shooting function, and users can shoot through the cameras of these devices. To capture a well-lit portrait, lights generally need to be arranged around the subject to produce a good lighting effect, which makes achieving good portrait lighting a complicated operation.
Disclosure of Invention
The embodiments of the application provide an image processing method, an image processing device, electronic equipment and a computer-readable storage medium, which can dynamically adjust the light effect intensity and improve the lighting effect of a portrait image with simple, fast operation.
An image processing method comprising:
carrying out face recognition on an image to be processed, and determining a face area of the image to be processed;
acquiring brightness information of the face area;
determining a brightness enhancement coefficient in a light effect model according to the brightness information;
and adding a light effect to the image to be processed according to the light effect model, wherein the brightness enhancement coefficient is used for adjusting the intensity of the light effect.
An image processing apparatus comprising:
the face recognition module is used for carrying out face recognition on an image to be processed and determining a face area of the image to be processed;
the brightness acquisition module is used for acquiring brightness information of the face area;
the coefficient determining module is used for determining a brightness enhancement coefficient in the light effect model according to the brightness information;
and the processing module is used for adding a light effect to the image to be processed according to the light effect model, and the brightness enhancement coefficient is used for adjusting the intensity of the light effect.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to carry out the method as described above.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method as set forth above.
According to the image processing method, image processing device, electronic equipment and computer-readable storage medium above, face recognition is performed on the image to be processed and its face region is determined; brightness information of the face region is acquired; the brightness enhancement coefficient in the light effect model is determined according to that brightness information; and light effect adding processing is applied to the image to be processed according to the light effect model. The light effect intensity can thus be dynamically adjusted according to the brightness information of the face, giving the portrait image a better lighting effect with simple, convenient and fast operation.
Drawings
FIG. 1 is a block diagram of an electronic device in one embodiment;
FIG. 2 is a flow diagram illustrating a method for image processing according to one embodiment;
FIG. 3 is a schematic flow chart of determining a light effect enhancement coefficient in a light effect model according to the brightness information and the depth information in one embodiment;
FIG. 4 is a schematic diagram of a process of determining the light effect enhancement coefficient according to the brightness enhancement factor and the depth information in one embodiment;
FIG. 5 is a schematic flow chart of constructing a first distribution function and constructing a second distribution function in one embodiment;
FIG. 6 is a flowchart illustrating an image processing method according to another embodiment;
FIG. 7 is a flowchart illustrating an image processing method according to another embodiment;
FIG. 8 is a block diagram of an image processing apparatus in one embodiment;
FIG. 9 is a block diagram of an image processing apparatus according to another embodiment;
FIG. 10 is a schematic diagram of an image processing circuit in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client, without departing from the scope of the present application. Both the first client and the second client are clients, but they are not the same client.
FIG. 1 is a block diagram of an electronic device in one embodiment. As shown in fig. 1, the electronic device includes a processor, a memory, a display screen, and an input device connected through a system bus. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium of the electronic device stores an operating system and a computer program, which is executed by the processor to implement the image processing method provided in the embodiments of the present application. The processor provides computing and control capability and supports the operation of the whole electronic device. The internal memory provides an environment for the execution of the computer program in the non-volatile storage medium. The display screen of the electronic device may be a liquid crystal display screen or an electronic ink display screen, and the input device may be a touch layer covering the display screen, a key, a trackball or a touch pad arranged on the housing of the electronic device, or an external keyboard, touch pad or mouse. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, etc. Those skilled in the art will appreciate that the architecture shown in fig. 1 is a block diagram of only a portion of the architecture related to the present application and does not limit the electronic devices to which the present application may be applied; a particular electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
As shown in fig. 2, in one embodiment, there is provided an image processing method including the steps of:
step 202, performing face recognition on the image to be processed, and determining a face area of the image to be processed.
In one embodiment, the scene is often very complicated during shooting, especially the light of the shooting scene is complicated and changeable, and the shooting scene cannot be changed by the user during shooting, so that the effect desired by the user can be achieved only by post-processing. The electronic equipment can obtain an image to be processed, the image to be processed is an image needing light effect enhancement, and the light effect is image enhancement simulating a light source effect. Specifically, the light source effect may be natural light, stage light, studio light, film light, contour light, newspaper light, tyndall light effect, or the like.
Optionally, the electronic device may provide a portrait light effect switch on its interface, and the user may trigger the switch to choose whether to perform portrait light effect processing on the image to be processed. Portrait light effect processing refers to adding a light effect to the image that simulates the light distribution of a studio, lighting the portrait in the image to produce a good lighting effect. The user can also select the portrait light effect mode, which may include, but is not limited to, contour light, stage light, studio light, etc.; light effects of different colors can also be realized, and users can choose according to their actual needs.
In an embodiment, the electronic device may perform face recognition on an image to be processed, and may determine a face region of the image to be processed through a face detection algorithm, where the face detection algorithm may include a detection method based on geometric features, a feature face detection method, a linear discriminant analysis method, a detection method based on a hidden markov model, and the like.
In one embodiment, the electronic device may extract image features of the image to be processed, analyze the image features through a preset face recognition model, and determine whether the image to be processed contains a face. The face recognition model can be a decision model constructed in advance through machine learning: a large number of sample images are obtained, comprising both face images and images without faces; each sample image is labeled according to whether it contains a face; and the labeled sample images are used as the input for training the face recognition model through machine learning.
The face region may be a rectangular region divided according to image features, and the rectangular region includes a face. The face area may also be an irregular area composed of edge contours of the face, and the electronic device may obtain the edge contours according to edge features of the face, thereby determining the face area.
It should be noted that the image to be processed may be a preview image in a preview process of an image captured by the electronic device, may also be an image captured by the electronic device, or an image pre-stored in the electronic device, and is not limited herein.
And step 204, acquiring brightness information and depth information of the face area.
The electronic equipment can acquire the brightness information of the face region, which characterizes the luminance of the region and may be represented by the brightness value of the face region. The brightness value of the face region may be its average brightness; for example, the electronic device may obtain the brightness value of each pixel point in the face region and calculate the average of these values as the brightness value of the region. The electronic equipment can also divide the face region into several sub-regions, calculate the average brightness of each sub-region, and compute a weighted sum of these averages according to the weight assigned to each sub-region to obtain the brightness value of the face region. It is to be understood that the brightness information of the face region may also be obtained in other manners and is not limited to the above.
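The two brightness measures above (a plain average, and a weighted sum over sub-regions) can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the function name, the horizontal banding of sub-regions, and the weight layout are assumptions.

```python
import numpy as np

def face_region_brightness(luma, weights=None):
    """Brightness information of a face region: either the plain average
    luminance, or a weighted sum of per-sub-region averages.
    `luma` is a 2-D array of per-pixel brightness values."""
    luma = np.asarray(luma, dtype=float)
    if weights is None:
        return luma.mean()
    # Split the region into len(weights) horizontal bands (an assumed
    # partition) and combine their average brightness with the weights.
    bands = np.array_split(luma, len(weights), axis=0)
    return sum(w * band.mean() for w, band in zip(weights, bands))
```

For a uniformly lit region both variants reduce to the same value; the weighted variant lets, e.g., the eye/nose band count more than the chin band.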
The electronic device can also acquire depth information of the face region. In the process of collecting the image, the electronic equipment can simultaneously acquire a depth map corresponding to the shot object, and the depth map carries the distance information of the shot object from the electronic equipment. That is, the depth information may be understood as distance information between a photographic subject and an imaging device of the electronic apparatus. For example, the depth information may be acquired based on two cameras of the electronic device, may also be acquired by a TOF technique of an infrared camera, and may also be acquired by structured light, and the obtained depth information of the photographic object may be 10 centimeters, 50 centimeters, 1 meter, 2 meters, 3 meters, or the like. Therefore, after the electronic device determines the face region, the electronic device can acquire depth information corresponding to the face region from the depth map.
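Once the face region is known, fetching its depth information from the depth map is a simple crop. A minimal sketch, assuming the face region is given as an axis-aligned bounding box (the patent also allows irregular contour regions, which this sketch does not cover):

```python
import numpy as np

def face_region_depth(depth_map, box):
    """Depth information of the face region: crop the depth map
    (distances from subject to camera) to the detected face bounding
    box, given as (x, y, width, height)."""
    x, y, w, h = box
    return np.asarray(depth_map)[y:y + h, x:x + w]
```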
And step 206, determining a light effect enhancement coefficient in the light effect model according to the brightness information and the depth information.
The electronic equipment can construct a light effect model in advance. The light effect model can be used to add a light effect to the image to be processed, simulating a studio-style light distribution and thereby lighting the portrait in the image. Optionally, the light effect model may be a superposition of two one-dimensional distribution functions with different weighting factors: for example, one of the one-dimensional distribution functions may express the distribution characteristics of the face region along the horizontal axis, and the other may express the distribution characteristics of the face region along the vertical axis, but the model is not limited thereto.
It should be noted that the horizontal axis can be understood as the width direction of the face region, that is, the direction from the left hairline to the right hairline; the vertical axis can be understood as the length direction of the face region, i.e. the direction from the hairline to the chin.
The light effect model can comprise a light effect enhancement coefficient, which is positively correlated with the intensity of the added light effect: the larger the light effect enhancement coefficient, the higher the added light intensity. The light effect enhancement coefficient may also be associated with the distribution amplitude in the light effect model; the larger the coefficient, the larger the amplitude, and the smaller the coefficient, the smaller the amplitude. After the electronic equipment acquires the brightness information and depth information of the face region, the light effect enhancement coefficient in the light effect model can be determined from them. The brightness information of the face region can be negatively correlated with the light effect enhancement coefficient: the larger the brightness, the smaller the coefficient, and vice versa. Likewise, the depth information of the face region can be negatively correlated with the light effect enhancement coefficient: the larger the depth, the smaller the coefficient, and vice versa.
For example, along the width direction of the face region, the light effect enhancement coefficient is large at the nose bridge and small at the cheeks; along the length direction, it is large at the forehead and small at the chin.
And step 208, performing light effect enhancement processing on the face region according to the light effect enhancement coefficient, wherein the light effect enhancement coefficient is a parameter influencing the intensity of the light effect enhancement processing.
After the electronic equipment determines the light effect enhancement coefficient of the light effect model according to the brightness information and the depth information, light effect enhancement processing, i.e. the processing of adding a light effect, can be performed on the face region according to the determined coefficient. In one embodiment, the light effect enhancement processing can be understood as processing that enhances the brightness of the image. The light effect enhancement coefficient determines the added light intensity: the larger the coefficient, the higher the intensity. Light effect enhancement processing is then applied to each pixel point in the image to be processed according to its coefficient; specifically, this can be done by superimposing the light effect enhancement coefficients on the image to be processed, or by multiplying the image by them.
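The multiplicative variant described above can be sketched as below. The function name and the clipping to the 8-bit range are assumptions added for a runnable illustration; the patent does not state them.

```python
import numpy as np

def apply_light_effect(image, coeff):
    """Multiplicative light effect enhancement: scale each pixel's
    brightness by its per-pixel enhancement coefficient, then clip to
    the valid 8-bit range. `image` and `coeff` share height/width."""
    image = np.asarray(image, dtype=float)
    coeff = np.asarray(coeff, dtype=float)
    return np.clip(image * coeff, 0, 255).astype(np.uint8)
```

The superposition variant would replace `image * coeff` with `image + coeff`, with the coefficients scaled to brightness units.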
Optionally, the light effect adding processing performed by the light effect model on the image to be processed may further include changing the color of the image, which refers to changing the color value of each pixel point. The color value may be a value in a color space such as RGB (red, green, blue) or HSV (hue, saturation, value). The electronic equipment can set light of different colors, such as candlelight, according to the user's needs; the adjusted color value of each pixel point under that light color is calculated according to the light effect model, and each pixel point is adjusted accordingly, achieving the effect of casting light of different colors onto the image to be processed.
In this embodiment, the face recognition is performed on the image to be processed, and a face area of the image to be processed is determined; acquiring brightness information and depth information of a face region; determining a light effect enhancement coefficient in the light effect model according to the brightness information and the depth information; the light effect is added to the face region according to the light effect enhancement coefficient, the light effect enhancement coefficient is used for adjusting the intensity of the light effect, the light effect intensity can be dynamically adjusted according to the brightness information and the depth information of the face, the portrait image has a better light effect, and the operation is simple, convenient and fast.
As shown in fig. 3, in an embodiment, the step 206 of determining a light effect enhancement coefficient in a light effect model according to the brightness information and the depth information comprises the following steps:
step 302, obtaining a brightness enhancement factor according to the brightness information.
The electronic device can set a preset brightness value, i.e. a brightness at which the brightness information of the image to be processed achieves a good effect. The preset brightness value can be set according to the user's needs; the embodiments of the application do not further limit its specific value. The electronic device can obtain the brightness enhancement factor from the acquired brightness information and the preset brightness value. The brightness enhancement factor is a brightness parameter that affects the intensity of the light effect enhancement processing.
In one embodiment, the brightness enhancement factor may be represented by the ratio of the preset brightness value to the brightness information: k = L1/L2, where k is the brightness enhancement factor, L1 is the preset brightness value, and L2 is the brightness information of the face region. By adjusting the brightness enhancement factor k according to the current brightness information, the electronic device can prevent the added light from being too dark or too bright.
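A sketch of the factor k = L1/L2. The clamping bounds are an illustrative assumption added here to reflect "preventing the added light from being too dark or too bright"; the patent gives no concrete limits.

```python
def brightness_enhancement_factor(preset_luma, face_luma, lo=0.5, hi=2.0):
    """k = L1/L2: ratio of the preset brightness value to the measured
    face brightness. A dark face yields k > 1 (brighten more), a bright
    face k < 1. The [lo, hi] clamp is an assumed safeguard."""
    k = preset_luma / face_luma
    return max(lo, min(hi, k))
```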
And step 304, determining the light effect enhancement coefficient according to the brightness enhancement factor and the depth information.
The electronic equipment can also acquire depth information of the face region in the image to be processed, from which the brightness enhancement coefficients of the light effect model can be determined. A brightness enhancement coefficient is another brightness parameter that affects the intensity of the light effect enhancement processing. The face region comprises a plurality of pixel points, and each pixel point has corresponding depth information.
In an embodiment, the light effect model comprises a first distribution function and a second distribution function. A first brightness enhancement coefficient of the face region is calculated according to the first distribution function and the depth information, and a second brightness enhancement coefficient of the face region is calculated according to the second distribution function and the depth information.
Specifically, the electronic device may establish a rectangular coordinate system in the image, defined by two mutually perpendicular straight lines, in which the coordinates and depth information of pixel points are expressed. Suppose the coordinate value of a pixel point P is (Px, Py) and its corresponding depth information is dp. A first brightness enhancement coefficient is calculated from the abscissa Px and the depth information dp, and a second brightness enhancement coefficient is calculated from the ordinate Py and the depth information dp. The first brightness enhancement coefficient decreases as the depth information increases. The second brightness enhancement coefficient may be a stepwise coefficient: within a first range, it increases as the depth information increases; within a second range, it decreases as the depth information increases.
It should be noted that the first range may be understood as a vertical coordinate range corresponding to a forehead area in the face area; the second range may be understood as a range of the ordinate corresponding to the other region of the face region than the forehead region.
The electronic device can further obtain the brightness enhancement coefficient of the light effect model by combining the first and second brightness enhancement coefficients according to their corresponding weighting factors. A weighting factor scales the influence of its coefficient: the larger the weighting factor, the larger the influence. Specifically, the weighting factor corresponding to the second brightness enhancement coefficient is larger than the weighting factor corresponding to the first brightness enhancement coefficient.
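The weighted combination can be sketched as follows. The patent states only that the second coefficient's weight exceeds the first's (b > a); the concrete values 0.4 and 0.6 are assumptions for illustration.

```python
def combined_brightness_coefficient(c1, c2, a=0.4, b=0.6):
    """Combine the first (horizontal) and second (vertical) brightness
    enhancement coefficients with weighting factors a and b. The patent
    requires b > a; the default values are illustrative only."""
    assert b > a, "second coefficient's weight must exceed the first's"
    return a * c1 + b * c2
```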
The electronic device can determine the light effect enhancement coefficient of the light effect model according to the product of the acquired brightness enhancement factor and the combined brightness enhancement coefficient.
In the embodiments provided by the application, the light effect enhancement coefficient of the light effect model is adjustable: it can be adjusted according to the brightness enhancement factor and the brightness enhancement coefficients.
As shown in fig. 4, in an embodiment, determining the light effect enhancement coefficient according to the brightness enhancement factor and the depth information comprises the following steps:
step 402, obtaining a first brightness enhancement coefficient according to the depth information and a first distribution function.
In an embodiment, the light effect model comprises a first distribution function and a second distribution function, the first distribution function and the second distribution function having different weight factors. Specifically, the light effect model can be expressed by the following formula:
P(x, y) = k[a·f1(x) + b·f2(y)]
where k is the brightness enhancement factor, f1(x) is the first distribution function and a is its weight factor, and f2(y) is the second distribution function and b is its weight factor.
In particular, the first distribution function f1(x) can be expressed as:
[formula rendered as image GDA0002814107140000081 in the source; not reproduced here]
where x_mid is the abscissa of the pixel point with the minimum depth information in the face region, x is the abscissa of any pixel point in the face region, and w is the total width of the face region. x_mid can be understood as the abscissa of the apex of the nose tip, or the abscissa of the apex of the forehead region; w can be understood as the distance from the leftmost hairline to the rightmost hairline.
The electronic device can substitute the depth information and the abscissa of each pixel point into the first distribution function to calculate the first brightness enhancement coefficient corresponding to that pixel point.
Step 404, obtaining a second brightness enhancement coefficient according to the depth information and a second distribution function.
In particular, the second distribution function f2(y) can be expressed as:
[formula rendered as image GDA0002814107140000082 in the source; not reproduced here]
where y_n is the ordinate of the pixel point whose depth information in the face region equals a preset value, y is the ordinate of any pixel point in the face region, and l is the total length of the face region. The ordinate corresponding to the preset value can be understood as the ordinate of the pixel point with the maximum depth information at the eyebrow position; l may be the distance between the forehead hairline and the chin.
The electronic device can substitute the depth information corresponding to each pixel and the ordinate of each pixel into the second distribution function, and thereby calculate the second brightness enhancement coefficient corresponding to each pixel.
Step 406, obtaining the light effect enhancement coefficient according to the brightness enhancement factor, the first brightness enhancement coefficient and the second brightness enhancement coefficient.
The electronic device can substitute the depth information corresponding to each pixel and the horizontal and vertical coordinates of each pixel into the light effect model, and thereby calculate the light effect enhancement coefficient corresponding to each pixel.
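As an illustration, the per-pixel calculation described above can be sketched as follows. This is a minimal sketch, not the patented implementation: the exact forms of f1 and f2 are not reproduced in this text, so a linear falloff for f1 (peaking at xmid) and a flat-then-falloff shape for f2 (peaking at yn, unchanged on one side) are assumed from the surrounding description, and the function name and default parameters are illustrative.

```python
import numpy as np

def light_effect_coefficients(face_lum, x_mid, y_n, w, l,
                              a=0.5, b=0.5, target_brightness=180.0):
    """Sketch of the light effect model P(x, y) = k * [a*f1(x) + b*f2(y)].

    face_lum : 2-D array of luminance values for the face region.
    x_mid, y_n : the first and second distribution centers (column/row index).
    The triangular/piecewise shapes of f1 and f2 are assumptions; the
    patent only states that f1 peaks at x_mid and decreases on both
    sides, while f2 peaks at y_n, stays flat on one side and falls off
    on the other.
    """
    h, w_px = face_lum.shape
    # Brightness enhancement factor: ratio of a preset brightness value
    # to the measured brightness information (as in claim 1).
    k = target_brightness / max(face_lum.mean(), 1e-6)

    x = np.arange(w_px)
    y = np.arange(h)
    # f1: maximum 1 at x_mid, linear falloff over the face width w.
    f1 = np.clip(1.0 - np.abs(x - x_mid) / w, 0.0, 1.0)
    # f2: flat at 1 for y >= y_n, linear falloff below it over length l.
    f2 = np.where(y >= y_n, 1.0, np.clip(1.0 - (y_n - y) / l, 0.0, 1.0))

    # Per-pixel light effect enhancement coefficient P(x, y).
    return k * (a * f1[np.newaxis, :] + b * f2[:, np.newaxis])
```

With a uniform 90-luminance face and a preset brightness of 180, k is 2, so the coefficient is largest (2.0) at the crossing of the two distribution centers and decays away from them.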
As shown in fig. 5, in one embodiment, before obtaining the first luminance enhancement coefficient and obtaining the second luminance enhancement coefficient, the steps of constructing the first distribution function and constructing the second distribution function are further included.
Constructing a first distribution function and constructing a second distribution function, comprising the steps of:
Step 502, obtaining a first brightness enhanced region having a first direction and a second brightness enhanced region having a second direction.
The image to be processed is composed of a plurality of pixel points which are arranged into a two-dimensional pixel point matrix according to a certain rule. If a coordinate system is established by taking the pixel point at the leftmost lower corner of the image as an origin, the position of any pixel point in the image can be represented by a two-dimensional xy coordinate system. The pixels in each horizontal direction (the direction parallel to the x-axis) may be referred to as a pixel row, and the pixels in each vertical direction (the direction parallel to the y-axis) may be referred to as a pixel column.
Here, the first direction may be understood as a direction parallel to the y-axis (the longitudinal direction); a first brightness enhanced region having the first direction means that all pixels of that region share the same abscissa, i.e., the first brightness enhanced region is a certain pixel column. Correspondingly, the second direction may be understood as a direction parallel to the x-axis (the transverse direction); a second brightness enhanced region having the second direction means that all pixels of that region share the same ordinate, i.e., the second brightness enhanced region is a certain pixel row.
The electronic device may acquire a first brightness enhanced region, which may refer to a pixel column region of the image to be processed that receives brightness enhancement. The first brightness enhanced region may be considered the position along the x-axis where the added light effect is most intense; centered on the first brightness enhanced region, the intensity of the added light effect may gradually decrease toward both sides.
The electronic device may acquire a second brightness enhanced region, which may refer to a pixel row region of the image to be processed that receives brightness enhancement. The second brightness enhanced region may be considered the pixel row along the y-axis where the added light effect is most intense; centered on the second brightness enhanced region, the intensity of the added light effect may gradually decrease toward one side while remaining unchanged toward the other side.
Alternatively, the first and second brightness enhanced regions may be fixed pixel columns or rows preset by the electronic device. For example, the first brightness enhanced region may be the center pixel column of the face region in the x-axis direction, and the second brightness enhanced region may be the pixel row of the face region where the eyebrows are located.
In one embodiment, the electronic device may acquire the first brightness enhanced region and the second brightness enhanced region according to the acquired depth information. When capturing an image, the electronic device can simultaneously capture a depth map corresponding to the image, where the pixels in the depth map correspond to the pixels in the image. Each pixel in the depth map represents the depth information of the corresponding pixel in the image, that is, the distance from the object corresponding to that pixel to the electronic device. According to the depth map, the depth information of each pixel in the face region can be obtained. The electronic device may obtain the first average depth information of each pixel column in the face region and the second average depth information of each pixel row in the face region according to the depth map, and take the pixel column with the largest first average depth information as the first brightness enhanced region and the pixel row with the largest second average depth information as the second brightness enhanced region. Of course, the first brightness enhanced region may be another preset pixel column and the second brightness enhanced region another preset pixel row; no limitation is imposed here.
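The depth-based selection of the two regions described above can be sketched as follows; the function name and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def enhancement_regions_from_depth(depth_map):
    """Sketch of the depth-based region selection: take the pixel
    column with the largest first average depth information as the
    first brightness enhanced region, and the pixel row with the
    largest second average depth information as the second.

    depth_map : 2-D array of depth values for the face region only.
    Returns (column index, row index).
    """
    col_means = depth_map.mean(axis=0)   # first average depth per column
    row_means = depth_map.mean(axis=1)   # second average depth per row
    first_region = int(np.argmax(col_means))   # pixel column index
    second_region = int(np.argmax(row_means))  # pixel row index
    return first_region, second_region
```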
Alternatively, the first brightness enhanced region and the second brightness enhanced region may be positions selected by the user: the user may perform a sliding operation on the image to be processed to select the desired regions. The electronic device may receive the user's sliding operation, obtain a sliding trajectory from it, and determine the first brightness enhanced region in the first direction and the second brightness enhanced region in the second direction according to the sliding trajectory.
Specifically, the direction of the sliding trajectory and the coordinates of the pixels it passes through can be obtained. When the included angle between the sliding trajectory and the y-axis is smaller than a first preset value, the selected brightness enhanced region can be considered to have the first direction. The abscissas of the pixels traversed by the sliding trajectory can then be obtained; for example, if the sliding trajectory passes through 500 pixels, the abscissas of those 500 pixels are obtained, the number of pixels sharing each abscissa is counted, and the pixel column whose abscissa occurs most frequently is taken as the first brightness enhanced region. Correspondingly, when the included angle between the sliding trajectory and the x-axis is smaller than a second preset value, the selected brightness enhanced region can be considered to have the second direction. The ordinates of the traversed pixels can then be obtained, the number of pixels sharing each ordinate counted, and the pixel row whose ordinate occurs most frequently taken as the second brightness enhanced region.
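A sketch of the swipe-classification logic above, assuming a concrete angle threshold in place of the unspecified "first/second preset value"; the helper name and the point-list format are illustrative.

```python
from collections import Counter
import math

def region_from_swipe(points, angle_threshold_deg=30.0):
    """Classify a swipe as vertical (first direction) or horizontal
    (second direction) by its angle, then take the most frequent
    abscissa (resp. ordinate) among the traversed pixels as the
    brightness enhanced region.

    points : list of (x, y) pixel coordinates along the trajectory.
    """
    (x0, y0), (x1, y1) = points[0], points[-1]
    # Angle between the trajectory's endpoints and the y-axis.
    angle_from_y = math.degrees(math.atan2(abs(x1 - x0), abs(y1 - y0)))
    if angle_from_y < angle_threshold_deg:
        # Mostly vertical swipe: the first brightness enhanced region
        # is the pixel column whose abscissa occurs most often.
        col = Counter(x for x, _ in points).most_common(1)[0][0]
        return ("column", col)
    # Otherwise treat it as horizontal: the second brightness enhanced
    # region is the pixel row whose ordinate occurs most often.
    row = Counter(y for _, y in points).most_common(1)[0][0]
    return ("row", row)
```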
The user can select the first brightening region and the second brightening region according to actual requirements, so that the requirements of different users are met, and the added light effect can be effectively improved.
It will be appreciated that the first and second brightness enhancing regions may be obtained in other ways, and are not limited to the ones described above.
Step 504, determining a first distribution center according to the first brightness enhanced region and a second distribution center according to the second brightness enhanced region.
Step 506, constructing the first distribution function according to the first distribution center, and constructing the second distribution function according to the second distribution center.
In an embodiment, the light effect model comprises a first distribution function and a second distribution function, the first distribution function and the second distribution function having different weight factors.
where k is a luminance enhancement factor, f1(x) is the first distribution function, a is the weighting factor of the first distribution function, f2(y) is the second distribution function, and b is the weighting factor of the second distribution function.
The electronic device may also determine a total width w and a total length l of the face region, where w may be understood as distance information from the leftmost hairline to the rightmost hairline and l may be distance information between the forehead hairline and the chin.
In response to determining the first brightness enhanced region, the electronic device may determine a first distribution center, which may be understood as xmid in the first distribution function f1(x), where xmid is the abscissa of the first brightness enhanced region. The first brightness enhancement coefficient is maximum at the first distribution center and decreases with distance from it, and the electronic device can determine the first distribution function f1(x) according to this variation trend of the first brightness enhancement coefficient and the first distribution center.
In particular, the first distribution function f1(x) can be expressed as:
[Equation image GDA0002814107140000111 in the original document; it defines f1(x) in terms of xmid, x, and w, with its maximum at x = xmid and decreasing values as x moves away from xmid.]
the electronic device, in accordance with the determination of the second highlight region, may determine a second distribution center, the second distribution centerCloth center can be understood as a second distribution function f2Y in (y)nWherein, ynIs the ordinate information of the second highlighted area. The second brightness enhancement coefficient is maximum at the second distribution center, the second brightness enhancement coefficient remains unchanged in the first range, the farther away from the second distribution center, the smaller the corresponding second brightness enhancement coefficient is in the second range, and the electronic device can determine the second distribution function f according to the variation trend of the second brightness enhancement coefficient and the second distribution center2(y)。
In particular, the second distribution function f2(y) can be expressed as:
[Equation image GDA0002814107140000121 in the original document; it defines f2(y) in terms of yn, y, and l, with its maximum at y = yn.]
Thus, the electronic device may determine the first distribution center from the first brightness enhanced region and the second distribution center from the second brightness enhanced region, construct the first distribution function according to the first distribution center, and construct the second distribution function according to the second distribution center.
Further, the electronic device may further determine a weighting factor a of the first distribution function and a weighting factor b of the second distribution function, and determine the light effect model according to the determined first distribution function and the second distribution function.
Specifically, the light effect model can be expressed by the following formula:
P(x,y)=k[af1(x)+bf2(y)]
the electronic equipment can construct a light effect model according to the determined first distribution center and the second distribution center, and brighten the image to be processed according to the constructed light effect model.
The electronic device can calculate the light effect enhancement coefficient of each pixel in the face region according to the light effect model and use it to obtain the brightness value after brightness enhancement processing. The electronic device can then brighten the pixels according to the light effect enhancement coefficient, adding a light effect to the image to be processed.
In this embodiment, the electronic device forms the light effect model by superposing the first distribution function and the second distribution function and adds the light effect to the image to be processed, with different brightness enhancement amplitudes for pixels at different positions, so that the image has a better light effect and the added light effect is more real and natural.
In one embodiment, before the luminance information and the depth information of the face region are acquired, a step of removing illumination information of the image to be processed is further included.
The electronic device can perform maximum (or minimum) value filtering on the image to be processed to obtain a preliminary illumination map, then perform mean (or Gaussian) filtering on the illumination map to obtain a final illumination distribution map, and finally remove the illumination information in the image to be processed according to the image to be processed and the illumination distribution map. Optionally, the electronic device may also remove the illumination information in the image to be processed based on algorithms such as RGB normalization and gamma correction.
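A minimal sketch of this illumination-removal step, assuming SciPy's filters and a pixel-wise division as the final removal step (the patent only names the filtering operations, not how the illumination map is divided out, and the filter size is an illustrative choice):

```python
import numpy as np
from scipy.ndimage import maximum_filter, uniform_filter

def remove_illumination(img, size=15):
    """Sketch: a maximum filter gives a preliminary illumination map,
    a mean filter smooths it into the final illumination distribution
    map, and the image is divided by that map to strip illumination.

    img : 2-D grayscale array; size : filter window (assumption).
    """
    img = img.astype(np.float64)
    illum = maximum_filter(img, size=size)    # preliminary illumination map
    illum = uniform_filter(illum, size=size)  # final illumination distribution map
    # Divide out the illumination; a uniformly lit image maps to all ones.
    return img / np.maximum(illum, 1e-6)
```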
In this embodiment, the electronic device can remove the illumination information in the image to be processed and then add the light effect to the face region from which the illumination information has been removed, so that the image has a better light effect and the added light effect is more real and natural. Here, the illumination information may refer to the degree to which an object is illuminated.
As shown in fig. 6, the image processing method further includes the steps of:
Step 602, detecting a portrait region of the image to be processed according to the face region.
The electronic equipment can detect a portrait area of the image to be processed according to the recognized face area, wherein the portrait area refers to the whole area containing the collected portrait in the image to be processed. The face region may belong to a part of a portrait region, and the portrait region may include collected limbs, trunk, and the like of the person in addition to the face.
In an embodiment, after the face region is determined, the electronic device may obtain depth information corresponding to the face region from the depth map, then may obtain depth information corresponding to the portrait region according to the depth information corresponding to the face region, and then may obtain the portrait region in the image to be processed according to the depth information corresponding to the portrait region.
It is understood that the portrait area may also be obtained by other methods, which are not limited in this embodiment. For example, the portrait area may be obtained by artificial intelligence, a region growing method, or the like.
Step 604, segmenting the portrait area from the image to be processed to obtain a background area.
Step 606, blurring the background area.
After detecting the portrait region, the electronic device can segment it from the image to be processed to obtain the background region outside the portrait region. The electronic device can reduce the brightness value of the background region to darken it, and re-synthesize the processed portrait region with the darkened background region to obtain the processed image. Optionally, the electronic device may blur the background region and re-synthesize the processed portrait region with the blurred background region to obtain the processed image.
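The segmentation, blurring/darkening, and re-synthesis steps can be sketched as follows; the Gaussian blur, the darkening factor, and the mask-based recomposition are illustrative assumptions, since the patent does not specify the blur kernel or how much the background is darkened.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def compose_with_blurred_background(img, portrait_mask,
                                    sigma=5.0, darken=0.6):
    """Sketch of steps 604-606: split the image into portrait and
    background with a binary mask, blur and darken the background,
    then re-synthesize portrait and background into one image.

    img : 2-D grayscale array; portrait_mask : boolean array, True
    inside the portrait region. sigma and darken are assumptions.
    """
    img = img.astype(np.float64)
    # Blur, then darken, the whole frame; only background pixels keep it.
    background = gaussian_filter(img, sigma=sigma) * darken
    mask = portrait_mask.astype(bool)
    # Portrait pixels come from the original, the rest from the
    # blurred-and-darkened background.
    return np.where(mask, img, background)
```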
In this embodiment, the electronic device can give the portrait region of the image to be processed a better light effect after the background blurring processing, so that the simulated stage-lighting effect is more realistic.
As shown in fig. 7, the image processing method includes the steps of:
step 702, removing illumination information of the image to be processed;
step 704, performing face recognition on the image to be processed, and determining a face area of the image to be processed;
step 706, detecting a portrait region of the image to be processed according to the face region;
step 708, segmenting the portrait area from the image to be processed to obtain a background area;
step 710, blurring the background area;
step 712, acquiring brightness information and depth information of the face region;
step 714, determining a light effect enhancement coefficient in the light effect model according to the brightness information and the depth information;
step 716, performing light effect enhancement processing on the face region according to the light effect enhancement coefficient, wherein the light effect enhancement coefficient is a parameter that influences the intensity of the light effect enhancement processing.
In this embodiment, the electronic device can remove the illumination information of the image to be processed, perform light effect enhancement processing on the face region, and blur the background region, so that the face region has a better light effect and the simulated stage-lighting effect is more realistic.
It should be understood that although the various steps in the flow charts of fig. 3-7 are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least some of the steps in fig. 3-7 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performance of the sub-steps or stages is not necessarily sequential, but may be performed in turn or alternating with other steps or at least some of the sub-steps or stages of other steps.
As shown in fig. 8, in an embodiment, an image processing apparatus is provided, which includes a face recognition module 810, a depth acquisition module 820, a coefficient determination module 830, and a light effect enhancement module 840.
A face recognition module 810, configured to perform face recognition on the image to be processed, and determine a face area of the image to be processed;
a depth obtaining module 820, configured to obtain brightness information and depth information of the face region;
a coefficient determining module 830, configured to determine a light efficiency enhancement coefficient in a light efficiency model according to the brightness information and the depth information;
and the lighting effect enhancement module 840 is used for adding a light effect to the face region according to the lighting effect enhancement coefficient, and the lighting effect enhancement coefficient is used for adjusting the intensity of the light effect.
In this embodiment, face recognition is performed on the image to be processed and the face region is determined; brightness information and depth information of the face region are acquired; a light effect enhancement coefficient in a light effect model is determined according to the brightness information and the depth information; and light effect enhancement processing is performed on the face region according to the light effect enhancement coefficient, which is a parameter that influences the intensity of the light effect enhancement processing. The light effect intensity can thus be dynamically adjusted according to the brightness and depth information of the face, giving the portrait image a better light effect with simple and convenient operation.
In one embodiment, the coefficient determination module 830 includes:
a first coefficient determining unit for obtaining a brightness enhancement factor according to the brightness information; the brightness enhancement factor is a first brightness parameter which influences the light effect enhancement processing intensity;
and the second coefficient determining unit is used for determining the light effect enhancement coefficient according to the brightness enhancement factor and the depth information.
In one embodiment, the light effect model comprises a first distribution function and a second distribution function;
the second coefficient determining unit is used for acquiring a first brightness enhancement coefficient according to the depth information and the first distribution function; acquiring a second brightness enhancement coefficient according to the depth information and a second distribution function; and acquiring the light effect enhancement coefficient according to the brightness enhancement factor, the first brightness enhancement coefficient and the second brightness enhancement coefficient.
In one embodiment, the second coefficient determination unit further obtains a first highlighted region in a first direction and a second highlighted region in a second direction from the depth information; determining a first distribution center from the first highlighted region and a second distribution center from the second highlighted region; and constructing the first distribution function according to the first distribution center, and constructing the second distribution function according to the second distribution center.
In one embodiment, the second coefficient determination unit is further configured to receive a sliding operation of a user; a first highlighted region in a first direction and a second highlighted region in a second direction are determined from the sliding trajectory.
As shown in fig. 9, in an embodiment, the image processing apparatus includes a face recognition module 910, a depth obtaining module 920, a coefficient determining module 930, and a light effect enhancing module 940, and further includes:
and an illumination removing module 950, configured to remove illumination information of the image to be processed.
In this embodiment, the electronic device can remove the illumination information in the image to be processed and then add the light effect to the face region from which the illumination information has been removed, so that the image has a better light effect and the added light effect is more real and natural. Here, the illumination information may refer to the degree to which an object is illuminated.
In one embodiment, the image processing apparatus further includes:
a portrait detection module 960, configured to detect a portrait area of the image to be processed according to the face area;
a background confirmation module 970, configured to segment the portrait area from the image to be processed to obtain a background area;
a background blurring module 980, configured to blur the background region.
In this embodiment, the electronic device can give the portrait region of the image to be processed a better light effect after the background blurring processing, so that the simulated stage-lighting effect is more realistic.
The embodiment of the application also provides the electronic equipment. The electronic device includes therein an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 10 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 10, for convenience of explanation, only aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in fig. 10, the image processing circuit includes an ISP processor 1040 and control logic 1050. The image data captured by the imaging device 1010 is first processed by the ISP processor 1040, and the ISP processor 1040 analyzes the image data to capture image statistics that may be used to determine and/or control one or more parameters of the imaging device 1010. The imaging device 1010 may include a camera having one or more lenses 1012 and an image sensor 1014. The image sensor 1014 may include an array of color filters (e.g., Bayer filters), and the image sensor 1014 may acquire light intensity and wavelength information captured with each imaging pixel of the image sensor 1014 and provide a set of raw image data that may be processed by the ISP processor 1040. The sensor 1020 (e.g., a gyroscope) may provide parameters of the acquired image processing (e.g., anti-shake parameters) to the ISP processor 1040 based on the type of sensor 1020 interface. The sensor 1020 interface may utilize an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
In addition, the image sensor 1014 may also send raw image data to the sensor 1020, the sensor 1020 may provide the raw image data to the ISP processor 1040 based on the type of interface of the sensor 1020, or the sensor 1020 may store the raw image data in the image memory 1030.
The ISP processor 1040 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and ISP processor 1040 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
ISP processor 1040 may also receive image data from image memory 1030. For example, the sensor 1020 interface sends raw image data to the image memory 1030, and the raw image data in the image memory 1030 is then provided to the ISP processor 1040 for processing. The image Memory 1030 may be part of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from image sensor 1014 interface or from sensor 1020 interface or from image memory 1030, ISP processor 1040 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 1030 for additional processing before being displayed. ISP processor 1040 may also receive processed data from image memory 1030 for image data processing in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 1080 for viewing by a user and/or for further Processing by a Graphics Processing Unit (GPU). Further, the output of ISP processor 1040 can also be sent to image memory 1030, and display 1080 can read image data from image memory 1030. In one embodiment, image memory 1030 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 1040 may be transmitted to the encoder/decoder 1070 in order to encode/decode image data. The encoded image data may be saved and decompressed before being displayed on the display 1080 device.
The steps of the ISP processor 1040 processing the image data include: the image data is subjected to VFE (Video Front End) Processing and CPP (Camera Post Processing). The VFE processing of the image data may include modifying the contrast or brightness of the image data, modifying digitally recorded lighting status data, performing compensation processing (e.g., white balance, automatic gain control, gamma correction, etc.) on the image data, performing filter processing on the image data, etc. CPP processing of image data may include scaling an image, providing a preview frame and a record frame to each path. Among other things, the CPP may use different codecs to process the preview and record frames.
The image data processed by the ISP processor 1040 may be sent to the light effect module 1060 for processing the image according to the light effect model to add light effects before being displayed. The light effect module 1060 can be a Central Processing Unit (CPU), a GPU, a coprocessor, or the like in the electronic device. The data processed by the light effect module 1060 may be transmitted to the encoder/decoder 1070 in order to encode/decode image data. The encoded image data may be saved and decompressed before being displayed on the display 1080 device. The light effect module 1060 can also be located between the encoder/decoder 1070 and the display 1080, that is, the light effect module 1060 adds light effect processing to the imaged image. The encoder/decoder 1070 may be a CPU, GPU, coprocessor, or the like in an electronic device.
The statistics determined by the ISP processor 1040 may be sent to the control logic 1050 unit. For example, the statistical data may include image sensor 1014 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 1012 shading correction, and the like. Control logic 1050 may include a processor and/or microcontroller executing one or more routines, such as firmware, that may determine control parameters of imaging device 1010 and ISP processor 1040 based on the received statistical data. For example, the control parameters of the imaging device 1010 may include sensor 1020 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 1012 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), and lens 1012 shading correction parameters.
In this embodiment, the image processing method described above can be implemented by using the image processing technique shown in fig. 10.
In one embodiment, an electronic device is provided, comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of:
carrying out face recognition on the image to be processed, and determining a face area of the image to be processed;
acquiring brightness information of a face region;
determining a brightness enhancement coefficient in the light effect model according to the brightness information;
and performing light ray effect adding treatment on the image to be treated according to the light effect model, wherein the brightness enhancement coefficient is used for adjusting the intensity of the light ray effect.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, realizes the above-mentioned image processing method.
In one embodiment, a computer program product is provided that comprises a computer program, which, when run on an electronic device, causes the electronic device to perform the image processing method described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), or the like.
Any reference to memory, storage, a database, or other medium as used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the embodiments described above may be combined arbitrarily. For the sake of brevity, not all possible combinations of these technical features are described, but any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An image processing method, comprising:
carrying out face recognition on an image to be processed, and determining a face area of the image to be processed;
acquiring brightness information and depth information of the face area;
acquiring a brightness enhancement factor according to the brightness information, wherein the brightness enhancement factor is represented by a ratio of a preset brightness value to the brightness information; the brightness enhancement factor is a first brightness parameter which influences the light effect enhancement processing intensity;
determining a light effect enhancement coefficient in a light effect model according to the brightness enhancement factor and the depth information; the light effect model is used for adding light effect processing to the image to be processed, and the light effect enhancement coefficient and the added light effect intensity have a positive correlation;
and carrying out light effect enhancement processing on the face region according to the light effect enhancement coefficient, wherein the light effect enhancement coefficient is a parameter influencing the light effect enhancement processing intensity.
2. The method of claim 1, wherein the light effect model comprises a first distribution function and a second distribution function;
the determining the light effect enhancement coefficient according to the brightness enhancement factor and the depth information comprises:
acquiring a first brightness enhancement coefficient according to the depth information and a first distribution function;
acquiring a second brightness enhancement coefficient according to the depth information and a second distribution function;
and acquiring the light effect enhancement coefficient according to the brightness enhancement factor, the first brightness enhancement coefficient and the second brightness enhancement coefficient.
3. The method of claim 2, wherein before acquiring the first brightness enhancement coefficient, the method further comprises:
acquiring a first brightness enhanced region in a first direction and a second brightness enhanced region in a second direction;
determining a first distribution center from the first brightness enhanced region and a second distribution center from the second brightness enhanced region;
and constructing the first distribution function according to the first distribution center, and constructing the second distribution function according to the second distribution center.
4. The method as recited in claim 3, wherein said acquiring a first brightness enhanced region in a first direction and a second brightness enhanced region in a second direction comprises:
receiving a sliding operation of a user;
and determining the first brightness enhanced region in the first direction and the second brightness enhanced region in the second direction according to the sliding trajectory.
5. The method of claim 2, wherein the first distribution function and the second distribution function have different weighting factors, and the light effect model is expressed by the following formula:
P(x, y) = k[a·f1(x) + b·f2(y)], where k is the brightness enhancement factor, f1(x) is the first distribution function, a is a weighting factor of the first distribution function, f2(y) is the second distribution function, and b is a weighting factor of the second distribution function.
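The claim leaves the concrete form of f1 and f2 open; as one plausible reading, the sketch below instantiates both as Gaussian distribution functions centered on the first and second distribution centers (cx, cy). The Gaussian form and the sigma parameter are assumptions for illustration only.

```python
import numpy as np

def light_effect_map(shape, k, a, b, cx, cy, sigma=30.0):
    """P(x, y) = k * [a * f1(x) + b * f2(y)], with Gaussian f1, f2
    centered on the assumed distribution centers cx (first direction)
    and cy (second direction)."""
    h, w = shape
    x = np.arange(w, dtype=np.float64)
    y = np.arange(h, dtype=np.float64)
    f1 = np.exp(-((x - cx) ** 2) / (2 * sigma ** 2))  # first distribution function
    f2 = np.exp(-((y - cy) ** 2) / (2 * sigma ** 2))  # second distribution function
    # Broadcast the two 1-D profiles into a full per-pixel enhancement map.
    return k * (a * f1[np.newaxis, :] + b * f2[:, np.newaxis])
```

The map peaks at the distribution centers with value k·(a + b) and falls off along each direction, which matches the claimed positive correlation between the enhancement coefficient and the added light effect intensity.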
6. The method according to claim 1, wherein before obtaining the brightness information and the depth information of the face region, the method further comprises:
and removing the illumination information of the image to be processed.
7. The method according to any one of claims 1-6, further comprising:
detecting a portrait area of the image to be processed according to the face area;
segmenting the portrait area from the image to be processed to obtain a background area;
and blurring the background area.
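A minimal sketch of the segmentation-and-blur step in claim 7, assuming the portrait area is already available as a boolean mask and using a simple mean filter in place of whatever blurring the implementation actually uses; all names here are illustrative.

```python
import numpy as np

def blur_background(image, portrait_mask, ksize=5):
    """Blur everything outside the portrait mask with a mean filter."""
    pad = ksize // 2
    padded = np.pad(image.astype(np.float64), pad, mode="edge")
    h, w = image.shape
    blurred = np.zeros((h, w), dtype=np.float64)
    # Accumulate shifted copies to compute a ksize x ksize box average.
    for dy in range(ksize):
        for dx in range(ksize):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= ksize * ksize
    # Keep portrait pixels sharp; replace background pixels with the blur.
    out = np.where(portrait_mask, image, blurred)
    return out.astype(np.uint8)
```

A production implementation would likely use a Gaussian rather than a box kernel, but the structure (mask in, sharp foreground, blurred background out) is the same.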
8. An image processing apparatus characterized by comprising:
the face recognition module is used for carrying out face recognition on an image to be processed and determining a face area of the image to be processed;
the depth acquisition module is used for acquiring brightness information and depth information of the face area;
a coefficient determining module, configured to obtain a brightness enhancement factor according to the brightness information, where the brightness enhancement factor is represented by a ratio of a preset brightness value to the brightness information; the brightness enhancement factor is a first brightness parameter which influences the light effect enhancement processing intensity;
determining a light effect enhancement coefficient in a light effect model according to the brightness enhancement factor and the depth information; the light effect model is used for adding light effect processing to the image to be processed, and the light effect enhancement coefficient and the added light effect intensity have a positive correlation;
and the light effect enhancement module is used for performing light effect enhancement processing on the face region according to the light effect enhancement coefficient, wherein the light effect enhancement coefficient is used for adjusting the intensity of the light effect.
9. An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to carry out the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN201810997702.XA 2018-08-29 2018-08-29 Image processing method, image processing device, electronic equipment and computer readable storage medium Expired - Fee Related CN109242794B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810997702.XA CN109242794B (en) 2018-08-29 2018-08-29 Image processing method, image processing device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810997702.XA CN109242794B (en) 2018-08-29 2018-08-29 Image processing method, image processing device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN109242794A CN109242794A (en) 2019-01-18
CN109242794B true CN109242794B (en) 2021-05-11

Family

ID=65068721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810997702.XA Expired - Fee Related CN109242794B (en) 2018-08-29 2018-08-29 Image processing method, image processing device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN109242794B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276290B (en) * 2019-06-17 2024-04-19 深圳市繁维科技有限公司 Quick face model acquisition method and quick face model acquisition device based on TOF module
CN110354499B (en) * 2019-07-15 2023-05-16 网易(杭州)网络有限公司 Contour light control method and device
CN110706162A (en) * 2019-09-02 2020-01-17 深圳传音控股股份有限公司 Image processing method and device and computer storage medium
CN111314618B (en) * 2020-03-17 2021-09-28 Tcl移动通信科技(宁波)有限公司 Shooting method, shooting device, storage medium and mobile terminal
CN112102207A (en) * 2020-10-29 2020-12-18 北京澎思科技有限公司 Method and device for determining temperature, electronic equipment and readable storage medium
CN113096231B (en) * 2021-03-18 2023-10-31 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN114615440A (en) * 2022-03-08 2022-06-10 维沃移动通信有限公司 Photographing method, photographing apparatus, electronic device, and readable storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN104349072A (en) * 2013-08-09 2015-02-11 联想(北京)有限公司 Control method, device and electronic equipment
CN107018323A (en) * 2017-03-09 2017-08-04 广东欧珀移动通信有限公司 Control method, control device and electronic installation
CN107241558A (en) * 2017-06-16 2017-10-10 广东欧珀移动通信有限公司 Exposure processing method, device and terminal device
CN107451969A (en) * 2017-07-27 2017-12-08 广东欧珀移动通信有限公司 Image processing method, device, mobile terminal and computer-readable recording medium

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN101308572B (en) * 2008-06-24 2011-07-13 北京中星微电子有限公司 Luminous effect processing method and apparatus
CN102499711B (en) * 2011-09-28 2013-07-10 无锡祥生医学影像有限责任公司 Three-dimensional or four-dimensional automatic ultrasound image optimization and adjustment method
WO2017108703A1 (en) * 2015-12-24 2017-06-29 Unilever Plc Augmented mirror
CN107241557A (en) * 2017-06-16 2017-10-10 广东欧珀移动通信有限公司 Image exposure method, device, picture pick-up device and storage medium
CN107730445B (en) * 2017-10-31 2022-02-18 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, storage medium, and electronic device
CN108419028B (en) * 2018-03-20 2020-07-17 Oppo广东移动通信有限公司 Image processing method, image processing device, computer-readable storage medium and electronic equipment

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN104349072A (en) * 2013-08-09 2015-02-11 联想(北京)有限公司 Control method, device and electronic equipment
CN107018323A (en) * 2017-03-09 2017-08-04 广东欧珀移动通信有限公司 Control method, control device and electronic installation
CN107241558A (en) * 2017-06-16 2017-10-10 广东欧珀移动通信有限公司 Exposure processing method, device and terminal device
CN107451969A (en) * 2017-07-27 2017-12-08 广东欧珀移动通信有限公司 Image processing method, device, mobile terminal and computer-readable recording medium

Non-Patent Citations (3)

Title
Wang Chengyi, "A human eye localization method under complex illumination conditions," Electronic Test, No. 5, Mar. 31, 2013, pp. 25-27. *
Yang Mei, "Face recognition under varying illumination environments," China Master's Theses Full-text Database, Information Science and Technology, No. 12, Dec. 15, 2014, I138-350. *
Hu Zhenzhen, "Research on face rotation angle estimation and 3D face recognition based on depth data," China Master's Theses Full-text Database, Information Science and Technology, No. 9, Sep. 15, 2011, I138-1140. *

Also Published As

Publication number Publication date
CN109242794A (en) 2019-01-18

Similar Documents

Publication Publication Date Title
CN108537155B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN109242794B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN107680128B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN108537749B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108419028B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN107451969B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107886484B (en) Beautifying method, beautifying device, computer-readable storage medium and electronic equipment
CN107730444B (en) Image processing method, image processing device, readable storage medium and computer equipment
CN107945135B (en) Image processing method, image processing apparatus, storage medium, and electronic device
JP6903816B2 (en) Image processing method and equipment
CN107730446B (en) Image processing method, image processing device, computer equipment and computer readable storage medium
CN107993209B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108846807B (en) Light effect processing method and device, terminal and computer-readable storage medium
CN108111749B (en) Image processing method and device
CN107509031B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108717530B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN113766125B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN108734676B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110149482A (en) Focusing method, device, electronic equipment and computer readable storage medium
CN108055452A (en) Image processing method, device and equipment
CN107862658B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN109191403A (en) Image processing method and device, electronic equipment, computer readable storage medium
CN108322651B (en) Photographing method and device, electronic equipment and computer readable storage medium
CN107945106B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN109040618A (en) Video generation method and device, storage medium, electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210511