CN112367470B - Image processing method and device and electronic equipment - Google Patents

Image processing method and device and electronic equipment

Info

Publication number
CN112367470B
Authority
CN
China
Prior art keywords
image
face
processed
target
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011200199.4A
Other languages
Chinese (zh)
Other versions
CN112367470A (en)
Inventor
王嗣舜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202011200199.4A priority Critical patent/CN112367470B/en
Publication of CN112367470A publication Critical patent/CN112367470A/en
Application granted granted Critical
Publication of CN112367470B publication Critical patent/CN112367470B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Abstract

The application discloses an image processing method, an image processing device and electronic equipment, and belongs to the technical field of image processing. The method comprises the following steps: acquiring an image to be processed through a camera; when the image to be processed is a backlight scene, acquiring a target brightness characteristic value of the image to be processed and identifying a target face in the image to be processed; acquiring a first face image matched with the target brightness characteristic value and the target face; and synthesizing the image to be processed and the first face image to obtain a target image. The method and the device can avoid motion ghosting, face-edge halos and abnormal face-region brightness, and effectively solve the problem of dark portraits when photographing in a backlight scene.

Description

Image processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image processing method and device and electronic equipment.
Background
In daily life, people pay great attention to how portraits and, in particular, faces are rendered when taking photographs. Shooting scenes are numerous and complex, and when a picture is taken in a backlight scene, the portrait as a whole, and especially the face, comes out dark.
To address this problem, the prior art typically either captures multiple frames at different exposures, extracts the portrait and face from a normally exposed frame and synthesizes them into the final output image, or synthesizes multiple frames captured at the same exposure and then directly increases the brightness of the face region.
The first approach is prone to motion ghosting, face-edge halos and abnormal face-region brightness, while the second approach introduces relatively large noise in the face region of the image.
Disclosure of Invention
Embodiments of the application aim to provide an image processing method, an image processing apparatus and an electronic device that can solve the prior-art problems that the face region is dark, that motion ghosting, face-edge halos and abnormal face-region brightness easily occur, or that the face region exhibits relatively large noise.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring an image to be processed through a camera;
when the image to be processed is a backlight scene, acquiring a target brightness characteristic value of the image to be processed, and identifying a target face in the image to be processed;
acquiring a first face image matched with the target brightness characteristic value and the target face;
and synthesizing the image to be processed and the first face image to obtain a target image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the image to be processed acquisition module is used for acquiring an image to be processed through the camera;
the target face recognition module is used for acquiring a target brightness characteristic value of the image to be processed and recognizing a target face in the image to be processed when the image to be processed is a backlight scene;
the first image acquisition module is used for acquiring a first face image matched with the target brightness characteristic value and the target face;
and the target image acquisition module is used for synthesizing the image to be processed and the first face image to obtain a target image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the image processing method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the image processing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the image processing method according to the first aspect.
In the embodiment of the application, the image to be processed is obtained through the camera; when the image to be processed is a backlight scene, the target brightness characteristic value of the image to be processed is obtained and the target face in the image to be processed is identified; a first face image matched with the target brightness characteristic value and the target face is obtained; and the image to be processed and the first face image are synthesized to obtain the target image. According to the embodiment of the application, face recognition and matching are performed on the portrait in the backlight scene by combining the user's face from historical photographing data, and face synthesis is then performed, so that motion ghosting, face-edge halos and abnormal face-region brightness can be avoided while the problem of dark portraits when photographing in a backlight scene is effectively solved.
Drawings
Fig. 1 is a flowchart illustrating steps of an image processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first", "second" and the like are generally used in a generic sense and do not limit the number of objects; for example, the first object can be one or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The image generation scheme provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, a flowchart illustrating steps of an image processing method provided in an embodiment of the present application is shown, and as shown in fig. 1, the image processing method may specifically include the following steps:
step 101: and acquiring an image to be processed through a camera.
The method and apparatus of the application can be applied to scenarios in which the face in an image to be processed, captured against backlight, needs to be enhanced.
The image to be processed refers to an image which is obtained by backlight shooting and needs to be subjected to face processing.
When a user carries out backlight shooting, a camera on the electronic equipment can be used for acquiring a frame of face image shot in backlight mode, and at the moment, the face image can be used as an image to be processed.
After the image to be processed is acquired by the camera, step 102 is executed.
Step 102: and when the image to be processed is a backlight scene, acquiring a target brightness characteristic value of the image to be processed, and identifying a target face in the image to be processed.
The brightness characteristic value is a characteristic value obtained by combining, according to weights, the average brightness of the background and the average brightness of the face in a face image under a given scene. In this example, the regions of the face image other than the region where the face is located may be used as the background.
The target brightness characteristic value is the characteristic value obtained by combining, according to preset weights, the average brightness of the background and the average brightness of the face in the image to be processed.
After the image to be processed is obtained, the face average brightness of the face part and the background average brightness of the background part in the image to be processed can be obtained, and the target brightness characteristic value corresponding to the image to be processed is then calculated by combining the weight corresponding to the face average brightness and the weight corresponding to the background average brightness. The details are described in the following specific implementation.
In a specific implementation manner of the present application, the step 102 may include:
substep A1: and acquiring a first average brightness value of a face area in the image to be processed and a second average brightness value of a non-face area in the image to be processed.
In this embodiment, the first average brightness value refers to the average brightness of the face region in the image to be processed, and may be obtained as follows: the sum of the pixel brightness of all pixels in the face region of the image to be processed is obtained, and the first average brightness value is calculated by dividing this sum by the number of pixels in that region.
The second average brightness value refers to an average value of brightness of a non-face region in the image to be processed, and the obtaining process of the second average brightness value may be: and acquiring the sum of the pixel brightness of all pixels in the non-face area in the image to be processed, and calculating to obtain a second average brightness value by combining the pixel number of all pixels in the non-face area.
After the image to be processed is obtained, whether the image to be processed is an image shot under a backlight scene or not can be judged, if the image to be processed is the image shot under the backlight scene, face recognition is carried out on the image to be processed so as to recognize a face area and a non-face area in the image to be processed, and then a first average brightness value of the face area and a second average brightness value of the non-face area can be calculated respectively.
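Sub-step A1 can be illustrated with a short sketch. The Python snippet below is a minimal example only: it assumes OpenCV's bundled Haar-cascade face detector and treats the grayscale image as the brightness channel, neither of which is specified by this application.

```python
import cv2
import numpy as np

def region_mean_brightness(image_bgr):
    """Sub-step A1: first (face) and second (non-face) average brightness values."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    face_mask = np.zeros(gray.shape, dtype=bool)
    for (x, y, w, h) in faces:
        face_mask[y:y + h, x:x + w] = True

    # First average brightness value: sum of pixel brightness in the face region
    # divided by the number of pixels in that region.
    first_avg = float(gray[face_mask].mean()) if face_mask.any() else 0.0
    # Second average brightness value: the same computation over the non-face region.
    second_avg = float(gray[~face_mask].mean()) if (~face_mask).any() else 0.0
    return first_avg, second_avg
```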
After the first average brightness value and the second average brightness value are obtained, sub-step A2 is performed.
Substep A2: and determining the target brightness characteristic value according to the first average brightness value, the second average brightness value, a first weight corresponding to the first average brightness value and a second weight corresponding to the second average brightness value.
The first weight is the weight corresponding to the first average brightness value, and the second weight is the weight corresponding to the second average brightness value. Wherein the sum of the first weight and the second weight is equal to 1.
After the first average luminance value and the second average luminance value are obtained, the target luminance characteristic value may be calculated from the first average luminance value, the second average luminance value, the first weight, and the second weight, and specifically, may be as shown in the following formula (1):
λ = kF·LF + kB·LB (1)
In the above formula (1), λ is the target brightness characteristic value, LF is the first average brightness value, kF is the first weight, LB is the second average brightness value, and kB is the second weight. Since the human face plays a decisive role in characterizing the brightness characteristic value, kF and kB are set to satisfy kF + kB = 1 and kF > 0.7.
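Formula (1) can be written directly in code. In the minimal sketch below, the weight values 0.8 and 0.2 are example choices assumed only to satisfy the stated constraints kF + kB = 1 and kF > 0.7; they are not taken from this application.

```python
def target_brightness_feature(l_face, l_background, k_face=0.8, k_background=0.2):
    """Formula (1): weighted combination of face and background average brightness."""
    assert abs(k_face + k_background - 1.0) < 1e-9 and k_face > 0.7
    return k_face * l_face + k_background * l_background
```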
The target face refers to a face that needs to be beautified in the image to be processed, and it can be understood that when the image to be processed only includes one face, the face may be regarded as the target face, and when the image to be processed includes a plurality of faces, one or more faces of the plurality of faces may be regarded as the target face.
After the image to be processed is acquired, face recognition can be performed on the image to be processed so as to recognize and obtain a target face in the image to be processed.
After the target brightness characteristic value of the image to be processed is obtained and the target face in the image to be processed is identified, step 103 is executed.
Step 103: and acquiring a first face image matched with the target brightness characteristic value and the target face.
The first face image is the acquired face image matched with the target brightness characteristic value and the target face.
In this embodiment, the association relationship between different brightness characteristic values of the face, the face of the user and the face image may be established according to the historical photos in the album of the electronic device, so as to obtain a matched face image according to the obtained target brightness characteristic value and the target face in the subsequent process, and the establishment process of the association relationship may be described as follows in combination with the following specific implementation manner.
In another specific implementation manner of the present application, before the step 101, the method may further include:
step B1: and acquiring a plurality of face images containing the faces of the users.
In the embodiment of the application, a plurality of face images containing faces of users can be acquired from electronic equipment, and the acquired face images can be face images shot in a backlight shooting scene and serve as samples for establishing an association relationship.
After acquiring a plurality of face images including the faces of the user, step B2 is performed.
Step B2: and aiming at each face image, acquiring a brightness characteristic value corresponding to the face image.
The human face features refer to the facial features of a user; in practice, each person's facial features are different. When the association relationship is established, the association among the user's face, the brightness characteristic values and the face images can be built by combining the user's face with the brightness characteristic values of the different face images.
After acquiring a plurality of face images including the faces of the user, the luminance characteristic value of each face image may be acquired, and then step B3 is performed.
Step B3: and establishing and storing a corresponding relation among the user face, the face image and the brightness characteristic value.
After the brightness characteristic value of the face image is obtained, the corresponding relationship among the face of the user, the face image and the brightness characteristic value may be established.
After the correspondence is established, when the target face of a target user in the image to be processed is acquired, the corresponding first face image can be matched according to the target face and the target brightness characteristic value.
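The correspondence established in steps B1 to B3 and the matching in step 103 can be sketched as a simple lookup structure. The face-identity key, the brightness tolerance and the use of image paths below are assumptions made for illustration; this application does not specify how faces are identified or how closely a stored brightness characteristic value must match the target value.

```python
from collections import defaultdict

class FaceBrightnessIndex:
    """Correspondence among user face, face image and brightness characteristic value."""

    def __init__(self):
        # face identity -> list of (brightness characteristic value, face image path)
        self._entries = defaultdict(list)

    def add(self, face_id, brightness_value, image_path):
        # Step B3: store the relation built from one historical photo.
        self._entries[face_id].append((brightness_value, image_path))

    def match(self, face_id, target_brightness, tolerance=10.0):
        # Step 103: return the stored face image whose brightness characteristic
        # value is closest to the target, or None when nothing is close enough
        # (in which case the multi-frame fallback described later applies).
        candidates = self._entries.get(face_id, [])
        if not candidates:
            return None
        value, path = min(candidates, key=lambda e: abs(e[0] - target_brightness))
        return path if abs(value - target_brightness) <= tolerance else None
```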
After the first face image matching the target luminance characteristic value and the target face is acquired, step 104 is performed.
Step 104: and synthesizing the image to be processed and the first face image to obtain a target image.
After the first face image matched with the target brightness characteristic value and the target face is obtained, the image to be processed and the first face image can be synthesized to obtain the target image, and specifically, target face image information in the image to be processed can be replaced by target face image information in the first face image, so that the target image is obtained.
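Step 104 can be sketched as a face-region replacement. The use of OpenCV seamless (Poisson) cloning below is an assumption intended to keep the pasted face consistent with the surrounding pixels; this application only states that the target face image information in the image to be processed is replaced with that of the first face image.

```python
import cv2
import numpy as np

def synthesize(image_to_process, first_face_image, face_box):
    """Step 104: replace the target face region with the matched first face image."""
    x, y, w, h = face_box  # location of the target face in the image to be processed
    patch = cv2.resize(first_face_image, (w, h))
    mask = np.full((h, w), 255, dtype=np.uint8)
    center = (x + w // 2, y + h // 2)
    # Poisson (seamless) cloning blends the pasted face with the surrounding pixels,
    # which helps avoid a visible edge around the replaced region.
    return cv2.seamlessClone(patch, image_to_process, mask, center, cv2.NORMAL_CLONE)
```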
According to the embodiment of the application, face recognition and matching are performed on the portrait in the backlight scene by combining the user's face from historical photographing data, and face synthesis is then performed, so that motion ghosting, face-edge halos and abnormal face-region brightness can be avoided while the problem of dark portraits when photographing in a backlight scene is effectively solved.
When no face image matching the target brightness characteristic value and the target face is acquired, a multi-frame image synthesis technique may be adopted to obtain a synthesized image, as described in detail in the following specific implementation.
In another specific implementation manner of the present application, after the step 103, the method may further include:
step S1: and under the condition that a face image matched with the target brightness characteristic value and the target face is not acquired, acquiring a plurality of continuous second face images which are related to the image to be processed and contain the target face.
In the embodiment of the present application, the second face image refers to an image containing a target face associated with an image to be processed.
When a matching face image is not acquired according to the target brightness characteristic value and the target face, a plurality of frames of continuous second face images including the target face associated with the image to be processed may be acquired, and then step S2 is performed.
Step S2: and synthesizing the image to be processed and the second face image to obtain a synthesized image.
After multiple frames of continuous second face images are acquired, the image to be processed and the multiple frames of continuous second face images can be synthesized, so that a synthesized image can be obtained.
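Steps S1 and S2 can be sketched as a simple fusion of the image to be processed with the consecutive second face images. Plain same-exposure frame averaging is an assumption used here for illustration; this application does not specify the multi-frame synthesis algorithm.

```python
import numpy as np

def multi_frame_synthesis(image_to_process, second_face_images):
    """Steps S1-S2: fuse the image to be processed with consecutive frames."""
    frames = [image_to_process.astype(np.float32)]
    frames += [frame.astype(np.float32) for frame in second_face_images]
    fused = np.mean(frames, axis=0)  # averaging frames of the same scene reduces noise
    return np.clip(fused, 0, 255).astype(np.uint8)
```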
According to the embodiment of the application, when no face image matching the target brightness characteristic value and the target face is found, the problem of dark images captured in a backlight scene can still be addressed by adopting a multi-frame synthesis technique.
According to the image processing method provided by the embodiment of the application, the image to be processed is obtained through the camera; when the image to be processed is a backlight scene, the target brightness characteristic value of the image to be processed is obtained and the target face in the image to be processed is identified; a first face image matched with the target brightness characteristic value and the target face is obtained; and the image to be processed and the first face image are synthesized to obtain the target image. According to the embodiment of the application, face recognition and matching are performed on the portrait in the backlight scene by combining the user's face from historical photographing data, and face synthesis is then performed, so that motion ghosting, face-edge halos and abnormal face-region brightness can be avoided while the problem of dark portraits when photographing in a backlight scene is effectively solved.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. The image processing apparatus provided in the embodiment of the present application is described with an example in which an image processing apparatus executes an image processing method.
Referring to fig. 2, a schematic structural diagram of an image processing apparatus provided in an embodiment of the present application is shown, and as shown in fig. 2, the image processing apparatus 200 may specifically include the following modules:
a to-be-processed image obtaining module 210, configured to obtain a to-be-processed image through a camera;
a target face recognition module 220, configured to, when the image to be processed is a backlight scene, obtain a target brightness characteristic value of the image to be processed, and recognize a target face in the image to be processed;
a first image obtaining module 230, configured to obtain a first face image matched with the target brightness characteristic value and the target face;
and a target image obtaining module 240, configured to synthesize the to-be-processed image and the first face image to obtain a target image.
Optionally, the apparatus further comprises:
the face image acquisition module is used for acquiring a plurality of face images containing the faces of the users;
the brightness characteristic value acquisition module is used for acquiring a brightness characteristic value corresponding to each face image;
and the corresponding relation establishing module is used for establishing the corresponding relation among the user face, the face image and the brightness characteristic value.
Optionally, the target face recognition module 220 includes:
the brightness average value acquisition unit is used for acquiring a first average brightness value of a face area in the image to be processed and a second average brightness value of a non-face area in the image to be processed;
a target brightness value obtaining unit, configured to determine the target brightness feature value according to the first average brightness value, the second average brightness value, a first weight corresponding to the first average brightness value, and a second weight corresponding to the second average brightness value;
wherein a sum of the first weight and the second weight is equal to 1.
Optionally, the apparatus further comprises:
the second image acquisition module is used for acquiring a plurality of continuous second face images which are related to the image to be processed and contain the target face under the condition that the face image matched with the target brightness characteristic value and the target face is not acquired;
and the synthesized image acquisition module is used for synthesizing the image to be processed and the second face image to obtain a synthesized image.
The image processing device provided by the embodiment of the application acquires the image to be processed through the camera; when the image to be processed is a backlight scene, it acquires the target brightness characteristic value of the image to be processed and identifies the target face in the image to be processed; it acquires the first face image matched with the target brightness characteristic value and the target face; and it synthesizes the image to be processed and the first face image to obtain the target image. According to the embodiment of the application, face recognition and matching are performed on the portrait in the backlight scene by combining the user's face from historical photographing data, and face synthesis is then performed, so that motion ghosting, face-edge halos and abnormal face-region brightness can be avoided while the problem of dark portraits when photographing in a backlight scene is effectively solved.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and embodiments of the present application are not specifically limited.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented in the embodiment of the method in fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 3, an electronic device 300 is further provided in this embodiment of the present application, and includes a processor 301, a memory 302, and a program or an instruction stored in the memory 302 and capable of being executed on the processor 301, where the program or the instruction is executed by the processor 301 to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, it is not described here again.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 4 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: a radio frequency unit 401, a network module 402, an audio output unit 403, an input unit 404, a sensor 405, a display unit 406, a user input unit 407, an interface unit 408, a memory 409, a processor 410, and the like.
Those skilled in the art will appreciate that the electronic device 400 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 410 through a power management system, so as to manage charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 4 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, or combine some components, or arrange components differently, which is not described again here.
The radio frequency unit 401 is configured to obtain an image to be processed through a camera; when the image to be processed is a backlight scene, acquiring a target brightness characteristic value of the image to be processed, and identifying a target face in the image to be processed; acquiring a first face image matched with the target brightness characteristic value and the target face;
and the processor 410 is configured to synthesize the image to be processed and the first face image to obtain a target image.
The embodiment of the application can avoid motion ghosting, face-edge halos and abnormal face-region brightness, and effectively solves the problem of dark portraits when photographing in a backlight scene.
Optionally, the radio frequency unit 401 is further configured to obtain a plurality of face images including faces of the user; aiming at each face image, acquiring a brightness characteristic value corresponding to the face image;
the processor 410 is further configured to establish and store a corresponding relationship between the user face, the face image, and the brightness feature value.
Optionally, the radio frequency unit 401 is further configured to obtain a first average brightness value of a face region in the image to be processed and a second average brightness value of a non-face region in the image to be processed;
the processor 410 is further configured to determine the target brightness characteristic value according to the first average brightness value, the second average brightness value, a first weight corresponding to the first average brightness value, and a second weight corresponding to the second average brightness value; wherein a sum of the first weight and the second weight is equal to 1.
Optionally, the radio frequency unit 401 is further configured to, when a face image matched with the target brightness characteristic value and the target face is not obtained, obtain multiple frames of continuous second face images including the target face associated with the image to be processed;
the processor 410 is further configured to synthesize the image to be processed and the second face image to obtain a synthesized image.
The embodiment of the application can also learn brightness characteristic value data from historical photos, so that various complex photographing scenes can be handled.
It should be understood that in the embodiment of the present application, the input Unit 404 may include a Graphics Processing Unit (GPU) 4041 and a microphone 4042, and the Graphics processor 4041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 407 includes a touch panel 4071 and other input devices 4072. A touch panel 4071, also referred to as a touch screen. The touch panel 4071 may include two parts, a touch detection device and a touch controller. Other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 409 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 410 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. An image processing method, comprising:
acquiring an image to be processed through a camera;
when the image to be processed is a backlight scene, acquiring a target brightness characteristic value of the image to be processed, and identifying a target face in the image to be processed;
acquiring a first face image matched with the target brightness characteristic value and the target face;
synthesizing the image to be processed and the first face image to obtain a target image;
the acquiring of the target brightness characteristic value of the image to be processed includes:
acquiring a first average brightness value of a face area in the image to be processed and a second average brightness value of a non-face area in the image to be processed;
determining the target brightness characteristic value according to the first average brightness value, the second average brightness value, a first weight corresponding to the first average brightness value and a second weight corresponding to the second average brightness value;
wherein a sum of the first weight and the second weight is equal to 1.
2. The method according to claim 1, before the obtaining the target brightness characteristic value of the image to be processed and identifying the target face in the image to be processed, further comprising:
acquiring a plurality of face images containing faces of users;
aiming at each face image, acquiring a brightness characteristic value corresponding to the face image;
and establishing a corresponding relation among the user face, the face image and the brightness characteristic value.
3. The method according to claim 1, after the obtaining of the target brightness characteristic value of the image to be processed and the identifying of the target face in the image to be processed, further comprising:
under the condition that a face image matched with the target brightness characteristic value and the target face is not obtained, obtaining a plurality of continuous second face images which are related to the image to be processed and contain the target face;
and synthesizing the image to be processed and the second face image to obtain a synthesized image.
4. An image processing apparatus characterized by comprising:
the image to be processed acquisition module is used for acquiring an image to be processed through the camera;
the target face recognition module is used for acquiring a target brightness characteristic value of the image to be processed and recognizing a target face in the image to be processed when the image to be processed is a backlight scene;
the first image acquisition module is used for acquiring a first face image matched with the target brightness characteristic value and the target face;
the target image acquisition module is used for synthesizing the image to be processed and the first face image to obtain a target image;
wherein the target face recognition module comprises:
the brightness average value acquisition unit is used for acquiring a first average brightness value of a face area in the image to be processed and a second average brightness value of a non-face area in the image to be processed;
a target brightness value obtaining unit, configured to determine the target brightness feature value according to the first average brightness value, the second average brightness value, a first weight corresponding to the first average brightness value, and a second weight corresponding to the second average brightness value;
wherein a sum of the first weight and the second weight is equal to 1.
5. The apparatus of claim 4, further comprising:
the face image acquisition module is used for acquiring a plurality of face images containing the faces of the users;
the brightness characteristic value acquisition module is used for acquiring a brightness characteristic value corresponding to each face image;
and the corresponding relation establishing module is used for establishing the corresponding relation among the user face, the face image and the brightness characteristic value.
6. The apparatus of claim 4, further comprising:
the second image acquisition module is used for acquiring a plurality of continuous second face images which are related to the image to be processed and contain the target face under the condition that the face image matched with the target brightness characteristic value and the target face is not acquired;
and the synthesized image acquisition module is used for synthesizing the image to be processed and the second face image to obtain a synthesized image.
7. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 3.
8. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 3.
CN202011200199.4A 2020-10-29 2020-10-29 Image processing method and device and electronic equipment Active CN112367470B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011200199.4A CN112367470B (en) 2020-10-29 2020-10-29 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011200199.4A CN112367470B (en) 2020-10-29 2020-10-29 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112367470A CN112367470A (en) 2021-02-12
CN112367470B true CN112367470B (en) 2022-03-08

Family

ID=74512518

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011200199.4A Active CN112367470B (en) 2020-10-29 2020-10-29 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112367470B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101853498A (en) * 2009-03-31 2010-10-06 华为技术有限公司 Image synthetizing method and image processing device
CN103702037A (en) * 2013-12-04 2014-04-02 杨新锋 Automatic regulating method for video image brightness
CN107707838A (en) * 2017-09-11 2018-02-16 广东欧珀移动通信有限公司 Image processing method and device
CN109167921A (en) * 2018-10-18 2019-01-08 北京小米移动软件有限公司 Image pickup method, device, terminal and storage medium
CN110213476A (en) * 2018-02-28 2019-09-06 腾讯科技(深圳)有限公司 Image processing method and device
CN110602403A (en) * 2019-09-23 2019-12-20 华为技术有限公司 Method for taking pictures under dark light and electronic equipment
CN110913123A (en) * 2019-08-15 2020-03-24 厦门亿联网络技术股份有限公司 Anti-backlight automatic exposure method and device based on image blocking filtering and electronic equipment
CN111416948A (en) * 2020-03-25 2020-07-14 维沃移动通信有限公司 Image processing method and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012066775A1 (en) * 2010-11-18 2012-05-24 パナソニック株式会社 Image capture device, image capture method
JP6046905B2 (en) * 2012-04-02 2016-12-21 キヤノン株式会社 Imaging apparatus, exposure control method, and program
CN102903086B (en) * 2012-10-17 2015-05-20 北京经纬恒润科技有限公司 Brightness adjustment method and device of image to be spliced
CN109102484B (en) * 2018-08-03 2021-08-10 北京字节跳动网络技术有限公司 Method and apparatus for processing image
CN111161205B (en) * 2018-10-19 2023-04-18 阿里巴巴集团控股有限公司 Image processing and face image recognition method, device and equipment


Also Published As

Publication number Publication date
CN112367470A (en) 2021-02-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant