CN113992859A - Image quality improving method and device - Google Patents


Info

Publication number
CN113992859A
Authority
CN
China
Prior art keywords
exposure
image
value
current
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111608219.6A
Other languages
Chinese (zh)
Inventor
刘达生
潘嘉明
陈彬
于海波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunding Network Technology Beijing Co Ltd
Original Assignee
Yunding Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunding Network Technology Beijing Co Ltd filed Critical Yunding Network Technology Beijing Co Ltd
Priority to CN202111608219.6A priority Critical patent/CN113992859A/en
Publication of CN113992859A publication Critical patent/CN113992859A/en
Priority to PCT/CN2022/104406 priority patent/WO2023280273A1/en
Priority to CN202280048533.XA priority patent/CN117730524A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
                    • H04N23/60 Control of cameras or camera modules
                        • H04N23/61 Control of cameras or camera modules based on recognised objects
                            • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
                        • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
                    • H04N23/70 Circuitry for compensating brightness variation in the scene
                        • H04N23/71 Circuitry for evaluating the brightness variation
                        • H04N23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the application discloses a method and a device for improving image quality. A specific implementation of the method comprises the following steps: performing face recognition on a current image to obtain a face region; acquiring an exposure value of the current image in response to turning off highlight suppression for the current image; and setting exposure weights inside and outside the face region respectively based on a preset exposure scene corresponding to the exposure value, so as to process the current image based on the set exposure weights and obtain an image of the face region, wherein the preset exposure scene comprises the exposure weights inside and outside the face region corresponding to the exposure value. The embodiment automatically adjusts the exposure weight of the face region under different exposure scenes, thereby obtaining a clear face image without any improvement of the photosensitive sensor, at lower cost and with higher efficiency.

Description

Image quality improving method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for improving image quality.
Background
As safety awareness grows, more and more people use electronic products such as home surveillance cameras, smart peepholes and video doorbells to monitor indoor and outdoor environments. Due to the limitations of existing photosensitive sensors and the corresponding image signal processing algorithms, image quality degrades under backlighting, highlight overexposure and similar conditions, so that a human face cannot be seen clearly.
To make faces clearer, the prior art for improving imaging quality is based on the HDR/WDR (high dynamic range / wide dynamic range) principle: two consecutive frames are captured and then, after image processing, synthesized into a single frame to improve imaging quality. Although this improves the imaging effect, it places high demands on the photosensitive sensor hardware and requires support from a corresponding main control chip and image processing algorithm, so the implementation cost is high and the corresponding driver development and image-tuning cycle is long.
Disclosure of Invention
The embodiment of the application provides a method and a device for improving image quality.
In a first aspect, an embodiment of the present application provides a method for improving image quality, including:
carrying out face recognition on the current image to obtain a face area;
acquiring an exposure value of the current image in response to turning off highlight suppression for the current image;
and setting exposure weights inside and outside the face region respectively based on a preset exposure scene corresponding to the exposure value, so as to process the current image based on the set exposure weights and obtain an image of the face region, wherein the preset exposure scene comprises the exposure weights inside and outside the face region corresponding to the exposure value.
In some embodiments, the image quality improvement method further includes:
after detecting that the human face in the current image is no longer within the target area, restoring the exposure weight of the current image to its value before setting;
after the restoration is completed, switching the highlight suppression of the current image back to the state it was in before being turned off.
In some embodiments, the restoring the exposure weight of the current image to its value before setting includes:
setting the exposure weight outside the face region to a preset value;
and restoring the value before setting after a preset delay.
In some embodiments, the image quality improvement method further includes:
acquiring a current brightness value of an environment;
and when the current brightness value is smaller than a supplementary lighting threshold value, carrying out supplementary lighting processing.
In some embodiments, the performing supplementary lighting processing when the current brightness value is smaller than a lighting threshold includes:
acquiring current configuration information;
and if the current configuration information meets a first condition, performing visible light supplementary lighting processing based on the current brightness value, wherein the first condition represents that the equipment has a preset supplementary lighting parameter.
In some embodiments, the performing supplementary lighting processing when the current brightness value is smaller than a lighting threshold further includes:
and if the current configuration information meets a second condition, performing visible light supplementary lighting based on a preset threshold, wherein the second condition represents that the equipment performs visible light supplementary lighting based on the preset threshold.
In some embodiments, the performing supplementary lighting processing when the current brightness value is smaller than a supplementary lighting threshold further includes:
and if the configuration information meets a third condition, performing infrared light supplement based on a preset mode, wherein the third condition represents that the light supplement mode is closed.
In some embodiments, after performing face recognition on the current image and obtaining a face region, the method further includes:
re-dividing the current image, and converting the original coordinates of the face region into a re-divided coordinate system;
and respectively setting exposure weights inside and outside the newly divided human face region based on a preset exposure scene corresponding to the exposure value.
In some embodiments, the image quality improvement method further includes:
acquiring performance parameters of a user terminal;
determining a corresponding image quality enhancement engine identifier based on the performance parameters;
and sending the image quality enhancement engine identification to the user terminal so that the user terminal calls a corresponding image quality enhancement engine based on the image quality enhancement engine identification to perform enhancement processing on the image of the face region.
In a second aspect, an embodiment of the present application provides an image quality improving apparatus, including:
the face recognition module is used for carrying out face recognition on the current image to obtain a face area;
an exposure value acquisition module for acquiring an exposure value of the current image in response to turning off highlight suppression for the current image; and
and the weight adjusting module is used for respectively setting the exposure weights inside and outside the face region based on a preset exposure scene corresponding to the exposure value so as to process the current image based on the set exposure weights and acquire the image of the face region, wherein the preset exposure scene comprises the exposure weights inside and outside the face region corresponding to the exposure value.
According to the present invention, a storage medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the image quality improvement method described above.
According to the specific embodiment of the present application, there is provided an apparatus, including: one or more processors; a storage device having one or more programs stored thereon; when the one or more programs are executed by the one or more processors, the one or more processors are enabled to implement the image quality improvement method.
According to the image quality improving method and device, a face region is obtained by performing face recognition on the current image; an exposure value of the current image is acquired in response to turning off highlight suppression for the current image; and exposure weights inside and outside the face region are set respectively based on a preset exposure scene corresponding to the exposure value, so as to process the current image based on the set exposure weights and obtain an image of the face region, wherein the preset exposure scene comprises the exposure weights inside and outside the face region corresponding to the exposure value. The exposure weight of the face region is thus adjusted automatically under different exposure scenes, yielding a clear face image without any improvement of the photosensitive sensor, at lower cost and with higher efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present application and the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are only embodiments of the present application; those skilled in the art can obtain other drawings from the provided drawings without creative effort, and the present application can be applied to other similar scenarios according to the provided drawings. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
FIG. 1 is an exemplary system architecture diagram to which some embodiments of the present application may be applied;
fig. 2 is a flowchart of an embodiment of an image quality improving method according to the present application;
fig. 3 is a flowchart illustrating an image quality improvement method according to another embodiment of the present application;
FIG. 4 is a schematic diagram of one embodiment of a current image repartitioning according to the present application;
FIG. 5 is a flow diagram of one embodiment of automatic fill lighting according to the present application;
fig. 6 is a flowchart of an embodiment of user-side image quality enhancement according to the present application;
fig. 7 is a block diagram of an embodiment of an image quality improving apparatus according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be understood that "system", "apparatus", "unit" and/or "module" as used herein is a method for distinguishing different components, elements, parts or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; these steps and elements do not form an exclusive list, and a method or apparatus may include other steps or elements. An element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In the description of the embodiments herein, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" herein merely describes an association between objects and covers three cases: for example, A and/or B may mean A alone, A and B together, or B alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature.
Flow charts are used herein to illustrate operations performed by systems according to embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed in the exact order in which they are performed. Rather, the various steps may be processed in reverse order or simultaneously. Meanwhile, other operations may be added to the processes, or a certain step or several steps of operations may be removed from the processes.
Fig. 1 illustrates an exemplary system architecture to which some embodiments of the image quality improvement method or apparatus of the present application may be applied.
As shown in fig. 1, the system architecture may include a monitoring device 101, a server 102, and a terminal device 103. The connection between the server 102 and the terminal device 103 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may use the terminal device 103 to interact with the server 102 via the network to receive or send messages and the like. Various client applications, such as shopping applications, search applications, social platform software, etc., may be installed on the terminal device 103.
The terminal device 103 may be hardware or software. When the terminal device 103 is hardware, it may be any of various electronic devices, including but not limited to a smart phone, a tablet computer, a wearable device, a vehicle-mounted device, an Augmented Reality (AR)/Virtual Reality (VR) device, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer. When the terminal device 103 is software, it can be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. The embodiment of the present application does not place any limit on the specific type of the electronic device.
The server 102 may be a server that provides various services, such as a background server that provides support for the terminal device 103. The background server may, in response to receiving the information acquisition request sent by the terminal device 103, perform processing such as analysis on the request, obtain a processing result (for example, information to be pushed), and return the processing result.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be noted that the image quality improving method provided by the embodiment of the present application is generally executed by the server 102, and accordingly, the image quality improving apparatus is generally disposed in the server 102.
It should be understood that the number of monitoring devices, servers and terminal devices in fig. 1 is merely illustrative. There may be any number of monitoring devices, servers, and terminal devices, as desired for implementation.
With continued reference to fig. 2, a flow of an embodiment of an image quality improvement method according to the present application is shown. The image quality improving method comprises the following steps:
201. carrying out face recognition on the current image to obtain a face area;
Generally, a surveillance camera with integrated face and human-shape recognition produces corresponding coordinates once a face is recognized, and the corresponding region coordinates can be read directly from the camera's chip processing platform. When the recognized human-shape region is large, it can affect the subsequent adjustment; the face region can then be estimated and cropped according to the proportion of the face region within the human-shape region, so that the region to be adjusted is confined to the face region as far as possible.
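As an illustrative sketch (not taken from the patent), the proportional estimation of a face box from a detected human-shape box could look like the following; the fractions used are assumptions for demonstration only:

```python
def estimate_face_region(body_box, width_frac=0.4, height_frac=0.25):
    """Estimate a face bounding box from a detected human-shape box.

    body_box = (x, y, w, h): top-left corner and size in pixels.
    width_frac / height_frac are hypothetical proportions of the body
    box that the face is assumed to occupy; the patent only says the
    face is estimated and cropped by proportion.
    """
    x, y, w, h = body_box
    face_w = int(w * width_frac)
    face_h = int(h * height_frac)
    face_x = x + (w - face_w) // 2   # face assumed centred horizontally
    face_y = y                        # face assumed at the top of the body box
    return (face_x, face_y, face_w, face_h)
```

A body box of (100, 50, 100, 400) would then yield a face box of (130, 50, 40, 100) under these assumed proportions.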
202. Acquiring an exposure value of the current image in response to turning off highlight suppression for the current image;
Because the camera's highlight suppression function turns on automatically according to the lighting conditions of the scene, it adjusts the brightness of the image and would interfere with the subsequent adjustment of the exposure values inside and outside the face region. The camera's highlight suppression function therefore needs to be turned off to avoid affecting the subsequent exposure adjustment.
203. And respectively setting exposure weights inside and outside the face region based on a preset exposure scene corresponding to the exposure value so as to process the current image based on the set exposure weights and obtain the image of the face region, wherein the preset exposure scene comprises the exposure weights inside and outside the face region corresponding to the exposure value.
The exposure weights inside and outside the face region of the current image can be determined according to the exposure values inside and outside the face region in the preset exposure scene corresponding to the image exposure value. Adjusting the exposure of the image with different weights inside and outside the face region keeps the face region clear in backlit and similar scenes, thereby solving the problem of faces that cannot be seen clearly; compared with the prior art, no improvement of the photosensitive sensor is needed, so the cost is lower and the efficiency higher. When multiple face regions exist, they can all be adjusted correspondingly at the same time.
As an implementation of the above embodiment, the exposure weight can be divided into 8 levels; the higher the level, the more pronounced the corresponding exposure frame and the higher its brightness. Generally, the exposure weight inside the face region can be set to 8, and the exposure weight outside the face region can be set to 0 or 1 depending on the camera, so that the face region becomes clearer.
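A minimal sketch of this weight selection, assuming a hypothetical table of preset exposure scenes (the exposure-value ranges and weight values below are illustrative, not specified by the patent):

```python
# Illustrative preset exposure scenes: each maps an exposure-value range
# to the weight level (0-8) applied inside and outside the face region.
PRESET_SCENES = [
    {"ev_range": (0, 30),   "inside": 8, "outside": 1},  # e.g. dim/backlit
    {"ev_range": (30, 70),  "inside": 8, "outside": 1},
    {"ev_range": (70, 100), "inside": 8, "outside": 0},  # e.g. strong overexposure
]

def select_weights(exposure_value):
    """Return (inside, outside) exposure weights for the preset scene
    matching the measured exposure value; defaults to (8, 1)."""
    for scene in PRESET_SCENES:
        lo, hi = scene["ev_range"]
        if lo <= exposure_value < hi:
            return scene["inside"], scene["outside"]
    return 8, 1
```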
After the human shape or face disappears from the lens, the relevant camera parameters need to be restored to their original values so that normal monitoring, recording and the like can proceed. Referring to fig. 3, in another embodiment of the present disclosure, the image quality improvement method may include the following steps:
301. carrying out face recognition on the current image to obtain a face area;
302. acquiring an exposure value of the current image in response to turning off highlight suppression for the current image;
303. respectively setting exposure weights inside and outside the face region based on a preset exposure scene corresponding to the exposure value so as to process the current image based on the set exposure weights and obtain an image of the face region, wherein the preset exposure scene comprises the exposure weights inside and outside the face region corresponding to the exposure value;
304. after detecting that the human face in the current image is no longer within the target area, restoring the exposure weight of the current image to its value before setting;
305. after the restoration is completed, switching the highlight suppression of the current image back to the state it was in before being turned off.
Specifically, restoring the exposure weight of the current image to its value before setting includes: setting the exposure weight outside the face region to a preset value, and restoring the value before setting after a preset delay. For example, after the face or human figure is detected to have disappeared from the lens, the weight outside the face region is set to 1; after a delay of, say, 1 second to let the weight setting take effect, the camera's original exposure weights are restored; then, after another 1-second delay to ensure the weight setting is complete, highlight suppression is turned back on. This completes the whole process of setting the weights inside and outside the face region from face recognition onward and restoring normal settings after the face disappears.
Generally, before exposure weight adjustment the exposure value of the whole image is fixed and no regional adjustment is performed. After the face region is detected and regional exposure weights are adjusted, the weight outside the face region has already been set to 0 or 1; therefore, once the face disappears after the adjustment is complete, the exposure weight of the image can be restored to 0 or 1, i.e., kept consistent with the exposure weight outside the face region.
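The restoration sequence described above can be sketched as follows; the `camera` control object and its method names are hypothetical stand-ins, not a real camera SDK:

```python
import time

def restore_exposure_settings(camera, delay_s=1.0):
    """Restore camera exposure settings after the face leaves the frame.

    Sequence as described: set the outside-face weight to a preset
    value, delay, restore the original weights, delay again, then
    re-enable highlight suppression.
    """
    camera.set_exposure_weight_outside_face(1)  # preset value, e.g. 1
    time.sleep(delay_s)                          # let the weight take effect
    camera.restore_default_exposure_weights()    # back to pre-adjustment values
    time.sleep(delay_s)                          # ensure restoration completes
    camera.enable_highlight_suppression()        # back to its pre-off state
```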
To improve the efficiency of adjusting the weight values inside and outside the face region, after performing face recognition on the current image and obtaining the face region, the method may further include the following steps:
re-dividing the current image, and converting the original coordinates of the face region into a re-divided coordinate system;
and then respectively setting exposure weights inside and outside the human face region after being divided again based on a preset exposure scene corresponding to the exposure value.
Specifically, referring to fig. 4, taking obtained coordinates at the image resolution (480 × 480) as an example, the image may be divided into 15 × 15 areas and the original coordinates converted into that 15 × 15 grid as needed, so that in subsequent processing the corresponding weights can be adjusted according to the converted coordinates. Because the original image data may have been adjusted, it must be confirmed before conversion whether the image has been flipped or rotated; the image is divided and converted only after converting back to the original coordinates. For example, referring to fig. 4, after the image is mirror-inverted, with the upper-left corner of the face image at (x1, y1) and the lower-right corner at (x2, y2), the coordinates after conversion to the 15 × 15 grid are:
ae_x1 = (480 - x1) / 32;
ae_y1 = (480 - y1) / 32;
ae_x2 = (480 - x2) / 32;
ae_y2 = (480 - y2) / 32;
The coordinates of the face region in the 15 × 15 grid are thereby obtained, and the corresponding face region determined. The weight values inside and outside the face region can then be adjusted according to the converted coordinates, making the exposure weight adjustment faster and improving efficiency. Generally, the exposure weight inside the face region can be set to 1 and the exposure weight outside the face region set to 0, so that the face image is highlighted and the obtained face image is clearer.
It is understood that the conversion of coordinates can be performed by those skilled in the art according to the size of the image and the processing requirements, and the present application is not limited thereto.
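The conversion formulas above can be collected into a small helper; integer division is assumed since grid indices must be whole numbers, and the `mirrored` flag covers the mirror-inverted case from the example:

```python
GRID = 15                    # the image is re-divided into 15 x 15 weight regions
RESOLUTION = 480             # example resolution from the description (480 x 480)
CELL = RESOLUTION // GRID    # 32 pixels per grid cell

def to_grid(x1, y1, x2, y2, mirrored=True):
    """Map face-box pixel coordinates to the 15 x 15 weight grid.

    With mirrored=True this reproduces the formulas in the example
    for a mirror-inverted image: ae = (480 - coord) / 32.
    """
    if mirrored:
        x1, y1, x2, y2 = (RESOLUTION - x1, RESOLUTION - y1,
                          RESOLUTION - x2, RESOLUTION - y2)
    return (x1 // CELL, y1 // CELL, x2 // CELL, y2 // CELL)
```

For instance, a mirrored face box with corners (96, 64) and (288, 320) maps to grid coordinates (12, 13, 6, 5).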
In order to further optimize the technical solution, the image quality improvement method may further include:
acquiring a current brightness value of an environment;
and when the current brightness value is smaller than the supplementary lighting threshold value, carrying out supplementary lighting processing.
Referring to fig. 5, when the current luminance value is smaller than the lighting threshold, the light supplement processing is performed, which specifically includes the following steps:
501. acquiring current configuration information;
502. and if the current configuration information meets a first condition, performing visible light supplementary lighting processing based on the current brightness value, wherein the first condition represents that the equipment has a preset supplementary lighting parameter.
503. And if the current configuration information meets a second condition, performing visible light supplementary lighting based on a preset threshold, wherein the second condition characterizes that the device performs visible light supplementary lighting based on the preset threshold.
504. And if the configuration information meets a third condition, performing infrared light supplement based on a preset mode, wherein the third condition represents that the light supplement mode is closed.
Specifically, when someone passes in front of the camera and triggers recording or live streaming, the ADC value of the photosensitive sensor associated with the camera can be read in real time to judge whether the current ambient brightness requires supplementary lighting. Light can be supplemented according to the visible-light fill mode set by the user, where the applicable settings may be always-on (with settable fill brightness), automatic, or off, together with the corresponding fill-light threshold. When the photosensitive ADC value is smaller than the fill-light threshold and the fill mode is always-on, the fill lamp is turned on at the preset brightness value; when the fill mode is automatic, the fill lamp is turned on automatically according to the photosensitive ADC value. In visible-light fill mode the picture always remains in color, so more image detail is retained and a better display effect is provided.
The current configuration information is the fill-light mode preset by the user, and fill light is provided according to that mode. The first condition is automatic fill light: after the configuration is identified, the fill lamp can be controlled according to the current ambient brightness value and a preset lookup table. The second condition is fill light at a user-set value: when the ambient brightness is below the set threshold, the visible fill lamp is turned on at the brightness set by the user, i.e., the post-fill brightness is constant. The third condition is fill light off: when the user has turned fill light off, visible-light fill is not performed, and infrared fill light can be started instead.
Traditional infrared fill lamps are still supported for supplementary lighting at the same time. The infrared fill lamp has lower priority than the visible fill lamp, and is only activated after the relevant settings of the visible fill lamp have been checked. The fill modes of the infrared lamp can likewise include always-on, automatic, off and the like, which are not repeated here.
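The three fill-light conditions can be sketched as a single decision function; the dict-based configuration, its field names, and the brightness lookup table are illustrative assumptions, since the patent describes the conditions but not a data model:

```python
def lookup_brightness(adc_value):
    # Hypothetical lookup table mapping ambient brightness (ADC reading)
    # to a lamp brightness percentage for the automatic mode.
    return max(0, min(100, 100 - adc_value))

def fill_light_action(adc_value, config):
    """Choose a fill-light action from the photosensor ADC reading and
    the user's fill-light configuration."""
    mode = config.get("mode", "off")        # "always_on", "auto", or "off"
    threshold = config.get("threshold", 0)
    if adc_value >= threshold:
        return "no_fill"                    # bright enough, no fill needed
    if mode == "always_on":
        # second condition: visible fill at the constant user-set brightness
        return ("visible", config.get("brightness"))
    if mode == "auto":
        # first condition: visible fill driven by the current ADC value
        return ("visible", lookup_brightness(adc_value))
    # third condition: visible fill is off, fall back to infrared fill
    return "infrared"
```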
In order to solve problems such as unclear image detail and blurring of fast-moving pictures at the user terminal, referring to fig. 6, the image quality improving method further includes the following steps:
601. acquiring performance parameters of a user terminal;
602. determining a corresponding image quality enhancement engine identifier based on the performance parameters;
603. and sending the image quality enhancement engine identification to the user terminal so that the user terminal calls a corresponding image quality enhancement engine based on the image quality enhancement engine identification to perform enhancement processing on the image of the face area.
An image quality enhancement engine is integrated in the user terminal. The image quality enhancement engine is a functional module or application program integrating image quality enhancement algorithms, and the user can set whether the enhancement function is turned on. The engine supports enhancement based on both machine learning and deep learning algorithms and adaptively selects the appropriate one according to the performance of the user terminal, judged mainly by indicators such as CPU clock frequency, memory, and NPU computing power. When performance is good, a machine learning algorithm with a relatively complex processing flow and a better effect can be selected; on ordinary devices, a better-matched deep learning algorithm is adopted for the enhancement processing.
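Steps 601–603 amount to mapping terminal performance parameters onto an engine identifier that the terminal then uses to pick its enhancement pipeline. A minimal sketch of such a mapping follows; the engine identifiers and all numeric thresholds are invented for illustration, since the patent leaves them unspecified.

```python
def select_engine(cpu_ghz, ram_gb, npu_tops):
    """Illustrative mapping from terminal performance to an engine identifier.

    cpu_ghz: CPU clock frequency in GHz.
    ram_gb:  installed memory in GB.
    npu_tops: NPU computing power in TOPS.
    Thresholds below are assumptions, not values from the patent.
    """
    if cpu_ghz >= 2.0 and ram_gb >= 6 and npu_tops >= 1.0:
        return "engine_full"   # richer, more complex pipeline for capable devices
    return "engine_lite"       # lighter pipeline for ordinary devices
```

The server would send the returned identifier to the terminal (step 603), which then invokes the matching local engine.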
The machine learning algorithm mainly comprises steps such as noise reduction, motion compensation, color enhancement, sharpness enhancement, contrast enhancement, and rendering for display. After the image quality enhancement function is turned on, the user can choose to view a recorded video or a real-time picture; each frame obtained by decoding H264/H265 or other encoded data is sent to the enhancement engine for this series of algorithm steps and is then rendered and displayed. In the noise reduction step, adjacent frames are compared and non-overlapping information (that is, noise) is automatically filtered out, yielding a cleaner and finer picture. In the motion compensation step, a new frame is estimated and reconstructed by the algorithm and inserted for display between two adjacent frames, alleviating the motion blur phenomenon. Sharpness enhancement then yields a crisper image.
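The adjacent-frame noise reduction step can be illustrated with a toy temporal filter: pixels that barely change between frames are treated as carrying noise and are smoothed against the previous frame, while strongly changing pixels (real motion) are left alone. This is a simplified stand-in under assumed data types, not the engine's actual algorithm.

```python
import numpy as np

def temporal_denoise(prev_frame, cur_frame, diff_threshold=10):
    """Toy version of the adjacent-frame comparison described above.

    Frames are uint8 grayscale arrays of the same shape. Pixels whose
    change between frames is below diff_threshold are averaged with the
    previous frame; pixels with large changes keep their current value.
    """
    prev = prev_frame.astype(np.int16)   # widen to avoid uint8 wrap-around
    cur = cur_frame.astype(np.int16)
    static = np.abs(cur - prev) < diff_threshold
    out = cur.copy()
    out[static] = (cur[static] + prev[static]) // 2
    return out.astype(np.uint8)
```

A production pipeline would of course use motion-aware filtering rather than a fixed per-pixel threshold, but the structure — compare neighbours, keep overlapping information, discard the rest — is the same idea the paragraph describes.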
It should be noted that the processing algorithms integrated in the image quality enhancement engine may be chosen by those skilled in the art as needed, and the present application is not limited in this respect.
Based on the same design concept, referring to fig. 7, an embodiment of the present application further provides an image quality improving apparatus, which can execute the steps of the image quality improving method described in the foregoing embodiment, and the apparatus may include:
a face recognition module 701, configured to perform face recognition on a current image to obtain a face region;
an exposure value acquisition module 702, configured to acquire an exposure value of the current image in response to turning off strong light suppression of the current image; and
the weight adjusting module 703 is configured to set exposure weights inside and outside the face region respectively based on a preset exposure scene corresponding to the exposure value, so that the current image is processed based on the set exposure weights to obtain an image of the face region, where the preset exposure scene includes the exposure weights inside and outside the face region corresponding to the exposure value.
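The weight adjusting module's core operation — raising the exposure weight of metering cells covered by the face region relative to the rest of the frame — can be sketched as building a weight table. The grid-based representation and all parameter names are assumptions for illustration; real camera ISPs expose such tables through their auto-exposure metering interfaces.

```python
def set_exposure_weights(grid_h, grid_w, face_cells, inside_w, outside_w):
    """Sketch of the weight adjusting module: build an AE weight table in
    which cells covered by the face region get a higher weight than the
    rest, so metering is dominated by the face.

    face_cells: set of (row, col) grid cells overlapping the face box.
    inside_w / outside_w: exposure weights inside and outside the face region.
    """
    return [[inside_w if (r, c) in face_cells else outside_w
             for c in range(grid_w)]
            for r in range(grid_h)]
```

With a large `inside_w` and small `outside_w`, a backlit face drives the exposure instead of the bright background, which is the effect the embodiment targets.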
The image quality improving apparatus provided in the above embodiment achieves, when implemented, the same beneficial effects as the image quality improving method of the above embodiment, which are not described here again.
In some embodiments of the present disclosure, the image quality improving apparatus further includes:
and the parameter setting module is used for restoring the exposure weight of the current image to a value before setting after detecting that the face in the current image is not in the target area range, and switching the strong light inhibition to a state before closing after the restoration is finished.
Wherein, restoring the exposure weight of the current image to the value before setting comprises:
setting the exposure weight outside the face area as a preset value;
and recovering to the value before setting after delaying for preset time.
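The restore sequence above — force the outside-face weight to a preset value, wait a preset time, then write back the saved table — can be sketched as follows. The `camera.write_ae_weights` call is a hypothetical driver interface, and the default delay is invented; the patent only says "a preset time".

```python
import time

def restore_weights(camera, saved_table, preset_outside, delay_s=0.5):
    """Sketch of the parameter setting module's restore step.

    saved_table: the exposure weight table captured before the face-based
    adjustment. First the whole table is forced to preset_outside, then
    after delay_s seconds the saved table is written back.
    """
    camera.write_ae_weights([[preset_outside] * len(row) for row in saved_table])
    time.sleep(delay_s)
    camera.write_ae_weights(saved_table)
```

The intermediate preset step lets the auto-exposure loop settle before the original weights return, avoiding a visible brightness jump.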
Further comprising: a light supplement module, configured to acquire the current brightness value of the environment and perform light supplement processing when the current brightness value is smaller than a light supplement threshold value.
The light supplement processing process comprises the following steps:
acquiring current configuration information;
and if the current configuration information meets a first condition, performing visible light supplementary lighting processing based on the current brightness value, wherein the first condition represents that the equipment has a preset supplementary lighting parameter.
And if the current configuration information meets a second condition, performing visible light supplement based on a preset threshold value, wherein the second condition characterizes that the device performs visible light supplement based on the preset threshold value.
And if the configuration information meets a third condition, performing infrared light supplement based on a preset mode, wherein the third condition represents that the light supplement mode is closed.
Further comprising: and the image segmentation module is used for re-segmenting the current image, converting the original coordinates of the face region into a re-segmented coordinate system, and respectively setting exposure weights inside and outside the re-segmented face region based on a preset exposure scene corresponding to the exposure value.
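The image segmentation module's coordinate conversion — mapping the face region's pixel coordinates onto the re-divided metering grid — can be sketched as follows. The function and its box/grid conventions are illustrative assumptions; it produces the `face_cells` set that a weight table such as the one in the weight adjusting module would consume.

```python
def face_cells_after_resplit(face_box, img_w, img_h, grid_w, grid_h):
    """Map a face bounding box in pixel coordinates onto a re-divided
    grid_w x grid_h metering grid, returning the set of covered cells.

    face_box: (x0, y0, x1, y1) in pixels, inclusive corners.
    """
    x0, y0, x1, y1 = face_box
    c0 = int(x0 * grid_w / img_w)
    c1 = min(grid_w - 1, int(x1 * grid_w / img_w))
    r0 = int(y0 * grid_h / img_h)
    r1 = min(grid_h - 1, int(y1 * grid_h / img_h))
    return {(r, c) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)}
```

Re-dividing into a finer grid lets the exposure weights hug the face more tightly than the sensor's default metering zones would allow.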
Further comprising: the image quality enhancement module is used for acquiring performance parameters of the user terminal, determining a corresponding image quality enhancement engine identifier based on the performance parameters, and sending the image quality enhancement engine identifier to the user terminal so that the user terminal calls the corresponding image quality enhancement engine based on the image quality enhancement engine identifier to enhance the image of the face area.
An embodiment of the present application further provides a storage medium having a computer program stored thereon, where the computer program is executed by a processor to implement the image quality improving method as described above.
Embodiments of the present application further provide an apparatus, comprising: one or more processors; a storage device having one or more programs stored thereon; when the one or more programs are executed by the one or more processors, the one or more processors are enabled to implement the image quality improvement method.
The apparatus, the computer readable medium, and the computer program product provided in the foregoing embodiments of the present application may all be used to execute the corresponding method provided above; therefore, for the beneficial effects they achieve, reference may be made to the beneficial effects of the corresponding method provided above, which are not described here again.
The image quality improving method and apparatus provided in the embodiments of the present application can adjust the automatic exposure weight of the face region in real time, solving the problem that the face cannot be seen clearly under backlight and similar conditions; by adding a visible light supplement lamp, the monitoring picture can be kept in color mode at all times; and the image quality enhancement algorithm applied at the user terminal enhances the displayed picture, so that the image quality effect is obviously improved.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, which include permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, a computer readable medium does not include transitory computer readable media such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description covers only the preferred embodiments of the present application and the technical principles applied, and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. The scope of the invention of the present application is not limited to the specific combinations of the above features, and also covers other embodiments formed by arbitrary combinations of the above features or their equivalents without departing from the inventive concept. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (10)

1. An image quality improving method, comprising:
carrying out face recognition on the current image to obtain a face area;
acquiring an exposure value of the current image in response to turning off strong light suppression of the current image;
and respectively setting exposure weights inside and outside the face region based on a preset exposure scene corresponding to the exposure value so as to process the current image based on the set exposure weights and obtain an image of the face region, wherein the preset exposure scene comprises the exposure weights inside and outside the face region corresponding to the exposure value.
2. The method of claim 1, further comprising:
after the human face in the current image is detected not to be in the target area range, restoring the exposure weight of the current image to a value before setting;
switching the highlight suppression of the current image to a pre-off state after the restoration is completed.
3. The method of claim 2, wherein restoring the exposure weight of the current image to a pre-set value comprises:
setting the exposure weight outside the face region as a preset value;
and recovering to the value before setting after delaying for preset time.
4. The method of claim 1, further comprising:
acquiring a current brightness value of an environment;
and when the current brightness value is smaller than a supplementary lighting threshold value, carrying out supplementary lighting processing.
5. The method according to claim 4, wherein the performing supplementary lighting processing when the current brightness value is smaller than the supplementary lighting threshold value comprises:
acquiring current configuration information;
and if the current configuration information meets a first condition, performing visible light supplementary lighting processing based on the current brightness value, wherein the first condition represents that the equipment has a preset supplementary lighting parameter.
6. The method according to claim 5, wherein the performing supplementary lighting processing when the current brightness value is smaller than the supplementary lighting threshold value further comprises:
and if the current configuration information meets a second condition, performing visible light supplementary lighting based on a preset threshold, wherein the second condition represents that the equipment performs visible light supplementary lighting based on the preset threshold.
7. The method according to claim 5, wherein the performing supplementary lighting processing when the current brightness value is smaller than the supplementary lighting threshold value further comprises:
and if the configuration information meets a third condition, performing infrared light supplement based on a preset mode, wherein the third condition represents that the light supplement mode is closed.
8. The method of claim 1, wherein after the face recognition is performed on the current image to obtain a face region, the method further comprises:
re-dividing the current image, and converting the original coordinates of the face region into a re-divided coordinate system;
and respectively setting exposure weights inside and outside the newly divided human face region based on a preset exposure scene corresponding to the exposure value.
9. The method of claim 1, further comprising:
acquiring performance parameters of a user terminal;
determining a corresponding image quality enhancement engine identifier based on the performance parameters;
and sending the image quality enhancement engine identification to the user terminal so that the user terminal calls a corresponding image quality enhancement engine based on the image quality enhancement engine identification to perform enhancement processing on the image of the face region.
10. An image quality improving apparatus, comprising:
the face recognition module is used for carrying out face recognition on the current image to obtain a face area;
an exposure value acquisition module for acquiring an exposure value of the current image in response to turning off strong light suppression of the current image; and
and the weight adjusting module is used for respectively setting the exposure weights inside and outside the face region based on a preset exposure scene corresponding to the exposure value so as to process the current image based on the set exposure weights and acquire the image of the face region, wherein the preset exposure scene comprises the exposure weights inside and outside the face region corresponding to the exposure value.
CN202111608219.6A 2021-07-08 2021-12-27 Image quality improving method and device Pending CN113992859A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202111608219.6A CN113992859A (en) 2021-12-27 2021-12-27 Image quality improving method and device
PCT/CN2022/104406 WO2023280273A1 (en) 2021-07-08 2022-07-07 Control method and system
CN202280048533.XA CN117730524A (en) 2021-08-13 2022-07-07 Control method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111608219.6A CN113992859A (en) 2021-12-27 2021-12-27 Image quality improving method and device

Publications (1)

Publication Number Publication Date
CN113992859A true CN113992859A (en) 2022-01-28

Family

ID=79734452

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111608219.6A Pending CN113992859A (en) 2021-07-08 2021-12-27 Image quality improving method and device

Country Status (1)

Country Link
CN (1) CN113992859A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023280273A1 (en) * 2021-07-08 2023-01-12 云丁网络技术(北京)有限公司 Control method and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1997113A (en) * 2006-12-28 2007-07-11 上海交通大学 Automatic explosion method based on multi-area partition and fuzzy logic
US20110249961A1 (en) * 2010-04-07 2011-10-13 Apple Inc. Dynamic Exposure Metering Based on Face Detection
CN106534714A (en) * 2017-01-03 2017-03-22 南京地平线机器人技术有限公司 Exposure control method, device and electronic equipment
CN110084207A (en) * 2019-04-30 2019-08-02 惠州市德赛西威智能交通技术研究院有限公司 Automatically adjust exposure method, device and the storage medium of face light exposure
CN110149471A (en) * 2019-06-28 2019-08-20 Oppo广东移动通信有限公司 Control method, imaging device, electronic equipment, computer equipment and storage medium
CN112511749A (en) * 2020-11-30 2021-03-16 上海摩象网络科技有限公司 Automatic exposure control method and device for target object and electronic equipment
CN112532859A (en) * 2019-09-18 2021-03-19 华为技术有限公司 Video acquisition method and electronic equipment
CN113313650A (en) * 2021-06-09 2021-08-27 北京百度网讯科技有限公司 Image quality enhancement method, device, equipment and medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1997113A (en) * 2006-12-28 2007-07-11 上海交通大学 Automatic explosion method based on multi-area partition and fuzzy logic
US20110249961A1 (en) * 2010-04-07 2011-10-13 Apple Inc. Dynamic Exposure Metering Based on Face Detection
CN106534714A (en) * 2017-01-03 2017-03-22 南京地平线机器人技术有限公司 Exposure control method, device and electronic equipment
CN110084207A (en) * 2019-04-30 2019-08-02 惠州市德赛西威智能交通技术研究院有限公司 Automatically adjust exposure method, device and the storage medium of face light exposure
CN110149471A (en) * 2019-06-28 2019-08-20 Oppo广东移动通信有限公司 Control method, imaging device, electronic equipment, computer equipment and storage medium
CN112532859A (en) * 2019-09-18 2021-03-19 华为技术有限公司 Video acquisition method and electronic equipment
CN112511749A (en) * 2020-11-30 2021-03-16 上海摩象网络科技有限公司 Automatic exposure control method and device for target object and electronic equipment
CN113313650A (en) * 2021-06-09 2021-08-27 北京百度网讯科技有限公司 Image quality enhancement method, device, equipment and medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023280273A1 (en) * 2021-07-08 2023-01-12 云丁网络技术(北京)有限公司 Control method and system

Similar Documents

Publication Publication Date Title
CN109348089B (en) Night scene image processing method and device, electronic equipment and storage medium
US10580140B2 (en) Method and system of real-time image segmentation for image processing
WO2021179820A1 (en) Image processing method and apparatus, storage medium and electronic device
CN108335279B (en) Image fusion and HDR imaging
JP2022528294A (en) Video background subtraction method using depth
US11070717B2 (en) Context-aware image filtering
CN111028190A (en) Image processing method, image processing device, storage medium and electronic equipment
CN113518185B (en) Video conversion processing method and device, computer readable medium and electronic equipment
CN107172354B (en) Video processing method and device, electronic equipment and storage medium
CN109618102B (en) Focusing processing method and device, electronic equipment and storage medium
CN107690804B (en) Image processing method and user terminal
Turban et al. Extrafoveal video extension for an immersive viewing experience
CN115086567A (en) Time-delay shooting method and device
CN113259594A (en) Image processing method and device, computer readable storage medium and terminal
CN113992859A (en) Image quality improving method and device
CN110971833B (en) Image processing method and device, electronic equipment and storage medium
CN113395599B (en) Video processing method, device, electronic equipment and medium
CN111800568B (en) Light supplement method and device
CN114187172A (en) Image fusion method and device, computer equipment and computer readable storage medium
CN116055895B (en) Image processing method and device, chip system and storage medium
CN112488933A (en) Video detail enhancement method and device, mobile terminal and storage medium
CN113516592A (en) Image processing method, model training method, device and equipment
CN116797505A (en) Image fusion method, electronic device and storage medium
CN110930340A (en) Image processing method and device
CN113891008B (en) Exposure intensity adjusting method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220128
