CN111754410A - Image processing method and device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN111754410A
CN111754410A (application CN201910238073.7A)
Authority
CN
China
Prior art keywords
image
preset
image quality
motion blur
current image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910238073.7A
Other languages
Chinese (zh)
Other versions
CN111754410B (en)
Inventor
赵健
徐琼
张文萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN201910238073.7A priority Critical patent/CN111754410B/en
Publication of CN111754410A publication Critical patent/CN111754410A/en
Application granted granted Critical
Publication of CN111754410B publication Critical patent/CN111754410B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

The application discloses an image processing method, which comprises the following steps: when it is detected that a target object in the current image has no motion blur, judging whether the image quality of the current image meets a preset image quality condition; and if the image quality of the current image does not meet the preset image quality condition, triggering an image quality optimization processing operation. Whenever the target object in the current image has no motion blur, the method continuously judges whether the image quality of the current image meets the requirement, and performs the image quality optimization processing operation on the current image when it does not, so as to improve the image quality. This solves the problem that the target object in the acquired image, although free of motion blur, suffers from low image quality, and satisfies the user's shooting requirement for the target object. The application also discloses an image processing apparatus, an electronic device and a computer-readable storage medium, which have the same beneficial effects.

Description

Image processing method and device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
At present, when an image pickup apparatus shoots a moving object, particularly one moving at high speed, motion blur (i.e., trailing or smearing) often occurs in the acquired image, making the moving object in the image unrecognizable.
To address this problem, the related art mitigates the motion blur that occurs when an image pickup apparatus photographs a moving object by setting a shorter (faster) shutter time. Although this can reduce the motion blur of a moving object in the captured image, the shorter shutter time decreases the amount of light entering the image pickup apparatus, and the resulting shortage of light intake degrades the image quality.
Disclosure of Invention
The present application aims to provide an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium, which can ensure that a target object in an image does not have motion blur, and can improve image quality under the condition that the target object in the image does not have motion blur.
In order to solve the above technical problem, the present application provides an image processing method, including:
when detecting that the target object in the current image has no motion blur, judging whether the image quality of the current image meets a preset image quality condition;
and if the image quality of the current image does not meet the preset image quality condition, triggering image quality optimization processing operation.
The present application also provides an image processing apparatus including:
the image quality judging module is used for judging whether the image quality of the current image meets a preset image quality condition or not when detecting that the target object in the current image does not have motion blur;
and the image quality optimization module is used for triggering image quality optimization processing operation if the image quality of the current image does not meet the preset image quality condition.
The present application further provides an electronic device, comprising: a memory for storing a computer program; a processor for implementing the steps of the image processing method described above when executing the computer program.
The present application further provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image processing method described above.
The application provides an image processing method, which comprises the following steps: when detecting that the target object in the current image has no motion blur, judging whether the image quality of the current image meets a preset image quality condition; and if the image quality of the current image does not meet the preset image quality condition, triggering image quality optimization processing operation.
Therefore, whenever the target object in the current image has no motion blur, the method judges whether the image quality of the current image meets the requirement (i.e., the preset image quality condition), and performs the image quality optimization processing operation on the current image when it does not, so as to improve the image quality. On the basis that the target object in the acquired image has no motion blur, this further solves the problem of low image quality and satisfies the user's shooting requirement for the target object. That is, the method first ensures that the target object in the current image has no motion blur, and then improves the image quality on that basis, thereby avoiding the unusable image quality caused in the related art while suppressing motion blur of the target object. The application also provides an image processing apparatus, an electronic device and a computer-readable storage medium, which have the above beneficial effects and are not described again here.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present application, and those skilled in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a flowchart of another image processing method provided in the embodiments of the present application;
fig. 3 is a flowchart of a motion blur degree calculation method according to an embodiment of the present application;
fig. 4 is a flowchart of a target shutter time calculation method according to an embodiment of the present application;
fig. 5 is a flowchart illustrating a method for determining whether an image quality of a current image meets a predetermined image quality condition according to an embodiment of the present disclosure;
fig. 6 is a flowchart of a method for calculating an intensity of a target fill-in light according to an embodiment of the present disclosure;
FIG. 7 is a flowchart of another image processing method provided in the embodiments of the present application;
fig. 8 is a block diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art from the given embodiments without creative effort shall fall within the protection scope of the present application.
The related art improves the motion blur that occurs when an image pickup apparatus photographs a moving object by setting a shorter shutter time whenever motion blur appears in the captured image. Although this can eliminate motion blur during shooting, the reduced light intake degrades the image quality. In other words, this approach guarantees the absence of motion blur at the expense of image quality. However, in many application scenarios (e.g., video surveillance, license plate recognition, face recognition), if the image quality of the final image does not meet the user's requirement, the image is unusable even without motion blur (for example, although the target person in a surveillance video shows no motion blur, the light intake is so low that the image quality is too poor to recognize the person's facial features, making the image meaningless to the user). This embodiment aims to overcome the poor image quality caused in the related art by shortening the shutter time to suppress motion blur. Referring to fig. 1, fig. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure; the method may include the following steps:
s101, when it is detected that the target object in the current image does not have motion blur, judging whether the image quality of the current image meets a preset image quality condition.
It should be noted that the execution subject of this embodiment may be a processor capable of performing image processing on images captured by an imaging apparatus. The positional relationship between the processor and the imaging apparatus is not limited in this embodiment: the processor may be provided inside the imaging apparatus, or in a separate apparatus that performs image processing outside the imaging apparatus.
In this embodiment, the determination of whether the image quality of the current image meets the preset image quality condition is performed only when the target object in the current image has no motion blur. That is, this embodiment first ensures that the target object in the current image has no motion blur, and then ensures image quality on that basis: the absence of motion blur in the target object is the condition that triggers the subsequent image quality judgment. The manner of detecting whether the target object in the current image has motion blur is not limited in this embodiment. To improve the reliability and accuracy of this detection, a parameter measuring the degree of blur of the target object (i.e., the motion blur degree) may be used. Preferably, in this embodiment, detecting whether the target object in the current image has motion blur may include: calculating the motion blur degree of the target object in the current image; judging whether the motion blur degree belongs to a preset motion blur interval; and if not, determining that the target object in the current image has no motion blur.
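As a hedged illustration (not the patent's reference implementation), the gating logic described above — compute a blur degree, test it against the preset motion blur interval, and only then proceed to the image quality judgment — can be sketched as follows; the threshold value and function names are placeholders chosen for this sketch:

```python
def in_preset_blur_interval(blur_degree: float, blur_threshold: float) -> bool:
    """The preset motion blur interval is taken here as [threshold, +inf):
    a blur degree inside it means the target object has motion blur."""
    return blur_degree >= blur_threshold

def should_judge_image_quality(blur_degree: float, blur_threshold: float) -> bool:
    # The image quality judgment (step S101) is triggered only when the
    # target object in the current image has NO motion blur.
    return not in_preset_blur_interval(blur_degree, blur_threshold)
```

For instance, with a threshold of 0.5, a measured blur degree of 0.2 lies outside the interval and the image quality check proceeds, while 0.7 lies inside it and the check is skipped.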
In this embodiment, the method of calculating the motion blur degree of the target object in the current image is not limited. For example, the motion blur degree may be determined by calculating a quality value of the target object, or a quality value together with an edge strength (e.g., an edge average strength); or by calculating the sharpness of the target object; or, of course, by any combination of the above. Further, this embodiment does not limit how these specific parameters are calculated: for example, the quality value and the sharpness of the target object may be calculated by an intelligent recognition algorithm, and the edge strength (e.g., the edge average strength) by an edge strength calculation formula. Nor does this embodiment limit how the motion blur degree is determined from the quality value, the edge strength, or the sharpness: for example, a mapping relationship may be set between the values of each parameter and motion blur degrees, and the motion blur degree corresponding to the current parameter value determined from that mapping.
Of course, the mapping relationship may be a one-to-one mapping (e.g., one quality value corresponds to one motion blur level), or an interval mapping (e.g., one quality value interval corresponds to one motion blur level). It can be understood that, in this embodiment, the specific mapping relationship corresponding to each parameter is not limited, and the corresponding mapping relationship may be set by the user according to the specific condition of the selected parameter.
In this embodiment, after the motion blur degree of the target object is calculated, whether the target object in the current image has motion blur may be determined from the preset motion blur interval, i.e., by judging whether the motion blur degree belongs to that interval. If it does not belong to the preset motion blur interval, the target object in the current image has no motion blur, and the subsequent image quality judgment (i.e., step S101) may proceed; if it does belong to the interval, the target object has motion blur, and the condition for performing step S101 is not satisfied. The specific steps to perform upon determining that the target object has motion blur are not limited in this embodiment. For example, the shutter time may be adjusted so as to avoid motion blur in subsequently acquired images; the motion blur degree of the target object is then recalculated on each captured image, and the shutter time adjusted again if necessary, until it is determined that the target object has no motion blur, at which point the image quality judgment continues.
For example, if the motion blur degree of the target object belongs to the preset motion blur interval, it is determined that the target object in the current image has motion blur, and the adaptive shutter adjustment may be performed according to the motion blur degree of the target object in the current image.
The specific value of the preset motion blur interval is not limited in this embodiment. For example, the user may set a lowest acceptable motion blur degree (which may be referred to as a preset motion blur threshold, whose specific value is likewise not limited here) according to the actual application scenario, the type of the target object, or the hardware computing capability, and determine the preset motion blur interval from it. The lowest motion blur degree measures the user's tolerance of motion blur: the larger its value, the higher the tolerance. When the lowest motion blur degree (i.e., the preset motion blur threshold) is N, the preset motion blur interval in this embodiment may be [N, +∞) or, of course, (N, +∞). Alternatively, a maximum blur difference γ between the motion blur degree of the target object and the lowest motion blur degree may be calculated, and the preset motion blur interval determined from it: for example, the interval may be [N + γ, +∞) or (N + γ, +∞). In that case, when the difference between the motion blur degree of the target object and the lowest motion blur degree is greater than or equal to (or greater than) the maximum blur difference, the motion blur degree belongs to the preset motion blur interval, indicating that the target object in the currently acquired image has motion blur; otherwise, it has no motion blur.
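The adjust-and-recheck loop described above can be sketched as follows; `capture` and `measure_blur` are hypothetical stand-ins for the camera interface and the blur metric, and halving the shutter time on each iteration is an arbitrary step size chosen only for illustration:

```python
def shorten_shutter_until_sharp(capture, measure_blur, shutter,
                                blur_threshold, min_shutter=1e-4):
    """Halve the shutter time until the measured blur degree leaves the
    preset motion blur interval [blur_threshold, +inf), then return the
    final shutter time and the corresponding frame."""
    frame = capture(shutter)
    while measure_blur(frame) >= blur_threshold and shutter > min_shutter:
        shutter /= 2            # shorter exposure -> less motion blur
        frame = capture(shutter)
    return shutter, frame
```

In a toy simulation where the measured blur is simply proportional to the shutter time, the loop keeps halving the exposure until the blur degree drops below the threshold, mirroring the behavior the embodiment describes.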
In this embodiment, when it is determined that the target object in the current image has no motion blur, it must be judged whether the image quality of the current image meets the user's requirement, i.e., whether the image quality needs to be improved. Whether the image quality of the current image meets the preset image quality condition is therefore judged: if it does, the image processing operation ends; if it does not, the image quality needs to be improved, i.e., the image quality optimization processing operation needs to be triggered.
The method for calculating the image quality of the current image is likewise not limited in this embodiment. For example, the image quality may be calculated from a noise value of the image (e.g., the noise average intensity); from the gain and the noise value; from the brightness and the noise value; or, of course, from the brightness, gain and noise value together. It should be noted that "the image" here may refer to the entire current image, or only to the region of the current image corresponding to the target object; that is, the image quality of the current image may represent the quality of the whole image or only the quality of the target-object region, and the calculation target may be specified by the user according to the actual situation. The related parts of this and the following embodiments can be read in the same way. For example, in a face recognition scene, the image quality may consider only the region (face image) corresponding to the target object (a face), and "the image quality of the current image" then refers to the image quality of that face region.
Further, the preset image quality condition is not limited in this embodiment either; the user may set it according to the selected image quality calculation method, the application scenario, or the hardware computing capability. For example, when the image quality is calculated from a noise value of the image (e.g., the noise average intensity), a preset noise tolerance threshold may be set, and the corresponding preset image quality condition is: if the noise value of the current image is less than or equal to (or less than) the preset noise tolerance threshold, the image quality of the current image meets the condition; if the noise value is greater than (or greater than or equal to) the threshold, it does not. Here noise mainly serves as the quality criterion for images with a low light intake, i.e., low-light images.
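Assuming a mean-noise metric and a tolerance threshold (both placeholders, since the patent leaves their computation open), the noise-only variant of the preset image quality condition reduces to a single comparison followed by the optimization trigger:

```python
def meets_quality_noise_only(noise_mean: float, noise_tolerance: float) -> bool:
    """Noise-only preset image quality condition: quality is acceptable
    when the mean noise intensity does not exceed the tolerance threshold."""
    return noise_mean <= noise_tolerance

def maybe_optimize(noise_mean: float, noise_tolerance: float) -> str:
    # Trigger the image quality optimization only when the condition fails.
    if meets_quality_noise_only(noise_mean, noise_tolerance):
        return "done"
    return "optimize"
```

With a tolerance of 0.3, a mean noise of 0.1 ends the processing, while 0.5 triggers the optimization operation.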
For another example, when the image quality is calculated from the brightness of the image and a noise value (e.g., the noise average intensity), a preset standard image brightness interval and a preset noise tolerance threshold may be set. Since the related art suppresses motion blur by setting a shorter shutter time, the resulting shortage of light intake is an important cause of degraded image quality; the brightness of the current image can therefore serve in this embodiment as a factor measuring whether the illumination is sufficient. The corresponding preset image quality condition may be: if the brightness of the current image belongs to the preset standard image brightness interval, the image quality of the current image meets the condition, and the noise value need not even be calculated; if the brightness does not belong to the interval but the noise value of the current image is less than or equal to (or less than) the preset noise tolerance threshold, the image quality also meets the condition; if the brightness does not belong to the interval and the noise value is greater than (or greater than or equal to) the threshold, the image quality does not meet the condition.
It can be understood that if the brightness of the current image belongs to the preset standard image brightness interval, then even if the brightness has dropped relative to previously acquired images it is still within the normal range, so the reduced light intake is considered to have little influence on image quality; to save computing resources, the noise calculation and the subsequent comparison may be skipped. If the brightness does not belong to the interval, the brightness has dropped markedly out of the normal range, the reduced light intake is considered to have a large influence on image quality, and the noise calculation and comparison must be performed to determine whether the image quality of the current image meets the preset image quality condition.
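The brightness-plus-noise variant, including the compute-saving short circuit (the noise value is only computed when the brightness falls outside the standard interval), might look like the following sketch; the interval bounds and the lazily-invoked `noise_fn` callable are illustrative assumptions:

```python
def meets_quality_brightness_noise(luma, luma_range, noise_fn, noise_tolerance):
    """luma_range is the preset standard image brightness interval.
    noise_fn is called lazily, only when the brightness is out of range,
    mirroring the resource-saving behavior described in the embodiment."""
    low, high = luma_range
    if low <= luma <= high:
        return True               # brightness normal: skip the noise computation
    return noise_fn() <= noise_tolerance
```

Passing the noise computation as a callable makes the short circuit observable: when the brightness is inside the standard interval, the function returns without ever invoking it.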
For another example, when the image quality is calculated from the gain, brightness and noise value (e.g., the noise average intensity) of the image, a preset standard image gain interval, a preset standard image brightness interval and a preset noise tolerance threshold may be set. To judge more accurately whether the illumination of the current image is sufficient (i.e., whether the light intake of the captured current image is sufficient), the gain and the brightness of the image may be considered together in this embodiment as factors measuring the illumination. The corresponding preset image quality condition may be: if the gain of the current image belongs to the preset standard image gain interval and the brightness belongs to the preset standard image brightness interval, the image quality of the current image meets the condition, without calculating the noise value; if the gain or the brightness falls outside its interval but the noise value is less than or equal to (or less than) the preset noise tolerance threshold, the image quality still meets the condition; if the gain or the brightness falls outside its interval and the noise value is greater than (or greater than or equal to) the threshold, the image quality does not meet the condition.
Of course, the preset image quality condition may instead be disjunctive: if the gain of the current image belongs to the preset standard image gain interval or the brightness belongs to the preset standard image brightness interval, the image quality meets the condition without calculating the noise value; if the gain and the brightness both fall outside their intervals but the noise value is less than or equal to (or less than) the preset noise tolerance threshold, the image quality still meets the condition; if both fall outside their intervals and the noise value is greater than (or greater than or equal to) the threshold, the image quality does not meet the condition.
It can be understood that, given the various preset image quality conditions and the corresponding specific process for determining whether the image quality of the current image meets the preset image quality conditions, the user may set specific values of corresponding thresholds (such as a preset standard image gain interval, a preset standard image brightness interval, and a preset noise tolerance threshold) and select specific preset image quality conditions according to the actual application scene.
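As an illustration of the decision logic above, the following sketch checks the gain interval or the brightness interval first and falls back to the noise value only when both fail; all threshold values are hypothetical placeholders, since the embodiment leaves them for the user to set according to the actual application scene.

```python
def meets_quality_condition(gain, brightness, noise,
                            gain_lo=0.0, gain_hi=12.0,
                            bright_lo=80, bright_hi=180,
                            noise_tol=5.0):
    """Return True if the current image meets the preset image quality condition.

    All interval bounds and the noise tolerance threshold are illustrative
    placeholders, not values taken from the embodiment.
    """
    in_gain = gain_lo <= gain <= gain_hi              # preset standard image gain interval
    in_bright = bright_lo <= brightness <= bright_hi  # preset standard image brightness interval
    if in_gain or in_bright:
        return True            # no need to compute the noise value at all
    return noise <= noise_tol  # both intervals failed: fall back to the noise check
```

Note that the noise value is only consulted on the fallback path, which matches the "without calculating the noise value" shortcut described above.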
The present embodiment does not limit the target object, which may be a moving object or a stationary object. That is, the image processing method provided in the present embodiment is applicable both to a moving object and to a stationary object (in the latter case, the stationary object may be determined by delimiting an image region). However, in general, motion blur is rarely observed in a stationary object and is most often observed in a moving object. Therefore, in order to improve the image processing efficiency and reduce the consumption of hardware computing resources, the target object in the present embodiment is preferably a moving object, which may be referred to as a target moving object. It can be understood that, when the target object is specifically a target moving object, references to the target object in the embodiments provided in the present application may be correspondingly replaced with the target moving object. Of course, the number of target objects is not limited in this embodiment; there may be one target object or a plurality of target objects. For example, if there is one target object, when a plurality of moving objects are detected in the image, one of the moving objects may be designated as the target object. If there are a plurality of target objects, when a plurality of moving objects are detected in the image, all the detected moving objects may be set as the target objects, or a predetermined number of the moving objects may be designated as the target objects. It is understood that the selection manner of the target object is not limited in this embodiment: the target object may be selected automatically by the processor, selected manually by the user, or preliminarily screened by the processor and then designated by the user.
S102, if the image quality of the current image does not meet the preset image quality condition, triggering an image quality optimization processing operation.
In this embodiment, if the image quality of the current image meets the preset image quality condition, the image processing operation may be ended, which indicates that the current image meets the user requirement and belongs to an available image; and if the image quality of the current image does not meet the preset image quality condition, triggering image quality optimization processing operation, and further improving the image quality. The content of the specific image quality optimization processing operation is not limited in this embodiment. For example, the image quality optimization processing operation may include an image quality optimization processing operation in the software program, may include an image quality optimization processing operation corresponding to a parameter setting of a hardware component, and may include an image quality optimization processing operation corresponding to a parameter setting of a hardware component as well as an image quality optimization processing operation in the software program.
It should be noted that the present embodiment does not limit the specific content of the image quality optimization processing operation in the software program. The image quality of the current image can be improved, for example, by a preset image processing program, such as an intelligent image processing algorithm. Of course, the specific content of the preset image processing program is not limited in this embodiment; it may be, for example, an intelligent image processing algorithm for adjusting the brightness or gain of an image. This embodiment likewise does not limit the specific content of the image quality optimization processing operation corresponding to the parameter setting of a hardware component. For example, the intensity of a fill-in light (e.g., an external fill-in light) can be adjusted to overcome the defect of an insufficient amount of light entering the collected image. Of course, the specific manner of adjusting the intensity of the fill-in light is not limited in this embodiment. For example, a mapping relationship between the image quality and the fill-in light intensity may be set, and the target fill-in light intensity corresponding to the image quality of the current image may be determined according to the mapping relationship. The mapping relationship may be a one-to-one mapping (for example, one image quality corresponds to one target fill-in light intensity) or an interval mapping (for example, one image quality interval corresponds to one target fill-in light intensity). It is to be understood that the method for calculating the image quality of the current image is not limited in this embodiment; it may be calculated, for example, from a noise value (such as the noise average intensity).
In this embodiment, when determining the intensity of the target fill-in light, not only the image quality of the current image but also the brightness of the pixels in the current image may be considered to avoid the local exposure phenomenon.
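The interval mapping between image quality and fill-in light intensity described above might be sketched as follows; the quality intervals and intensity levels are invented examples, not values from the embodiment.

```python
def target_fill_light_intensity(quality, mapping=None):
    """Map an image quality score to a target fill-in light intensity.

    Uses interval mapping (one quality interval -> one intensity); the
    intervals and intensity levels below are hypothetical placeholders.
    """
    if mapping is None:
        # (lower bound of quality interval, intensity): lower quality -> stronger light
        mapping = [(0.8, 10), (0.5, 40), (0.2, 70), (0.0, 100)]
    for lower, intensity in mapping:
        if quality >= lower:
            return intensity
    return mapping[-1][1]  # below every bound: use the strongest fill-in light
```

A one-to-one mapping would simply replace the interval table with a dictionary keyed by discrete quality values.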
Based on the above embodiments, the image processing method provided by the present application continues to determine whether the image quality of the current image meets the requirement (i.e., the preset image quality condition) once it is ensured that the target object in the current image has no motion blur, and performs the image quality optimization processing operation on the current image when the image quality does not meet the requirement, so as to improve the image quality. This further solves both the motion blur problem and the low-image-quality problem of the target object in the acquired image, and meets the user's shooting requirement for the target object; that is, the method first ensures that the target object in the current image has no motion blur, and then improves the image quality on that basis. It also solves the problem in the related art that only the presence of motion blur in the target object is considered, so that the image quality may become unusable in the process of avoiding motion blur. Moreover, this embodiment can accurately detect whether the target object in the current image has motion blur.
Based on the foregoing embodiment, please refer to fig. 2, in order to improve the efficiency of the image processing method provided in the present application, the embodiment may further include:
S1001, judging whether a target object exists in the current image; if not, directly ending the image processing; if yes, proceeding to step S1002.
S1002, detecting whether the target object in the current image has motion blur. If not, the process proceeds to step S101.
If the target object does not exist in the current image, the current image is proved to be not the image required by the user, so that the user does not care whether the image has motion blur. In order to improve the image processing efficiency and reduce the occupation of hardware computing resources, the image processing can be directly finished. If the target object exists in the current image, the current image is proved to be the image required by the user, and therefore the user cares whether the image has motion blur or not. At this time, a step of detecting whether the target object in the current image has motion blur needs to be performed. In this embodiment, details of the process of detecting whether the target object in the current image has the motion blur are not repeated, and specific contents of relevant parts in the above embodiments may be referred to.
The specific process of determining whether the target object exists in the current image is not limited in this embodiment. The user can set a corresponding judgment process according to the type of the selected target object. For example, the determining process may be that, when the current image is received, the current image is subjected to image recognition, and the image recognition result is compared with various types of target objects preset by the user (for example, the target object is set to be a human face, and the target object is set to be a license plate, etc.), so as to determine whether the target object exists in the current image. It should be noted that the number of target objects preset by the user is not limited in the determination process (for example, a human face and a license plate are set as the target objects at the same time). In the above process, the image recognition method is not limited, and the user may determine the corresponding image recognition method according to the set type of the target object (for example, when the target object is a human face, a human face recognition algorithm may be selected to perform image recognition on the current image).
For another example, the determining process may also be determining whether a moving object exists in the image when the current image is received; if no moving object exists, the image processing process is directly finished; and if the moving object exists, performing image recognition on the current image, comparing the image recognition result with various types of target objects preset by a user, and judging whether the target object exists in the current image. It should be noted that the determination process is applicable to the case where the target object is a moving object. The number of target objects preset by the user is not limited in this determination process. The image recognition method is not limited in the judging process, and the user can determine the corresponding image recognition method according to the set type of the target object. Of course, if the target object set by the user is a moving object, it is determined that the moving object exists in the image (i.e., it is determined that the target object exists in the current image), and the step of detecting whether the target object in the current image has motion blur may be directly performed (i.e., step S1002).
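The pre-screening flow of steps S1001-S1002 can be outlined as below; `detect_target` and `detect_blur` stand in for whatever recognition and blur-detection routines the user selects, so they are assumptions rather than methods fixed by the embodiment.

```python
def preprocess(image, detect_target, detect_blur):
    """Sketch of steps S1001-S1002: skip all further work when no target exists.

    detect_target(image) -> bool: does a target object exist in the image?
    detect_blur(image) -> bool: does the target object have motion blur?
    """
    if not detect_target(image):
        return "end"              # S1001: no target object -> end image processing
    if detect_blur(image):
        return "adjust_shutter"   # target has motion blur -> shutter adjustment branch
    return "check_quality"        # S101: proceed to the image quality check
```

The early return in S1001 is exactly the efficiency gain described above: no blur detection or quality check is run on images the user does not care about.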
Based on the above embodiments, the image processing method provided by the present application can improve the image processing efficiency by determining in advance whether the target object exists in the current image. Namely, unnecessary calculation processes are avoided to a certain extent, the image processing time is reduced, and hardware calculation resources are saved.
Based on any of the above embodiments, in order to improve the reliability and accuracy of detecting whether the target object in the current image has motion blur, this embodiment provides a more reliable and accurate motion blur degree calculation method: by improving the accuracy of the motion blur degree, the reliability and accuracy of detecting whether the target object in the current image has motion blur are improved. Referring to fig. 3, in this embodiment, calculating the motion blur degree of the target object in the current image may include:
S301, calculating the quality value and the edge image average intensity of the target object in the current image.
S302, calculating the motion blur degree by using the quality value and the average intensity of the edge image.
In the embodiment, when the motion blur degree is calculated, the quality value with higher accuracy and the average intensity of the edge image are selected, so that the calculated motion blur degree is more reliable and accurate. In this embodiment, the specific method for calculating the quality value and the average intensity of the edge image is not limited. The user can select an appropriate calculation method according to the need for accuracy of the degree of motion blur, hardware calculation power, or the type of target object.
For example, if the user has a high requirement for the accuracy of the motion blur degree, an intelligent algorithm with high accuracy may be selected to calculate the quality value of the target object. Specifically, when the target object is a face, a face recognition intelligent algorithm may be selected to calculate the quality value C of the target face in the current image. It can be understood that the larger the quality value C is, the better the quality of the current target face is proved to be, and the probability of motion blur is small. When the target object is a license plate, the intelligent license plate recognition algorithm can be selected to calculate the quality value C of the target license plate in the current image. It can be understood that the larger the quality value C is, the better the quality of the current target license plate is proved to be, and the possibility of motion blur is small.
For another example, the user may select an edge image average intensity calculation formula with high accuracy to calculate the edge image average intensity of the target object. Specifically, the calculation formula of the average intensity of the edge image may be
S = (1/(P × Q)) · Σ(p=1..P) Σ(q=1..Q) |I1(p, q)|
wherein S is the average intensity of the edge image, I1 is the edge image of the target object, p and q are subscripts of pixels in the edge image of the target object, and P and Q respectively represent the width and the height of the edge image of the target object. The specific calculation process of I1 may be
I1 = I − F * I
where I is the current image, F is the low-pass filter, and * denotes convolution. That is, a blurred image of the target object is obtained by using the low-pass filter, and the blurred image is subtracted from the current image (i.e., the original image) to obtain the edge image of the target object. It is understood that, in general, the larger the edge image average intensity S, the sharper the edges of the target object and the smaller the possibility of motion blur. Of course, the user may also select another method to obtain the edge image of the target object, and calculate the average intensity of the edge image of the target object by using another edge image intensity calculation formula, which is not limited in this embodiment.
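A minimal sketch of the edge image average intensity calculation, assuming a simple k × k box filter as the low-pass filter F (the embodiment does not fix the filter type) and a 2-D grayscale array as input:

```python
import numpy as np

def edge_average_intensity(image, k=3):
    """Compute S as the mean absolute value of the edge image I1 = I - F(I).

    F is here a k x k box low-pass filter; both the filter choice and the
    edge-replication boundary handling are assumptions for illustration.
    """
    img = image.astype(np.float64)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")      # replicate borders for the filter
    blurred = np.zeros_like(img)
    for dy in range(k):                          # accumulate the k x k neighborhood
        for dx in range(k):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= k * k                             # low-pass (blurred) image F(I)
    edge = img - blurred                         # edge image I1
    return np.mean(np.abs(edge))                 # average intensity S
```

A perfectly flat image has no edges, so S is zero; the sharper the content, the larger S, matching the interpretation above.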
The embodiment does not limit the specific way of calculating the degree of motion blur by using the quality value and the average intensity of the edge image. For example, a mapping relationship between the quality value and the average intensity of the edge image and the degree of motion blur may be set, and the degree of motion blur corresponding to the currently calculated quality value and the average intensity of the edge image may be determined according to the mapping relationship. Or setting a function related to the quality value and the average intensity of the edge image, and calculating the motion blur degree corresponding to the current quality value and the average intensity of the edge image through the function. The specific form of the function is not limited in this embodiment. Further, since the larger the quality value C, the smaller the possibility of occurrence of motion blur; the larger the edge image average intensity S, the smaller the possibility of occurrence of motion blur. And then the motion blur degree is in negative correlation with the quality value and the average intensity of the edge image. Thus, the functional form may preferably be a decreasing function. Of course, the decreasing form of the decreasing function may be set by the user according to the actual situation to form a corresponding decreasing function, which is not limited in this embodiment.
Further, according to the above analysis, the quality value and the edge image average intensity each have a monotone negative correlation with the final motion blur degree. Therefore, in order to further improve the reliability of the set decreasing function, when calculating the motion blur degree by using the quality value and the edge image average intensity, the present embodiment may set a monotonically decreasing function; namely, the motion blur degree is calculated by using a monotonically decreasing function of the quality value and the edge image average intensity. The specific form of the monotonically decreasing function is not limited in this embodiment; the user can set it according to actual conditions. In order to ensure the computational efficiency and reliability of the motion blur degree calculation, preferably, this embodiment may utilize the monotonically decreasing function
M = w1 · e^(−C/σ1) + w2 · e^(−S/σ2)
to calculate the motion blur degree. Wherein M is the motion blur degree calculated according to the monotonically decreasing function; σ1 and σ2 are constants that can be adjusted according to actual requirements, and this embodiment does not limit their specific values; w1 and w2 can be understood as weight constants, with w1 + w2 = 1.
Of course, the correspondence relationship between the monotone decreasing function M and the quality value C and the average intensity S of the edge image in the present embodiment may be in other forms as long as it can satisfy that the monotone decreasing function M monotonously decreases with the quality value C and the average intensity S of the edge image.
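One possible monotonically decreasing form satisfying the stated constraints (w1 + w2 = 1, M decreasing in both C and S) is sketched below; the exact function shown in the patent figure is not reproduced here, so this particular form and the default constants are assumptions.

```python
import math

def motion_blur_degree(C, S, sigma1=1.0, sigma2=1.0, w1=0.5, w2=0.5):
    """Assumed monotonically decreasing form M = w1*e^(-C/s1) + w2*e^(-S/s2).

    C is the quality value, S the edge image average intensity; sigma1,
    sigma2 are tunable constants and w1, w2 are weights with w1 + w2 = 1.
    """
    assert abs(w1 + w2 - 1.0) < 1e-9  # the weights must sum to one
    return w1 * math.exp(-C / sigma1) + w2 * math.exp(-S / sigma2)
```

Any other form works equally well as long as M decreases monotonically in both C and S, as the text notes.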
Based on the above embodiment, the image processing method provided by the application can jointly calculate the motion blur degree through the quality value and the average intensity of the edge image, and can improve the reliability and accuracy of the motion blur degree. And the reliability and the accuracy of the motion blur degree are further improved by setting a monotone decreasing function, so that the reliability and the accuracy of whether the target object in the current image has the motion blur or not are improved.
Based on any of the above embodiments, when a plurality of target objects exist in the current image, the motion blur degree corresponding to each target object needs to be calculated, and then whether the target object in the current image has motion blur is determined according to the motion blur degree of each target object. It can be understood that the absence of motion blur of the target object in the current image can be indicated only if there is no motion blur of each target object in the current image. That is, only if the motion blur degree of each target object in the current image does not belong to the preset motion blur interval, it can be stated that the target object in the current image does not have motion blur. In this embodiment, a specific process of detecting whether a motion blur exists in a target object in a current image when a plurality of target objects exist in the current image is not limited.
For example, whether the target object in the current image has motion blur may be determined by means of separate comparison. The specific process may be to calculate the motion blur degree of each target object in the current image (at this time, the motion blur degree of each target object may be calculated at the same time, or the motion blur degree of each target object may be calculated separately), then compare whether each motion blur degree belongs to a preset motion blur interval, and prove that the target object in the current image does not have motion blur only when the motion blur degree of each target object in the current image does not belong to the preset motion blur interval; i.e. simultaneous calculation, respectively comparison.
For another example, whether the target object in the current image has motion blur may also be determined by sequential comparison. The specific process can be that the motion blur degree of each target object in the current image is calculated in sequence, then whether each motion blur degree belongs to a preset motion blur interval is compared in sequence, and only when the motion blur degree of each target object in the current image does not belong to the preset motion blur interval, the fact that the target object in the current image does not have motion blur is proved; specifically, only when the motion blur degree of the previous target object does not belong to the preset motion blur interval, the motion blur degree of the next target object can be calculated and whether the motion blur degree belongs to the preset motion blur interval or not can be judged. If the motion blur degree of one target object belongs to the preset motion blur interval, directly finishing the calculation and indicating that the target object in the current image has motion blur; only when the motion blur degree of each target object in the current image does not belong to a preset motion blur interval, the fact that the target object in the current image does not have motion blur is proved; namely calculation in sequence and comparison in sequence.
For another example, whether the target object in the current image has motion blur may also be determined by performing joint calculation and comparing the target object with the current image. The specific process may be that after the motion blur degrees of the target objects in the current image are obtained through calculation (at this time, the motion blur degrees of the target objects may be simultaneously calculated, or the motion blur degrees of the target objects may be calculated respectively), only the maximum motion blur degree of the motion blur degrees of the target objects in the current image is selected to be compared with the preset motion blur interval, and as long as the maximum motion blur degree does not belong to the preset motion blur interval, it is proved that the motion blur degree of each target object in the current image does not belong to the preset motion blur interval, that is, the target objects in the current image do not have motion blur.
Further, in order to improve the efficiency of detecting whether the target object in the current image has motion blur, preferably, if a plurality of target objects exist in the current image, determining whether the motion blur degree belongs to the preset motion blur interval may include: selecting the maximum motion blur degree in the motion blur degrees of all target objects in the current image; and judging whether the maximum motion blur degree belongs to a preset motion blur interval. That is, as long as the maximum motion blur degree does not belong to the preset motion blur interval, it is proved that the motion blur degree of each target object in the current image does not belong to the preset motion blur interval, that is, the target object in the current image does not have motion blur. That is, it is possible to determine whether there is motion blur in the target object in the current image only by comparing the maximum degree of motion blur with the preset motion blur section once.
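The single-comparison check for multiple target objects can be sketched as follows, modeling the preset motion blur interval as [threshold, +inf) — an assumption under which comparing only the maximum degree is indeed sufficient:

```python
def targets_blur_free(blur_degrees, blur_threshold):
    """Return True when no target object in the current image has motion blur.

    A target is considered blurred when its motion blur degree falls in the
    preset motion blur interval, modeled here as [blur_threshold, +inf);
    one comparison on the maximum degree then decides for all targets.
    """
    return max(blur_degrees) < blur_threshold  # single comparison for all targets
```

This is the "joint calculation, single comparison" variant; the separate-comparison and sequential-comparison variants described above would instead loop over `blur_degrees`.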
Based on the above embodiments, the image processing method provided by the present application may compare the maximum motion blur degree with the preset motion blur interval when a plurality of target objects exist in the current image, and directly determine whether the target objects in the current image have motion blur according to the comparison result. Namely, whether the target object in the current image has motion blur can be determined only through one-time judgment, and the efficiency of detecting whether the target object in the current image has motion blur is improved.
Based on any of the above embodiments, if it is detected that the target object in the current image has motion blur, that is, the target object in the current image exhibits a trailing (smearing) phenomenon, a faster (i.e., shorter) target shutter time is required to eliminate the motion blur of the target object. The shutter parameters of the imaging device are set according to the obtained target shutter time, so that the imaging device subsequently acquires images using the target shutter time, and motion blur of the target object in the images is avoided by the faster shutter. The calculation method of the target shutter time is not limited in this embodiment.
For example, the moving speed of the target object in the current image is calculated, and the corresponding target shutter time is determined according to the moving speed. Or calculating the illumination intensity corresponding to the current image, and determining the corresponding target shutter time according to the illumination intensity. Of course, the target shutter time may also be calculated according to the motion blur degree of the target object in the current image (the calculation process of the specific motion blur degree is not limited in this embodiment, and the related content of the motion blur degree calculation in other embodiments may be referred to). The present embodiment also does not limit the method for determining the target shutter time according to the parameters (i.e. the motion speed, the illumination intensity, and the motion blur degree) in the above several calculation manners. That is, the method for determining the corresponding target shutter time according to the motion speed, or the illumination intensity, or the motion blur degree is not limited in this embodiment. For example, a mapping relationship between each value of each parameter and the target shutter time may be set, and the target shutter time may be determined based on the mapping relationship. Of course, the mapping relationship may be a one-to-one mapping (e.g., one motion speed corresponds to one target shutter time) or an interval mapping (e.g., one motion speed interval corresponds to one target shutter time). It can be understood that, in this embodiment, the specific mapping relationship corresponding to each parameter is not limited, and the corresponding mapping relationship may be set by the user according to the specific condition of the selected parameter.
Further, since the motion blur degree is considered comprehensively, the blur degree of the target object can be measured comprehensively, and therefore, in order to improve the accuracy and reliability of the calculation of the target shutter time, it is preferable that the target shutter time is calculated according to the motion blur degree of the target object in the current image in the present embodiment.
However, the present embodiment does not limit the manner of calculating the target shutter time according to the motion blur degree. The target shutter time may be calculated, for example, from a preset mapping relationship of the degree of motion blur and the target shutter time. Or calculating the target shutter time according to the deviation degree of the motion blur degree from a preset motion blur threshold value; the larger the degree of deviation at this time, the smaller the value of the target shutter time obtained, i.e., the faster the target shutter time. Of course, the calculation method of the deviation degree is not limited in this embodiment. For example, the ratio of the motion blur degree to a preset motion blur threshold value can be used as the deviation degree; the difference between the motion blur degree and a preset motion blur threshold may be used as the deviation degree. Or calculating a dynamic adjustment shutter coefficient according to the motion blur degree and a preset motion blur threshold value, and then calculating a target shutter time according to the current shutter time and the dynamic adjustment shutter coefficient.
Further, in order to ensure the accuracy and reliability of the target shutter time calculation and to realize a refined adaptive shutter adjustment process, referring to fig. 4, preferably, in this embodiment, calculating the target shutter time according to the motion blur degree of the target object in the current image may include:
S401, calculating a dynamic adjustment shutter coefficient according to the motion blur degree of the target object in the current image and a preset motion blur threshold value.
S402, calculating the target shutter time according to the current shutter time and the dynamic adjustment shutter coefficient.
It should be noted that the calculation process of the dynamic adjustment shutter coefficient is not limited in this embodiment. For example, the ratio of the preset motion blur threshold to the motion blur degree can be used as the dynamic adjustment shutter coefficient; or the reciprocal of the difference between the motion blur degree and the preset motion blur threshold can be used as the dynamic adjustment shutter coefficient; the dynamic adjustment shutter coefficient R may also be calculated by the formula R = 1/[K × (M − N) + b], where K is a linear coefficient (which can be understood as a slope) and b is a constant, so that fine adjustment can be performed through K and b to improve the accuracy of the dynamic adjustment shutter coefficient, M is the motion blur degree, and N is the preset motion blur threshold; the present embodiment does not limit the values of K, b and N.
In this embodiment, after the dynamic adjustment shutter coefficient is obtained through calculation, the product of the current shutter time and the dynamic adjustment shutter coefficient may be used as the target shutter time, namely Sa = Sc × R; wherein R is the dynamic adjustment shutter coefficient, Sc is the current shutter time, and Sa is the target shutter time.
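Steps S401-S402 with the coefficient formula R = 1/[K × (M − N) + b] can be combined into one small routine; the default values of K, b and the threshold N used below are placeholders, since the embodiment does not limit them.

```python
def target_shutter_time(current_shutter, M, N=1.0, K=1.0, b=1.0):
    """S401-S402 sketch: R = 1/(K*(M - N) + b), then Sa = Sc * R.

    M is the motion blur degree, N the preset motion blur threshold;
    K, b and the default N are illustrative placeholders.
    """
    R = 1.0 / (K * (M - N) + b)   # S401: dynamic adjustment shutter coefficient
    return current_shutter * R    # S402: Sa = Sc * R
```

With K = b = 1, a target at the blur threshold (M = N) gives R = 1 and leaves the shutter unchanged, while stronger blur (M > N) gives R < 1 and a faster shutter, which is the intended direction of adjustment.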
Based on the above embodiments, the image processing method provided by the present application can avoid the motion blur phenomenon of the target object by adaptively adjusting the shutter time if the motion blur of the target object in the current image is detected; further, the shutter time is adjusted in a self-adaptive mode by dynamically adjusting the shutter coefficient, the accuracy and the reliability of target shutter time calculation can be guaranteed, and a refined self-adaptive shutter adjusting process can be achieved.
Based on the above embodiments, in this embodiment the reliability of the adaptive shutter adjustment is ensured by controlling the intensity of a single shutter time adjustment. That is, preferably, after calculating the target shutter time, this embodiment may further include:
judging whether the difference value between the target shutter time and the current shutter time is greater than the preset maximum adjustment intensity;
and if so, taking the preset maximum adjustment intensity as the target shutter time.
In this embodiment, by setting the preset maximum adjustment intensity, when the difference between the target shutter time and the current shutter time is not greater than the preset maximum adjustment intensity, the shutter parameter of the imaging device is adjusted according to the target shutter time. When the difference between the target shutter time and the current shutter time is greater than the preset maximum adjustment intensity, the current shutter adjustment intensity is considered to be too large, and therefore the preset maximum adjustment intensity is directly used as the target shutter time, namely the shutter parameters of the imaging device are adjusted by using the shutter time corresponding to the preset maximum adjustment intensity. Of course, in this embodiment, the specific value of the preset maximum adjustment strength is not limited, and the user may set the adjustment strength according to actual situations.
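One reading of the single-adjustment limit is to clamp the per-step change of the shutter time to the preset maximum adjustment intensity; the sketch below implements that interpretation of the text above, which is an assumption rather than a literal transcription.

```python
def clamp_shutter_adjustment(current, target, max_step):
    """Limit the strength of a single shutter adjustment.

    Interpreted here as clamping the per-step change |Sa - Sc| to the
    preset maximum adjustment intensity max_step.
    """
    delta = target - current
    if abs(delta) > max_step:                     # adjustment intensity too large
        delta = max_step if delta > 0 else -max_step
    return current + delta                        # adjusted target shutter time
```

When the requested change is within the limit, the target shutter time passes through unchanged; otherwise the shutter moves toward the target by at most `max_step` per adjustment.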
Based on the above embodiments, the image processing method provided by the present application can avoid motion blur of the target object by adaptively adjusting the shutter time when motion blur of the target object is detected in the current image; that is, the shutter is dynamically adjusted so that images acquired by the imaging device are free of motion blur. Further, the reliability of the adaptive shutter adjustment is ensured by limiting the intensity of a single shutter-time adjustment.
Based on any of the above embodiments: after the motion blur of the target object in the current image has been overcome by adjusting the shutter time, the amount of light entering the imaging device is reduced because the shutter time has been shortened. Particularly when ambient light is weak, the image quality is strongly affected after the shutter adjustment; that is, once the motion blur has been overcome, insufficient light becomes the main cause of reduced image quality. Therefore, this embodiment may determine whether the ambient light under the current shooting condition is sufficient according to the brightness and the gain of the current image. If the current ambient light is sufficient, the image quality of the current image is not significantly reduced. If the current ambient light is insufficient, the image quality of the current image may be reduced, so, to guarantee image quality, the step of judging whether the image quality of the current image satisfies the preset image quality condition needs to be performed. Specifically, for low-illumination images, this embodiment may select noise as the measure of image quality; whether the image quality of the current image satisfies the preset image quality condition may then be determined, for example, by calculating the noise average intensity. Referring to fig. 5, preferably, judging whether the image quality of the current image satisfies the preset image quality condition in this embodiment may include:
S501, if the brightness of the current image does not belong to a preset standard image brightness interval and the gain of the current image does not belong to a preset standard image gain interval, calculating the noise average intensity of the current image.
S502, judging whether the average noise intensity is larger than a preset noise tolerance threshold value or not; if not, the process proceeds to step S503, and if yes, the process proceeds to step S504.
S503, determining that the image quality of the current image meets a preset image quality condition.
S504, determining that the image quality of the current image does not meet the preset image quality condition.
In this embodiment, if the brightness of the current image does not belong to the preset standard image brightness interval and the gain of the current image does not belong to the preset standard image gain interval, it means that the ambient illumination is insufficient under the current shooting condition. At this time, the noise average intensity of the current image needs to be calculated, and then whether the image quality of the current image meets the preset image quality condition can be determined according to the noise average intensity of the current image. Specifically, if the average noise intensity of the current image is not greater than a preset noise tolerance threshold, determining that the image quality of the current image meets a preset image quality condition; and if the average noise intensity of the current image is greater than a preset noise tolerance threshold, determining that the image quality of the current image does not meet a preset image quality condition.
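Steps S501–S504 can be sketched as the following decision routine. The interval bounds and noise tolerance value are illustrative placeholders, since the patent leaves their specific values to the user:

```python
def image_quality_ok(brightness, gain, noise_mean,
                     std_brightness=(40.0, 220.0),   # preset standard brightness interval (illustrative)
                     std_gain=(0.0, 12.0),           # preset standard gain interval (illustrative)
                     noise_tolerance=8.0):           # preset noise tolerance threshold (illustrative)
    """Return True if the current image satisfies the preset image quality condition."""
    in_brightness = std_brightness[0] <= brightness <= std_brightness[1]
    in_gain = std_gain[0] <= gain <= std_gain[1]
    if in_brightness or in_gain:
        # Ambient light is judged sufficient; quality is not noticeably reduced.
        return True
    # Insufficient light: fall back to the noise criterion (S501-S504).
    return noise_mean <= noise_tolerance
```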
In this embodiment, the specific values of the preset standard image brightness interval, the preset standard image gain interval, and the preset noise tolerance threshold are not limited; the user may determine them according to the actual application scenario and the available hardware computing capacity.
It should be noted that, in this embodiment, the noise average intensity of the current image may refer to the noise average intensity of the entire current image, or to the noise average intensity of the image region corresponding to the target object; this embodiment does not limit which is used. Further, this embodiment does not limit the calculation process of the noise average intensity of the current image. For example, a high-accuracy noise average intensity calculation formula may be selected. The following description takes the image region corresponding to the target object in the current image as an example. Specifically, the noise average intensity calculation formula may be
Zc = (1 / (M × N)) × Σ(m=1..M) Σ(n=1..N) |I2(m, n)|
where Zc is the noise average intensity (here, the noise average intensity of the target object noise image), I2 is the target object noise image, m and n are the subscripts of a pixel in the target object noise image, and M and N respectively represent the width and height of the target object noise image. The specific calculation of I2 may be
I2 = I − G ∗ I
where I is the current image and G is a Gaussian filter. That is, the image corresponding to the target object after noise reduction with the Gaussian filter is subtracted from the current image (i.e., the original image) to obtain the target object noise image. It can be understood that, in general, the larger the noise average intensity Zc, the worse the image quality. Of course, the user may obtain the target object noise image in other ways and compute its noise average intensity with other formulas; this embodiment does not limit this.
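As a sketch of this noise measure, the residual I2 = I − G∗I is computed and its mean absolute value taken as Zc. The numpy-only separable kernel below is a stand-in for the Gaussian filter G; a production system might use a tuned denoiser (e.g. scipy.ndimage.gaussian_filter) instead:

```python
import numpy as np

def noise_mean_intensity(patch, sigma=1.0):
    """Zc = (1/(M*N)) * sum |I2(m, n)| for a target-object patch,
    where I2 = patch - gaussian(patch)."""
    # Build a small normalised 1-D Gaussian kernel.
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    k /= k.sum()
    # Separable smoothing: pad with edge values, filter rows, then columns.
    padded = np.pad(patch.astype(float), radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, padded)
    smooth = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)
    smooth = smooth[radius:-radius, radius:-radius]
    # I2 is the noise image; Zc is its mean absolute intensity.
    noise_image = patch.astype(float) - smooth
    return np.abs(noise_image).mean()

# Demo patches: a flat region has Zc ~ 0, a noisy region has larger Zc.
flat_patch = np.full((8, 8), 100.0)
noisy_patch = flat_patch + np.random.default_rng(0).normal(0.0, 10.0, (8, 8))
```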
Based on the above embodiment, the image processing method provided by the application can improve the efficiency of judging whether the image quality of the current image meets the preset image quality condition, and further improve the image processing efficiency.
Based on any of the above embodiments, this embodiment improves the reliability of the processing operation in order to ensure image quality. Image quality can be improved by a preset image processing program and by an external fill light. In this embodiment, when the image quality of the current image does not satisfy the preset image quality condition, a suitable image quality optimization processing operation may be selected according to the image quality of the current image. For example, if the image quality of the current image is close to the image quality required by the user (i.e., the difference between the two falls within a preset range), the preset image processing program alone may be selected to perform the image quality optimization processing operation on the current image and/or subsequent images in the current shooting scene. If the image quality of the current image differs greatly from the image quality required by the user (i.e., the difference does not fall within the preset range), the preset image processing program and the external fill light may be selected simultaneously to perform the image quality optimization processing operation on the current image and/or subsequent images in the current shooting scene. Alternatively, whenever the image quality of the current image does not satisfy the preset image quality condition, the preset image processing program and the external fill light may be selected simultaneously to perform the image quality optimization processing operation on the current image and/or subsequent images in the current shooting scene.
It should be noted that, when the external fill light is used to perform the fill-light operation, this embodiment does not limit how the corresponding target fill-in light intensity is determined. For example, fill light may be applied at a fixed target intensity; the target fill-in light intensity may be determined according to the image quality of the current image; or it may be determined according to the difference between the image quality of the current image and the image quality required by the user.
Further, in order to guarantee the image quality after the image quality optimization processing operation, preferably, when the image quality of the current image does not satisfy the preset image quality condition, this embodiment selects the preset image processing program and the external fill light simultaneously to perform the image quality optimization processing operation. To improve the fill-light effect of the external fill light, this embodiment determines the target fill-in light intensity according to the image quality of the current image. Preferably, triggering the image quality optimization processing operation in this embodiment may include:
and executing a preset image processing program, and determining the intensity of the target fill-in light according to the image quality of the current image.
In this embodiment, the preset image processing program is not limited; the user may determine it according to the type of the target object or the actual application scenario. This embodiment also does not limit the specific manner of determining the intensity of the target fill-in light according to the image quality of the current image. For example, a mapping relationship between image quality and target fill-in light intensity may be set, and the target fill-in light intensity determined according to this mapping and the image quality of the current image. The mapping may be one-to-one (e.g., one image quality corresponds to one target fill-in light intensity) or interval-based (e.g., one image quality interval corresponds to one target fill-in light intensity). Alternatively, the target fill-in light intensity may be determined according to the quality deviation degree between the image quality of the current image and the image quality required by the user. It can be understood that the larger the quality deviation degree, the larger the target fill-in light intensity. This embodiment does not limit how the quality deviation degree is calculated. For example, it may be the difference between the image quality of the current image and the image quality required by the user, or the ratio of the two.
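The interval-mapping variant mentioned above can be sketched as follows, using the noise average intensity as the quality score (higher noise = worse quality = stronger fill light). All interval bounds and intensity levels are hypothetical:

```python
import bisect

# Hypothetical interval mapping: noise-mean-intensity interval -> fill-light level.
QUALITY_EDGES = [5.0, 10.0, 20.0]        # interval boundaries for the quality score
FILL_LEVELS = [0.0, 25.0, 60.0, 100.0]   # one fill-light intensity per interval

def fill_intensity_from_mapping(noise_mean):
    """Interval-mapping variant: each quality interval maps to one intensity."""
    # bisect_right picks the interval index the score falls into.
    return FILL_LEVELS[bisect.bisect_right(QUALITY_EDGES, noise_mean)]
```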
Further, if the noise average intensity of the current image is used to characterize its image quality, the target fill-in light intensity may be calculated as follows: set a mapping relationship between the noise average intensity of the current image and the target fill-in light intensity, and then determine the target fill-in light intensity according to this mapping and the noise average intensity of the current image; or calculate the target fill-in light intensity from the noise average intensity of the current image and the preset noise tolerance threshold. This embodiment does not limit the specific calculation process. For example, the target fill-in light intensity may be calculated from the difference between the noise average intensity of the current image and the preset noise tolerance threshold. Specifically, the formula L = k1 × (Zc − Zr) may be used, where L is the target fill-in light intensity, k1 is a positive adjustment coefficient that the user may set according to actual requirements, Zc is the noise average intensity of the current image, and Zr is the preset noise tolerance threshold. It can be understood that the amount by which the fill light should be adjusted is the calculated target fill-in light intensity minus the current fill-light intensity, and that the larger Zc − Zr is, the greater the target fill-in light intensity.
Of course, the intensity of the target fill-in light can also be calculated according to the ratio of the average noise intensity of the current image to the preset noise tolerance threshold value. This embodiment is not limited to this.
Based on the above embodiment, the image processing method provided by the present application can improve the effect of the image quality optimization processing operation: the required target fill-in light intensity is determined according to the image quality of the current image, avoiding overexposure caused by an excessive target fill-in light intensity.
Based on the above embodiments, in order to further improve the reliability of the target fill-in light intensity, this embodiment also considers the brightness of the brightest area in the current image when calculating the target fill-in light intensity, so as to avoid overexposing the brightest area with an excessive fill-in light intensity. Referring to fig. 6, preferably, calculating the target fill-in light intensity according to the noise average intensity of the current image and the preset noise tolerance threshold may include:
S601, judging whether the brightness of the brightest area in the current image is larger than a preset brightness upper limit; if yes, the process proceeds to step S602, and if no, the process proceeds to step S603.
S602, calculating the intensity of a first fill-in light according to the average noise intensity of the current image and a preset noise tolerance threshold, calculating the intensity of a second fill-in light according to the brightness of the brightest area and a preset brightness upper limit, and taking the difference value of the intensity of the first fill-in light and the intensity of the second fill-in light as the intensity of a target fill-in light.
S603, calculating the intensity of the first fill-in light according to the average noise intensity of the current image and a preset noise tolerance threshold value, and taking the intensity of the first fill-in light as the intensity of the target fill-in light.
In this embodiment, the calculation methods of the first and second fill-in light intensities are not limited. For example, the intensity of the first fill-in light can be calculated as k1 × (Zc − Zr), or from the ratio of the noise average intensity to the preset noise tolerance threshold. The intensity of the second fill-in light can be calculated from the difference between the brightness of the brightest area and the preset brightness upper limit (for example, as k2 × (Lm − Ly), where k2 is a positive adjustment coefficient that the user can set according to actual requirements, Lm is the brightness of the brightest area, and Ly is the preset brightness upper limit); of course, it may also be calculated from the ratio of the brightness of the brightest area to the preset brightness upper limit. This embodiment does not limit the specific value of the preset brightness upper limit; the user may set or modify it according to the actual situation.
The entire calculation process in this embodiment can be expressed by the formula L = k1 × (Zc − Zr) − k2 × max(0, Lm − Ly), where k1 and k2 are positive adjustment coefficients that can be set according to actual requirements; their specific values are not limited in this embodiment. For example, k1 = 20 and k2 = 5 may be used here. It can be understood that, when the image quality optimization processing operation needs to be triggered, the target fill-in light intensity can be calculated from Zc − Zr and Lm. The larger Zc − Zr is, the stronger the target fill-in light. But when Lm exceeds Ly, in order to avoid overexposure (e.g., local overexposure of the brightest area), the target fill-in light intensity needs to be reduced until the brightness Lm of the brightest area falls below Ly. Of course, the above formula is only one specific form; the correspondence between L, Zc, and Lm may take other forms, as long as L is monotonically increasing in Zc and monotonically decreasing in Lm.
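The complete formula, with the example coefficients k1 = 20 and k2 = 5 given in the text, can be written directly. Variable names are illustrative, and whether a negative result should be clamped to zero is left open by the text:

```python
def target_fill_in_intensity(zc, zr, lm, ly, k1=20.0, k2=5.0):
    """L = k1*(Zc - Zr) - k2*max(0, Lm - Ly).

    zc: noise average intensity of the current image
    zr: preset noise tolerance threshold
    lm: brightness of the brightest area
    ly: preset brightness upper limit
    """
    return k1 * (zc - zr) - k2 * max(0.0, lm - ly)
```

A negative L indicates the fill light should be weakened; an implementation might clamp it at zero or at the hardware's minimum intensity.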
It should be noted that this embodiment does not limit the size of the brightest area. For example, every 4 pixels in the current image may be taken as one area, and the brightness of the brightest area then determined. In this case, the brightness of an area may be the average brightness of its pixels, or the brightest area in the current image may be determined by the brightest pixel within each area.
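A sketch of the brightest-area computation, taking every 2×2 block (4 pixels) as one area as in the example above; both area-brightness definitions mentioned in the text (area mean or brightest pixel) are supported:

```python
import numpy as np

def brightest_area_luminance(img, block=2, use_mean=True):
    """Tile the image into block x block areas and return the brightness
    of the brightest area (area mean, or brightest pixel if use_mean=False)."""
    h, w = img.shape
    # Crop so the image tiles evenly into blocks.
    img = img[: h - h % block, : w - w % block].astype(float)
    tiles = img.reshape(img.shape[0] // block, block, img.shape[1] // block, block)
    per_area = tiles.mean(axis=(1, 3)) if use_mean else tiles.max(axis=(1, 3))
    return per_area.max()

# Demo: one bright pixel (200) in an otherwise dark 4x4 image.
demo = np.zeros((4, 4))
demo[0, 0] = 200.0
```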
Based on the above embodiment, the image processing method provided by the present application can improve the effect of the image quality optimization processing operation and further improve the reliability of the target fill-in light intensity: when calculating the target fill-in light intensity, the brightness of the brightest area in the current image is also considered, avoiding overexposure of the brightest area caused by an excessive fill-in light intensity.
Based on any of the above embodiments, please refer to fig. 7, after triggering the image quality optimization processing operation in this embodiment, the method may further include:
S701, judging whether the image quality of the current image after the image quality optimization processing operation is lower than a preset image quality lower limit condition; if yes, the process proceeds to step S702. If not, the image processing operation ends, or the next image acquired by the imaging device is processed directly. That is, this embodiment does not limit the operation performed when the image quality after the image quality optimization processing operation is not lower than the preset image quality lower limit condition.
S702, fixing all preset parameters of the imaging device to be corresponding preset values so that the imaging device can shoot with the fixed preset parameters in the current shooting scene.
The present embodiment does not limit the preset image quality lower limit condition. The setting process may refer to the setting process of the preset image quality condition in each of the above-described related embodiments. It should be noted that the preset image quality lower limit condition cannot be higher than the requirement of the preset image quality condition, and generally the preset image quality lower limit condition is lower than the requirement of the preset image quality condition.
In this embodiment, if the image quality of the current image after the image quality optimization processing operation is still lower than the preset image quality lower limit condition, it is indicated that the current image processing process has no good effect on the current shooting scene, and subsequently, in order to save hardware resources, the image processing methods provided in the above embodiments may not be executed. Only the preset parameters of the imaging device need to be fixed to the corresponding preset values, so that the imaging device can shoot in the current shooting scene with the fixed preset parameters. In this embodiment, the type of the preset parameter is not limited, and for example, the preset parameter may include a shutter, a preset fill light, and the like, and the preset parameter may be set by a user according to actual needs. In this embodiment, the specific value of each preset parameter corresponding to the preset value is not limited. For example, the user may take the value of each preset parameter of the imaging device in the energy-saving state as a fixed corresponding preset value. The numerical value of each preset parameter when the imaging device collects the image last time may also be used as a fixed corresponding preset numerical value, or of course, may also be a default numerical value of each preset parameter.
Further, in order to improve the reliability of determining whether the image quality of the current image after the image quality optimization processing operation is lower than the preset image quality lower limit condition, preferably, the determination may include:
judging whether the brightness of the current image subjected to the image quality optimization processing operation belongs to a preset image brightness lower limit interval or not and whether the gain of the current image subjected to the image quality optimization processing operation belongs to a preset image gain upper limit interval or not;
and if the brightness of the current image after the image quality optimization processing operation belongs to a preset image brightness lower limit interval or the gain of the current image after the image quality optimization processing operation belongs to a preset image gain upper limit interval, determining that the image quality of the current image after the image quality optimization processing operation is lower than a preset image quality lower limit condition.
The present embodiment does not limit the specific values of the preset image brightness lower limit interval and the preset image gain upper limit interval. The user can set according to actual conditions. If the brightness of the current image after the image quality optimization processing operation belongs to the preset image brightness lower limit interval or the gain of the current image after the image quality optimization processing operation belongs to the preset image gain upper limit interval, it indicates that the image quality of the current image is very poor, and at this time, the image quality optimization processing operation or the adaptive shutter adjusting operation does not need to be executed on the current image. Or on the basis, the default parameters of the imaging device are restored, for example, each preset parameter of the imaging device is fixed to a corresponding preset value, so that the imaging device performs shooting with each fixed preset parameter in the current shooting scene.
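The lower-limit determination can be sketched as follows; the interval values are illustrative, since the embodiment does not limit them:

```python
def below_quality_lower_limit(brightness, gain,
                              low_brightness=(0.0, 20.0),   # preset image brightness lower-limit interval (illustrative)
                              high_gain=(30.0, 48.0)):      # preset image gain upper-limit interval (illustrative)
    """After the optimization step: the image quality is judged to be below the
    preset lower-limit condition if the brightness falls in the lower-limit
    interval OR the gain falls in the upper-limit interval."""
    in_low_brightness = low_brightness[0] <= brightness <= low_brightness[1]
    in_high_gain = high_gain[0] <= gain <= high_gain[1]
    return in_low_brightness or in_high_gain
```

When this returns True, the embodiment fixes the imaging device's preset parameters to their preset values instead of continuing the adaptive processing.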
Based on the above embodiments, the image processing method provided by the present application can avoid invalid image processing operations, such as adaptive shutter adjustment operation, image quality optimization processing operation, and the like, under the condition that the image quality of the current image is poor, and save hardware computing resources.
It is understood that the above embodiments are merely some examples, and that the above embodiments may be arbitrarily combined to form a new combined embodiment unless they are necessarily not combined. For example, the solution of fig. 7 may be added to the corresponding position of any one of the solutions of fig. 1 to 6.
The following describes an image processing apparatus, an electronic device, and a computer-readable storage medium provided in an embodiment of the present application, and the image processing apparatus, the electronic device, and the computer-readable storage medium described below and the image processing method described above may be referred to correspondingly.
Referring to fig. 8, fig. 8 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure; the image processing apparatus may include:
an image quality determining module 100, configured to determine whether an image quality of a current image meets a preset image quality condition when it is detected that a target object in the current image does not have motion blur;
the image quality optimization module 200 is configured to trigger an image quality optimization processing operation if the image quality of the current image does not meet a preset image quality condition.
Based on the above embodiment, the apparatus may further include:
the target object judging module is used for judging whether a target object exists in the current image or not;
and the motion blur detection module is used for detecting whether the target object in the current image has motion blur or not if the target object exists in the current image.
Based on the above embodiments, the motion blur detection module may include:
the motion blur calculation unit is used for calculating the motion blur degree of the target object in the current image;
the state judging unit is used for judging whether the motion blur degree belongs to a preset motion blur interval or not; and if not, determining that the target object in the current image has no motion blur.
Based on the above embodiment, the motion blur calculation unit may include:
the motion blur parameter calculating subunit is used for calculating the quality value of the target object in the current image and the average intensity of the edge image;
and the motion blur calculation subunit is used for calculating the motion blur degree by using the quality value and the average intensity of the edge image.
Based on the above embodiment, the motion blur calculation subunit is specifically a subunit that calculates the degree of motion blur by using a monotonically decreasing function according to the quality value and the average intensity of the edge image.
Based on any of the embodiments described above, the state determination unit may include:
the data selection subunit is used for selecting the maximum motion blur degree from the motion blur degrees of all the target objects in the current image when a plurality of target objects exist in the current image;
and the state judgment subunit is used for judging whether the maximum motion blur degree belongs to a preset motion blur interval.
Based on any of the above embodiments, the apparatus may further include:
and the shutter time calculation module is used for calculating the target shutter time according to the motion blur degree of the target object in the current image when the target object in the current image is detected to have motion blur.
Based on the above embodiment, the shutter time calculation module may include:
the shutter coefficient calculation unit is used for calculating a dynamic adjustment shutter coefficient according to the motion blur degree of the target object in the current image and a preset motion blur threshold value;
and the shutter time calculating unit is used for calculating the target shutter time according to the current shutter time and the dynamic adjustment shutter coefficient.
Based on any of the above embodiments, the apparatus may further include:
the intensity judgment module is used for judging whether the difference value between the target shutter time and the current shutter time is greater than the preset maximum adjustment intensity;
and the shutter time adjusting module is used for taking the preset maximum adjusting intensity as the target shutter time if the difference value is greater than the preset maximum adjusting intensity.
Based on any of the above embodiments, the image quality determination module 100 may include:
the noise average intensity calculating unit is used for calculating the noise average intensity of the current image when the brightness of the current image does not belong to a preset standard image brightness interval and the gain of the current image does not belong to a preset standard image gain interval;
the noise judging unit is used for judging whether the average noise intensity is greater than a preset noise tolerance threshold value or not;
the image quality determining unit is used for determining that the image quality of the current image meets a preset image quality condition if the average noise intensity is not greater than a preset noise tolerance threshold; and if the average noise intensity is greater than a preset noise tolerance threshold, determining that the image quality of the current image does not meet a preset image quality condition.
Based on any of the above embodiments, the image quality optimization module 200 may include:
the first quality optimization submodule is used for executing a preset image processing program;
and the second quality optimization submodule is used for determining the intensity of the target fill-in light according to the image quality of the current image.
Based on the above embodiment, the second quality optimization submodule may include:
and the second quality optimization unit is used for calculating the intensity of the target fill-in light according to the noise average intensity of the current image and a preset noise tolerance threshold value.
Based on the above embodiment, the second quality optimization unit may include:
the brightness judgment subunit is used for judging whether the brightness of the brightest area in the current image is greater than a preset brightness upper limit;
the first fill-in light intensity calculation operator unit is used for calculating the intensity of a first fill-in light according to the average intensity of noise of the current image and a preset noise tolerance threshold value if the brightness of the brightest area is greater than a preset brightness upper limit, calculating the intensity of a second fill-in light according to the brightness of the brightest area and the preset brightness upper limit, and taking the difference value between the intensity of the first fill-in light and the intensity of the second fill-in light as the intensity of a target fill-in light;
and the second fill-in light intensity calculating operator unit is used for calculating the intensity of the first fill-in light according to the average noise intensity of the current image and a preset noise tolerance threshold value if the brightness of the brightest area is not greater than a preset brightness upper limit, and taking the intensity of the first fill-in light as the intensity of the target fill-in light.
Based on any of the above embodiments, the apparatus may further include:
the quality lower limit judging module is used for judging whether the image quality of the current image after the image quality optimization processing operation is lower than a preset image quality lower limit condition or not;
and the parameter fixing module is used for fixing each preset parameter of the imaging device to a corresponding preset value if the image quality of the current image after the image quality optimization processing operation is lower than the preset image quality lower limit condition, so that the imaging device performs shooting with the fixed preset parameters in the current shooting scene.
Based on the above embodiment, the quality lower limit determining module may include:
a quality interval judgment unit, configured to judge whether the brightness of the current image after the image quality optimization processing operation belongs to a preset image brightness lower limit interval and whether the gain of the current image after the image quality optimization processing operation belongs to a preset image gain upper limit interval;
and the image quality lower limit determining unit is used for determining that the image quality of the current image after the image quality optimization processing operation is lower than the preset image quality lower limit condition if the brightness of the current image after the image quality optimization processing operation belongs to the preset image brightness lower limit interval or the gain of the current image after the image quality optimization processing operation belongs to the preset image gain upper limit interval.
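As a minimal sketch, the test performed by these two units reduces to an OR of two interval checks; the interval endpoints below are placeholders, not values from the embodiment.

```python
def below_quality_floor(luma, gain,
                        luma_floor=(0, 20), gain_ceiling=(90, 100)):
    """The image is deemed below the quality lower limit if its brightness
    falls in the preset brightness lower-limit interval OR its gain falls in
    the preset gain upper-limit interval. Interval values are placeholders."""
    in_luma_floor = luma_floor[0] <= luma <= luma_floor[1]
    in_gain_ceiling = gain_ceiling[0] <= gain <= gain_ceiling[1]
    return in_luma_floor or in_gain_ceiling
```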
It should be noted that, based on any of the above embodiments, the apparatus may be implemented with a programmable logic device, such as an FPGA, a CPLD, a single-chip microcomputer, or a processor. Such a programmable logic device may be provided in an electronic device, for example an imaging device.
An embodiment of the present application further provides an electronic device, including: a memory for storing a computer program; and a processor for implementing the steps of the image processing method of any of the above embodiments when executing the computer program. Other components of the electronic device are not limited in this embodiment and may follow the configuration of existing electronic devices; for example, the electronic device may further include a camera, a shutter (such as a rolling shutter or a mechanical shutter), an external fill light, and other imaging-related components. In this case, the electronic device is an imaging device with image processing capability. The imaging device itself is likewise not limited in this embodiment; it may be, for example, a surveillance camera, a terminal with a shooting function, or an image pickup device such as a camera.
The embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the image processing method according to any of the above embodiments are implemented. The computer-readable storage medium may include any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts among the embodiments may be referred to one another. Since the apparatus disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively brief, and the relevant points can be found in the description of the method.
Those of skill would further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The image processing method, the image processing apparatus, the electronic device, and the computer-readable storage medium provided by the present application are described in detail above. The principles and embodiments of the present application are explained herein using specific examples, which are provided only to help understand the method and the core idea of the present application. It should be noted that those skilled in the art can make several improvements and modifications to the present application without departing from its principle, and such improvements and modifications also fall within the scope of the claims of the present application.

Claims (15)

1. An image processing method, comprising:
when detecting that the target object in the current image has no motion blur, judging whether the image quality of the current image meets a preset image quality condition;
and if the image quality of the current image does not meet the preset image quality condition, triggering image quality optimization processing operation.
2. The image processing method according to claim 1, further comprising:
judging whether a target object exists in the current image or not;
and if so, detecting whether the target object in the current image has motion blur.
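Purely as an illustrative sketch of how claims 1, 2, and 6 chain together, the top-level flow can be written with placeholder callbacks standing in for the detection and optimization steps the claims name:

```python
def process_frame(image, detect_target, detect_motion_blur,
                  quality_ok, optimize_quality, adjust_shutter):
    """Top-level flow of claims 1, 2, and 6 in one sketch. The five callbacks
    are hypothetical stand-ins, not functions defined by the patent."""
    if not detect_target(image):
        return "no target"            # claim 2: nothing to evaluate
    if detect_motion_blur(image):
        adjust_shutter(image)         # claim 6: recompute the target shutter time
        return "shutter adjusted"
    if quality_ok(image):
        return "quality ok"           # claim 1: preset quality condition met
    optimize_quality(image)           # claim 1: trigger quality optimization
    return "quality optimized"
```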
3. The method according to claim 2, wherein the detecting whether the target object has motion blur in the current image comprises:
calculating the motion blur degree of the target object in the current image;
judging whether the motion blur degree belongs to a preset motion blur interval or not;
and if not, determining that the target object in the current image has no motion blur.
4. The image processing method according to claim 3, wherein the calculating the motion blur degree of the target object in the current image comprises:
calculating the quality value of the target object in the current image and the average intensity of the edge image;
and calculating the motion blur degree by using the quality value and the average intensity of the edge image.
5. The image processing method according to claim 4, wherein the calculating the motion blur degree using the quality value and the edge image average intensity comprises:
and calculating the motion blur degree by using a monotone decreasing function according to the quality value and the average intensity of the edge image.
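Claim 5 only requires a function that decreases monotonically in both the quality value and the edge-image average intensity; the reciprocal form below is one illustrative choice, not the patented formula, and the `eps` guard and interval endpoint are assumptions.

```python
def motion_blur_degree(quality_value, edge_avg_intensity, eps=1e-6):
    """One monotone decreasing choice: sharper targets (higher quality value,
    stronger edges) yield a lower blur degree. Illustrative only."""
    return 1.0 / (quality_value * edge_avg_intensity + eps)

def has_motion_blur(blur_degree, blur_interval=(0.5, float("inf"))):
    """Claim 3: motion blur exists iff the degree falls inside a preset
    motion blur interval (endpoints here are placeholders)."""
    lo, hi = blur_interval
    return lo <= blur_degree <= hi
```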
6. The image processing method according to claim 1, further comprising, when it is detected that the target object in the current image has motion blur:
and calculating the target shutter time according to the motion blur degree of the target object in the current image.
7. The method according to claim 6, wherein said calculating a target shutter time based on the degree of motion blur of the target object in the current image comprises:
calculating a dynamic adjustment shutter coefficient according to the motion blur degree of the target object in the current image and a preset motion blur threshold value;
and calculating the target shutter time according to the current shutter time and the dynamic adjustment shutter coefficient.
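Claim 7 fixes only which quantities enter each step; the ratio form of the dynamic shutter coefficient below is an assumption chosen so that a stronger blur shortens the shutter proportionally.

```python
def target_shutter_time(current_shutter, blur_degree, blur_threshold):
    """Sketch of claim 7: derive a dynamic shutter coefficient from the preset
    motion blur threshold and the measured blur degree, then scale the current
    shutter time by it. The ratio form is an illustrative assumption."""
    coeff = blur_threshold / blur_degree if blur_degree > 0 else 1.0
    coeff = min(coeff, 1.0)  # never lengthen the shutter in response to blur
    return current_shutter * coeff
```

For example, a blur degree at twice the threshold halves the shutter time, while a degree at or below the threshold leaves it unchanged.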
8. The image processing method according to claim 1, wherein the determining whether the image quality of the current image satisfies a preset image quality condition comprises:
when the brightness of the current image does not belong to a preset standard image brightness interval and the gain of the current image does not belong to a preset standard image gain interval, calculating the noise average intensity of the current image;
judging whether the average noise intensity is greater than a preset noise tolerance threshold value or not;
if not, determining that the image quality of the current image meets a preset image quality condition;
and if so, determining that the image quality of the current image does not meet a preset image quality condition.
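A sketch of the branch structure of claim 8 follows. The high-pass residual used as a noise proxy, and every interval and threshold value, are illustrative assumptions; the claim itself does not specify how the average noise intensity is estimated.

```python
import numpy as np

def image_quality_ok(image, luma_interval=(100, 160), gain=None,
                     gain_interval=(0, 30), noise_tolerance=5.0):
    """Only when brightness and gain both fall outside their preset standard
    intervals is the average noise intensity estimated and compared with the
    preset noise tolerance threshold. All numeric values are placeholders."""
    luma = float(image.mean())
    luma_standard = luma_interval[0] <= luma <= luma_interval[1]
    gain_standard = gain is not None and gain_interval[0] <= gain <= gain_interval[1]
    if luma_standard or gain_standard:
        return True  # the noise test is only triggered outside both intervals
    # crude noise estimate: mean absolute horizontal first difference
    noise_avg = float(np.abs(np.diff(image.astype(np.float32), axis=1)).mean())
    return noise_avg <= noise_tolerance
```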
9. The image processing method according to claim 1, wherein the triggering an image quality optimization processing operation comprises:
and executing a preset image processing program, and determining the intensity of the target fill-in light according to the image quality of the current image.
10. The method of claim 9, wherein the determining a target fill light intensity according to the image quality of the current image comprises:
and calculating the intensity of the target fill-in light according to the average noise intensity of the current image and a preset noise tolerance threshold value.
11. The image processing method according to claim 1, further comprising, after the triggering the image quality optimization processing operation:
judging whether the image quality of the current image after the image quality optimization processing operation is lower than a preset image quality lower limit condition;
if so, fixing each preset parameter of the imaging device to a corresponding preset value, so that the imaging device shoots with the fixed preset parameters in the current shooting scene.
12. The method according to claim 11, wherein said determining whether the image quality of the current image after the image quality optimization processing operation is lower than a preset image quality lower limit condition comprises:
judging whether the brightness of the current image subjected to the image quality optimization processing operation belongs to a preset image brightness lower limit interval or not and whether the gain of the current image subjected to the image quality optimization processing operation belongs to a preset image gain upper limit interval or not;
and if the brightness of the current image after the image quality optimization processing operation belongs to the preset image brightness lower limit interval or the gain of the current image after the image quality optimization processing operation belongs to the preset image gain upper limit interval, determining that the image quality of the current image after the image quality optimization processing operation is lower than the preset image quality lower limit condition.
13. An image processing apparatus characterized by comprising:
the image quality judging module is used for judging whether the image quality of the current image meets a preset image quality condition or not when detecting that the target object in the current image does not have motion blur;
and the image quality optimization module is used for triggering image quality optimization processing operation if the image quality of the current image does not meet the preset image quality condition.
14. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the image processing method according to any one of claims 1 to 12 when executing the computer program.
15. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 12.
CN201910238073.7A 2019-03-27 2019-03-27 Image processing method and device, electronic equipment and readable storage medium Active CN111754410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910238073.7A CN111754410B (en) 2019-03-27 2019-03-27 Image processing method and device, electronic equipment and readable storage medium


Publications (2)

Publication Number Publication Date
CN111754410A true CN111754410A (en) 2020-10-09
CN111754410B CN111754410B (en) 2024-04-09

Family

ID=72671119

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910238073.7A Active CN111754410B (en) 2019-03-27 2019-03-27 Image processing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111754410B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113222981A (en) * 2021-06-01 2021-08-06 山东贝特建筑项目管理咨询有限公司 Processing method and system for anchor bolt image in heat-insulation board

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102131079A (en) * 2011-04-20 2011-07-20 杭州华三通信技术有限公司 Method and device for eliminating motion blur of image
US20140186017A1 (en) * 2012-12-27 2014-07-03 Canon Kabushiki Kaisha Optical apparatus and image capturing apparatus, and method of controlling the same and storage medium
CN107578439A (en) * 2017-07-19 2018-01-12 阿里巴巴集团控股有限公司 Generate the method, apparatus and equipment of target image
CN109447006A (en) * 2018-11-01 2019-03-08 北京旷视科技有限公司 Image processing method, device, equipment and storage medium




Similar Documents

Publication Publication Date Title
CN111028189B (en) Image processing method, device, storage medium and electronic equipment
CN109936698B (en) Automatic exposure control method and device, electronic equipment and storage medium
EP3046320B1 (en) Method for generating an hdr image of a scene based on a tradeoff between brightness distribution and motion
EP2368226B1 (en) High dynamic range image combining
JP3938833B2 (en) Exposure control device
JP5719418B2 (en) High dynamic range image exposure time control method
US10298853B2 (en) Image processing apparatus, method of controlling image processing apparatus, and imaging apparatus
US20110285871A1 (en) Image processing apparatus, image processing method, and computer-readable medium
JP6074254B2 (en) Image processing apparatus and control method thereof
US9071766B2 (en) Image capturing apparatus and control method thereof
JP2015005001A (en) Image signal processor, imaging device and image processing program
EP3293697A1 (en) Image processing device, imaging device, image processing method, and storage medium storing image processing program for image processing device
US11159740B2 (en) Image capturing device and control method thereof and medium
KR101754425B1 (en) Apparatus and method for auto adjusting brightness of image taking device
KR101972032B1 (en) Adaptive exposure control apparatus for a camera
CN112738411B (en) Exposure adjusting method, exposure adjusting device, electronic equipment and storage medium
CN111754410B (en) Image processing method and device, electronic equipment and readable storage medium
JP2020071809A (en) Image processing device and image processing method
JP5932392B2 (en) Image processing apparatus and image processing method
CN114666512B (en) Method and system for adjusting rapid automatic exposure
US11727716B2 (en) Information processing apparatus, imaging apparatus, which determines exposure amount with respect to face detection and human body detection
CN110072050B (en) Self-adaptive adjustment method and device of exposure parameters and shooting equipment
CN110868549B (en) Exposure control method and device and electronic equipment
CN111064897B (en) Exposure evaluation value statistical method and imaging equipment
JP6514577B2 (en) IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGING APPARATUS

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant