CN110475063B - Image acquisition method and device, and storage medium - Google Patents


Info

Publication number
CN110475063B
Authority
CN
China
Prior art keywords
image
diffraction
preset
frame
brightness
Prior art date
Legal status
Active
Application number
CN201910708747.5A
Other languages
Chinese (zh)
Other versions
CN110475063A (en)
Inventor
王路
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910708747.5A priority Critical patent/CN110475063B/en
Publication of CN110475063A publication Critical patent/CN110475063A/en
Application granted granted Critical
Publication of CN110475063B publication Critical patent/CN110475063B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the application discloses an image acquisition method, an image acquisition device, and a storage medium. The method includes: receiving a shooting instruction; when the de-diffraction function is enabled, collecting a multi-frame image or a single-frame image according to the shooting instruction, where the multi-frame image includes at least two frames of images with different exposure levels together with an initial image collected at a preset exposure level, and the single-frame image is an image collected at the preset exposure level; performing brightness differentiation processing on the at least two frames of images and the initial image to obtain a differentiation area, and removing the differentiation area from the initial image to obtain a de-diffraction image; or performing de-diffraction processing on the single-frame image with a preset de-diffraction model to obtain a de-diffraction image; and displaying the de-diffraction image.

Description

Image acquisition method and device, and storage medium
Technical Field
The embodiment of the application relates to an image processing technology, in particular to an image acquisition method and device and a storage medium.
Background
Existing smartphones are equipped with cameras whose lenses are often covered with fingerprints, sweat stains, and other traces of use. These traces cause light to diffract when the camera takes a picture; especially when a high-brightness object is photographed, obvious diffraction fringes appear around the object in the resulting picture. The diffraction fringes reduce picture quality, and the user usually only discovers the poor quality when reviewing the picture, then has to manually wipe off the traces and shoot again, which lowers the intelligence of image capture. Moreover, some users may conclude that the camera itself has poor shooting quality, which also lowers the perceived reliability of the captured images.
Disclosure of Invention
The application provides an image acquisition method and device and a storage medium, which can improve the intelligent degree of shooting images.
The technical scheme of the application is realized as follows:
the embodiment of the application provides an image acquisition method, which is applied to an image acquisition device and comprises the following steps:
receiving a shooting instruction;
when the de-diffraction function is started, acquiring a multi-frame image or a single-frame image according to the shooting instruction; the multi-frame image comprises at least two frames of images with different exposure degrees and an initial image collected under a preset exposure degree, and the single frame of image is the image collected under the preset exposure degree;
performing brightness differentiation processing on the at least two frames of images and the initial image to obtain a differentiation area, and removing the differentiation area from the initial image to obtain a de-diffraction image; or, carrying out the de-diffraction processing on the single-frame image by using a preset de-diffraction model to obtain a de-diffraction image;
displaying the de-diffracted image.
In the foregoing solution, the performing brightness differentiation processing on the at least two frames of images and the initial image to obtain a differentiation region, and removing the differentiation region from the initial image to obtain a de-diffraction image includes:
comparing the brightness of the at least two frames of images to determine an initial differentiation area;
removing the initial differentiation area on the image with the highest brightness in the at least two frames of images to obtain a reference image;
comparing the brightness of the reference image with that of the initial image to determine the differentiation area;
and removing the differentiated area on the initial image to obtain the de-diffraction image.
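The four steps above can be sketched in a few lines of NumPy. The function name, the fixed difference threshold, and the grayscale representation are illustrative assumptions, not part of the claimed method:

```python
import numpy as np

def luminance_diff_region(images, initial, diff_thresh=30):
    """Sketch of the claimed multi-frame de-diffraction steps.

    images:  grayscale frames at different exposure levels, ordered
             from lowest to highest exposure (uint8 arrays).
    initial: grayscale frame captured at the preset exposure level.
    """
    low, high = images[0].astype(int), images[-1].astype(int)
    # Step 1: brightness comparison between exposures marks pixels that
    # brighten disproportionately -- candidate diffraction fringes.
    initial_region = (high - low) > diff_thresh
    # Step 2: suppress the candidate region on the brightest frame to
    # obtain a reference image.
    reference = high.copy()
    reference[initial_region] = low[initial_region]
    # Step 3: compare the reference against the initial image to locate
    # the final differentiation region.
    diff_region = np.abs(reference - initial.astype(int)) > diff_thresh
    # Step 4: remove the differentiation region from the initial image.
    result = initial.copy()
    result[diff_region] = reference[diff_region].astype(initial.dtype)
    return result, diff_region
```

In practice the comparison would likely be done per colour channel and with spatial smoothing, but the thresholded-difference structure follows the claim.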
In the foregoing solution, the performing a diffraction removing process on the single-frame image by using a preset diffraction removing model to obtain a diffraction removed image includes:
performing region division on the single-frame image to obtain at least one single-frame subregion;
calculating at least one single-frame brightness value corresponding to the at least one single-frame sub-region;
when a target single-frame brightness value greater than or equal to a preset environment brightness threshold exists among the at least one single-frame brightness value, the region corresponding to the target single-frame brightness value is a highlight region, and the preset de-diffraction model is used to perform de-diffraction processing on the highlight region in the single-frame image to obtain the de-diffraction image;
and when each of the at least one single-frame brightness value is greater than or equal to the preset environment brightness threshold, performing de-diffraction processing on the entire single-frame image by using the preset de-diffraction model to obtain the de-diffraction image.
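As a rough illustration of the region-division branch above (the block size and brightness threshold are placeholder values, not from the patent):

```python
import numpy as np

def find_highlight_regions(image, block=64, thresh=200):
    """Divide a grayscale frame into sub-regions, compute each region's
    mean brightness, and flag regions at or above the brightness
    threshold as highlight regions."""
    h, w = image.shape
    highlights = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            region = image[y:y + block, x:x + block]
            if region.mean() >= thresh:
                highlights.append((y, x))  # top-left corner of the block
    return highlights
```

If every block is flagged, the whole frame would be handed to the de-diffraction model, matching the second branch of the claim.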
In the foregoing solution, before the performing the diffraction removal on the single-frame image by using the preset diffraction removal model to obtain the diffraction-removed image, the method further includes:
acquiring at least one non-diffraction image and at least one diffraction image under at least one shooting scene, wherein the at least one shooting scene, the at least one non-diffraction image and the at least one diffraction image are in one-to-one correspondence;
and training a preset initial de-diffraction model by using the at least one non-diffraction image and the at least one diffraction image to obtain the preset de-diffraction model.
In the foregoing solution, before the performing the diffraction removal on the single-frame image by using the preset diffraction removal model to obtain the diffraction-removed image, the method further includes:
acquiring a non-diffraction point light source image;
calculating, for the point light source image, a kernel function corresponding to each channel of the three-channel image;
performing spot special-shaped processing on the point light source image by using the kernel function to obtain a special-shaped image;
and training a preset deconvolution model by using the point light source image and the special-shaped image to obtain the preset de-diffraction model.
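A minimal simulation of the kernel-based spot deformation described above, using simple fixed kernels in place of the patent's computed per-channel kernel functions (pure NumPy; all names are illustrative):

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 'same'-size 2D cross-correlation with zero padding;
    identical to convolution for symmetric kernels."""
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), mode="constant")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (pad[i:i + kh, j:j + kw] * kernel).sum()
    return out

def distort_point_source(point_img, kernels):
    """Apply one kernel per colour channel to a point-light-source image,
    producing the deformed ('special-shaped') spot that pairs with the
    clean point image as training data for the deconvolution model."""
    return np.stack(
        [convolve2d(point_img[..., c], kernels[c]) for c in range(3)],
        axis=-1,
    )
```

The clean/deformed image pairs produced this way would then serve as the target/input pairs when training the preset deconvolution model.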
In the above scheme, before the receiving the shooting instruction, the method further includes:
when a camera starting instruction is received, entering a preview state, and acquiring a frame of preview image in the preview state;
performing area division on the preview image to obtain at least one sub-area;
calculating at least one brightness value corresponding to the at least one sub-region;
analyzing whether a highlight area exists in the preview image or not based on the at least one brightness value and the preset environment brightness threshold value;
and when a highlight area exists in the preview image, starting the de-diffraction function.
In the foregoing solution, the analyzing whether a highlight area exists in the preview image based on the at least one brightness value and the preset ambient brightness threshold includes:
when a target brightness value which is greater than or equal to the preset environment brightness threshold value exists in the at least one brightness value, representing that the highlight area exists;
and when the at least one brightness value is smaller than the preset environment brightness threshold value, representing that the highlight area does not exist.
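This first variant reduces to a single predicate; a sketch (the threshold value is an illustrative placeholder):

```python
def has_highlight(brightness_values, ambient_thresh=200):
    """First variant of the highlight test: a highlight region exists
    iff any sub-region brightness reaches the ambient threshold."""
    return any(v >= ambient_thresh for v in brightness_values)
```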
In the foregoing solution, the analyzing whether a highlight area exists in the preview image based on the at least one brightness value and the preset ambient brightness threshold includes:
calculating the difference degree between every two brightness values in the at least one brightness value to obtain at least one difference degree;
determining a maximum brightness value from the at least one brightness value when a target difference greater than or equal to a preset difference threshold value exists in the at least one difference;
when the maximum brightness value is larger than the preset environment brightness threshold value, representing that the highlight area exists;
when each difference degree in the at least one difference degree is smaller than the preset difference degree threshold value, judging whether a target brightness value larger than the preset environment brightness threshold value exists in the at least one brightness value;
when the target brightness value is present, characterizing that the highlight region is present.
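The second, contrast-based variant can be sketched as follows; both threshold values are illustrative placeholders, not values from the patent:

```python
from itertools import combinations

def has_highlight_by_contrast(values, ambient_thresh=200, diff_thresh=60):
    """Second variant of the highlight test, following the claimed steps:
    check pairwise brightness differences first, then fall back to a
    plain threshold test when the scene is uniform."""
    diffs = [abs(a - b) for a, b in combinations(values, 2)]
    if any(d >= diff_thresh for d in diffs):
        # High contrast between regions: compare the brightest region
        # against the ambient threshold.
        return max(values) > ambient_thresh
    # Uniform scene: a highlight exists only if some region still
    # exceeds the ambient threshold.
    return any(v > ambient_thresh for v in values)
```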
In the above solution, after entering the preview state, acquiring a frame of preview image in the preview state, and before starting the de-diffraction function, the method further includes:
detecting whether diffraction fringes exist in the preview image;
correspondingly, when a highlight area exists in the preview image, the starting of the de-diffraction function comprises the following steps:
and when a highlight area exists in the preview image and the diffraction fringes exist in the preview image, starting the de-diffraction function.
In the above scheme, after the detecting whether the diffraction fringes exist in the preview image, the method further includes:
and when a highlight area exists in the preview image and the diffraction fringes exist in the preview image, generating and displaying prompt information, wherein the prompt information is used for prompting a user to detect whether the camera is shielded or not.
The embodiment of the application provides an image acquisition device, the device includes:
a communication unit for receiving a photographing instruction;
the acquisition unit is used for acquiring multi-frame images or single-frame images according to the shooting instruction when the de-diffraction function is started; the multi-frame image comprises at least two frames of images with different exposure degrees and an initial image collected under a preset exposure degree, and the single frame of image is the image collected under the preset exposure degree;
the de-diffraction unit is used for carrying out brightness differentiation processing on the at least two frames of images and the initial image to obtain a differentiation area, and removing the differentiation area from the initial image to obtain a de-diffraction image; or, carrying out the de-diffraction processing on the single-frame image by using a preset de-diffraction model to obtain a de-diffraction image;
a display unit for displaying the de-diffracted image.
In the above scheme, the de-diffraction unit is specifically configured to compare brightness of the at least two frames of images to determine an initial differentiation area; removing the initial differentiation area on the image with the highest brightness in the at least two frames of images to obtain a reference image; comparing the brightness of the reference image and the brightness of the initial image to determine the differentiation area; and removing the differentiated area on the initial image to obtain the de-diffraction image.
In the above scheme, the de-diffraction unit is specifically configured to perform area division on the single-frame image to obtain at least one single-frame subregion; calculate at least one single-frame brightness value corresponding to the at least one single-frame sub-region; when a target single-frame brightness value greater than or equal to a preset environment brightness threshold exists among the at least one single-frame brightness value, treat the region corresponding to the target single-frame brightness value as a highlight region and perform de-diffraction processing on the highlight region in the single-frame image by using the preset de-diffraction model to obtain the de-diffraction image; and when each of the at least one single-frame brightness value is greater than or equal to the preset environment brightness threshold, perform de-diffraction processing on the entire single-frame image by using the preset de-diffraction model to obtain the de-diffraction image.
In the above scheme, the apparatus further comprises:
the model generation unit is used for acquiring at least one non-diffraction image and at least one diffraction image in at least one shooting scene before the single-frame image is subjected to diffraction removal by using a preset diffraction removal model to obtain the diffraction removal image, wherein the at least one shooting scene, the at least one non-diffraction image and the at least one diffraction image are in one-to-one correspondence; and training a preset initial de-diffraction model by using the at least one non-diffraction image and the at least one diffraction image to obtain the preset de-diffraction model.
In the above scheme, the apparatus further comprises:
the model generation unit is used for acquiring a non-diffraction point light source image; calculating a kernel function corresponding to the three-channel image aiming at the point light source image; performing spot heterotype processing on the point light source image by using the kernel function to obtain a heterotype image; and training a preset deconvolution model by using the point light source image and the special-shaped image to obtain the preset de-diffraction model.
In the above scheme, the apparatus further comprises:
the diffraction judging unit is used for entering a preview state when a camera starting instruction is received before the shooting instruction is received, and acquiring a frame of preview image in the preview state; dividing the preview image into regions to obtain at least one subregion; calculating at least one brightness value corresponding to the at least one sub-region; analyzing whether a highlight area exists in the preview image or not based on the at least one brightness value and the preset environment brightness threshold value; and when a highlight area exists in the preview image, starting the de-diffraction function.
In the foregoing solution, the diffraction determining unit is specifically configured to characterize that the highlight region exists when a target brightness value greater than or equal to the preset environment brightness threshold exists in the at least one brightness value; and when the at least one brightness value is smaller than the preset environment brightness threshold value, the highlight area is represented to be absent.
In the foregoing solution, the diffraction determining unit is specifically configured to calculate a difference between every two luminance values in at least one luminance value to obtain at least one difference; when a target difference degree which is larger than or equal to a preset difference degree threshold value exists in the at least one difference degree, determining a maximum brightness value from the at least one brightness value; when the maximum brightness value is larger than the preset environment brightness threshold value, representing that the highlight area exists; when each difference degree in the at least one difference degree is smaller than the preset difference degree threshold value, judging whether a target brightness value larger than the preset environment brightness threshold value exists in the at least one brightness value or not; and when the target brightness value exists, characterizing that the highlight area exists.
In the foregoing solution, the diffraction determining unit is further configured to, in the entering of the preview state, detect whether a diffraction fringe exists in the preview image after a frame of preview image is acquired in the preview state and before the de-diffraction function is started;
correspondingly, the diffraction judging unit is specifically configured to, when a highlight region exists in the preview image and the diffraction fringes exist in the preview image, turn on the de-diffraction function.
In the foregoing solution, the diffraction determining unit is further configured to, after detecting whether a diffraction fringe exists in the preview image, generate and display a prompt message when a highlight region exists in the preview image and the diffraction fringe exists in the preview image, where the prompt message is used to prompt a user to detect whether the camera is blocked.
The embodiment of the application provides an image acquisition device, image acquisition device includes: a processor, a memory and a communication bus, the memory communicating with the processor through the communication bus, the memory storing one or more programs executable by the processor, the processor performing any of the image acquisition methods as described above when the one or more programs are executed.
Embodiments of the present application provide a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement any one of the image acquisition methods described above.
The embodiment of the application provides an image acquisition method, an image acquisition device, and a storage medium, wherein the method includes: receiving a shooting instruction; when the de-diffraction function is started, collecting a multi-frame image or a single-frame image according to the shooting instruction, the multi-frame image comprising at least two frames of images with different exposure levels and an initial image collected at a preset exposure level, and the single-frame image being an image collected at the preset exposure level; performing brightness differentiation processing on the at least two frames of images and the initial image to obtain a differentiation area, and removing the differentiation area from the initial image to obtain a de-diffraction image; or performing de-diffraction processing on the single-frame image by using a preset de-diffraction model to obtain a de-diffraction image; and displaying the de-diffraction image. With this technical solution, when the de-diffraction function is started, a differentiation area is obtained based on at least two frames of images with different exposure levels and an initial image collected at a preset exposure level. In an image with low exposure, the details of a highlight object are more complete and clear while more details of non-highlight objects are lost; in an image with high exposure, diffraction fringes exist around the highlight object, the highlight object is not clear, and the details of non-highlight objects are more complete and clear. Therefore, performing brightness differentiation processing on the at least two frames of images and the initial image yields a differentiation area containing the diffraction fringes that do not belong to the highlight object, and removing this area from the initial image yields a de-diffraction image. Alternatively, a preset de-diffraction model can be used directly to perform de-diffraction processing on a single frame of image to obtain a de-diffraction image. The method thus requires no manual operation by the user, obtains a de-diffraction image of better quality, and improves the intelligence and reliability of image capture.
Drawings
Fig. 1 is a first schematic structural diagram of an image capturing device according to an embodiment of the present disclosure;
FIG. 2(a) is a first schematic diagram of a non-diffractive image provided in an embodiment of the present application;
fig. 2(b) is a schematic diagram of a diffraction image provided in the embodiment of the present application;
FIG. 3 is a first schematic diagram of a diffraction display provided in an embodiment of the present application;
FIG. 4(a) is a schematic diagram of a non-diffraction display provided in an embodiment of the present application;
FIG. 4(b) is a second schematic diagram of a diffraction display provided in an embodiment of the present application;
fig. 5 is a first schematic flowchart of an image acquisition method according to an embodiment of the present application;
FIG. 6(a) is a schematic diagram of an image with a first exposure level according to an embodiment of the present disclosure;
FIG. 6(b) is a schematic diagram of an image of a second exposure level provided by an embodiment of the present application;
FIG. 6(c) is a schematic diagram of an image at a third exposure level according to an embodiment of the present disclosure;
FIG. 6(d) is a schematic diagram of an initial image with a preset exposure level according to an embodiment of the present application;
FIG. 6(e) is a schematic diagram of a de-diffraction image provided by an embodiment of the present application;
FIG. 7(a) is a schematic diagram of a non-diffractive point light source image according to an embodiment of the present disclosure;
fig. 7(b) is a schematic diagram of a single-channel image provided by an embodiment of the present application;
fig. 7(c) is a schematic diagram of a profile image provided in an embodiment of the present application;
fig. 8 is a schematic flowchart illustrating a second image acquisition method according to an embodiment of the present application;
fig. 9 is a second schematic structural diagram of an image capturing device according to an embodiment of the present application;
fig. 10 is a third schematic structural diagram of an image capturing device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for convenience of description and have no specific meaning by themselves. Thus, "module", "component", and "unit" may be used interchangeably.
Referring now to fig. 1, which is a schematic diagram of an image capturing apparatus for implementing various embodiments of the present application, the apparatus 1 may include: a camera 11, a processor 12, and a display unit 13. When the processor 12 detects a start instruction for the camera 11 issued by a user, it controls the camera 11 to start, and the camera 11 enters a preview state, i.e., the scene in front of the lens is displayed on the display unit 13 in real time. When the processor 12 detects a shooting instruction issued by the user, it controls the camera 11 to shoot the scene in front of the lens, generating an image that is displayed on the display unit 13. The image capturing device 1 may be a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, or a palmtop computer, or a fixed terminal such as a desktop computer.
It should be noted that the camera 11 generally includes a camera module and a glass cover plate at the front end of the module. When using the image acquisition device, the user inevitably touches the glass cover plate, leaving fingerprints and sweat stains on it. When the scene in front of the lens includes a highlight object, such as a light source, diffraction occurs when the camera 11 shoots the scene; diffraction is the physical phenomenon of waves deviating from straight-line propagation when they encounter an obstacle. Obvious diffraction fringes thus form around the highlight object in the resulting image, reducing its quality, and the user has to manually clean the fingerprints and sweat stains off the glass cover plate and shoot again, which lowers the intelligence of image capture. Moreover, some users may conclude that the camera itself shoots poorly, which also lowers the perceived reliability of the captured images.
Exemplarily, as shown in fig. 2(a), when there is no fingerprint or sweat stain on the glass cover plate of the camera in the image capturing device, the highlight object is captured, and there is no diffraction fringe around the highlight object 21 in the obtained image; as shown in fig. 2(b), when there are fingerprints and sweat stains on the glass cover plate of the camera in the image capturing device, the same highlight object is photographed, and diffraction fringes exist around the highlight object 22 in the obtained image; the diagonal regions in fig. 2(a) and 2(b) represent background information of the shooting scene excluding highlight objects.
Illustratively, as shown in fig. 3, the diffraction phenomenon refers to a phenomenon that light encounters an obstacle 31 deviating from an original geometric path and travels around behind the obstacle 31 during the propagation process, and diffraction 32 of the light can be seen on the screen 30.
Illustratively, as shown in fig. 4(a), when the gap in the obstacle 41 is large, light passes through the obstacle 41 along its original straight path and forms an image on the screen 40. However, since the wavelength of light is short, only a few tenths of a micron, and obstacles are generally much larger than that wavelength, light is diffracted when directed at a pinhole, a slit, or a filament; traces of use such as fingerprints and sweat stains can likewise cause diffraction. As shown in fig. 4(b), when light passes through the obstacle 42, it deviates from its original straight path, and the diffraction of the light is clearly seen on the screen 40.
It will be appreciated by those skilled in the art that the configuration of the image capturing apparatus shown in fig. 1 does not constitute a limitation of the image capturing apparatus, and that the image capturing apparatus may comprise more or less components than those shown, or some components may be combined, or a different arrangement of components.
It should be noted that the embodiment of the present application can be implemented based on the image capturing apparatus shown in fig. 1, and a specific embodiment of image capturing is described below based on fig. 1.
Example one
An embodiment of the present application provides an image acquisition method, as shown in fig. 5, the image acquisition method includes the following steps:
s301, receiving a shooting instruction;
the method comprises the steps that an image acquisition device receives a shooting instruction sent by a user under the condition that shooting is started; usually, after shooting is started, a user aligns the camera to a shooting scene, and generates a shooting instruction by clicking the shooting control, that is, receives the shooting instruction.
S302, when the de-diffraction function is started, acquiring a multi-frame image or a single-frame image according to a shooting instruction; the multi-frame image comprises at least two frames of images with different exposure degrees and an initial image collected under a preset exposure degree, and the single frame of image is an image collected under the preset exposure degree;
in the embodiment of the application, the image acquisition device can be provided with a de-diffraction function, and only when the de-diffraction function is started, the de-diffraction of the image is carried out.
It should be noted that the image capturing apparatus may also automatically turn on the de-diffraction function as soon as image capture starts; the embodiment of the present application is not limited in this respect. That is, in some embodiments, the image capturing apparatus may directly turn on the de-diffraction function upon receiving a capture instruction.
When the image acquisition device determines that the de-diffraction function is started, it responds to the shooting instruction by controlling the camera to shoot the scene sequentially at the at least two exposure levels to obtain at least two frames of images, and to shoot the scene at the preset exposure level to obtain the initial image; alternatively, it responds to the shooting instruction by directly controlling the camera to shoot the scene at the preset exposure level to obtain a single-frame image. The at least two exposure levels differ in magnitude, and the preset exposure level is the exposure the camera uses for normal shooting.
Images shot by the camera at different exposure levels differ in brightness: the higher the exposure level, the brighter the image. The exposure level is determined by three variables: exposure time, sensitivity (ISO), and aperture value.
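For reference, the standard photographic exposure-value relation ties two of those variables together; this formula is general photography practice, not something specified by the patent:

```python
import math

def exposure_value(aperture_f, shutter_s):
    """Standard exposure value: EV = log2(N^2 / t), where N is the
    f-number and t the shutter time in seconds.  A lower EV admits
    more light, so frames captured at lower EV come out brighter."""
    return math.log2(aperture_f ** 2 / shutter_s)
```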
In some embodiments, the at least two exposures include a first exposure and a second exposure, the first exposure is smaller than the second exposure, the first exposure is an exposure corresponding to an underexposure state, and the second exposure is an exposure corresponding to a preset exposure or an overexposure state.
In some embodiments, the at least two exposures include a first exposure, a second exposure and a third exposure, the first exposure is less than the second exposure, the second exposure is less than the third exposure, the first exposure is an exposure corresponding to an underexposure state, and the third exposure is an exposure corresponding to a preset exposure or an overexposure state.
It should be noted that, when a shooting scene contains both a highlight object and non-highlight objects, a smaller camera exposure makes the highlight object in the image clearer, loses more detail of the non-highlight objects, and lowers the likelihood of a diffraction phenomenon during shooting. Accordingly, the camera shoots at the first exposure level to obtain a first image, as shown in fig. 6 (a); at the second exposure level to obtain a second image, as shown in fig. 6 (b); at the third exposure level to obtain a third image, as shown in fig. 6 (c); and at the preset exposure level, which is greater than the third exposure level, to obtain an initial image, as shown in fig. 6 (d). Comparing the second image with the first and third images: the third image shows more detail of the non-highlight objects, but the diffraction area of the highlight object is larger and may cover the non-highlight objects around it, so the second image reflects the details of the non-highlight objects around the highlight object better than the third image does.
In some embodiments, the image capture device obtains different exposure levels by adjusting exposure time and sensitivity.
S303, performing brightness differentiation processing on the at least two frames of images and the initial image to obtain a differentiation area, and removing the differentiation area from the initial image to obtain a de-diffraction image; or, performing de-diffraction processing on the single-frame image by using a preset de-diffraction model to obtain a de-diffraction image;
the image acquisition device performs brightness differentiation processing on at least two frames of images and the initial image to obtain a differentiation area, and removes the differentiation area from the initial image to obtain a de-diffraction image; or directly taking the single-frame image as the input of a preset diffraction removing model to obtain a diffraction removing image.
In some embodiments, the image acquisition device compares brightness of at least two frames of images to determine an initial differentiation region; removing the initial differentiation area on the image with the highest brightness in at least two frames of images to obtain a reference image; comparing the brightness of the reference image and the initial image to determine a differentiation area; and removing the differentiated area on the initial image to obtain a de-diffraction image.
The image acquisition device searches for areas with different brightness between at least two frames of images, takes the areas as initial differentiation areas, removes the initial differentiation areas from the images with the highest brightness in the at least two frames of images to obtain reference images, and the reference images can reflect the real details of the high-brightness objects and the non-high-brightness objects in the shooting scene; and searching for areas with different brightness between the reference image and the initial image, taking the areas as differentiation areas, and removing the differentiation areas on the initial image to obtain a de-diffraction image.
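A minimal NumPy sketch of this brightness-differentiation step; the equal-brightness tolerance `tol` and the fill strategy are assumptions, since the text only says that regions with differing brightness are found and removed:

```python
import numpy as np

def differing_region(img_a, img_b, tol=8):
    # Mask of pixels whose brightness differs between two grayscale frames.
    # `tol` is an assumed tolerance, not specified in the patent text.
    return np.abs(img_a.astype(np.int32) - img_b.astype(np.int32)) > tol

def remove_region(image, mask, fill):
    # Replace masked (diffraction) pixels with values taken from a fill image,
    # e.g. the darker frame whose highlight details are intact.
    out = image.copy()
    out[mask] = fill[mask]
    return out
```

The initial differentiation region would be computed between the low- and high-exposure frames, removed from the brightest frame to form the reference image, and the same two steps repeated between the reference image and the initial image.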
In some embodiments, the image acquisition device extracts a 1 st image and a highlight image with the highest highlight from the at least two frames of images according to the exposure levels corresponding to the at least two frames of images, and takes a region with the same brightness as the 1 st image in the highlight image as an overlapping region, wherein the exposure level corresponding to the 1 st image is the minimum, and the exposure level corresponding to the highlight image is the maximum; taking out an ith image from at least two frames of images, wherein i is an integer larger than 1, taking an area with the same brightness as the ith image in the highlight image as an area to be increased, and taking an area with the brightness smaller than a preset brightness threshold value in the area to be increased as an increased area; merging the increased area into the overlapped area to obtain an updated overlapped area; continuously taking out the (i + 1) th image from the at least two frames of images until the at least two frames of images are taken out, wherein the exposure level of the (i) th image is less than that of the (i + 1) th image; and taking the area except the overlapped area in the highlight image as an initial differentiated area.
Exemplarily, taking fig. 6(a), 6(b), 6(c) and 6(d) as an example, comparing the brightness of the first image, the second image and the third image to determine an initial differentiated area; removing the initial differentiation area on the third image to obtain a reference image; comparing the brightness of the reference image and the initial image to determine a differentiation area; the differentiated area is removed from the original image to obtain a de-diffracted image, as shown in fig. 6 (e).
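The iterative overlap-region update described above can be sketched as follows; the gray threshold (near 255, here 250) and the equal-brightness tolerance are assumed values:

```python
import numpy as np

def initial_diff_region(frames, thresh=250, tol=8):
    # `frames` is a list of grayscale images sorted by increasing exposure.
    # Returns a boolean mask of the initial differentiation region.
    first, bright = frames[0], frames[-1]

    def same(a, b):  # equal-brightness test with an assumed tolerance
        return np.abs(a.astype(int) - b.astype(int)) <= tol

    overlap = same(bright, first)            # brightness equal in 1st image
    for img in frames[1:-1]:                 # i-th images, i > 1
        to_add = same(bright, img)           # region to be increased
        added = to_add & (bright < thresh)   # keep only non-highlight pixels
        overlap |= added                     # merge into the overlap region
    return ~overlap                          # rest of the highlight image
```

The `bright < thresh` test is how the sketch distinguishes the diffraction region (near-white) from genuine non-highlight detail inside the region to be increased, per the threshold discussion below.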
In some embodiments, the brightness is an image whiteness degree or an image color degree, and the preset brightness threshold includes a preset grayscale threshold and a preset color threshold: the image whiteness degree corresponds to the preset grayscale threshold, and the image color degree corresponds to the preset color threshold.
It should be noted that, when the exposure level is increased, the diffraction region of the highlight object in the image is enlarged, and the details of the non-highlight object are clearer, but the brightness of the diffraction region of the highlight object is often greater than that of the non-highlight object, so that the diffraction region and the non-highlight object in the region to be increased can be distinguished by presetting a brightness threshold.
Illustratively, taking the preset grayscale threshold as an example: gray values in an image range from 0 to 255, with white at 255 and black at 0. Considering that the diffraction region of a highlight object is usually white, the preset grayscale threshold may be set to 255 or a value close to 255.
In some embodiments, before the image acquisition device performs diffraction removal on a single-frame image by using a preset diffraction removal model to obtain a diffraction-removed image, at least one non-diffraction image and at least one diffraction image are acquired in at least one shooting scene, and the at least one shooting scene, the at least one non-diffraction image and the at least one diffraction image are in one-to-one correspondence; and training the preset initial de-diffraction model by using at least one non-diffraction image and at least one diffraction image to obtain the preset de-diffraction model.
The image acquisition device sequentially takes a first input sample from the at least one diffraction image, takes the first output sample corresponding to that input sample from the at least one non-diffraction image, and trains the preset initial de-diffraction model with the first input sample and the first output sample to obtain the preset de-diffraction model.
In some embodiments, the image acquisition device acquires a non-diffractive point light source image before performing de-diffraction on the single-frame image by using a preset de-diffraction model to obtain a de-diffraction image; calculating a kernel function corresponding to the three-channel image aiming at the point light source image; carrying out spot special-shaped processing on the point light source image by utilizing the kernel function to obtain a special-shaped image; and training the preset deconvolution model by using the point light source image and the special-shaped image to obtain a preset de-diffraction model.
The method comprises the steps that an image acquisition device acquires a point light source image, wherein the point light source image is a superimposed image of three color channels of Red, Green and Blue (RGB); carrying out two-channel zero setting on the point light source image to obtain three single-channel images, and calculating a Kernel Function (Kernel Function) of each single-channel image; sequentially utilizing the kernel functions of the three single-channel images to perform single-channel light spot special-shaped processing on the point light source images to obtain special-shaped images; acquiring a preset deconvolution model, wherein the preset deconvolution model is a deconvolution model based on a Point Spread Function (PSF); and taking the special-shaped image as a second input sample, taking the point light source image as a second output sample, and training the preset deconvolution model by using the second input sample and the second output sample to obtain a preset de-diffraction model.
Exemplarily, taking the non-diffracted point-light source image shown in fig. 7(a) as an example: the point-light source image is sequentially subjected to two-channel zeroing to obtain three single-channel images, such as the single-channel image shown in fig. 7 (b); the kernel function corresponding to each single-channel image is calculated; and spot special-shaped processing is performed on the point-light source image using the kernel functions to obtain the special-shaped image, as shown in fig. 7 (c). In the drawings, the hatched portions in fig. 7(a), 7(b), and 7(c) indicate background information other than the highlight object in the shooting scene, the white portions indicate the highlight object, and the density differences of the hatched portions indicate differences in luminance values.
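The two-channel zeroing and the spot special-shaped (per-channel smearing) steps can be sketched as follows. The text does not specify how the kernel functions are derived, so the kernels are treated as given inputs here, and a plain correlation loop stands in for the convolution:

```python
import numpy as np

def apply_kernel(channel, kernel):
    # Correlate one channel with a kernel ('same' size, zero padding);
    # identical to convolution for the symmetric kernels assumed here.
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(channel, ((ph, ph), (pw, pw)))
    out = np.zeros_like(channel, dtype=float)
    for y in range(channel.shape[0]):
        for x in range(channel.shape[1]):
            out[y, x] = float(np.sum(padded[y:y + kh, x:x + kw] * kernel))
    return out

def single_channel_images(rgb):
    # Zero two of the three RGB channels at a time -> three single-channel images.
    outs = []
    for c in range(3):
        img = np.zeros_like(rgb)
        img[..., c] = rgb[..., c]
        outs.append(img)
    return outs

def spot_shaped_image(rgb, kernels):
    # Smear each colour channel with its per-channel kernel to synthesise the
    # special-shaped training image; `kernels` stands in for the per-channel
    # kernel functions whose derivation the text does not specify.
    out = np.zeros(rgb.shape, dtype=float)
    for c, k in enumerate(kernels):
        out[..., c] = apply_kernel(rgb[..., c].astype(float), k)
    return out
```

The special-shaped image then serves as the second input sample and the original point-light source image as the second output sample for training the preset deconvolution model.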
In some embodiments, the image acquisition device processes the special-shaped image by using a preset deconvolution model to obtain a deconvolution image, and calculates the difference between the deconvolution image and the point light source image; when the difference degree is greater than or equal to a preset first difference degree threshold value, adjusting a preset deconvolution model to obtain an updated deconvolution model; continuously processing the special-shaped image by using the updated deconvolution model until the difference degree is smaller than a preset first difference degree threshold value; and taking the updated deconvolution model as a preset de-diffraction model.
In some embodiments, the image acquisition device grays the deconvolution image and the point light source image respectively; calculating a gray difference value between each corresponding pixel for the grayed deconvolution image and the grayed point light source image; and averaging all the gray level difference values to obtain the difference degree.
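The difference-degree computation above can be sketched as follows; the luma weights used for graying are an assumption, since the text only says the two images are grayed:

```python
import numpy as np

def to_gray(rgb):
    # Grayscale via the common luma weights (an assumption; the patent
    # does not specify the graying formula).
    return rgb @ np.array([0.299, 0.587, 0.114])

def difference_degree(img_a, img_b):
    # Per-pixel absolute grayscale difference, averaged over all pixels.
    return float(np.mean(np.abs(to_gray(img_a) - to_gray(img_b))))
```

In the training loop of the preceding paragraph, the preset deconvolution model would keep being adjusted while `difference_degree(deconvolved, point_source)` remains at or above the preset first difference threshold.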
In some embodiments, the image acquisition device divides the single-frame image into regions to obtain at least one single-frame sub-region and calculates the single-frame brightness value corresponding to each sub-region. When a target single-frame brightness value greater than or equal to the preset ambient brightness threshold exists among the single-frame brightness values, the region corresponding to that value is a highlight region, and the preset de-diffraction model is used to de-diffract the highlight region in the single-frame image to obtain the de-diffraction image; when every single-frame brightness value is greater than or equal to the preset ambient brightness threshold, the preset de-diffraction model is used to de-diffract the entire single-frame image to obtain the de-diffraction image.
And S304, displaying the de-diffraction image.
After the image acquisition device obtains the de-diffraction image, it stores the image and displays it on the display unit. Because the de-diffraction image contains no colored fringes, or only few diffraction fringes, it reflects the shooting scene more faithfully.
In some embodiments, as shown in fig. 8, before step S301, the image acquisition method further comprises the steps of:
s401, when a camera starting instruction is received, entering a preview state, and acquiring a frame of preview image in the preview state;
When detecting a camera start instruction, the image acquisition device controls the camera to start, so that the camera enters a preview state, and acquires a frame of preview image generated by the camera in that state.
S402, performing area division on the preview image to obtain at least one sub-area;
the image acquisition device divides the preview image into regions to obtain at least one subregion.
In some embodiments, the image acquisition device determines a center position of a preview image, and determines an upper left corner center position, an upper right corner center position, a lower left corner center position and a lower right corner center position based on a preset distance ratio and a preview image center; and then dividing an area according to a certain pixel length by taking the center position of the preview image, the center position of the upper left corner, the center position of the upper right corner, the center position of the lower left corner and the center position of the lower right corner as the center positions of the areas respectively to obtain five areas.
Illustratively, taking a preview image with a resolution of 2000 × 1000 as an example, the image acquisition device determines the coordinates of the center of the preview image as (1000, 500). Assuming the preset distance ratio P is 70%, the abscissa distance Dx = 700 is calculated according to formula (1), and the ordinate distance Dy = 350 according to formula (2):
Dx=X×P=1000×70% (1)
Dy=Y×P=500×70% (2)
wherein X is the abscissa of the center of the preview image and Y is its ordinate. From the abscissa distance, the ordinate distance, and the coordinates of the preview-image center, the corner center coordinates are calculated as upper-left (300, 150), upper-right (1700, 150), lower-left (300, 850), and lower-right (1700, 850). These five coordinates are taken as region centers, and five 100 × 100 regions are divided.
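The five region centers of the example can be reproduced with a short sketch (coordinates given as (abscissa, ordinate), matching formulas (1) and (2)):

```python
def region_centers(width, height, p=0.70):
    # Image centre plus four corner centres offset from it by the preset
    # distance ratio p (70% in the example).
    cx, cy = width // 2, height // 2
    dx, dy = round(cx * p), round(cy * p)   # formulas (1) and (2)
    return [(cx, cy),
            (cx - dx, cy - dy),   # upper-left centre
            (cx + dx, cy - dy),   # upper-right centre
            (cx - dx, cy + dy),   # lower-left centre
            (cx + dx, cy + dy)]   # lower-right centre
```

For a 2000 × 1000 preview image this yields (1000, 500), (300, 150), (1700, 150), (300, 850), and (1700, 850), around each of which a 100 × 100 region is taken.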
S403, calculating at least one brightness value corresponding to at least one sub-region;
the image acquisition device calculates at least one brightness value corresponding to at least one sub-region based on the preview image; wherein, the brightness value is a gray value or an RGB value.
In some embodiments, taking the brightness value as the gray scale value as an example, the image capturing device grays the preview image to obtain a gray scale image, and calculates the gray scale value of each of the at least one sub-region in the gray scale image.
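The per-sub-region gray-value computation can be sketched as follows, assuming a grayscale preview image indexed as `gray[row, column]` and 100 × 100 regions as in the example:

```python
import numpy as np

def region_brightness(gray, centers, size=100):
    # Mean gray value of a size x size window around each region centre;
    # centres are (abscissa, ordinate) pairs as in the example above.
    h = size // 2
    vals = []
    for (x, y) in centers:
        patch = gray[max(y - h, 0):y + h, max(x - h, 0):x + h]
        vals.append(float(patch.mean()))
    return vals
```

Each returned value is the brightness value of one sub-region, to be compared against the preset ambient brightness threshold in step S404.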
S404, analyzing whether a highlight area exists in the preview image or not based on at least one brightness value and a preset environment brightness threshold value;
the image acquisition device judges whether a highlight area exists in the preview image or not based on a preset environment brightness threshold value and at least one brightness value.
In some embodiments, before analyzing whether a highlight area exists in a preview image by using a preset ambient brightness threshold, the image acquisition device acquires brightness images under different ambient brightness for a shooting scene including a highlight object; detecting whether each image in the brightness image comprises diffraction fringes or not by using a preset diffraction detection model; when the jth image in the luminance images does not include the diffraction fringes and the (j + 1) th image includes the diffraction fringes, calculating the gray value of the (j + 1) th image, taking the gray value of the (j + 1) th image as a preset ambient luminance threshold value, wherein j is an integer greater than 0, and the ambient luminance corresponding to the jth image is lower than the ambient luminance corresponding to the (j + 1) th image; wherein, the ambient brightness refers to visible light with the wavelength range of 400nm-700 nm.
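The threshold-calibration walk just described can be sketched as follows; `has_fringes` and `mean_gray` are hypothetical callables standing in for the preset diffraction-detection model and the gray-value computation, neither of which is specified in the text:

```python
def calibrate_threshold(luminance_images, has_fringes, mean_gray):
    # Walk images ordered by increasing ambient brightness; the first image
    # that shows diffraction fringes while its predecessor shows none
    # supplies the preset ambient brightness threshold.
    for j in range(len(luminance_images) - 1):
        if not has_fringes(luminance_images[j]) and has_fringes(luminance_images[j + 1]):
            return mean_gray(luminance_images[j + 1])
    return None  # no fringe onset found in the captured series
```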
In some embodiments, the image acquisition device calculates a degree of difference between every two brightness values of the at least one brightness value, resulting in at least one degree of difference; determining a maximum brightness value from the at least one brightness value when a target difference greater than or equal to a preset difference threshold value exists in the at least one difference; when the maximum brightness value is larger than a preset environment brightness threshold value, representing that a highlight area exists; when each difference degree in the at least one difference degree is smaller than a preset difference degree threshold value, judging whether a target brightness value larger than a preset environment brightness threshold value exists in the at least one brightness value; when a target luminance value is present, a highlight region is present.
The image acquisition device compares the at least one difference degree with the preset difference threshold. When a target difference degree greater than or equal to the preset difference threshold exists among them, the brightness distribution of the preview image is non-uniform, and the device judges whether the maximum brightness value is greater than the preset ambient brightness threshold: if it is, a highlight region exists in the preview image; otherwise, no highlight region exists. When every difference degree is smaller than the preset difference threshold, the brightness distribution of the preview image is uniform, and the device judges whether any of the brightness values is greater than the preset ambient brightness threshold: when such a target brightness value exists, a highlight region exists in the preview image; otherwise, no highlight region exists.
In some embodiments, the image capturing apparatus divides the difference between every two brightness values by the minimum brightness value of every two brightness values to obtain the difference corresponding to every two brightness values, and accordingly, the preset difference threshold may be set to 20%.
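The decision procedure of the preceding paragraphs can be sketched as follows, assuming positive brightness values; 20% is the suggested preset difference threshold:

```python
from itertools import combinations

def has_highlight(brightness, ambient_thresh, diff_thresh=0.20):
    # Pairwise difference degree: brightness gap divided by the smaller of
    # the two values (assumes all brightness values are positive).
    diffs = [abs(a - b) / min(a, b) for a, b in combinations(brightness, 2)]
    if any(d >= diff_thresh for d in diffs):
        return max(brightness) > ambient_thresh         # non-uniform preview
    return any(v > ambient_thresh for v in brightness)  # uniform preview
```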
In some embodiments, the image acquisition device sorts the at least one brightness value and takes the preset number of largest values from the sorted brightness values as the maximum brightness value, wherein the preset number is an integer greater than 1.
In some embodiments, when a target brightness value greater than or equal to the preset ambient brightness threshold exists among the at least one brightness value, the image acquisition device determines that a highlight region exists; when every brightness value is smaller than the preset ambient brightness threshold, it determines that no highlight region exists.
The image acquisition device directly judges whether each brightness value in at least one brightness value is larger than a preset environment brightness threshold value, and when a target brightness value exists in at least one brightness value, a highlight area exists in the representation preview image; and when all the at least one brightness value is smaller than the preset environment brightness threshold value, representing that no highlight area exists in the preview image.
And S405, when the highlight area exists in the preview image, starting a diffraction removing function.
And when the image acquisition device determines that a highlight area exists in the preview image, starting a de-diffraction function, otherwise, not starting the de-diffraction function.
In some embodiments, the image acquisition device detects whether diffraction fringes exist in a preview image after entering the preview state and after acquiring a frame of preview image in the preview state and before starting a de-diffraction function; accordingly, when a highlight area exists in the preview image and a diffraction stripe exists in the preview image, the de-diffraction function is started.
The image acquisition device detects whether the preview image comprises diffraction stripes, when a highlight area exists in the preview image and the diffraction stripes exist in the preview image, the possibility that diffraction phenomena occur when shooting is carried out under the current ambient brightness is determined to be high, the de-diffraction function is started to avoid the diffraction phenomena from reducing the quality of the shot image, and otherwise, the de-diffraction function is not started.
In some embodiments, after detecting whether the diffraction fringes exist in the preview image, the image acquisition device generates and displays prompt information when a highlight area exists in the preview image and the diffraction fringes exist in the preview image, wherein the prompt information is used for prompting a user to detect whether the camera is blocked.
When a highlight region exists in the preview image and diffraction fringes are present in it, the image acquisition device generates prompt information and displays it on the display unit. The prompt informs the user that usage traces such as fingerprints or sweat stains may be present on the glass cover plate of the camera, so that the user can wipe the traces away before shooting and thus directly capture an image of better quality.
It can be understood that, when the de-diffraction function is enabled, the image acquisition device obtains a differentiation region based on at least two frames of images with different exposure levels and an initial image acquired at the preset exposure level. In a low-exposure image the details of the highlight object are complete and clear while the details of non-highlight objects are largely lost; in a high-exposure image diffraction fringes surround the highlight object, so the highlight object is unclear while the details of non-highlight objects are complete and clear. Performing brightness differentiation processing on the at least two frames and the initial image therefore yields a differentiation region containing the diffraction fringes that do not belong to the highlight object, and removing this region from the initial image yields the de-diffraction image. Alternatively, the preset de-diffraction model can be used directly to de-diffract a single-frame image. In either case no manual operation by the user is required, a de-diffraction image of better quality is obtained, and the intelligence and reliability of image capture are improved.
Example two
This embodiment is further described based on the same inventive concept as the first embodiment.
An embodiment of the present application provides an image capturing apparatus, as shown in fig. 9, an image capturing apparatus 9 includes:
a communication unit 91 for receiving a shooting instruction;
the acquisition unit 92 is used for acquiring a multi-frame image or a single-frame image according to the shooting instruction when the de-diffraction function is started; the multi-frame image comprises at least two frames of images with different exposure degrees and an initial image collected under a preset exposure degree, and the single frame of image is an image collected under the preset exposure degree;
a de-diffraction unit 93, configured to perform brightness differentiation processing on the at least two frames of images and the initial image to obtain a differentiation region, and remove the differentiation region from the initial image to obtain a de-diffraction image; or, carrying out the de-diffraction processing on the single-frame image by using a preset de-diffraction model to obtain a de-diffraction image;
a display unit 94 for displaying the undiffracted image.
In some embodiments, the de-diffraction unit 93 is specifically configured to compare brightness of at least two frames of images to determine an initial differentiated area; removing the initial differentiation area on the image with the highest brightness in the at least two frames of images to obtain a reference image; comparing the brightness of the reference image and the initial image to determine a differentiation area; and removing the differentiated area on the initial image to obtain a de-diffraction image.
In some embodiments, the de-diffraction unit 93 is specifically configured to divide the single-frame image into regions to obtain at least one single-frame sub-region and to calculate the single-frame brightness value corresponding to each sub-region. When a target single-frame brightness value greater than or equal to the preset ambient brightness threshold exists among the single-frame brightness values, the region corresponding to that value is a highlight region, and the preset de-diffraction model is used to de-diffract the highlight region in the single-frame image to obtain the de-diffraction image; when every single-frame brightness value is greater than or equal to the preset ambient brightness threshold, the preset de-diffraction model is used to de-diffract the entire single-frame image to obtain the de-diffraction image.
In some embodiments, the image acquisition device 9 further comprises:
the model generating unit 98 is configured to obtain at least one non-diffraction image and at least one diffraction image in at least one shooting scene before performing de-diffraction on the single-frame image by using a preset de-diffraction model to obtain a de-diffraction image, where the at least one shooting scene, the at least one non-diffraction image and the at least one diffraction image are in one-to-one correspondence; and training the preset initial de-diffraction model by using at least one non-diffraction image and at least one diffraction image to obtain the preset de-diffraction model.
In some embodiments, the image acquisition device 9 further comprises:
a model generation unit 98 for acquiring a non-diffractive point light source image; calculating a kernel function corresponding to the three-channel image aiming at the point light source image; performing spot special-shaped processing on the point light source image by utilizing the kernel function to obtain a special-shaped image; and training the preset deconvolution model by using the point light source image and the special-shaped image to obtain a preset de-diffraction model.
In some embodiments, the image acquisition device 9 further comprises:
the diffraction judging unit 99 is used for entering a preview state when a camera starting instruction is received before a shooting instruction is received, and acquiring a frame of preview image in the preview state; dividing the preview image into regions to obtain at least one subregion; calculating at least one brightness value corresponding to at least one sub-region; analyzing whether a highlight area exists in the preview image or not based on at least one brightness value and a preset environment brightness threshold value; and when the highlight area exists in the preview image, starting the de-diffraction function.
In some embodiments, the diffraction determining unit 99 is specifically configured to characterize that a highlight area exists when a target brightness value greater than or equal to a preset environment brightness threshold exists in at least one brightness value; and when at least one brightness value is smaller than a preset environment brightness threshold value, representing that no high-brightness region exists.
In some embodiments, the diffraction determining unit 99 is specifically configured to calculate a difference between every two luminance values in the at least one luminance value to obtain at least one difference; when the target difference degree which is greater than or equal to the preset difference degree threshold value exists in the at least one difference degree, determining a maximum brightness value from the at least one brightness value; when the maximum brightness value is larger than a preset environment brightness threshold value, representing that a highlight area exists; when each difference degree in the at least one difference degree is smaller than a preset difference degree threshold value, judging whether a target brightness value larger than a preset environment brightness threshold value exists in the at least one brightness value; and when the target brightness value exists, the highlight area exists in the characterization.
In some embodiments, the diffraction determining unit 99 is further configured to detect whether a diffraction fringe exists in the preview image after entering the preview state, and after acquiring a frame of preview image in the preview state, and before starting the function of removing diffraction;
accordingly, the diffraction determining unit 99 is specifically configured to turn on the de-diffraction function when the highlight region exists in the preview image and the diffraction fringes exist in the preview image.
In some embodiments, the diffraction determining unit 99 is further configured to, after detecting whether a diffraction fringe exists in the preview image, generate and display a prompt message for prompting a user to detect whether the camera is blocked when a highlight region exists in the preview image and a diffraction fringe exists in the preview image.
It should be noted that, in practical applications, the communication unit 91 may be implemented by a wireless or wired communication chip, the acquisition unit 92 by a camera, the de-diffraction unit 93, the model generation unit 98, and the diffraction judgment unit 99 by a processor 95 located on the image acquisition device 9, and the display unit 94 by a display; the wireless communication chip, wired communication chip, camera, and display are not shown in fig. 9. The processor 95 may specifically be a CPU (Central Processing Unit), an MPU (Microprocessor Unit), a DSP (Digital Signal Processor), or a Field Programmable Gate Array (FPGA).
The embodiment of the present application further provides an image capturing apparatus 9, as shown in fig. 10, where the apparatus 9 includes: a processor 95, a memory 96 and a communication bus 97, the memory 96 being in communication with the processor 95 via the communication bus 97, the memory 96 storing one or more programs executable by the processor 95, the one or more programs, when executed, causing the processor 95 to perform any of the image acquisition methods as described in the previous embodiments.
The embodiment of the present application further provides a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors 95; when the programs are executed by the processor 95, the image acquisition method according to the first embodiment is implemented.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus, and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application.

Claims (16)

1. An image acquisition method, characterized in that the method comprises:
receiving a shooting instruction;
when the de-diffraction function is started, acquiring a multi-frame image or a single-frame image according to the shooting instruction; the multi-frame image comprises at least two frames of images with different exposure degrees and an initial image collected under a preset exposure degree, and the single frame of image is the image collected under the preset exposure degree;
comparing the brightness of the at least two frames of images to determine an initial differentiation area; removing the initial differentiation area from the image with the highest brightness among the at least two frames of images to obtain a reference image; comparing the brightness of the reference image with that of the initial image to obtain a differentiation area, and removing the differentiation area from the initial image to obtain a de-diffraction image; or, performing de-diffraction processing on the single-frame image by using a preset de-diffraction model to obtain a de-diffraction image;
the method comprises the steps of acquiring at least one non-diffraction image and at least one diffraction image in at least one shooting scene, wherein the at least one shooting scene, the at least one non-diffraction image and the at least one diffraction image are in one-to-one correspondence, and training a preset initial de-diffraction model by utilizing the at least one non-diffraction image and the at least one diffraction image to obtain the preset de-diffraction model; or acquiring a non-diffracted point light source image, calculating a kernel function corresponding to a three-channel image aiming at the point light source image, performing light spot special-shaped processing on the point light source image by using the kernel function to obtain a special-shaped image, and training a preset deconvolution model by using the point light source image and the special-shaped image to obtain the preset de-diffraction model;
displaying the de-diffracted image.
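The multi-frame branch of claim 1 can be sketched in Python/NumPy as follows. This is an illustrative interpretation, not the patented implementation: the array representation, the `diff_thresh` constant, the use of only the first two exposures for the initial comparison, and the zeroing that stands in for "removing" a differentiation area are all assumptions.

```python
import numpy as np

def luminance(img):
    # Per-pixel luminance of an RGB image of shape (H, W, 3).
    return img @ np.array([0.299, 0.587, 0.114])

def dediffract_multiframe(frames, initial, diff_thresh=40.0):
    """Sketch of the multi-frame branch: `frames` holds at least two
    differently exposed shots, `initial` is the shot at the preset
    exposure. `diff_thresh` is a hypothetical tuning constant."""
    lums = [luminance(f) for f in frames]
    # Initial differentiation area: pixels whose brightness differs
    # strongly between two differently exposed frames (first two shown).
    init_mask = np.abs(lums[0] - lums[1]) > diff_thresh
    # Reference image: brightest frame with that area suppressed.
    brightest = frames[int(np.argmax([l.mean() for l in lums]))]
    reference = brightest.copy()
    reference[init_mask] = 0
    # Differentiation area vs. the initial image: where the initial
    # image is still much brighter, treat it as diffraction residue.
    diff_mask = (luminance(initial) - luminance(reference)) > diff_thresh
    result = initial.copy()
    result[diff_mask] = 0  # "removal" shown as zeroing; real code would inpaint
    return result
```

In practice the zeroed pixels would be filled by inpainting or blending from the reference frame rather than left black; the sketch only shows where the two brightness comparisons fall in the pipeline.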
2. The method of claim 1, wherein the performing the de-diffraction processing on the single-frame image by using a preset de-diffraction model to obtain a de-diffraction image comprises:
performing region division on the single-frame image to obtain at least one single-frame subregion;
calculating at least one single-frame brightness value corresponding to the at least one single-frame sub-region;
when a target single-frame brightness value greater than or equal to a preset environment brightness threshold value exists in the at least one single-frame brightness value, taking the region corresponding to the target single-frame brightness value as a highlight region, and performing de-diffraction processing on the highlight region in the single-frame image by using the preset de-diffraction model to obtain the de-diffraction image;
and when each of the at least one single-frame brightness value is greater than or equal to the preset environment brightness threshold value, performing de-diffraction processing on the entire single-frame image by using the preset de-diffraction model to obtain the de-diffraction image.
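The region division and brightness test that claim 2 applies to the single-frame image can be sketched as below. This is a hedged illustration: the grid size, the threshold value, the grayscale input, and the function name are all hypothetical choices, not taken from the patent.

```python
import numpy as np

def find_highlight_regions(img, grid=(4, 4), thresh=200.0):
    """Divide a grayscale image of shape (H, W) into grid sub-regions
    and return each sub-region's mean brightness plus the indices of
    highlight regions (mean >= thresh). `grid` and `thresh` are
    hypothetical parameters."""
    h, w = img.shape
    gh, gw = grid
    means, highlights = [], []
    for i in range(gh):
        for j in range(gw):
            # Slice out one sub-region of the division grid.
            block = img[i * h // gh:(i + 1) * h // gh,
                        j * w // gw:(j + 1) * w // gw]
            m = float(block.mean())
            means.append(m)
            if m >= thresh:
                highlights.append((i, j))
    return means, highlights
```

A caller would then pass only the highlight sub-regions (or, when every sub-region exceeds the threshold, the whole frame) to the de-diffraction model.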
3. The method of claim 1, wherein prior to said receiving a capture instruction, the method further comprises:
when a camera starting instruction is received, entering a preview state, and acquiring a frame of preview image in the preview state;
performing area division on the preview image to obtain at least one sub-area;
calculating at least one brightness value corresponding to the at least one sub-region;
analyzing whether a highlight area exists in the preview image or not based on the at least one brightness value and a preset environment brightness threshold value;
and when a highlight area exists in the preview image, starting the de-diffraction function.
4. The method of claim 3, wherein analyzing whether a highlight region exists in the preview image based on the at least one brightness value and the preset ambient brightness threshold comprises:
when a target brightness value which is greater than or equal to the preset environment brightness threshold value exists in the at least one brightness value, representing that the highlight area exists;
and when each of the at least one brightness value is smaller than the preset environment brightness threshold value, representing that the highlight area does not exist.
5. The method of claim 3, wherein analyzing whether a highlight region exists in the preview image based on the at least one brightness value and the preset ambient brightness threshold comprises:
calculating the difference degree between every two brightness values in the at least one brightness value to obtain at least one difference degree;
determining a maximum brightness value from the at least one brightness value when a target difference greater than or equal to a preset difference threshold value exists in the at least one difference;
when the maximum brightness value is larger than the preset environment brightness threshold value, representing that the highlight area exists;
when each difference degree in the at least one difference degree is smaller than the preset difference degree threshold value, judging whether a target brightness value larger than the preset environment brightness threshold value exists in the at least one brightness value;
when the target brightness value is present, characterizing that the highlight region is present.
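The difference-degree test of claim 5 can be sketched as a small decision function. Again this is only an illustrative reading of the claim: both threshold values and the pairwise absolute difference used as the "difference degree" are assumptions.

```python
from itertools import combinations

def has_highlight(brightness_values, diff_thresh=100.0, ambient_thresh=180.0):
    """Sketch of the difference-degree analysis in claim 5; both
    thresholds are hypothetical. Returns True when the sub-region
    brightness values indicate a highlight region."""
    # Difference degree between every two brightness values.
    diffs = [abs(a - b) for a, b in combinations(brightness_values, 2)]
    if any(d >= diff_thresh for d in diffs):
        # Strong contrast between sub-regions: check the brightest one.
        return max(brightness_values) > ambient_thresh
    # All differences small (uniform scene): fall back to a plain
    # threshold test on the individual brightness values.
    return any(v > ambient_thresh for v in brightness_values)
```

For example, `[20, 30, 250]` triggers the contrast branch, while a uniformly bright `[190, 195, 200]` is caught by the fallback test.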
6. The method of claim 3, wherein after entering a preview state, after acquiring a preview image of a frame in the preview state, and before the enabling of the de-diffraction function, the method further comprises:
detecting whether diffraction fringes exist in the preview image;
correspondingly, when a highlight area exists in the preview image, the starting of the de-diffraction function comprises the following steps:
and when a highlight area exists in the preview image and the diffraction fringes exist in the preview image, starting the de-diffraction function.
7. The method of claim 6, wherein after said detecting whether diffraction fringes are present in said preview image, said method further comprises:
and when a highlight area exists in the preview image and the diffraction fringes exist in the preview image, generating and displaying prompt information, wherein the prompt information is used for prompting a user to detect whether the camera is shielded or not.
8. An image acquisition apparatus, characterized in that the apparatus comprises:
a communication unit for receiving a photographing instruction;
the acquisition unit is used for acquiring multi-frame images or single-frame images according to the shooting instruction when the de-diffraction function is started; the multi-frame image comprises at least two frames of images with different exposure degrees and an initial image collected under a preset exposure degree, and the single frame of image is the image collected under the preset exposure degree;
the de-diffraction unit is used for comparing the brightness of the at least two frames of images to determine an initial differentiation area; removing the initial differentiation area on the image with the highest brightness in the at least two frames of images to obtain a reference image; comparing the brightness of the reference image with that of the initial image to obtain a differentiation area, and removing the differentiation area from the initial image to obtain a de-diffraction image; or, carrying out the de-diffraction processing on the single-frame image by using a preset de-diffraction model to obtain a de-diffraction image;
the model generation unit is used for acquiring at least one non-diffraction image and at least one diffraction image in at least one shooting scene, wherein the at least one shooting scene, the at least one non-diffraction image and the at least one diffraction image are in one-to-one correspondence, and training a preset initial de-diffraction model by using the at least one non-diffraction image and the at least one diffraction image to obtain the preset de-diffraction model; or acquiring a non-diffracted point light source image, calculating, for the point light source image, a kernel function corresponding to its three-channel image, performing spot-deformation processing on the point light source image by using the kernel function to obtain a deformed-spot image, and training a preset deconvolution model by using the point light source image and the deformed-spot image to obtain the preset de-diffraction model;
a display unit for displaying the de-diffracted image.
9. The apparatus of claim 8,
the de-diffraction unit is specifically configured to perform area division on the single-frame image to obtain at least one single-frame subregion; calculate at least one single-frame brightness value corresponding to the at least one single-frame sub-region; when a target single-frame brightness value greater than or equal to a preset environment brightness threshold value exists in the at least one single-frame brightness value, take the region corresponding to the target single-frame brightness value as a highlight region, and perform de-diffraction processing on the highlight region in the single-frame image by using the preset de-diffraction model to obtain the de-diffraction image; and when each of the at least one single-frame brightness value is greater than or equal to the preset environment brightness threshold value, perform de-diffraction processing on the entire single-frame image by using the preset de-diffraction model to obtain the de-diffraction image.
10. The apparatus of claim 8, further comprising:
the diffraction judging unit is used for entering a preview state when a camera starting instruction is received before the shooting instruction is received, and acquiring a frame of preview image in the preview state; dividing the preview image into regions to obtain at least one subregion; calculating at least one brightness value corresponding to the at least one sub-region; analyzing whether a highlight area exists in the preview image or not based on the at least one brightness value and a preset environment brightness threshold value; and when a highlight area exists in the preview image, starting the de-diffraction function.
11. The apparatus of claim 10,
the diffraction judging unit is specifically configured to represent that the highlight area exists when a target brightness value greater than or equal to the preset environment brightness threshold exists in the at least one brightness value; and to represent that the highlight area does not exist when each of the at least one brightness value is smaller than the preset environment brightness threshold value.
12. The apparatus of claim 10,
the diffraction judging unit is specifically configured to calculate a difference between every two luminance values in the at least one luminance value to obtain at least one difference; when a target difference degree which is larger than or equal to a preset difference degree threshold value exists in the at least one difference degree, determining a maximum brightness value from the at least one brightness value; when the maximum brightness value is larger than the preset environment brightness threshold value, representing that the highlight area exists; when each difference degree in the at least one difference degree is smaller than the preset difference degree threshold value, judging whether a target brightness value larger than the preset environment brightness threshold value exists in the at least one brightness value or not; and when the target brightness value exists, characterizing that the highlight area exists.
13. The apparatus of claim 10,
the diffraction judging unit is further configured to detect, after entering the preview state and acquiring a frame of preview image in the preview state, and before the de-diffraction function is started, whether diffraction fringes exist in the preview image;
correspondingly, the diffraction judging unit is specifically configured to, when a highlight region exists in the preview image and the diffraction fringes exist in the preview image, turn on the de-diffraction function.
14. The apparatus of claim 13,
the diffraction judging unit is further configured to, after detecting whether a diffraction stripe exists in the preview image, generate and display a prompt message when a highlight region exists in the preview image and the diffraction stripe exists in the preview image, where the prompt message is used to prompt a user to detect whether the camera is blocked.
15. An image capturing apparatus, characterized in that the image capturing apparatus comprises: a processor, a memory, and a communication bus, the memory in communication with the processor through the communication bus, the memory storing one or more programs executable by the processor, the processor performing the method of any of claims 1-7 when the one or more programs are executed.
16. A computer-readable storage medium, having one or more programs stored thereon, the one or more programs being executable by one or more processors to perform the method of any of claims 1-7.
CN201910708747.5A 2019-08-01 2019-08-01 Image acquisition method and device, and storage medium Active CN110475063B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910708747.5A CN110475063B (en) 2019-08-01 2019-08-01 Image acquisition method and device, and storage medium


Publications (2)

Publication Number Publication Date
CN110475063A CN110475063A (en) 2019-11-19
CN110475063B true CN110475063B (en) 2021-03-16

Family

ID=68508511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910708747.5A Active CN110475063B (en) 2019-08-01 2019-08-01 Image acquisition method and device, and storage medium

Country Status (1)

Country Link
CN (1) CN110475063B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007201536A (en) * 2006-01-23 2007-08-09 Canon Inc Imaging apparatus
CN107295236A (en) * 2017-08-11 2017-10-24 深圳市唯特视科技有限公司 A kind of snapshot Difference Imaging method based on time-of-flight sensor
CN107945158A (en) * 2017-11-15 2018-04-20 上海摩软通讯技术有限公司 A kind of dirty method and device of detector lens
CN109167903A (en) * 2018-10-31 2019-01-08 Oppo广东移动通信有限公司 Image acquiring method, image acquiring device, structure optical assembly and electronic device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001186405A (en) * 1999-12-27 2001-07-06 Canon Inc Image pickup device and focusing method for the image pickup device
EP2772786B1 (en) * 2011-10-27 2016-07-13 Dai Nippon Printing Co., Ltd. Projection device
KR101406129B1 (en) * 2012-03-28 2014-06-13 엘지디스플레이 주식회사 Display apparatus
CN105103534B (en) * 2013-03-27 2018-06-22 富士胶片株式会社 Photographic device and calibration method
FR3026836B1 (en) * 2014-10-03 2022-04-22 Centre Nat Rech Scient METHOD AND OPTICAL DEVICE OF TELEMETRY
JP6594101B2 (en) * 2015-08-19 2019-10-23 キヤノン株式会社 Image processing apparatus, image processing method, and image processing program
US10735655B2 (en) * 2017-12-07 2020-08-04 Canon Kabushiki Kaisha Apparatus, method, and program for image processing
CN108376656B (en) * 2018-02-08 2020-07-31 北京科技大学 Nondestructive testing method for oversized crystal grain size based on two-dimensional X-ray detection technology




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant