CN115426458A - Light source detection method and related equipment thereof - Google Patents

Light source detection method and related equipment thereof

Info

Publication number
CN115426458A
Authority
CN
China
Prior art keywords
camera
image
light source
electronic device
detection method
Prior art date
Legal status
Granted
Application number
CN202211354642.2A
Other languages
Chinese (zh)
Other versions
CN115426458B (en)
Inventor
王宇 (Wang Yu)
陈铎 (Chen Duo)
孙佳男 (Sun Jianan)
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202211354642.2A priority Critical patent/CN115426458B/en
Publication of CN115426458A publication Critical patent/CN115426458A/en
Application granted granted Critical
Publication of CN115426458B publication Critical patent/CN115426458B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00: Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40: Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Studio Devices (AREA)

Abstract

The application provides a light source detection method and related equipment thereof, and relates to the field of image processing. The light source detection method includes: starting a camera application of an electronic device; acquiring a first image with a first camera; performing virtual-focus processing and/or exposure-value-reduction processing on a second camera; acquiring a reference image with the processed second camera; and determining, based on the reference image, a target light source included in the first image. By applying virtual-focus and/or exposure-value-reduction processing to the second camera, the application can assist the first camera in accurately detecting light sources in a simple and rapid manner.

Description

Light source detection method and related equipment thereof
Technical Field
The present application relates to the field of image processing, and in particular, to a light source detection method and related apparatus.
Background
With the popularization of electronic devices with shooting functions, taking pictures with an electronic device has become part of everyday life.
When a scene includes a light source, the brightness of the light source is relatively high and affects the light and shadow of the surrounding environment. Therefore, in the related art, the light source is usually detected by brightness threshold recognition; for example, a local area brighter than a preset brightness threshold is determined to be a light source, and subsequent processing is then performed.
However, this detection method is not very robust: highly reflective objects, bright metal and the like can be erroneously identified as light sources, so a new detection method is urgently needed to solve the problem.
Disclosure of Invention
The application provides a light source detection method and related equipment thereof, which, by applying virtual-focus and/or exposure-value-reduction processing to a second camera, can assist a first camera in accurately detecting a light source in a simple and rapid manner.
In a first aspect, a light source detection method is provided, which is applied to an electronic device including a first camera and a second camera, where the first camera and the second camera are used for shooting a same scene; the method comprises the following steps:
starting a camera application of the electronic device;
acquiring a first image by using the first camera;
performing virtual-focus processing and/or exposure-value-reduction processing on the second camera;
acquiring a reference image by using the processed second camera;
based on the reference image, a target light source included in the first image is determined.
According to the light source detection method provided by the embodiment of the application, when the first camera and the second camera are both enabled for shooting, the second camera is subjected to virtual-focus and/or exposure-value-reduction processing, so that areas that are not real light sources appear blurred and darkened in the reference image acquired by the processed second camera. The reference image is then used to help screen out, from the first image, highlight areas produced by highly reflective objects, bright metal, noise and the like, so that real light sources are identified and the accuracy of light source detection is improved.
With reference to the first aspect, in an implementation manner of the first aspect, performing virtual focus processing on the second camera includes:
reducing the distance between the lens in the second camera and the image sensor to less than the distance between the lens in the first camera and the image sensor when the first image is acquired.
In this implementation, since the second camera is subjected to virtual-focus processing, most objects in the first reference image acquired by the second camera are blurred; for example, background objects far from the second camera become blurred. Bright metal and highly reflective objects form images by reflecting bright light, and the energy of this reflected light is very low relative to that of a light source, so after the virtual-focus processing their images in the first reference image become blurred and blend into the background. In this case, only a real light-emitting source, because of its high energy, remains unblurred after the virtual-focus processing; it stands out against the blurred background and is brighter than its surroundings.
With reference to the first aspect, in an implementation manner of the first aspect, the performing exposure value reduction processing on the second camera includes:
reducing the exposure value corresponding to the second camera to half of the exposure value corresponding to the first camera when the first image is acquired.
In this implementation, decreasing the exposure value causes the second reference image captured by the second camera to become a relatively dark image. Bright metal and highly reflective objects are imaged by reflecting bright light, and the energy of this reflected light is very low relative to the energy of the light source, so their corresponding images in the second reference image become gray or black areas as the exposure value is reduced; bright areas produced by noise likewise turn gray or black. In this case, only the actual light source, because of its high energy, remains relatively bright and close to white after the exposure value of the second camera is reduced.
With reference to the first aspect, in an implementation manner of the first aspect, the performing exposure value reduction processing on the second camera includes:
keeping the sensitivity unchanged and reducing the exposure time corresponding to the second camera; or,
keeping the exposure time unchanged and reducing the sensitivity corresponding to the second camera.
In this implementation, the exposure value of the second camera may be reduced by reducing the exposure time or sensitivity. Further, the exposure time and the sensitivity can be reduced at the same time.
With reference to the first aspect, in an implementation manner of the first aspect, before performing the virtual focus processing and/or the exposure value reduction processing on the second camera, the method further includes:
acquiring a second image by using the second camera;
performing brightness threshold recognition and shape detection on the first image and the second image, determining a region to be detected in the first image and a region to be detected in the second image, and matching the two regions;
determining a target light source included in the first image based on the reference image, including:
performing the brightness threshold recognition and the shape detection on the reference image, and determining a target area in the reference image, wherein the target area and the area to be detected in the second image have a corresponding relation;
and determining, as a target light source, a region to be measured in the first image that has a corresponding target region.
In this implementation, the region to be measured in the first image is matched with the region to be measured in the second image, and the target region in the reference image corresponds to the region to be measured in the second image, so the region to be measured in the first image can be associated with the target region in the reference image. When the reference image has a target region associated with a region to be measured in the first image, that region to be measured indicates a real light source position; when it does not, the region to be measured indicates a false light source position, such as a bright area formed by a highly reflective object, bright metal or noise. Therefore, with the reference image as a reference, the real and false light sources in the first image can be determined through a simple comparison.
With reference to the first aspect, in an implementation manner of the first aspect, the second camera includes at least one second camera.
When there are a plurality of second cameras, virtual-focus and/or exposure-value-reduction processing can be performed on each of them, and the target light source in the first image can be determined by combining the target areas in the reference images captured by the plurality of second cameras. The more cameras that assist, the more accurate the detection.
In a second aspect, an electronic device is provided, the electronic device comprising: one or more processors, a memory, and a display screen; the memory is coupled to the one or more processors and is configured to store computer program code, the computer program code comprising computer instructions; and the one or more processors invoke the computer instructions to cause the electronic device to perform:
starting a camera application of the electronic device;
acquiring a first image by using the first camera;
performing virtual-focus processing and/or exposure-value-reduction processing on the second camera;
acquiring a reference image by using the processed second camera;
based on the reference image, a target light source included in the first image is determined.
With reference to the second aspect, in an implementation manner of the second aspect, the performing virtual focus processing on the second camera includes:
and reducing the distance between the lens in the second camera and the image sensor to be smaller than the distance between the lens in the first camera and the image sensor when the first image is acquired.
With reference to the second aspect, in an implementation manner of the second aspect, the performing exposure value reduction processing on the second camera includes:
reducing the exposure value corresponding to the second camera to half of the exposure value corresponding to the first camera when the first image is acquired.
With reference to the second aspect, in an implementation manner of the second aspect, the performing exposure value reduction processing on the second camera includes:
keeping the sensitivity unchanged and reducing the exposure time corresponding to the second camera; or,
keeping the exposure time unchanged and reducing the sensitivity corresponding to the second camera.
With reference to the second aspect, in an implementation manner of the second aspect, before performing the virtual focus processing and/or the exposure value reduction processing on the second camera, the method further includes:
acquiring a second image by using the second camera;
performing brightness threshold recognition and shape detection on the first image and the second image, determining a region to be detected in the first image and a region to be detected in the second image, and matching the two regions;
determining a target light source included in the first image based on the reference image, including:
performing the brightness threshold recognition and the shape detection on the reference image, and determining a target area in the reference image, wherein the target area and the area to be detected in the second image have a corresponding relation;
and determining, as a target light source, a region to be measured in the first image that has a corresponding target region.
With reference to the second aspect, in an implementation manner of the second aspect, the second camera includes at least one second camera.
In a third aspect, a light source detection apparatus is provided, which includes means for performing any one of the light source detection methods of the first aspect.
In one possible implementation, when the light source detection apparatus is an electronic device, the processing unit may be a processor, and the input unit may be a communication interface; the electronic device may further comprise a memory for storing computer program code which, when executed by the processor, causes the electronic device to perform any of the methods of the first aspect.
In a fourth aspect, a chip is provided, where the chip is applied to an electronic device, and the chip includes one or more processors, and the processor is configured to invoke computer instructions to cause the electronic device to execute any one of the light source detection methods in the first aspect.
In a fifth aspect, a computer-readable storage medium is provided, which stores computer program code, which, when executed by an electronic device, causes the electronic device to perform any one of the light source detection methods of the first aspect.
In a sixth aspect, there is provided a computer program product comprising: computer program code which, when run by an electronic device, causes the electronic device to perform any of the light source detection methods of the first aspect.
According to the light source detection method provided by the embodiment of the application, one first camera and at least one second camera are enabled; during shooting, the first camera is used to collect a first image, the second camera is subjected to virtual-focus and exposure-value-reduction processing, and a second reference image is collected with the processed second camera. Brightness threshold recognition and shape detection are performed on the second reference image collected by the second camera; the detection result is then used to help the first image screen out highlight areas generated by highly reflective objects, bright metal, noise and the like, and a real light source is identified as the target light source.
Because related electronic devices generally include a plurality of cameras, the application requires no structural modification; it only needs to call a plurality of cameras, and detection can be performed through simple virtual-focus and/or exposure-value-reduction processing. The method is simple, detection efficiency is high, and detection accuracy is also high.
Drawings
FIG. 1 is a schematic diagram of an application scenario suitable for use in the present application;
fig. 2 is a schematic flowchart of a light source detection method according to an embodiment of the present disclosure;
fig. 3 is a schematic layout diagram of a camera provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of an image during a light source detection process according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an image of another light source detection process provided in the embodiment of the present application;
FIG. 6 is a schematic diagram of a display interface provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a hardware system suitable for use in the electronic device of the present application;
FIG. 8 is a schematic diagram of a software system suitable for use with the electronic device of the present application;
fig. 9 is a schematic structural diagram of a light source detection device provided in the present application;
fig. 10 is a schematic structural diagram of a chip provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
First, some terms in the embodiments of the present application are explained so as to be easily understood by those skilled in the art.
1. An RGB (red, green, blue) color space, otherwise known as an RGB domain, refers to a color model that is related to the structure of the human visual system. All colors are considered as different combinations of red, green and blue depending on the structure of the human eye.
2. The YUV color space, otherwise known as the YUV domain, refers to a color coding method, Y denotes luminance and U and V denote chrominance. The RGB color space emphasizes the color sensing of human eyes, the YUV color space emphasizes the sensitivity of vision to brightness, and the RGB color space and the YUV color space can be converted with each other.
3. A brightness value (LV) is used to estimate the ambient brightness. Its calculation formula (given only as an image in the original document) combines the exposure time, aperture size, sensitivity and image luminance, where Exposure is the exposure time, Aperture is the aperture size, ISO is the sensitivity, and Luma is the average value of Y in the XYZ color space.
4. Auto focus (AF) refers to the process by which an electronic device adjusts the position of the focusing lens to obtain the highest image-frequency component and thus higher image contrast. The electronic device compares the contrast of the images captured with the lens at different positions to find the lens position at which the image contrast is maximal, so as to determine the focusing focal length.
5. Focal length: its value indicates the refractive power; the shorter the focal length, the greater the refractive power.
The focal length of the optical lens assembly determines the size of an image generated by a subject photographed by the optical lens assembly on an imaging plane. Assuming that the same subject is photographed at the same distance, the longer the focal length of the optical lens assembly, the larger the magnification of an image generated by the subject on a charge-coupled device (CCD).
6. The focusing distance is the distance between an object and an image, and is the sum of the distance from the lens to the object and the distance from the lens to the photosensitive element.
7. The photographing parameters may include a shutter, an exposure time, an Aperture Value (AV), an Exposure Value (EV), and a sensitivity ISO. The following are introduced separately.
The shutter is a device for controlling the time of light entering the camera to determine the exposure time of the image. The longer the shutter remains in the open state, the more light that enters the camera, and the longer the exposure time corresponding to the image. Conversely, the shorter the time the shutter remains in the open state, the less light enters the camera and the shorter the exposure time for the image.
The exposure time is the time during which the shutter is opened in order to project light onto the photosensitive surface of the photosensitive material of the camera. The exposure time is determined by the sensitivity of the photosensitive material and the illumination on the photosensitive surface. The longer the exposure time, the more light that enters the camera, the shorter the exposure time, the less light that enters the camera. Therefore, a long exposure time is required in a dark scene, and a short exposure time is required in a backlight scene.
The aperture value (f-number) is the ratio of the focal length of the lens in the camera to the light-passing diameter of the lens. The smaller the aperture value, the more light enters the camera; the larger the aperture value, the less light enters the camera.
The exposure value (EV) is a number that combines the exposure time and the aperture value to represent the light-passing ability of the camera lens. The exposure value may be defined as:
EV = log₂(N² / t)
wherein N is an aperture value; t is the exposure time in seconds.
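As a quick numerical illustration of this definition (the aperture and shutter values below are arbitrary example choices, not values taken from this application), a short Python sketch:

```python
import math

def exposure_value(n: float, t: float) -> float:
    """EV = log2(N^2 / t): N is the aperture value (f-number), t the exposure time in seconds."""
    return math.log2(n ** 2 / t)

print(exposure_value(2.0, 1 / 50))  # f/2.0 at 1/50 s -> log2(200) ≈ 7.64
print(exposure_value(2.0, 1 / 25))  # doubling the exposure time lowers EV by one stop under this definition
```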
ISO, is used to measure the degree of sensitivity of a negative to light, i.e. sensitivity or gain. For insensitive films, longer exposure times are required to achieve the same brightness of the image as the sensitive film. For sensitive films, a shorter exposure time is required to achieve the same brightness of the image as for insensitive films.
Given the shooting parameters above (shutter, exposure time, aperture value, exposure value and ISO), the electronic device may implement at least one of auto focus (AF), auto exposure (AE) and auto white balance (AWB) through an algorithm so as to adjust the shooting parameters automatically.
Illustratively, the value of the exposure value can be any one of -24, -4, -3, -2, -1, 0, 1, 2, 3, 4 and 24.
EV0 is used to instruct the electronic device to capture an exposure image with the determined exposure value 0 when the electronic device implements exposure through the algorithm. EV-2 is used to instruct the electronic device to capture the exposure image with the determined exposure value -2. EV1 is used to instruct the electronic device to capture the exposure image with the determined exposure value 1. The rest can be deduced by analogy and is not described here.
In this case, each increase of 1 in the exposure value changes the exposure by one stop; that is, the exposure (the integral of the illuminance received by a surface element of the object over the time t) is doubled, for example by doubling the exposure time or the aperture area. An increase in exposure value then corresponds to a slower shutter speed and a smaller f-number. Therefore, EV0 increases the exposure value by 2 relative to EV-2, changing the exposure by two stops; similarly, EV1 increases the exposure value by 1 relative to EV0, changing the exposure by one stop.
Here, when the exposure value EV is equal to 0, the exposure value is generally the optimal exposure value under the current illumination condition. Accordingly, the exposure image acquired by the electronic device under the condition of EV0 is the best exposure image under the current illumination condition, and the best exposure image may also be referred to as a reference exposure image.
It should be understood that the "best" exposure image refers to an exposure image that is determined algorithmically for a given electronic device, and that the best exposure image is determined to be different when the electronic device is different, the algorithm is different, or the current lighting conditions are different.
The foregoing is a brief introduction to the nouns referred to in the embodiments of the present application, and will not be described in detail below.
The light source detection method provided by the embodiment of the application can be suitable for various electronic devices.
In some embodiments of the present application, the electronic device may be a variety of cameras such as a motion camera and a digital camera, a mobile phone, a smart screen, a tablet computer, a wearable electronic device, an in-vehicle electronic device, an Augmented Reality (AR) device, a Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), a projector, and the like, and the embodiments of the present application do not limit the specific type of the electronic device.
In conjunction with an electronic device, fig. 1 is a schematic view of an application scenario of a light source detection method provided in an embodiment of the present application.
For example, the light source detection method in the embodiment of the present application may be applied to the field of photographing; for example, the light source detection method of the present application may be applied to photographing a subject including a light source in a photographing environment having the light source. The shooting environment can be a natural outdoor scene, a real indoor scene, a studio indoor scene and the like. The light source may include natural light sources and artificial light sources, for example, the sun, turned on lights, burning candles, etc. are all light sources. The light source may include one light source or a plurality of light sources.
For example, as shown in fig. 1, taking the electronic device 100 as a mobile phone as an example, the photographic subject includes a glass door of a convenience store, a christmas tree in front of the door, and a light source 101 (e.g., a street lamp) that provides illumination beside the door. The electronic device 100 runs a camera application program, and in the process of acquiring an image including a photographic subject, the electronic device 100 can display a preview image of the photographic subject including the light source 101 on the display screen in real time; when the user views a preview image on the display screen of the electronic device 100, if the user wants to capture the image at the viewing angle shown in fig. 1, the user may click a capture control displayed on the display screen. When the photographing control of the electronic apparatus 100 is triggered, the electronic apparatus 100 may photograph an image of a photographing object including the light source 101 as shown in fig. 1.
As shown in fig. 1, when the electronic device shoots a photographic subject that includes the light source 101, the brightness generated by the light source is relatively high and affects the light and shadow of the surrounding environment, which in turn affects the image captured by the image sensor; the image captured by the electronic device may therefore suffer from blurring, local overexposure and similar problems. Therefore, the related art usually detects the position of the light source in the image by brightness threshold recognition; for example, a local area brighter than a preset brightness threshold is determined to be the area where the light source is located, and a correction or other processing is then performed.
However, the detection method provided by the related art is not very robust, and the light source detection result is not accurate. It cannot correctly distinguish a real light-emitting source from a bright area formed by high reflection, a bright area formed by noise, and the like, and bright areas formed by highly reflective objects, bright metal and so on may be wrongly detected as light sources, which complicates subsequent processing. A new detection method is needed to solve this problem.
In view of this, an embodiment of the present application provides a light source detection method, where when a first camera and a second camera are enabled to perform shooting, a virtual focus and/or exposure value reduction process is performed on the second camera, so that an area of an unreal light source in a reference image acquired by the second camera after the process is blurred and blackened; then, the reference image is combined to assist the first image to screen out highlight areas generated by highly reflective objects, highlight metal, noise and the like, and a real light source is identified from the highlight areas, so that the accuracy of light source detection can be improved.
For example, when a camera application is in a preview state (for example, a shooting preview), a preview image displayed by the electronic device includes conditions such as local blur, overexposure, and the like, and after a shooting control of the electronic device is triggered, the light source detection method provided by the embodiment of the present application may be executed; or, when the electronic device detects that the user clicks a certain photographing mode (e.g., a large aperture mode, a night view mode), an image with higher definition, higher contrast, and more natural light and shadow effects can be obtained by combining the detection result with a related image processing method.
Optionally, in a case that the electronic device has sufficient computing capability, the light source detection method in the embodiment of the present application may also be applied to the field of video recording, the field of video call, or other image processing fields.
Illustratively, video call scenarios may include, but are not limited to, the following scenarios:
the method comprises the following steps of video call, video conference application, long and short video application, video live broadcast application, video network course application, portrait intelligent mirror moving application scene, video recording and video monitoring of a system camera video recording function, or portrait shooting scene such as an intelligent cat eye and the like.
It should be understood that the above description is illustrative of the application scenario and does not limit the application scenario of the present application in any way.
The light source detection method provided by the embodiment of the present application is described in detail below with reference to fig. 2 to 6.
Fig. 2 is a schematic flowchart of a light source detection method according to an embodiment of the present application. The method 200 may be performed by an electronic device comprising at least two cameras.
Exemplarily, fig. 3 is a layout diagram of a camera on an electronic device according to an embodiment of the present application.
Taking the electronic device 100 as a mobile phone as an example, as shown in fig. 3 (a), two cameras may be arranged on the rear cover of the mobile phone, and are respectively located in two circular areas at the upper left corner of the rear cover of the mobile phone. The two cameras may be a wide angle camera 1931 and a tele camera 1932, respectively. Alternatively, as shown in fig. 3 (b), four cameras may be arranged on the rear cover of the cellular phone, the four cameras being located in a circular area at the center above the rear cover of the cellular phone. The four cameras may be, for example, a super wide angle camera 1933, a wide angle camera 1931, a tele camera 1932, and a multispectral camera 1934, respectively.
It is to be understood that the wide-angle camera 1931 is suitable for photographing a close-up view because the focal distance is small, and, as the name suggests, is suitable for photographing a scene with a relatively large angle of view. The field angle range corresponding to the ultra-wide angle camera 1933 is relatively larger than the field angle range corresponding to the wide angle camera 1931, the focusing distance is smaller, and the camera is suitable for shooting scenes with larger field angles. The field angle range corresponding to the telephoto camera 1932 is relatively smaller than the field angle range corresponding to the wide-angle camera 1931, and the focal length is large, so that it is suitable for taking a long shot.
The multispectral camera 1934 is a camera containing a multispectral sensor, where a multispectral sensor is a sensor whose spectral response range is wider than that of a three-primary-color (RGB) sensor. For example, the multispectral camera may use a red, green, blue, cyan, magenta, yellow (RGBCMY) sensor, which has improved color reproduction capability and signal-to-noise performance relative to an RGB sensor. The field of view range corresponding to the multispectral camera 1934 can be consistent with the field of view range corresponding to the primary camera.
Relatively speaking, the image details acquired by the ultra-wide camera 1933 are richer and the definition is higher, and the image details acquired by the tele camera 1932 are less and the definition is relatively lower.
Of course, the above are only two examples; three, four or more cameras may be arranged on the rear cover of the mobile phone. The specific number and arrangement positions of the cameras, and the type and function of each camera, may be set and modified as required, which is not limited in the embodiment of the present application.
In the embodiment of the present application, two or more cameras may be used; one of them may be referred to as the first camera and the others as second cameras. The first camera is used to collect images normally, and the second camera is processed and then used to assist in identifying light sources in the images collected by the first camera. It can be appreciated that "first camera" and "second camera" are only labels, and the specific cameras they refer to can be replaced as required.
For example, when the electronic device includes two cameras, one camera is a wide-angle camera, and the other camera is a super wide-angle camera, which may be referred to as a first camera, and the super wide-angle camera is a second camera. The ultra-wide-angle camera is used for assisting in identifying a light source in an image acquired by the wide-angle camera.
In combination with the above example, the light source detection method 200 provided in the embodiment of the present application may include the following S201 to S206, and the following S201 to S206 are respectively described in detail.
S201, starting a camera application program in the electronic equipment.
Illustratively, the user may instruct the electronic device to launch a camera application by clicking on an icon of a "camera" application; or, when the electronic device is in the screen locking state, the user may instruct the electronic device to start the camera application through a gesture of sliding rightward on the display screen of the electronic device. Or the electronic device is in a screen locking state, the screen locking interface includes an icon of the camera application program, and the user instructs the electronic device to start the camera application program by clicking the icon of the camera application program. Or when the electronic equipment runs other applications, the applications have the authority of calling the camera application program; the user may instruct the electronic device to launch the camera application by clicking on the corresponding control. For example, while the electronic device is running an instant messaging application, the user may instruct the electronic device to launch a camera application, etc., by selecting a control for a camera function.
It should be understood that the above is an illustration of the operation of launching a camera application; the camera application program can also be started by voice indication operation or other operation indication electronic equipment; this is not a limitation of the present application.
It should also be understood that launching the camera application may refer to running the camera application.
S202, acquiring a first image by using a first camera.
The first camera can be any one type of camera from a super wide-angle camera, a long-focus camera and a multispectral camera; the photographic subject may include a light source, the first image may include one or more regions under test, and the regions under test may be determined by brightness threshold recognition and shape detection.
It should be understood that the first image may be an image acquired by the electronic device by executing an algorithm related to the first camera, determining a target distance between the lens and the image sensor and adjusting the target distance and the exposure value according to the algorithm. The first image is a clear image. Here, the target distance between the lens corresponding to the first camera and the image sensor refers to a distance between the lens in the first camera and the image sensor when a sharp image is captured.
It should be understood that the first image includes one or more regions to be detected as candidate light source positions; these may be bright regions corresponding to an actual light source, or bright regions formed by highly reflective objects or bright metal.
It should also be understood that the region to be measured may be determined by threshold recognition based on the RGB pixel values of each pixel in the image, or by threshold recognition based on luminance values. The shape of the region to be measured can be divided as required, which is not limited in the embodiment of the present application.
For example, the first image may be an image as shown in (a) of fig. 4, where the image includes four regions to be detected, i.e., a1, a2, a3, and a4, which are positions of the light source to be detected.
Alternatively, the first image may be an image in a RAW domain, an image in an RGB domain, or an image in a YUV domain.
And S203, performing virtual focus processing on the second camera, and acquiring a first reference image by using the second camera after the virtual focus processing.
The first reference image is an image acquired by the second camera after virtual focus processing is carried out on the shot object based on the first exposure value, and the second camera can be any one type of camera among a super wide-angle camera, a long-focus camera and a multispectral camera; the second camera may be of a different type than the first camera, or alternatively, may be the same. The shooting object shot by the second camera is the same as the shooting object shot by the first camera.
Optionally, as an implementation manner, before performing the virtual focus processing on the second camera, the electronic device may further acquire a second image by using the second camera. The second image is an image acquired by the electronic equipment by using the second camera after the electronic equipment executes an algorithm related to the second camera, and the target distance and the exposure value between the lens and the image sensor are determined according to the algorithm and adjusted. The second image is a clear frame of image. Here, the target distance between the lens corresponding to the second camera and the image sensor refers to a distance between the lens and the image sensor in the second camera when a sharp image is captured. The second image can comprise a plurality of regions to be detected, and the regions to be detected can be determined through brightness threshold recognition and shape detection; the position of the region to be measured in the second image and the position of the region to be measured in the first image have a matching relationship.
For example, the region to be measured with the center coordinate (i 1, j 1) in the first image and the region to be measured with the center coordinate (p 2, q 2) in the second image indicate the same light source, such as the same street lamp, in the photographic subject.
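The matching relationship between regions to be measured in the first image and in the second image can be built, for example, by comparing region centers once both images are mapped into a common pixel coordinate frame. The following Python sketch is only illustrative: the function name, the coordinate mapping between the two cameras, and the distance threshold are assumptions, not details taken from this application.

```python
from typing import Dict, List, Tuple

Center = Tuple[float, float]  # (x, y) center of a region to be measured

def match_regions(first_centers: List[Center],
                  second_centers: List[Center],
                  max_dist: float = 20.0) -> Dict[int, int]:
    """Associate each region in the first image with the nearest region in the
    second image, assuming both sets of centers share one coordinate frame."""
    matches: Dict[int, int] = {}
    for i, (x1, y1) in enumerate(first_centers):
        best_j, best_d = None, max_dist
        for j, (x2, y2) in enumerate(second_centers):
            d = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches[i] = best_j  # e.g. the region centered at (i1, j1) maps to the one at (p2, q2)
    return matches
```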
On the basis, virtual focus processing on the second camera refers to adjusting the determined target distance, so that the distance between a lens in the second camera and an image sensor is reduced, and a first reference image acquired by the second camera after virtual focus processing is blurred and is reduced in definition relative to a second image. The first reference image is also blurred relative to the first image and the sharpness is reduced.
It should be understood that the electronic device may include a focus processing part and a focus motor in addition to the second camera. The focusing processing component is used for collecting the image formed on the image sensor in the second camera and controlling the rotation of the focusing motor according to the definition degree of the collected image. The focusing motor rotates to drive the lens in the second camera to move, so that the focusing function is realized. For example, the focus motor may indicate an Optical Image Stabilization (OIS) motor.
When virtual focus processing is performed, the focusing processing component can control the focusing motor to rotate, and the focusing motor rotates to drive the lens in the second camera to move towards the direction close to the image sensor, so that the distance between the lens in the second camera and the image sensor is reduced, the second camera gradually performs virtual focus, and a fuzzy first reference image is acquired.
Optionally, as another implementation manner, when performing virtual focus processing on the second camera, a target distance between the lens in the first camera and the image sensor may be obtained first, and then the target distance between the lens in the second camera and the image sensor is set to a value smaller than the target distance between the lens in the first camera and the image sensor. Then, according to the set numerical value, a lens in the second camera is adjusted, and the adjusted second camera is used for collecting, so that a fuzzy first reference image can be obtained.
Illustratively, if the first camera is a wide-angle camera and the second camera is an ultra-wide-angle camera, it is determined through calculation that a target distance between a lens and an image sensor corresponding to the first camera when the first camera collects the first image is V1, and a distance between the lens and the image sensor in the second camera should be smaller than the target distance V1 of the first camera, so that a distance between the lens and the image sensor in the second camera may be set to be V2, and V2 is smaller than V1.
Further, if the minimum distance between the lens in the second camera and the image sensor is V3, the distance between the lens in the second camera and the image sensor may also be directly set to be V3, at this time, it is equivalent to that the focusing motor does not need to rotate to drive the lens in the second camera to move, V3 is less than V1, and V3 is less than or equal to V2.
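A rough sketch of this control logic is given below; the camera-control object and its attribute and method names (min_lens_distance, set_lens_distance) are hypothetical and only stand in for whatever driver interface the device actually exposes.

```python
def apply_virtual_focus(second_cam, v1: float) -> None:
    """Defocus the second camera by moving its lens closer to the image sensor
    than the first camera's in-focus lens-to-sensor distance V1."""
    v3 = second_cam.min_lens_distance  # smallest supported lens-to-sensor distance (V3 < V1)
    v2 = max(v3, 0.8 * v1)             # pick some V2 < V1; the factor 0.8 is an arbitrary choice
    second_cam.set_lens_distance(v2)   # alternatively, drive directly to V3 for maximum blur
```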
Optionally, the field angle range corresponding to the first camera is less than or equal to the field angle range corresponding to the second camera.
It should be understood that when the field angle range corresponding to the first camera is smaller than or equal to the field angle range corresponding to the second camera, when the image acquired by the second camera is subsequently utilized to assist the first camera in identifying the real light source position, all the areas to be measured in the image acquired by the first camera can be identified.
For example, the field angle range corresponding to the first camera is equal to the field angle range corresponding to the second camera, and the first image acquired by the first camera may be an image as shown in (a) in fig. 4; after the second camera performs the virtual focus processing, the acquired first reference image may be an image as shown in (b) of fig. 4.
If the field angle range corresponding to the first camera is larger than that corresponding to the second camera, the image content collected by the second camera is equivalent to a part of the image content collected by the first camera, and when the image collected by the second camera is subsequently utilized to assist the first camera in identifying the real light source position, only a part of the region to be detected in the image collected by the first camera can be identified. If the whole content of the first camera is required to be detected, a second camera is required to be used for collecting a plurality of first reference images, the content corresponding to each first reference image is different, and the plurality of first reference images can cover the field angle range of the first camera after de-duplication.
In the embodiment of the present application, because the second camera is subjected to virtual-focus processing, most objects in the first reference image acquired by the second camera are blurred; for example, background objects far from the second camera become blurred. Bright metal and highly reflective objects form images by reflecting bright light, and the energy of this reflected light is very low relative to that of a light source, so after the virtual-focus processing their images in the first reference image become blurred and blend into the background. In this case, only the real light-emitting source, because of its high energy, remains unblurred after the virtual-focus processing; it stands out against the blurred background and is brighter than its surroundings.
It should be noted that, because the second camera still needs to be processed further, the first reference image does not have to be actually captured after only the virtual-focus processing has been performed on the second camera.
S204, reducing the exposure value corresponding to the second camera, and acquiring a second reference image by using the second camera with the reduced exposure value.
The second reference image is an image of the photographic subject acquired by the second camera based on the second exposure value; alternatively, the second reference image is an image of the photographic subject acquired, based on the second exposure value, by the second camera after virtual-focus processing. The second exposure value is smaller than the first exposure value, and is also smaller than the exposure value corresponding to the first camera when the first camera collects the first image.
Here, the first exposure value may be the same as an exposure value corresponding to when the first camera captures the first image, or the first exposure value may be smaller than an exposure value corresponding to when the first camera captures the first image.
Alternatively, in order to enhance the contrast effect, the reduced second exposure value may be half of the exposure value corresponding to the first camera.
For example, the exposure value corresponding to the second camera is half of the exposure value corresponding to the first camera. The first image acquired by the first camera may be an image as shown in (a) in fig. 4; after the second camera performs virtual focus processing, the acquired first reference image may be an image as shown in (b) in fig. 4; after the exposure value corresponding to the second camera is decreased, the second reference image collected continuously may be the image shown in (c) of fig. 4.
Alternatively, since the exposure value is related to the exposure time and the sensitivity ISO, when the exposure value corresponding to the second camera is reduced, the exposure time can be reduced while keeping the sensitivity ISO unchanged; alternatively, the exposure time may be kept constant, and the sensitivity ISO may be reduced.
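A minimal sketch of these two options, assuming the goal is to roughly halve the exposure of the second camera relative to the first (the ISO floor of 50 is an assumed value):

```python
def halve_exposure(exposure_time_s: float, iso: int, reduce_time: bool = True):
    """Halve the exposure either by halving the exposure time (ISO unchanged)
    or by halving the ISO (exposure time unchanged)."""
    if reduce_time:
        return exposure_time_s / 2.0, iso
    return exposure_time_s, max(iso // 2, 50)

# Example: if the first camera uses 1/50 s at ISO 400, the second camera could use
# 1/100 s at ISO 400, or 1/50 s at ISO 200, for roughly half the exposure.
```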
In the embodiment of the application, reducing the exposure value causes the second reference image acquired by the second camera to become a relatively dark image. Bright metal and highly reflective objects are imaged by reflecting bright light, and the energy of this reflected light is very low relative to the energy of the light source, so their corresponding images in the second reference image become gray or black areas as the exposure value is reduced; bright areas produced by noise likewise turn gray or black as the exposure value is reduced. In this case, only the actual light source, because of its high energy, remains relatively bright and close to white even after the exposure value of the second camera is reduced.
It will be appreciated that, since the energy of the light-emitting sources may vary, the image of the light-emitting source with a relatively lower energy in the second reference image may also be grayed out after the exposure value is reduced.
And S205, performing brightness threshold recognition and shape detection on the second reference image, and determining a target area.
For example, if the second reference image is an image in the YUV domain, the Y value of each pixel in the second reference image may be determined, and a connected region of pixels whose Y values are greater than a preset Y threshold is determined as a preselected region; shape detection is then performed on the preselected region to determine its shape, such as a circle, a rectangle or a square. The Y value indicates the magnitude of luminance.
If the second reference image is not the image in the YUV domain, the second reference image can be converted into the image in the YUV domain by using a color space conversion method, and then the judgment is carried out by combining a preset Y threshold; the size of the preset Y threshold may be set and modified as needed, and the embodiment of the present application does not limit this.
It should be understood that the detection result determined for the second reference image may be a target area of a certain shape, and the Y value of the pixels in the target area is greater than a preset Y threshold. If one or more target regions are determined from the second reference image, the shapes corresponding to the target regions may be the same or different.
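A possible OpenCV-based sketch of this step is shown below; the Y threshold, minimum area and circularity limit are illustrative values chosen for the example, not parameters taken from this application.

```python
import cv2
import numpy as np

def find_target_regions(ref_bgr: np.ndarray, y_threshold: int = 220):
    """Brightness threshold recognition plus simple shape detection on the
    defocused, darkened second reference image."""
    yuv = cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2YUV)         # convert to the YUV domain
    mask = (yuv[:, :, 0] > y_threshold).astype(np.uint8)   # keep pixels brighter than the Y threshold
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    targets = []
    for c in contours:
        area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
        if area < 20 or perim == 0:
            continue                                        # ignore tiny specks that are likely noise
        circularity = 4 * np.pi * area / (perim * perim)    # 1.0 for a perfect circle
        x, y, w, h = cv2.boundingRect(c)
        targets.append({"bbox": (x, y, w, h),
                        "shape": "circle" if circularity > 0.7 else "other"})
    return targets
```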
Optionally, multiple frames of second reference images may be acquired by using a plurality of second cameras whose exposure values have been reduced, or multiple frames of second reference images may be acquired by one second camera whose exposure value has been reduced; brightness threshold recognition and shape detection are then performed on the multiple frames of second reference images respectively to determine their respective target areas, and the target areas at nearby positions in the multiple frames of second reference images are merged, with the corresponding intersection determined as the final target area, as sketched below.
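The merging step could look like the following sketch; the center-distance threshold is an assumed value, and the target areas are treated as (x, y, w, h) boxes in a shared coordinate frame.

```python
def box_intersection(a, b):
    """Intersection of two (x, y, w, h) boxes, or None if they do not overlap."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[0] + a[2], b[0] + b[2]), min(a[1] + a[3], b[1] + b[3])
    return (x1, y1, x2 - x1, y2 - y1) if x2 > x1 and y2 > y1 else None

def merge_close_targets(target_lists, max_center_dist: float = 15.0):
    """Keep only target areas whose positions agree across all frames, and take
    the corresponding intersection as the final target area."""
    merged = list(target_lists[0])
    for frame_targets in target_lists[1:]:
        kept = []
        for box in merged:
            cx, cy = box[0] + box[2] / 2, box[1] + box[3] / 2
            for other in frame_targets:
                ox, oy = other[0] + other[2] / 2, other[1] + other[3] / 2
                if ((cx - ox) ** 2 + (cy - oy) ** 2) ** 0.5 <= max_center_dist:
                    inter = box_intersection(box, other)
                    if inter is not None:
                        kept.append(inter)
                    break
        merged = kept
    return merged
```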
It should be understood that the target region determined in the second reference image has a corresponding relationship with the region to be measured determined in the second image. For example, when the target region determined in the second reference image and the region to be measured determined in the second image overlap, the target region and the region to be measured are used for indicating the light source at the same position.
And S206, combining the detection result to identify the target light source in the first image.
Optionally, in combination with the target region in the second reference image, the intersection of a region to be measured in the first image and its corresponding target region in the second reference image may be determined as the region where a target light source in the first image is located. If a region to be measured in the first image has no corresponding target region in the second reference image and cannot be combined, that region to be measured is a bright area formed by something other than a real light source and is not taken as the region where a target light source is located.
Here, it should be noted that, before S203, the content of the second image acquired by the second camera may be matched with the first image acquired by the first camera to generate a one-to-one correspondence relationship; therefore, after the second camera is processed, a target area corresponding to a certain area to be measured in the first image can be searched from the acquired second reference image by means of the corresponding relation.
For example, as shown in fig. 5 (a), the first image acquired by the first camera may include four regions to be detected, which are the light source positions to be detected, a1, a2, a3, and a 4.
The second reference image acquired after the virtual focus, exposure value reduction processing by the second camera may be as shown in (c) of fig. 4. After performing the brightness threshold recognition and the shape detection on the second reference image, the resulting image may include three target regions, b1, b2, and b3 respectively, as shown in (b) of fig. 5, and the target regions have a circular shape. The three target areas are real light sources determined by the aid of the second camera.
Combining the detection results shown in fig. 5 (a) and fig. 5 (b), it may be determined that three of the regions to be detected in the first image are real light source positions: the areas indicated by a1 in the first image and b1 in the second reference image may be merged to obtain the intersection area a11 as the region where the first target light source is located; the areas indicated by a2 in the first image and b2 in the second reference image may be merged to obtain the intersection area a12 as the region where the second target light source is located; similarly, the areas indicated by a3 in the first image and b3 in the second reference image are merged to obtain the intersection area a13 as the region where the third target light source is located.
Here, it may also be determined that one region to be measured in the first image, a4, has no matching target region in the second reference image; therefore, it may be determined that the region a4 is not a real light source position.
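Putting the comparison of fig. 5 (a) and fig. 5 (b) into code, a compact sketch of this final screening step might look as follows, under the assumption that the regions to be measured (a1 to a4) and the target regions (b1 to b3) are available as (x, y, w, h) boxes in the same pixel coordinate frame:

```python
def select_target_light_sources(first_regions, target_regions):
    """Keep only regions to be measured in the first image that overlap a target
    region from the second reference image; the overlap itself (e.g. a11, a12, a13)
    is returned as the region where a target light source lies."""
    light_sources = []
    for (ax, ay, aw, ah) in first_regions:
        for (bx, by, bw, bh) in target_regions:
            x1, y1 = max(ax, bx), max(ay, by)
            x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
            if x2 > x1 and y2 > y1:                        # the two areas overlap
                light_sources.append((x1, y1, x2 - x1, y2 - y1))
                break                                      # a region like a4 matches nothing and is dropped
    return light_sources
```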
According to the light source detection method provided by the embodiment of the application, one first camera and at least one second camera are enabled; during shooting, the first camera is used to collect a first image, the second camera is subjected to virtual-focus processing and exposure-value-reduction processing, and the processed second camera is then used to collect a second reference image. Brightness threshold recognition and shape detection are performed on the second reference image collected by the second camera; the detection result is then used to help the first image screen out highlight areas generated by highly reflective objects, bright metal, noise and the like, and a real light source is identified as the target light source.
Because related electronic devices generally include a plurality of cameras, the application requires no structural modification; it only needs to call a plurality of cameras, and detection can be performed through simple virtual-focus and exposure-value-reduction processing. The method is simple, detection efficiency is high, and detection accuracy is also high.
Optionally, a first camera and at least one second camera may also be started; during shooting, the first camera is used to acquire a first image, the second camera is subjected to virtual focus processing, and the processed second camera is used to acquire a first reference image; brightness threshold recognition and shape detection are performed on the first reference image acquired by the second camera; the detection result is then used to help screen out, from the first image, highlight regions produced by highly reflective objects, bright metal, noise and the like, so that the real light sources are identified as target light sources.
Optionally, a first camera and at least one second camera may also be started; during shooting, the first camera is used to acquire a first image, the second camera is subjected to exposure value reduction processing, and the processed second camera is then used to acquire a second reference image; brightness threshold recognition and shape detection are performed on the second reference image acquired by the second camera; the detection result is then used to help screen out, from the first image, highlight regions produced by highly reflective objects, bright metal, noise and the like, so that the real light sources are identified as target light sources.
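All three variants share the same overall flow and differ only in how the second camera is prepared before the reference image is captured. The Python sketch below is a hypothetical outline of that flow; the camera objects and their capture(), defocus() and reduce_exposure() methods, as well as the detection callables, are placeholders introduced here and do not correspond to any real camera API or to the specific implementation of this application.

```python
def detect_target_light_sources(first_camera, second_camera,
                                find_candidate_regions, find_target_regions, overlaps,
                                use_defocus=True, use_underexposure=True):
    """Generic outline: prepare the second camera, capture both frames, then keep only
    the bright regions of the first image that are confirmed by the reference image."""
    first_image = first_camera.capture()

    if use_defocus:
        second_camera.defocus()            # virtual focus processing
    if use_underexposure:
        second_camera.reduce_exposure()    # exposure value reduction processing
    reference_image = second_camera.capture()

    candidates = find_candidate_regions(first_image)    # regions to be detected
    targets = find_target_regions(reference_image)      # target regions (bright and circular)
    return [c for c in candidates
            if any(overlaps(c, t) for t in targets)]    # real light sources only
```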
In addition, the first image may refer to a preview image in a preview interface of the electronic device; the preview interface may be a photo preview interface or a video preview interface. The first image may also be a captured image in an album of the electronic device or a frame in a video.
When the first image is displayed, the detected position of the target light source may be indicated; of course, it may also not be indicated, and the present application is not limited in this respect.
An example interface diagram in an electronic device is described below in conjunction with fig. 6.
In a possible implementation manner, a "light source detection" switch may be provided in the settings interface of the electronic device, and after the photographing application program in the electronic device runs, the "light source detection" function may be automatically turned on to execute the light source detection method according to the embodiment of the present application.
In another possible implementation manner, a "light source detection" switch may be provided in the camera of the electronic device, and the light source detection method according to the embodiment of the present application may be executed when a photo is taken with the "light source detection" function turned on.
In yet another possible implementation manner, the "light source detection" function may be added only to the "large aperture" mode in the camera of the electronic device, so that "light source detection" is automatically turned on when the "large aperture" mode is selected and the light source detection method of the embodiment of the present application is then executed.
With reference to the third implementation manner, taking the electronic device automatically turning on the "light source detection" function in the "large aperture" mode as an example, fig. 6 is an interface schematic diagram of the electronic device provided in the embodiment of the present application.
As shown in fig. 6 (a), the desktop 601 of the electronic device includes an icon 602 corresponding to the camera application program. As shown in fig. 6 (b), the electronic device detects that the user clicks the icon 602 of the camera application program on the desktop 601. In response to this operation, the electronic device runs the camera application program and displays the display interface 603 shown in fig. 6 (c). The display interface 603 includes a preview window, which may display a preview image acquired by the first camera in the default photographing mode; the display interface 603 also provides other modes, such as a large aperture mode 604.
After the electronic device detects that the user switches from the photographing mode to the large aperture mode 604, the light source detection method provided by the present application may be executed. For example, the first camera is used to acquire a first image; the second camera is started, virtual focus and exposure value reduction processing are performed on the second camera, and the processed second camera acquires a second reference image; brightness threshold recognition and shape detection are performed on the second reference image to determine the target areas; the target areas are then combined to assist in determining the real light sources in the first image as the target light sources. Then, as shown in (d) of fig. 6, the first image is displayed in the preview window and the light source positions in the first image may be indicated.
In this example, the light of the street lamp is strong; when it strikes the glass door and the Christmas tree, it is strongly reflected by the glass and by the metal decorations. If detection is performed with the related art, the reflection on the glass, which is a fake street lamp, and the light reflected by the metal decorations may be erroneously treated as light sources. When detection is performed with the light source detection method provided by the embodiment of the present application, the virtual focus and exposure value reduction processing of the second camera blurs and darkens the bright areas corresponding to the fake street lamp reflected on the glass and the light reflected by the metal decorations, so that they are screened out, and the remaining bright areas are identified as the real light sources, namely the street lamps.
The above is an example of the display interface in the electronic device, and the present application is not limited to this.
It is to be understood that the above description is intended to assist those skilled in the art in understanding the embodiments of the present application and is not intended to limit the embodiments of the present application to the particular values or particular scenarios illustrated. It will be apparent to those skilled in the art from the foregoing description that various equivalent modifications or changes may be made, and such modifications or changes are intended to fall within the scope of the embodiments of the present application.
The light source detection method, the applicable scenarios and the related display interfaces of the embodiment of the present application are described above with reference to fig. 1 to 6. The software system, hardware system, apparatus and chip of the electronic device to which the present application is applicable will be described in detail below with reference to fig. 7 to 10. It should be understood that the software systems, hardware systems, apparatuses and chips in the embodiments of the present application may perform the various methods of the foregoing embodiments; for the specific working processes of the various products described below, reference may be made to the corresponding processes in the foregoing method embodiments.
Fig. 7 shows a hardware system of an electronic device suitable for use in the present application. The electronic device 100 may be used to implement the light source detection method described in the above method embodiments.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The configuration shown in fig. 7 is not intended to specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than those shown in FIG. 7, or electronic device 100 may include a combination of some of the components shown in FIG. 7, or electronic device 100 may include sub-components of some of the components shown in FIG. 7. The components shown in fig. 7 may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units. For example, the processor 110 may include at least one of the following processing units: an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and a neural Network Processor (NPU). The different processing units may be independent devices or integrated devices.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. For example, the processor 110 may include at least one of the following interfaces: an inter-integrated circuit (I2C) interface, an inter-integrated circuit audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM interface, and a USB interface.
Illustratively, in the embodiments of the present application, the light source detection method provided by the present application may be executed in the processor 110.
The connection relationship between the blocks shown in fig. 7 is only illustrative, and does not limit the connection relationship between the blocks of the electronic apparatus 100. Alternatively, the modules of the electronic device 100 may also adopt a combination of the connection manners in the above embodiments.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
The electronic device 100 may implement display functionality through the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 may be used to display images or video. The display screen 194 includes a display panel. The display panel may adopt a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini light-emitting diode (Mini LED), a Micro light-emitting diode (Micro LED), a Micro OLED (Micro OLED), or a quantum dot light-emitting diode (QLED). In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
Illustratively, the ISP is used to process data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can perform algorithm optimization on the noise, brightness and color of the image, and can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
Illustratively, camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into a standard Red Green Blue (RGB), YUV, or the like format image signal. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is illustratively used to process digital signals, and may process other digital signals in addition to digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Illustratively, video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
Illustratively, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A may be of a wide variety, and may be, for example, a resistive pressure sensor, an inductive pressure sensor, or a capacitive pressure sensor. The capacitive pressure sensor may be a sensor including at least two parallel plates having conductive materials, and when a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure based on the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the touch operation according to the pressure sensor 180A. The electronic apparatus 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message; and when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
Illustratively, the ambient light sensor 180L is used to sense ambient light levels. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L can also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
Illustratively, the fingerprint sensor 180H is used to capture a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to implement functions such as unlocking, accessing an application lock, taking a picture, and answering an incoming call.
Illustratively, the touch sensor 180K is also referred to as a touch device. The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touchscreen. The touch sensor 180K is used to detect a touch operation performed on or near it. The touch sensor 180K may pass the detected touch operation to the application processor to determine the type of the touch event. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may alternatively be disposed on a surface of the electronic device 100 at a position different from that of the display screen 194.
The hardware system of the electronic device 100 is described above in detail, and the software system of the electronic device 100 is described below. The software system may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture, and the embodiment of the present application exemplarily describes the software system of the electronic device 100 by taking the layered architecture as an example.
As shown in fig. 8, the system architecture may include an application layer 210, an application framework layer 220, a hardware abstraction layer 230, a driver layer 240, and a hardware layer 250.
The application layer 210 may include a camera application.
Optionally, the application layer 210 may also include gallery, calendar, call, map, navigation, WLAN, bluetooth, music, video, short message, and other applications.
The application framework layer 220 provides an Application Programming Interface (API) and a programming framework for the application program of the application layer; the application framework layer may include some predefined functions.
For example, the application framework layer 220 may include a camera access interface; camera management and camera devices may be included in the camera access interface. Wherein camera management may be used to provide an access interface to manage the camera; the camera device may be used to provide an interface for accessing the camera.
The hardware abstraction layer 230 is used to abstract the hardware. For example, the hardware abstraction layer may include a camera hardware abstraction layer and abstraction layers for other hardware devices; the camera hardware abstraction layer may include camera device 1, camera device 2, and the like; the camera hardware abstraction layer may be connected to a camera algorithm library and may invoke algorithms in the camera algorithm library.
Illustratively, the camera algorithm library may include a light source detection algorithm; when this algorithm is executed, the light source detection method provided in the embodiments of the present application is performed.
The driver layer 240 is used to provide drivers for different hardware devices. For example, the driver layer may include a camera device driver.
The hardware layer 250 may include an image sensor, a multispectral sensor, an image signal processor, and other hardware devices.
Fig. 9 is a schematic structural diagram of a light source detection device according to an embodiment of the present application. The light source detection device 300 includes an acquisition unit 310 and a processing unit 320.
The processing unit 320 is configured to start a camera application of the electronic device;
the acquisition unit 310 is configured to acquire a first image by using the first camera;
the processing unit 320 is configured to perform virtual focus processing on the second camera and/or reduce an exposure value;
the acquisition unit 310 is configured to acquire a reference image by using the processed second camera;
the processing unit 320 is further configured to determine a target light source comprised in the first image based on the reference image.
Optionally, as an embodiment, the processing unit 320 is further configured to:
and reducing the distance between the lens in the second camera and the image sensor to be smaller than the distance between the lens in the first camera and the image sensor when the first image is acquired.
Optionally, as an embodiment, the processing unit 320 is further configured to:
and reducing the exposure value corresponding to the second camera to be half of the exposure value corresponding to the first camera when the first image is acquired.
Optionally, as an embodiment, the processing unit 320 is further configured to:
keeping the sensitivity unchanged, and reducing the exposure time corresponding to the second camera; or,
and keeping the exposure time unchanged, and reducing the corresponding sensitivity of the second camera.
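As a rough illustration of the two options above, halving the overall exposure of the second camera can be expressed as a choice between shortening the exposure time at fixed sensitivity and lowering the sensitivity at fixed exposure time. The helper below is a hypothetical sketch, not part of the claimed implementation; the parameter names and the factor of two follow the halving example given earlier.

```python
def underexposed_settings(exposure_time_s, iso, keep="sensitivity"):
    """Return (exposure_time, iso) giving roughly half the exposure of the first camera."""
    if keep == "sensitivity":
        return exposure_time_s / 2.0, iso      # sensitivity unchanged, shorter exposure time
    if keep == "exposure_time":
        return exposure_time_s, iso / 2.0      # exposure time unchanged, lower sensitivity
    raise ValueError("keep must be 'sensitivity' or 'exposure_time'")
```

For example, underexposed_settings(1 / 50, 400) returns (0.01, 400), i.e. the same sensitivity with half the exposure time.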
Optionally, as an embodiment, the processing unit 320 is further configured to:
acquiring a second image by using the second camera;
performing brightness threshold recognition and shape detection on the first image and the second image, determining a region to be detected in the first image and a region to be detected in the second image, and matching;
performing the brightness threshold recognition and the shape detection on the reference image, and determining a target area in the reference image, wherein the target area and the area to be detected in the second image have a corresponding relation;
and determining a region to be measured in the first image with the corresponding target region as a target light source.
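Below is a minimal sketch of the brightness threshold recognition and shape detection referred to above, written with OpenCV purely for illustration; the threshold value, the circularity criterion and the function name are assumptions introduced here, and the recognition and detection performed by the processing unit 320 are not limited to this form.

```python
import cv2
import numpy as np

def find_target_regions(reference_image_bgr, brightness_threshold=220, min_circularity=0.7):
    """Bounding boxes (x, y, w, h) of bright, roughly circular regions in the reference image."""
    gray = cv2.cvtColor(reference_image_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, brightness_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    targets = []
    for contour in contours:
        area = cv2.contourArea(contour)
        perimeter = cv2.arcLength(contour, True)
        if area == 0 or perimeter == 0:
            continue
        circularity = 4.0 * np.pi * area / (perimeter ** 2)   # 1.0 for a perfect circle
        if circularity >= min_circularity:                    # shape detection: keep circular regions
            targets.append(cv2.boundingRect(contour))
    return targets
```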
Optionally, as an embodiment, the second camera includes at least one second camera.
The light source detection device 300 is embodied as a functional unit. The term "unit" herein may be implemented in software and/or hardware, and is not particularly limited thereto.
For example, a "unit" may be a software program, a hardware circuit, or a combination of both that implement the above-described functions. The hardware circuitry may include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (e.g., a shared processor, a dedicated processor, or a group of processors) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that support the described functionality.
Accordingly, the units of the respective examples described in the embodiments of the present application can be realized in electronic hardware, or a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Fig. 10 shows a schematic structural diagram of an electronic device provided in the present application. The dashed lines in fig. 10 indicate that the corresponding unit or module is optional. The electronic device 400 may be used to implement the light source detection method described in the above method embodiments.
The electronic device 400 includes one or more processors 401, and the one or more processors 401 can support the electronic device 400 in implementing the methods in the method embodiments. The processor 401 may be a general-purpose processor or a special-purpose processor. For example, the processor 401 may be a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), or another programmable logic device such as a discrete gate, a transistor logic device, or a discrete hardware component.
Alternatively, the processor 401 may be configured to control the electronic device 400, execute a software program, and process data of the software program. The electronic device 400 may further include a communication unit 405 to enable input (reception) and output (transmission) of signals.
For example, the electronic device 400 may be a chip and the communication unit 405 may be an input and/or output circuit of the chip, or the communication unit 405 may be a communication interface of the chip, and the chip may be a component of a terminal device or other electronic devices.
For another example, the electronic device 400 may be a terminal device and the communication unit 405 may be a transceiver of the terminal device, or the communication unit 405 may be a transceiver circuit of the terminal device.
The electronic device 400 may include one or more memories 402, on which programs 404 are stored, and the programs 404 may be executed by the processor 401 to generate instructions 403, so that the processor 401 executes the light source detection method described in the above method embodiments according to the instructions 403.
Optionally, data may also be stored in the memory 402.
Alternatively, the processor 401 may also read data stored in the memory 402, the data may be stored at the same memory address as the program 404, and the data may be stored at a different memory address from the program 404.
Alternatively, the processor 401 and the memory 402 may be provided separately or integrated together; for example, on a System On Chip (SOC) of the terminal device.
For example, the memory 402 may be configured to store the program 404 related to the light source detection method provided in the embodiment of the present application, and the processor 401 may be configured to call the program 404 related to the light source detection method stored in the memory 402 during video processing, so as to execute the light source detection method of the embodiment of the present application.
The present application further provides a computer program product, which when executed by the processor 401, implements the light source detection method according to any of the method embodiments of the present application.
The computer program product may be stored in the memory 402, for example, as a program 404, and the program 404 is finally converted into an executable object file capable of being executed by the processor 401 through preprocessing, compiling, assembling and linking.
The present application further provides a computer-readable storage medium having a computer program stored thereon, which, when executed by a computer, implements the light source detection method according to any of the method embodiments of the present application. The computer program may be a high-level language program or an executable object program.
Optionally, the computer-readable storage medium is, for example, the memory 402. The memory 402 may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described embodiments of the electronic device are merely illustrative, and for example, the division of the modules is only one logical division, and the actual implementation may have another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that three kinds of relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a portable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
The above description is only of specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in the present application, and such changes or substitutions shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be defined by the claims. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (9)

1. The light source detection method is characterized by being applied to electronic equipment comprising a first camera and a second camera, wherein the first camera and the second camera are used for shooting the same scene;
the method comprises the following steps:
starting a camera application of the electronic device;
acquiring a first image by using the first camera;
performing virtual focus processing on the second camera and/or reducing exposure value processing;
acquiring a reference image by using the processed second camera;
based on the reference image, a target light source included in the first image is determined.
2. The light source detection method according to claim 1, wherein performing the virtual focus processing on the second camera includes:
and reducing the distance between the lens in the second camera and the image sensor to be smaller than the distance between the lens in the first camera and the image sensor when the first image is acquired.
3. The light source detection method according to claim 1 or 2, wherein the exposure value reduction processing for the second camera includes:
and reducing the exposure value corresponding to the second camera to be half of the exposure value corresponding to the first camera when the first image is acquired.
4. The light source detection method according to claim 1 or 2, wherein the exposure value reduction processing for the second camera includes:
keeping the sensitivity unchanged, and reducing the exposure time corresponding to the second camera; or,
and keeping the exposure time unchanged, and reducing the corresponding sensitivity of the second camera.
5. The light source detection method according to claim 1 or 2, wherein before the virtual focus processing and/or the exposure value reduction processing is performed on the second camera, the method further comprises:
acquiring a second image by using the second camera;
performing brightness threshold recognition and shape detection on the first image and the second image, determining a region to be detected in the first image and a region to be detected in the second image, and matching;
determining a target light source included in the first image based on the reference image, including:
performing the brightness threshold recognition and the shape detection on the reference image, and determining a target area in the reference image, wherein the target area and the area to be detected in the second image have a corresponding relation;
and determining a region to be measured in the first image with the corresponding target region as a target light source.
6. The light source detection method of claim 1, wherein the second camera comprises at least one second camera.
7. An electronic device comprising a processor and a memory;
the memory for storing a computer program operable on the processor;
the processor for performing the light source detection method of any one of claims 1 to 6.
8. A chip, comprising: a processor for calling and running a computer program from a memory so that a device in which the chip is installed performs the light source detection method according to any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the light source detection method according to any one of claims 1 to 6.
CN202211354642.2A 2022-11-01 2022-11-01 Light source detection method and related equipment thereof Active CN115426458B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211354642.2A CN115426458B (en) 2022-11-01 2022-11-01 Light source detection method and related equipment thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211354642.2A CN115426458B (en) 2022-11-01 2022-11-01 Light source detection method and related equipment thereof

Publications (2)

Publication Number Publication Date
CN115426458A true CN115426458A (en) 2022-12-02
CN115426458B CN115426458B (en) 2023-04-07

Family

ID=84207878

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211354642.2A Active CN115426458B (en) 2022-11-01 2022-11-01 Light source detection method and related equipment thereof

Country Status (1)

Country Link
CN (1) CN115426458B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006018056A1 (en) * 2004-08-16 2006-02-23 Fotonation Vision Limited Method and apparatus for detecting and correcting red-eye effect
JP2015090562A (en) * 2013-11-05 2015-05-11 カシオ計算機株式会社 Image processing device, method, and program
US20170163862A1 (en) * 2013-03-14 2017-06-08 Fotonation Cayman Limited Systems and Methods for Reducing Motion Blur in Images or Video in Ultra Low Light with Array Cameras
CN109816734A (en) * 2019-01-23 2019-05-28 武汉精立电子技术有限公司 Camera calibration method based on target optical spectrum
CN110198417A (en) * 2019-06-28 2019-09-03 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006018056A1 (en) * 2004-08-16 2006-02-23 Fotonation Vision Limited Method and apparatus for detecting and correcting red-eye effect
US20170163862A1 (en) * 2013-03-14 2017-06-08 Fotonation Cayman Limited Systems and Methods for Reducing Motion Blur in Images or Video in Ultra Low Light with Array Cameras
JP2015090562A (en) * 2013-11-05 2015-05-11 カシオ計算機株式会社 Image processing device, method, and program
CN109816734A (en) * 2019-01-23 2019-05-28 武汉精立电子技术有限公司 Camera calibration method based on target optical spectrum
CN110198417A (en) * 2019-06-28 2019-09-03 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN115426458B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN113452898B (en) Photographing method and device
CN116744120B (en) Image processing method and electronic device
CN113905182B (en) Shooting method and equipment
WO2023130922A1 (en) Image processing method and electronic device
CN117177062B (en) Camera switching method and electronic equipment
CN114390212B (en) Photographing preview method, electronic device and storage medium
CN115604572B (en) Image acquisition method, electronic device and computer readable storage medium
CN115272138B (en) Image processing method and related device
CN114463191A (en) Image processing method and electronic equipment
CN117499779B (en) Image preview method, device and storage medium
CN115631250B (en) Image processing method and electronic equipment
CN116668838B (en) Image processing method and electronic equipment
WO2023160221A1 (en) Image processing method and electronic device
CN115767290B (en) Image processing method and electronic device
CN116668862B (en) Image processing method and electronic equipment
WO2023124201A1 (en) Image processing method and electronic device
CN116437198B (en) Image processing method and electronic equipment
CN115426458B (en) Light source detection method and related equipment thereof
CN113891008B (en) Exposure intensity adjusting method and related equipment
CN116055855B (en) Image processing method and related device
CN117133252B (en) Image processing method and electronic device
CN116051368B (en) Image processing method and related device
WO2023160220A1 (en) Image processing method and electronic device
CN115526786B (en) Image processing method and related device
CN116723409B (en) Automatic exposure method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant