CN116132820A - Shooting method and device and electronic equipment - Google Patents


Info

Publication number
CN116132820A
Authority
CN
China
Prior art date
Legal status
Pending
Application number
CN202310157155.5A
Other languages
Chinese (zh)
Inventor
康波
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202310157155.5A
Publication of CN116132820A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/14: Picture signal circuitry for video frequency region

Abstract

The application discloses a shooting method, a shooting device and electronic equipment, which belong to the technical field of communication. The shooting method comprises the following steps: respectively shooting images through M cameras to obtain M first images, wherein M is an integer greater than 1; screening at least one second image from the M first images according to a first parameter; and outputting a target image in the at least one second image; wherein the first parameter comprises at least one of: brightness, texture, and noise.

Description

Shooting method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to a shooting method, a shooting device and electronic equipment.
Background
When shooting and previewing, a user may continuously change the magnification to better frame the scene; however, enlarging or reducing beyond a certain magnification causes the electronic device to switch cameras. To give the user a smoother lens-switching experience when photographing, the mobile phone can enter a dual-open mode once the magnification reaches a certain range, i.e., two cameras capture data simultaneously, and the data of one of the two cameras is then selected according to a certain rule and displayed to the user.
Specifically, existing selection rules are mainly dominated by the magnification and the object distance of the photographed scene; that is, they select images largely based on the physical characteristics of the cameras. For example, suppose the electronic device includes camera A with focal length f1 and camera B with focal length f2, and the angle of view a of camera A is larger than the angle of view b of camera B. If the angle of view c required by the user is smaller than both a and b, the electronic device may control camera A to capture image 1 and camera B to capture image 2. If the object distance between the cameras and the photographic subject is L, with f1 < f2 < L, the electronic device displays image 2; that is, it displays the image captured by camera B, whose focal length f2 is closest to the object distance L. In this way, the magnification or field of view of the selected image is guaranteed to meet the user's requirement.
However, according to the above-described method, since the related art screens images based only on physical characteristics of the camera (such as angle of view and focal length), the visual effect of the output image may be poor.
Disclosure of Invention
The embodiment of the application aims to provide a shooting method, a shooting device and electronic equipment, which can solve the problem of poor visual effect in captured images.
In a first aspect, an embodiment of the present application provides a photographing method, including: respectively shooting images through M cameras to obtain M first images, wherein M is an integer greater than 1; screening at least one second image from the M first images according to the first parameters; outputting a target image in at least one second image; wherein the first parameter comprises at least one of: brightness, texture, and noise.
In a second aspect, an embodiment of the present application provides a photographing apparatus, including: the device comprises a shooting module, a screening module and an output module; the shooting module is used for respectively shooting images through M cameras to obtain M first images, wherein M is an integer greater than 1; the screening module is used for screening at least one second image from the M first images obtained by the shooting module according to the first parameters; the output module is used for outputting the target image in the at least one second image obtained by the screening module; wherein the first parameter comprises at least one of: brightness, texture, and noise.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, the images are respectively shot through M cameras to obtain M first images, wherein M is an integer greater than 1; screening at least one second image from the M first images according to the first parameters; outputting a target image in at least one second image; wherein the first parameter comprises at least one of: brightness, texture, and noise. According to the scheme, the electronic equipment can screen at least one second image from M first images shot by the M cameras based on at least one of brightness, texture and noise, and then output the target image in the at least one second image, so that the output target image can give consideration to the visual effect of imaging on the basis of conforming to the physical characteristics of the electronic equipment, and the visual effect of the output image can be improved.
Drawings
Fig. 1 is one of flowcharts of a photographing method provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of a photographing device according to an embodiment of the present application;
fig. 3 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
fig. 4 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application are within the scope of protection of the present application.
The terms "first", "second" and the like in the description and in the claims are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. Moreover, the objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally means that the associated objects are in an "or" relationship.
The shooting method, the shooting device and the electronic equipment provided by the embodiment of the application are described in detail through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Today, in a multi-shot mode (i.e., taking pictures with multiple cameras), the selection of the final output image is typically dominated by the camera's angle of view (i.e., the camera's field of view) and the object distance between the electronic device and the photographic subject. In other words, existing selection rules are mainly based on the physical characteristics of the cameras.
However, because the above selection rule screens images based only on physical characteristics of the camera (such as angle of view and focal length), it may cause poor visual effect of the output image.
According to the shooting method, the electronic device can screen at least one second image from M first images shot by the M cameras based on at least one of brightness, texture and noise, and then output the target image in the at least one second image, so that the output target image can give consideration to the visual effect of imaging on the basis of conforming to the physical characteristics of the electronic device, and the visual effect of the output image can be improved.
Optionally, the electronic device may screen at least one second image with brightness satisfying the brightness condition, texture satisfying the texture condition, or noise satisfying the noise condition from the M first images captured by the M cameras. Thus, the electronic equipment can screen at least one second image with good visual effect.
Further, the electronic device may screen, from the at least one second image, a target image whose brightness satisfies the brightness condition, whose texture satisfies the texture condition, whose noise satisfies the noise condition, or whose corresponding camera magnification differs least from the required magnification. Therefore, the output target image can give consideration to the imaging visual effect on the basis of conforming to the physical characteristics of the electronic equipment, and the visual effect of the output image can be improved.
An embodiment of the present application provides a shooting method, and fig. 1 shows a flowchart of the shooting method provided in the embodiment of the present application, where the method may be applied to an electronic device. As shown in fig. 1, the photographing method provided in the embodiment of the present application may include the following steps 101 to 103.
And 101, the electronic equipment respectively shoots images through M cameras to obtain M first images.
Wherein M is an integer greater than 1.
It can be appreciated that the M cameras may be cameras mounted on the same electronic device.
Specifically, the electronic device may use M cameras to capture M first images at the same time when the magnification set by the user is greater than or equal to the preset magnification.
Alternatively, the user may set the magnification in real time at the photographing preview interface.
It can be understood that each of the M cameras collects one first image, i.e., the M first images are in one-to-one correspondence with the M cameras.
Optionally, the preset magnification may be default to the electronic device, or may be manually set by the user.
Alternatively, shooting performance of the M cameras may be different.
Optionally, the shooting performance of the camera may indicate parameters of focal length, angle of view, lens parameters, resolution, etc. of the camera.
In the related art, an electronic device (such as a mobile phone) usually employs fixed-focus cameras for photographing, that is, each camera of the electronic device has a fixed focal length and a fixed angle of view. When the set magnification differs from that of the camera, the electronic device can change the magnification of the image by digital zoom.
For detailed description of digital zoom, reference may be made to related description in the prior art, and detailed description thereof is omitted herein.
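Although the patent does not specify an implementation, digital zoom is conventionally a centre crop followed by upscaling back to the original resolution; a minimal NumPy sketch (function name and nearest-neighbour resampling are illustrative assumptions):

```python
import numpy as np

def digital_zoom(img, factor):
    """Centre-crop the image by `factor`, then resize the crop back to the
    original resolution via nearest-neighbour sampling."""
    h, w = img.shape[:2]
    ch, cw = max(1, int(h / factor)), max(1, int(w / factor))
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = img[y0:y0 + ch, x0:x0 + cw]
    # Nearest-neighbour index maps from output coordinates to crop coordinates.
    ys = np.arange(h) * ch // h
    xs = np.arange(w) * cw // w
    return crop[np.ix_(ys, xs)]
```

In practice an interpolating resize (bilinear or better) would be used; the sketch only illustrates that digital zoom trades field of view for per-pixel detail.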
Step 102, the electronic device screens out at least one second image from the M first images according to the first parameter.
Wherein the first parameter comprises at least one of: brightness, texture, and noise.
In this embodiment, the first parameter is used to characterize a visual effect of the image.
The electronic equipment can screen the M first images according to the first parameters of the M first images, so that the visual effect of the screened second images can be ensured to be good.
Alternatively, the brightness may indicate the visually perceived brightness of the M first images.
Alternatively, the texture may indicate the richness of the detail portion of the M first images.
Alternatively, the noise may indicate the proportion of interference information in the M first images.
Alternatively, the above step 102 may be specifically implemented by the following step 102 a.
Step 102a, the electronic device determines an image of which the first parameter satisfies the first condition in the M first images as an image of at least one second image.
Wherein the first parameter meeting the first condition may comprise any one of:
one possible implementation: the brightness satisfies the brightness condition;
Another possible implementation: the texture satisfies the texture condition;
yet another possible implementation is: the noise satisfies the noise condition.
Alternatively, the brightness satisfying the brightness condition may include one of: being among the top N images with the maximum brightness, or having brightness greater than or equal to a preset brightness.
Optionally, the texture satisfying the texture condition may include one of: being among the top Q images with the clearest texture, or having texture definition greater than or equal to a preset texture definition.
Optionally, the noise satisfying the noise condition may include one of: being among the top K images with the minimum noise, or having noise smaller than a preset noise.
Wherein N, Q and K are positive integers and N, Q and K are less than M.
For other descriptions of the first parameter satisfying the first condition, reference may be made to the related descriptions in the following embodiments, which are not repeated here.
In the embodiment of the present application, the "top N images with the maximum brightness" may be understood as the first N images in the image sequence arranged in order of brightness from high to low. The "top Q images with the clearest texture" may be understood as the first Q images in the image sequence arranged in order of texture sharpness from high to low. The "top K images with the minimum noise" may be understood as the first K images in the image sequence arranged in order of noise from low to high.
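The three orderings described above reduce to sorting per-image scores; a minimal sketch (function names and score lists are hypothetical, one ranking helper per direction):

```python
def top_n_max(scores, n):
    """Indices of the N images with the largest score (e.g. brightness),
    in descending order of score."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:n]

def top_k_min(scores, k):
    """Indices of the K images with the smallest score (e.g. noise count),
    in ascending order of score."""
    return sorted(range(len(scores)), key=lambda i: scores[i])[:k]
```

The threshold variants (brightness at least a preset value, noise below a preset value) are a simple filter instead of a sort.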
Therefore, the electronic device can screen the M first images based on any one of the brightness condition, the texture condition and the noise condition, so that each screened second image meets at least one of the three conditions. This ensures that the second images have good imaging effects, i.e., that the visual effect of the at least one second image is good.
One possible implementation, another possible implementation, and yet another possible implementation of the above are described in detail below with reference to the embodiments.
One possible implementation: the first parameter may include brightness, i.e. the first parameter satisfying the first condition includes: the brightness satisfies the brightness condition; the above step 102a can be specifically realized by the following steps a and B.
And step A, the electronic equipment performs normalization processing on the M first images to obtain M normalized images.
Optionally, the electronic device may normalize the M first images by local division according to the following formula (1) to obtain M normalized images, where the M first images are in one-to-one correspondence with the M normalized images.
I_nj(i) = (I_j(i) - μ(i)) / (σ(i) + C_j)    (1)
Taking the j-th first image as an example, where j is any integer between 1 and M: i is the pixel index of the first image; I_j(i) is the gray value of pixel i; μ(i) is the mean of the local gray values in the P-neighborhood of pixel i; σ(i) is the standard deviation of the local gray values in the P-neighborhood of pixel i; C_j is a constant used to stabilize the value range of I_nj(i); I_nj(i) represents the perceived brightness of pixel i; P is a positive integer, e.g. P = 8, 7, 6, 5, 4, 3, 2 or 1, and i is a positive integer.
In this embodiment of the present application, the comparison of the brightness of the image may be a comparison of perceptibility in the neighborhood of the image pixels, that is, a comparison of perceived brightness of the image, rather than a simple comparison of brightness values of the image pixels.
It can be understood that the electronic device may perform pixel-by-pixel calculation on each of the M first images through the formula (1), to obtain M normalized images, where the M first images are in one-to-one correspondence with the M normalized images.
And B, the electronic equipment determines an image with the pixel mean value meeting the pixel mean value condition in the M normalized images as an image in at least one second image.
Optionally, the electronic device may average the pixel perceived brightness within each of the M normalized images, obtaining M average perceived brightness values. The brightness of an image can be judged from its average perceived brightness.
Alternatively, the above-mentioned brightness satisfying the brightness condition may include the pixel mean satisfying the pixel mean condition.
Alternatively, the pixel mean condition may include one of: being among the top N images with the maximum average perceived brightness, or having an average perceived brightness greater than or equal to a preset value.
For example, assume that there are 3 normalized images in total: I_n1, I_n2 and I_n3.
Normalized image I_n1 has 5 pixels, whose perceived brightness values are 2, 5, 8, 7 and 8; the average pixel perceived brightness of I_n1 is therefore 6.
Normalized image I_n2 has 3 pixels, whose perceived brightness values are 2, 5 and 8; the average pixel perceived brightness of I_n2 is therefore 5.
Normalized image I_n3 has 6 pixels, whose perceived brightness values are 2, 5, 8, 7, 8 and 6; the average pixel perceived brightness of I_n3 is therefore 6.
If N = 1, i.e. the pixel mean condition is the image with the maximum average perceived brightness, I_n1 and I_n3 share the maximum average of 6, and the electronic device may determine the first image corresponding to normalized image I_n1 as an image in the at least one second image.
Optionally, when the camera corresponding to the image with the pixel mean value meeting the pixel mean value condition in the M normalized images shoots in the next frame, the exposure time of the camera can be reduced, so as to reduce the power consumption of the electronic device.
In this way, the electronic device can screen out, from the M first images, the images with high perceived brightness, i.e. the images with a good brightness perception effect, as images in the at least one second image, so that images with brighter pictures and better imaging effects can be screened out from the M first images.
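Steps A and B can be sketched as follows, treating formula (1) as a per-pixel local divisive normalization and assuming a 3x3 neighbourhood (P = 8); the helper names and the constant C are illustrative assumptions:

```python
import numpy as np

def perceived_brightness(img, c=1.0):
    """Formula (1): (I - mu) / (sigma + C), with mu and sigma taken over the
    3x3 neighbourhood of each pixel (edge-padded)."""
    img = img.astype(float)
    p = np.pad(img, 1, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(p, (3, 3))
    mu = win.mean(axis=(2, 3))      # local mean per pixel
    sigma = win.std(axis=(2, 3))    # local standard deviation per pixel
    return (img - mu) / (sigma + c)

def brightest_indices(images, n=1):
    """Step B: indices of the N images with the highest average perceived
    brightness."""
    means = [perceived_brightness(im).mean() for im in images]
    return sorted(range(len(images)), key=lambda i: means[i], reverse=True)[:n]
```

Note that a constant image normalizes to zero perceived brightness everywhere, which is why the comparison measures local contrast perception rather than raw gray values.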
another possible implementation: the first parameter may comprise noise, i.e. the first parameter satisfying the first condition comprises: the noise satisfies a noise condition; the above step 102a can be specifically realized by the following steps C and D.
And C, the electronic equipment determines the number of noise points in the M first images.
Alternatively, since image noise is usually visually represented as dark spots or bright spots, the electronic device may perform noise detection on the M first images through the following formula (2) to obtain the number of noise points in each of the M first images.
D(i) = Ī(y)/2 - I(i)    (formula (1))
D(i) = I(i) - 2Ī(y)    (formula (2))    (2)
Wherein i is the pixel index of the first image; I(i) is the gray value of pixel point i; U_P(i) is the P-neighborhood centered on pixel point i; y is a pixel point in the P-neighborhood of pixel point i; Ī(y) represents the average of the gray values I(y) of the pixels in the P-neighborhood centered on pixel point i; and D(i) indicates whether pixel point i is a noise point.
In the embodiment of the present application, formula (1) in the above formula (2) compares the gray value of pixel point i with one half of the average gray value of the pixels in the P-neighborhood centered on it; if the result D(i) is greater than 0, pixel point i is a dark point, that is, the electronic device treats a pixel point with a comparatively small gray value as a dark point.
Formula (2) in the above formula (2) compares the gray value of pixel point i with twice the average gray value of the pixels in the P-neighborhood centered on it; if the result D(i) is greater than 0, pixel point i is a bright point, that is, the electronic device treats a pixel point with a comparatively large gray value as a bright point.
It can be understood that if either result D(i) obtained from formula (2) for pixel point i is greater than 0, pixel point i is a noise point.
Optionally, the electronic device may perform a pixel-by-pixel calculation on the M first images through the above formula (2), and count the number of noise points in each of the M first images.
And D, the electronic equipment determines images with the noise number meeting the noise number condition in the M first images as images in at least one second image.
Alternatively, the above-described noise satisfying noise condition may include that the number of noise satisfies the number of noise condition.
Alternatively, the noise count condition may include one of: being among the top K images with the smallest number of noise points, or having a number of noise points smaller than a preset number.
For example, assume that there are a total of 3 first images, namely image 1, image 2, and image 3.
Image 1 has 50 total noise points, image 2 has 20 noise points, and image 3 has 35 noise points.
If k=1, the noise number condition may be the image with the smallest noise number, that is, the electronic device may determine the image 2 as the image in the at least one second image.
Therefore, the electronic device can determine the images with little noise among the M first images as images in the at least one second image, i.e., it can screen images with a low noise level from the M first images, so that the imaging effect and the visual effect of the output image can be improved.
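Steps C and D can be sketched as follows, again assuming a 3x3 neighbourhood, with the centre pixel excluded from the neighbourhood mean (an assumption, since the patent does not state whether the centre is included); helper names are hypothetical:

```python
import numpy as np

def count_noise_points(img):
    """Formula (2): a pixel is a noise point if it is darker than half, or
    brighter than twice, the mean gray value of its 8 neighbours."""
    img = img.astype(float)
    p = np.pad(img, 1, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(p, (3, 3))
    # Mean of the 8 neighbours: drop the centre pixel from the 3x3 sum.
    nbr_mean = (win.sum(axis=(2, 3)) - img) / 8.0
    dark = nbr_mean / 2.0 - img > 0     # formula (1) of (2): dark points
    bright = img - 2.0 * nbr_mean > 0   # formula (2) of (2): bright points
    return int(np.count_nonzero(dark | bright))

def least_noisy_index(images):
    """Step D with K = 1: index of the image with the fewest noise points."""
    counts = [count_noise_points(im) for im in images]
    return counts.index(min(counts))
```

A uniform image yields zero noise points, while an isolated outlier pixel is flagged (and also perturbs its neighbours' statistics), matching the dark-spot/bright-spot intuition in the text.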
Yet another possible implementation is: the first parameter may include texture. That is, the first parameter satisfying the first condition may include: the texture satisfies the texture condition; the step 102a may be specifically implemented by the following steps E to H.
And E, the electronic equipment acquires texture images of the M first images.
Optionally, the texture image may indicate visual features such as the homogeneity and fineness of the image, reflecting the slowly varying or periodically varying surface-structure arrangement of object surfaces.
Alternatively, the electronic device may acquire texture images of the M first images by using a local binary pattern (Local Binary Pattern, LBP) operator in a manner of quantizing texture portions of the M first images.
For detailed description of the local binary pattern operator, reference may be made to the related art, and details are not repeated here.
And F, the electronic equipment carries out local variance processing on the texture images of the M first images to obtain local variances of the M first images.
Optionally, in the local variance processing, the electronic device may perform a pixel-by-pixel P-neighborhood variance calculation on the texture images of the M first images.
It can be understood that the larger the value of the local variance, the richer the detail information of that local region of the image.
And G, the electronic equipment sums the local variances of the first images to obtain the global variances of the first images.
And step H, the electronic equipment determines images with global variances meeting global variance conditions in the M first images as images in at least one second image.
Optionally, the global variance satisfying the global variance condition may include one of: being among the top Q images with the largest global variance, or having a global variance greater than or equal to a preset global variance.
It can be appreciated that the larger the global variance of an image, the richer the global detail information of that image.
For example, assume that there are a total of 3 first images, namely image 1, image 2, and image 3. The electronic device may acquire texture images corresponding to the 3 first images: texture image 1, texture image 2, and texture image 3. The electronic equipment performs local variance processing on the 3 texture images to obtain local variances of the 3 first images, and then sums the local variances of the first images to obtain global variances of the first images: the global variance of image 1 is 8, the global variance of image 2 is 5, and the global variance of image 3 is 9.
If Q = 1, the global variance condition may be having the largest global variance, i.e. the electronic device may determine image 3 as an image of the at least one second image.
Therefore, the electronic device can determine the image with the global variance meeting the global variance condition in the M first images as the image in the at least one second image, and can determine the image with better imaging details in the M first images as the image in the at least one second image, so that the imaging effect and the visual effect of the output image can be improved.
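Steps E to H can be sketched with a simple 8-neighbour LBP operator and windowed variances; the specific LBP variant, the 3x3 variance window and the helper names are illustrative assumptions:

```python
import numpy as np

def lbp_image(img):
    """Step E: simple 8-neighbour local binary pattern; each neighbour with a
    gray value >= the centre contributes one bit to the pixel's code."""
    img = img.astype(float)
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros((h, w), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nbr = p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        code |= (nbr >= img).astype(np.uint8) << bit
    return code

def global_variance(img):
    """Steps F and G: sum of per-pixel 3x3 local variances of the LBP
    texture image."""
    tex = lbp_image(img).astype(float)
    p = np.pad(tex, 1, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(p, (3, 3))
    return float(win.var(axis=(2, 3)).sum())

def richest_texture_index(images):
    """Step H with Q = 1: index of the image with the largest global variance."""
    scores = [global_variance(im) for im in images]
    return scores.index(max(scores))
```

A flat image produces a constant LBP code and therefore a zero global variance, while any textured image scores higher, which is exactly the ranking step H relies on.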
Step 103, the electronic device outputs the target image in the at least one second image.
In the embodiment of the application, the target image may be one or more images in the at least one second image.
Alternatively, the electronic device may select and output the target image from the at least one second image at random or by a screening condition (a third condition described below).
According to the shooting method, the electronic equipment can screen at least one second image from M first images shot by the M cameras based on at least one of brightness, texture and noise, and then output the target image in the at least one second image, so that the output target image can give consideration to the visual effect of imaging on the basis of conforming to the physical characteristics of the electronic equipment, and the visual effect of the output image can be improved.
Optionally, before the step 101, the photographing method provided in the embodiment of the present application may further include the following step a. The above step 102 may be specifically implemented by the following step b.
And a, the electronic equipment determines the magnification based on the input of a user.
Alternatively, the user's input may be an input to a shooting preview interface before shooting.
For example, the user's input may be a selection made in the shooting preview interface by enlarging or reducing the on-screen content.
It can be appreciated that the electronic device may determine, based on the user input, a picture to be photographed, thereby determining a magnification of the content of the picture in the photographing preview interface.
And b, under the condition that the M first images meet the view angle condition and the object distance condition, the electronic equipment screens at least one second image from the M first images according to the first parameters.
The object distance condition is that the object distance is larger than or equal to a first preset threshold value.
Alternatively, the angle of view may be an actual angle of view of the first image.
Alternatively, the angle of view corresponding to the above magnification may be the angle of view of the captured image at a specific magnification.
The angle of view corresponding to the magnification may be, for example, the angle of view at which an image captured at that magnification contains the complete photographic subject. That is, a first image containing the complete photographic subject has an angle of view greater than or equal to the angle of view corresponding to the magnification, i.e., such a first image satisfies the angle-of-view condition.
In the embodiment of the application, through the judgment of the angle of view and the object distance, a clear and complete image of the shooting object can be selected from M first images.
It can be understood that the size of the field angle of a camera determines its field of view: the larger the field angle, the larger the field of view and the smaller the magnification. That is, only when the required magnification of the image is greater than or equal to that of the camera can the camera capture an image containing the entire subject.
Optionally, the electronic device may judge the M first images according to their angles of view and screen out the images satisfying the angle-of-view condition, i.e. the images containing the complete photographic subject; alternatively, it may judge the M first images according to their magnifications and screen out the images containing the complete photographic subject. Optionally, the electronic device may further judge the M first images according to the object distance and screen out the images satisfying the object-distance condition.
It can be appreciated that the object distance mainly affects the sharpness of the camera shooting the focused object. Since the lens size of the camera (e.g., radius, thickness, etc. of the lens) is fixed, i.e., the focal length is fixed, imaging is only clear when the object distance is greater than or equal to the preset distance. Therefore, in the embodiment of the present application, the image satisfying the object distance condition may be a clear image.
Optionally, the preset distance may be a minimum focusing distance of the camera.
Illustratively, take camera a and camera b as examples, where the magnification of camera a is 1X and the magnification of camera b is 2X. If the magnification set by the user is 1.5X, only the image shot by camera a contains the complete shooting object; if the magnification set by the user is 2.5X, the images shot by camera a and camera b both contain the complete shooting object, and the electronic device can then perform object-distance judgment on camera a and camera b.
Suppose camera a has a minimum focusing distance of 35 mm and camera b has a minimum focusing distance of 25 mm. If the electronic device measures, through its built-in laser module, that the object distance (i.e., the distance between the shooting object and the electronic device) is 30 mm, it outputs the image shot by the camera whose minimum focusing distance is less than or equal to the object distance (i.e., camera b); if the object distance is 50 mm, both camera a and camera b can shoot clear images, and the electronic device can then screen at least one second image from the shot images according to the first parameter.
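The magnification and object-distance screening walked through in this example can be sketched as follows. This is a minimal illustration only: the dictionary-based camera description, the numeric values, and the helper name `screen_cameras` are assumptions for illustration, not part of the embodiment.

```python
# Hypothetical sketch of the camera-screening logic described above.
# Camera parameters (magnification, minimum focusing distance) and the
# measured object distance are illustrative values.

def screen_cameras(cameras, required_magnification, object_distance_mm):
    """Return cameras whose images contain the complete subject
    (camera magnification <= required magnification) and can focus
    sharply (minimum focusing distance <= object distance)."""
    # Field-of-view check: a camera whose magnification exceeds the
    # required magnification has too narrow a view for the whole subject.
    candidates = [c for c in cameras
                  if c["magnification"] <= required_magnification]
    # Object-distance check: imaging is only sharp at or beyond the
    # minimum focusing distance.
    return [c for c in candidates
            if c["min_focus_mm"] <= object_distance_mm]

cameras = [
    {"name": "a", "magnification": 1.0, "min_focus_mm": 35},
    {"name": "b", "magnification": 2.0, "min_focus_mm": 25},
]

# Required magnification 2.5X, object distance 30 mm: only camera b
# both contains the full subject and focuses sharply.
selected = screen_cameras(cameras, 2.5, 30)
```

With an object distance of 50 mm, both cameras would pass and the screening by the first parameter would proceed on both images, matching the example above.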
Optionally, before the step 103, the photographing method provided in the embodiment of the present application may further include a step 104 described below.
Step 104, the electronic device determines an image with the second parameter satisfying the second condition in the at least one second image as a target image.
Wherein the second parameter may comprise at least one of: brightness, texture, noise, and magnification.
Optionally, the second parameter satisfying the second condition may include at least two of:
case 1: the brightness satisfies the brightness condition;
case 2: the texture satisfies the texture condition;
case 3: the noise satisfies a noise condition;
case 4: the difference between the magnification of the camera corresponding to the image and the required magnification is the smallest.
Alternatively, the priority of cases 1 to 3 above may be higher than that of case 4, that is: the electronic device preferentially judges whether the at least one second image includes an image satisfying at least two of cases 1 to 3. If so, the electronic device may take the image satisfying at least two of cases 1 to 3 as the target image. If not, the electronic device may take the image satisfying case 4 in the at least one second image as the target image.
Alternatively, if two or more images in the at least one second image each satisfy at least two of cases 1 to 3, the electronic device may take, among those images, the one satisfying case 4 as the target image.
Step 104 is specifically described below in connection with specific examples.
Illustratively, assume that there are three second images of image 1, image 2, and image 3.
If the magnification of the camera corresponding to the image 1 is 1X, and the image 1 meets the brightness condition; the magnification of the camera corresponding to the image 2 is 2X, and the image 2 meets the brightness condition and the noise condition; the magnification of the camera corresponding to the image 3 is 1X, and the image 3 meets the brightness condition and the texture condition. If the required magnification is 2X, image 1 satisfies case 1, image 2 satisfies case 1, case 3, and case 4, and image 3 satisfies case 1 and case 2, so the electronic apparatus can determine image 2 as the target image.
If the magnification of the camera corresponding to the image 1 is 1X, and the image 1 meets the brightness condition; the magnification of the camera corresponding to the image 2 is 2X, and the image 2 meets the noise condition; the magnification of the camera corresponding to the image 3 is 1X, and the image 3 meets the texture condition. If the required magnification is 2X, then image 1 satisfies case 1, image 2 satisfies case 3 and case 4, and image 3 satisfies case 2, so the electronic device can determine image 2 as the target image.
If the magnification of the camera corresponding to the image 1 is 1X, and the image 1 meets the brightness condition, the texture condition and the noise condition; the magnification of the camera corresponding to the image 2 is 2X, and the image 2 meets the noise condition; the magnification of the camera corresponding to the image 3 is 1X, and the image 3 meets the texture condition. If the required magnification is 2X, the image 1 satisfies the case 1, the case 2, and the case 3, the image 2 satisfies the case 3, and the case 4, and the image 3 satisfies the case 2, so the electronic apparatus can determine the image 1 as the target image.
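The priority rule of step 104 and the three worked examples above can be sketched as follows. This is an illustrative Python sketch under stated assumptions: the per-image condition flags and the helper name `pick_target` are hypothetical, with cases 1 to 3 counted first and case 4 (smallest magnification gap) used as the fallback and tie-breaker, as described in the embodiment.

```python
# Illustrative sketch of step 104's priority rule. Condition flags per
# image are assumed inputs; in practice they come from the brightness,
# texture, and noise judgments.

def pick_target(images, required_magnification):
    """Prefer the image satisfying the most of cases 1-3 (brightness,
    texture, noise); when no image satisfies at least two of them, or
    when several are tied, decide by case 4 (smallest difference
    between camera magnification and required magnification)."""
    def quality(img):
        return sum((img["brightness_ok"], img["texture_ok"], img["noise_ok"]))

    def mag_gap(img):
        return abs(img["magnification"] - required_magnification)

    best_quality = max(quality(img) for img in images)
    if best_quality >= 2:
        candidates = [img for img in images if quality(img) == best_quality]
    else:
        candidates = images  # fall back to case 4 alone
    return min(candidates, key=mag_gap)

# Third example above: image 1 satisfies cases 1-3, so it is chosen
# even though image 2 is closer to the required 2X magnification.
images = [
    {"name": "image 1", "magnification": 1.0,
     "brightness_ok": True, "texture_ok": True, "noise_ok": True},
    {"name": "image 2", "magnification": 2.0,
     "brightness_ok": False, "texture_ok": False, "noise_ok": True},
    {"name": "image 3", "magnification": 1.0,
     "brightness_ok": False, "texture_ok": True, "noise_ok": False},
]
target = pick_target(images, 2.0)
```

The same helper reproduces the first example (image 2 wins on cases 1, 3, and 4) and the second example (no image satisfies two of cases 1 to 3, so case 4 alone selects image 2).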
Therefore, the electronic equipment can determine the second image meeting more conditions as the target image, so that the target image has better imaging effect, and the visual effect of the image output by the electronic equipment can be improved.
Optionally, before the step 102, the photographing method provided in the embodiment of the present application may further include a step 105 described below.
Step 105, the electronic device performs a viewing angle and resolution alignment process on the M first images.
It can be understood that, because the positions and magnifications of the cameras in the electronic device differ, their imaging viewing angles differ, so the imaging content of the M first images may differ; the electronic device can therefore perform viewing-angle alignment processing on the M first images.
Further, because the shooting performance (such as pixel count, lens quality, etc.) of different cameras may also differ, the resolutions of the captured images may differ; therefore, to facilitate subsequent pixel-level comparison of the M first images, the electronic device can perform resolution alignment on the M first images.
Alternatively, the electronic device may perform perspective alignment on the M first images by a Scale-invariant feature transform (SIFT) operator.
Specifically, the electronic device may detect key pixel points in the M first images through the SIFT operator, and then rotate the other (M-1) first images with one of the first images as the target, so that the imaging angles of the M first images are the same. Meanwhile, the electronic device may crop, from the first images with smaller magnification, the scene content that exceeds the field of view of the first images with larger magnification, so as to ensure that the imaging content of the M first images is the same.
Optionally, the electronic device may further perform reduction processing, by bilinear interpolation, on the first images with larger resolutions, so that the resolutions of the M first images are identical.
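The bilinear reduction mentioned above can be sketched as follows. This is a minimal pure-Python illustration on a grayscale image represented as nested lists; a real implementation would operate on camera frames through an image-processing library, and the helper name is an assumption.

```python
# Minimal sketch of resolution alignment by bilinear interpolation.

def bilinear_resize(image, new_h, new_w):
    """Resize a 2-D grayscale image with bilinear interpolation."""
    old_h, old_w = len(image), len(image[0])
    out = [[0.0] * new_w for _ in range(new_h)]
    for i in range(new_h):
        for j in range(new_w):
            # Map the output pixel back into source coordinates.
            y = i * (old_h - 1) / max(new_h - 1, 1)
            x = j * (old_w - 1) / max(new_w - 1, 1)
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, old_h - 1), min(x0 + 1, old_w - 1)
            dy, dx = y - y0, x - x0
            # Weighted average of the four neighbouring source pixels.
            out[i][j] = (image[y0][x0] * (1 - dy) * (1 - dx)
                         + image[y0][x1] * (1 - dy) * dx
                         + image[y1][x0] * dy * (1 - dx)
                         + image[y1][x1] * dy * dx)
    return out

# Downscale a 4x4 ramp to 2x2 so two images can share one resolution.
src = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
small = bilinear_resize(src, 2, 2)
```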
Therefore, on one hand, the electronic device can perform viewing-angle alignment processing on the M first images before screening them, so that differences in imaging viewing angle caused by different camera positions are avoided and the imaging content of the M first images is the same, which facilitates subsequent screening; on the other hand, resolution alignment processing of the M first images can be realized, which facilitates subsequent pixel-level comparison between the M first images.
According to the shooting method provided by the embodiment of the application, the execution subject can be a shooting device. In the embodiment of the present application, taking an example of a photographing method performed by a photographing device, the photographing device provided in the embodiment of the present application is described.
Fig. 2 shows a schematic diagram of a possible configuration of a photographing device according to an embodiment of the present application. As shown in fig. 2, the photographing device 20 may include: a photographing module 21, a screening module 22, and an output module 23;
the shooting module 21 is configured to respectively shoot images through M cameras to obtain M first images, where M is an integer greater than 1; a screening module 22, configured to screen at least one second image from the M first images captured by the capturing module 21 according to the first parameter; an output module 23, configured to output the target image in the at least one second image screened by the selection module 22; wherein the first parameter comprises at least one of: brightness, texture, and noise.
In one possible implementation, the screening module 22 specifically includes a determination sub-module;
the determining submodule is used for determining images with first parameters meeting first conditions in M first images as images in at least one second image; wherein the first parameter meeting the first condition comprises any one of: the brightness satisfies the brightness condition; the texture satisfies the texture condition; the noise satisfies the noise condition.
In one possible implementation, the brightness satisfying the brightness condition includes one of: the front N images with the maximum brightness and the images with the brightness larger than or equal to the preset brightness;
texture satisfies texture conditions including one of: front Q images with the clearest texture and images with the texture definition being greater than or equal to the preset texture definition;
the noise satisfying the noise condition includes one of: the front K images with the minimum noise and the images with the noise smaller than the preset noise;
wherein N, Q and K are positive integers and N, Q and K are less than M.
In one possible implementation, the first parameter includes: brightness;
the determining submodule is specifically configured to:
carrying out normalization processing on the M first images to obtain M normalized images;
and determining an image with the pixel mean value meeting the pixel mean value condition in the M normalized images as an image in at least one second image.
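The brightness judgment above can be sketched as follows. This is a hedged illustration: the 8-bit pixel range, the threshold value, and the helper names are assumptions for the sketch, not values fixed by the embodiment.

```python
# Illustrative sketch of the brightness screening: normalize each
# image's pixels to [0, 1], then keep images whose pixel mean meets an
# assumed brightness condition.

def normalized_mean(image, max_value=255.0):
    """Mean of pixel values after normalizing to the [0, 1] range."""
    pixels = [p / max_value for row in image for p in row]
    return sum(pixels) / len(pixels)

def screen_by_brightness(images, threshold=0.4):
    """Keep images whose normalized pixel mean satisfies the (assumed)
    pixel-mean condition."""
    return [img for img in images if normalized_mean(img) >= threshold]

bright = [[200, 220], [210, 230]]   # normalized mean about 0.84
dark = [[20, 30], [25, 35]]         # normalized mean about 0.11
kept = screen_by_brightness([bright, dark])
```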
In one possible implementation, the first parameter includes: noise;
the determining submodule is specifically configured to:
determining the number of noise points in M first images;
and determining the images with the noise number meeting the noise number condition in the M first images as images in at least one second image.
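The noise judgment above can be sketched as follows. The embodiment does not fix how noise points are detected, so this sketch adopts one common impulse-noise heuristic purely as an assumption: a pixel counts as a noise point when it deviates strongly from the median of its 3x3 neighbourhood.

```python
# Illustrative sketch of counting noise points in a grayscale image.
import statistics

def count_noise_points(image, deviation=50):
    """Count pixels differing from their local 3x3 median by more than
    `deviation` (an assumed threshold)."""
    h, w = len(image), len(image[0])
    noise = 0
    for i in range(h):
        for j in range(w):
            neighbours = [image[y][x]
                          for y in range(max(i - 1, 0), min(i + 2, h))
                          for x in range(max(j - 1, 0), min(j + 2, w))]
            if abs(image[i][j] - statistics.median(neighbours)) > deviation:
                noise += 1
    return noise

clean = [[100, 101, 99], [100, 100, 101], [99, 100, 100]]
noisy = [[100, 101, 99], [100, 255, 101], [99, 100, 100]]  # one hot pixel
```

Images whose noise-point count satisfies the noise-number condition (e.g., the K smallest counts) would then be kept as second images.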
In one possible implementation, the first parameter includes: texture;
the determining submodule is specifically configured to:
obtaining texture images of M first images;
carrying out local variance processing on texture images of M first images to obtain local variances of the M first images;
summing the local variances of the first images to obtain global variances of the first images;
and determining the images with the global variances meeting the global variance conditions in the M first images as images in at least one second image.
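The texture judgment above can be sketched as follows. This is a hedged illustration: the non-overlapping window partition and the window size are assumptions, and the texture image is taken as given (e.g., an edge or high-frequency map extracted beforehand).

```python
# Illustrative sketch of the texture screening: variance within each
# local window of a texture image, summed into a global variance score.

def local_variances(texture, window=2):
    """Variance of each non-overlapping window-by-window block."""
    h, w = len(texture), len(texture[0])
    variances = []
    for i in range(0, h - window + 1, window):
        for j in range(0, w - window + 1, window):
            block = [texture[y][x]
                     for y in range(i, i + window)
                     for x in range(j, j + window)]
            mean = sum(block) / len(block)
            variances.append(sum((p - mean) ** 2 for p in block) / len(block))
    return variances

def global_variance(texture):
    """Sum of local variances; a larger value suggests richer texture."""
    return sum(local_variances(texture))

flat = [[10, 10], [10, 10]]       # no texture detail
detailed = [[0, 50], [50, 0]]     # strong local variation
```

Images whose global variance satisfies the global-variance condition (e.g., the Q largest scores) would then be kept as second images.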
In one possible implementation manner, the apparatus further includes: a determining module;
the determining module is configured to determine, as the target image, an image in which a second parameter in the at least one second image satisfies a second condition before the output module 23 outputs the target image in the at least one second image;
wherein the second parameter meeting the second condition comprises at least two of: the brightness satisfies the brightness condition; the texture satisfies the texture condition; the noise satisfies a noise condition; the difference between the magnification of the camera corresponding to the image and the required magnification is the smallest.
In one possible implementation manner, the apparatus further includes: a processing module;
The processing module is configured to perform alignment processing on the viewing angles and resolutions of the M first images before the screening module 22 screens at least one second image from the M first images according to the first parameter.
In a possible implementation manner, the determining module is further configured to determine the magnification based on the input of the user before the photographing module 21 photographs the images through the M cameras, respectively, to obtain M first images;
the screening module is specifically configured to:
under the condition that the M first images meet the view angle condition and the object distance condition, at least one second image is screened out from the M first images according to a first parameter;
the view angle condition is that the view angle is larger than or equal to the view angle corresponding to the magnification, and the object distance condition is that the object distance is larger than or equal to a first preset threshold.
According to the shooting device, at least one second image can be screened out from M first images shot by M cameras based on at least one of brightness, texture and noise, and then the target image in the at least one second image is output, so that the output target image can give consideration to the visual effect of imaging on the basis of conforming to the physical characteristics of the shooting device, and the visual effect of the output image can be improved.
The photographing device in the embodiment of the application may be an electronic device, or may be a component in the electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, mobile internet device (Mobile Internet Device, MID), augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, robot, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and may also be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (television, TV), teller machine or self-service machine, etc., which is not specifically limited in the embodiments of the present application.
The photographing device in the embodiment of the application may be a device having an operating system. The operating system may be an Android operating system, an ios operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The photographing device provided in the embodiment of the present application can implement each process implemented by the method embodiment of fig. 1, so as to achieve the same technical effect, and in order to avoid repetition, a detailed description is omitted here.
Optionally, as shown in fig. 3, the embodiment of the present application further provides an electronic device 300, including a processor 301 and a memory 302, where a program or an instruction capable of running on the processor 301 is stored in the memory 302, and the program or the instruction realizes each step of the above-mentioned shooting method embodiment when being executed by the processor 301, and the steps can achieve the same technical effect, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 4 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, and processor 410.
Those skilled in the art will appreciate that the electronic device 400 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 410 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 4 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
The processor 410 is configured to capture images by using M cameras, respectively, to obtain M first images, where M is an integer greater than 1, and is configured to screen at least one second image from the captured M first images according to a first parameter; a display unit 406, configured to output a target image in the at least one second image screened by the processor 410; wherein the first parameter comprises at least one of: brightness, texture, and noise.
Optionally, the processor 410 is configured to determine an image, of the M first images, for which the first parameter meets the first condition, as an image of the at least one second image; wherein the first parameter meeting the first condition comprises any one of: the brightness satisfies the brightness condition; the texture satisfies the texture condition; the noise satisfies the noise condition.
Optionally, the brightness satisfying the brightness condition includes one of: the front N images with the maximum brightness and the images with the brightness larger than or equal to the preset brightness;
texture satisfies texture conditions including one of: front Q images with the clearest texture and images with the texture definition being greater than or equal to the preset texture definition;
the noise satisfying the noise condition includes one of: the front K images with the minimum noise and the images with the noise smaller than the preset noise;
Wherein N, Q and K are positive integers and N, Q and K are less than M.
Optionally, the first parameter includes: brightness;
the processor 410 is configured to:
carrying out normalization processing on the M first images to obtain M normalized images;
and determining an image with the pixel mean value meeting the pixel mean value condition in the M normalized images as an image in at least one second image.
Optionally, the first parameter includes: noise;
the processor 410 is configured to:
determining the number of noise points in M first images;
and determining the images with the noise number meeting the noise number condition in the M first images as images in at least one second image.
Optionally, the first parameter includes: texture;
the processor 410 is configured to:
obtaining texture images of M first images;
carrying out local variance processing on texture images of M first images to obtain local variances of the M first images;
summing the local variances of the first images to obtain global variances of the first images;
and determining the images with the global variances meeting the global variance conditions in the M first images as images in at least one second image.
Optionally, the processor 410 is configured to determine, as the target image, an image in which a second parameter in the at least one second image satisfies a second condition before outputting the target image in the at least one second image;
Wherein the second parameter meeting the second condition comprises at least two of: the brightness satisfies the brightness condition;
the texture satisfies the texture condition; the noise satisfies a noise condition; the difference between the magnification of the camera corresponding to the image and the required magnification is the smallest.
Optionally, the processor 410 is further configured to perform a viewing angle and resolution alignment process on the M first images before screening at least one second image from the M first images according to the first parameter.
Optionally, the processor 410 is further configured to determine the magnification based on the input of the user before capturing the images by the M cameras, respectively, to obtain M first images;
the processor 410 is specifically configured to:
under the condition that the M first images meet the view angle condition and the object distance condition, at least one second image is screened out from the M first images according to a first parameter;
the view angle condition is that the view angle is larger than or equal to the view angle corresponding to the magnification, and the object distance condition is that the object distance is larger than or equal to a first preset threshold.
According to the electronic equipment, at least one second image can be screened out from M first images shot by M cameras based on at least one of brightness, texture and noise, and then the target image in the at least one second image is output, so that the output target image can give consideration to the visual effect of imaging on the basis of conforming to the physical characteristics of the electronic equipment, and the visual effect of the output image can be improved.
It should be appreciated that in embodiments of the present application, the input unit 404 may include a graphics processor (Graphics Processing Unit, GPU) 4041 and a microphone 4042, with the graphics processor 4041 processing image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 407 includes at least one of a touch panel 4071 and other input devices 4072. The touch panel 4071 is also referred to as a touch screen. The touch panel 4071 may include two parts, a touch detection device and a touch controller. Other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
Memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a first memory area storing programs or instructions and a second memory area storing data, where the first memory area may store an operating system, application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.), and the like. Further, the memory 409 may include volatile memory or nonvolatile memory, or the memory 409 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be random access memory (Random Access Memory, RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synch link DRAM (SLDRAM), and direct Rambus RAM (DRRAM). Memory 409 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 410 may include one or more processing units; optionally, the processor 410 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the program or the instruction realizes each process of the above-mentioned shooting method embodiment, and the same technical effect can be achieved, so that repetition is avoided, and no redundant description is provided herein.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes computer readable storage medium such as computer readable memory ROM, random access memory RAM, magnetic or optical disk, etc.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, the processor is used for running a program or instructions, implementing each process of the shooting method embodiment, and achieving the same technical effect, so as to avoid repetition, and no redundant description is provided herein.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
The embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the foregoing shooting method embodiments, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solutions of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.

Claims (11)

1. A photographing method, the method comprising:
respectively shooting images through M cameras to obtain M first images, wherein M is an integer greater than 1;
screening at least one second image from the M first images according to the first parameter;
outputting a target image in the at least one second image;
wherein the first parameter comprises at least one of: brightness, texture, and noise.
2. The method of claim 1, wherein the screening at least one second image from the M first images based on the first parameter comprises:
determining images of which the first parameters meet the first conditions in the M first images as images of the at least one second image;
wherein the first parameter meeting the first condition comprises any one of:
the brightness satisfies the brightness condition;
the texture satisfies the texture condition;
the noise satisfies the noise condition.
3. The method of claim 2, wherein:
the brightness satisfying the brightness condition includes one of: the front N images with the maximum brightness and the images with the brightness larger than or equal to the preset brightness;
texture satisfies texture conditions including one of: front Q images with the clearest texture and images with the texture definition being greater than or equal to the preset texture definition;
The noise satisfying the noise condition includes one of: the front K images with the minimum noise and the images with the noise smaller than the preset noise;
wherein N, Q and K are positive integers and N, Q and K are less than M.
4. The method of claim 2, wherein the first parameter comprises: brightness;
the determining the image of which the first parameter in the M first images meets the first condition as the image in the at least one second image comprises the following steps:
carrying out normalization processing on the M first images to obtain M normalized images;
and determining an image with the pixel mean value meeting the pixel mean value condition in the M normalized images as an image in the at least one second image.
5. The method of claim 2, wherein the first parameter comprises: noise;
the determining the image of which the first parameter in the M first images meets the first condition as the image in the at least one second image comprises the following steps:
determining the number of noise points in the M first images;
and determining the images with the noise number meeting the noise number condition in the M first images as the images in the at least one second image.
6. The method of claim 2, wherein the first parameter comprises: texture;
the determining the image of which the first parameter in the M first images meets the first condition as the image in the at least one second image comprises the following steps:
acquiring texture images of the M first images;
carrying out local variance processing on texture images of the M first images to obtain local variances of the M first images;
summing the local variances of the first images to obtain global variances of the first images;
and determining the image with the global variance meeting the global variance condition in the M first images as the image in the at least one second image.
7. The method of claim 1, wherein prior to outputting the target image in the at least one second image, the method further comprises:
determining, as the target image, an image of the at least one second image whose second parameter meets a second condition;
wherein the second parameter meeting the second condition comprises at least two of:
the brightness satisfying the brightness condition;
the texture satisfying the texture condition;
the noise satisfying the noise condition;
the difference between the magnification of the camera corresponding to the image and the required magnification being the smallest.
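Claim 7's magnification criterion, applied after the other conditions, could be sketched as follows; the candidate dictionaries and their field names are hypothetical, invented only for illustration:

```python
def pick_target(candidates, required_mag):
    """Among second images already passing the brightness/texture/noise
    conditions, pick the one whose camera magnification is closest to
    the required magnification."""
    return min(candidates, key=lambda c: abs(c["magnification"] - required_mag))
```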
8. The method of claim 1, wherein before the screening of at least one second image from the M first images according to the first parameter, the method further comprises:
performing view angle and resolution alignment on the M first images.
9. The method of claim 1, wherein before the capturing of images by the M cameras to obtain M first images, the method further comprises:
determining a magnification based on a user input;
and the screening of at least one second image from the M first images according to the first parameter comprises:
screening the at least one second image from the M first images according to the first parameter if the M first images meet a view angle condition and an object distance condition;
wherein the view angle condition is that the view angle is greater than or equal to the view angle corresponding to the magnification, and the object distance condition is that the object distance is greater than or equal to a first preset threshold.
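Claim 9's gating check reduces to two comparisons; the sketch below assumes the view angles and distances are available as plain numbers:

```python
def may_screen(view_angle, mag_view_angle, object_distance, min_distance):
    """Claim 9's gate: screening proceeds only when the camera's view
    angle covers the view angle corresponding to the magnification and
    the object distance reaches an assumed first preset threshold."""
    return view_angle >= mag_view_angle and object_distance >= min_distance
```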
10. A photographing apparatus, comprising: a shooting module, a screening module, and an output module;
The shooting module is used for respectively shooting images through M cameras to obtain M first images, wherein M is an integer greater than 1;
the screening module is used for screening at least one second image from the M first images obtained by the shooting module according to a first parameter;
the output module is used for outputting the target image in the at least one second image obtained by the screening module;
wherein the first parameter comprises at least one of: brightness, texture, and noise.
11. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the shooting method of any one of claims 1 to 9.
CN202310157155.5A 2023-02-21 2023-02-21 Shooting method and device and electronic equipment Pending CN116132820A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310157155.5A CN116132820A (en) 2023-02-21 2023-02-21 Shooting method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN116132820A true CN116132820A (en) 2023-05-16

Family

ID=86299133

Country Status (1)

Country Link
CN (1) CN116132820A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination