CN111866476B - Image shooting method and device and electronic equipment - Google Patents

Image shooting method and device and electronic equipment

Info

Publication number
CN111866476B
Authority
CN
China
Prior art keywords
image
pixel point
gray
depth
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010899232.0A
Other languages
Chinese (zh)
Other versions
CN111866476A (en)
Inventor
王丹妹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority claimed from application CN202010899232.0A
Publication of CN111866476A
Application granted
Publication of CN111866476B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/84: Camera processing pipelines; Components thereof for processing colour signals

Abstract

The application discloses an image shooting method and device and an electronic device, and belongs to the technical field of communication. It can solve the problem that the operation process of obtaining a well-shot image is tedious and time-consuming. The method comprises the following steps: acquiring a first color image captured by a first camera and a second image captured by a second camera, wherein the second image comprises a second grayscale image or a second depth image; and generating a target image based on the first color image and the second image. The method can be applied to scenes in which images are shot with a camera of the electronic device.

Description

Image shooting method and device and electronic equipment
Technical Field
The embodiment of the application relates to the technical field of communication, in particular to an image shooting method and device and electronic equipment.
Background
With the development of electronic technology, an electronic device may use an installed camera to capture an image, for example, the electronic device may capture a night view image of a city through the camera.
At present, after an electronic device shoots an image of an object through a camera, if the user is not satisfied with the shot image and wants an image with a better shooting effect, the user needs to manually adjust the image through image processing software, for example its sharpness, its degree of edge sharpening, its brightness, and so on. Thus, the operation process of obtaining a well-shot image can be cumbersome and time-consuming.
Disclosure of Invention
The embodiment of the application aims to provide an image shooting method, an image shooting device and an electronic device, which can solve the problem that the operation process of obtaining a well-shot image is complex and time-consuming.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image capturing method, including: acquiring a first color image acquired by a first camera and a second image acquired by a second camera, wherein the second image comprises a second gray image or a second depth image; based on the first color image and the second image, a target image is generated.
In a second aspect, an embodiment of the present application provides an image capturing apparatus, including: the device comprises an acquisition module and a processing module. The acquisition module is used for acquiring a first color image acquired by the first camera and a second image acquired by the second camera, and the second image comprises a second gray image or a second depth image. And the processing module is used for generating a target image based on the first color image and the second image.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, and the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, a first color image can be acquired through a first camera, and a second image can be acquired through a second camera, wherein the second image comprises a second grayscale image or a second depth image; a target image is then generated based on the first color image and the second image. In this way, the electronic device can directly generate a target image containing more image information, such as the image information of the grayscale image or of the depth image, from the first color image and the second image acquired by the two cameras. Compared with the related art, this avoids the user having to process the image again after shooting it, so an image with a better shooting effect is obtained and the user's time is saved.
Drawings
Fig. 1 is a schematic diagram of an image capturing method according to an embodiment of the present disclosure;
fig. 2 is a second schematic diagram of an image capturing method according to an embodiment of the present disclosure;
fig. 3 is a third schematic diagram of an image capturing method according to an embodiment of the present disclosure;
fig. 4 is a fourth schematic view of an image capturing method according to an embodiment of the present application;
fig. 5 is a fifth schematic view illustrating an image capturing method according to an embodiment of the present application;
fig. 6 is a sixth schematic view illustrating an image capturing method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an image capturing apparatus according to an embodiment of the present disclosure;
FIG. 8 is a hardware diagram of an electronic device according to an embodiment of the present application;
fig. 9 is a second hardware schematic diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, such that embodiments of the application may be implemented in sequences other than those illustrated or described herein. The objects distinguished by "first", "second", and the like usually form a class, and the number of the objects is not limited; for example, the first object may be one object or a plurality of objects. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally means that the objects before and after it are in an "or" relationship.
The image capturing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The image shooting method of the embodiment of the application can be applied to the following scenes.
Scene one: shooting in dim light. For example, the user wants to take a night scene image of a city, or the user wants to take a starry sky image of a scene.
Scene two: sharpening the edges of the photographed object. For example, the user takes a close-up of an insect using macro mode, or the user photographs a scene of multiple vehicles on an overpass.
Scene three: locally matting the photographed object. For example, a user takes an image of a tree and needs to crop the outline of the tree out of the captured scene.
The electronic equipment in the embodiment of the application is provided with a first camera and a second camera; it can acquire a first color image through the first camera and a second image through the second camera, wherein the second image comprises a second grayscale image or a second depth image, and generate a target image based on the first color image and the second image. In this way, the electronic device can directly generate a target image containing more image information, such as the image information of the grayscale image or of the depth image, from the first color image and the second image acquired by the two cameras. Compared with the related art, this avoids the user having to process the image again after shooting it, so an image with a better shooting effect is obtained and the user's time is saved.
As shown in fig. 1, an embodiment of the present application provides an image capturing method, which may include steps 101 and 102 described below.
Step 101, the electronic device acquires a first color image acquired by a first camera and a second image acquired by a second camera.
Wherein the second image includes a second gray image or a second depth image.
Optionally, in this embodiment of the application, the luminance of the second grayscale image is greater than the luminance of the first color image.
In this embodiment, the electronic device includes a first camera and a second camera, where the first camera is any camera other than a time-of-flight (TOF) camera, such as a main camera, a wide-angle camera, a telephoto camera, and the like. The second camera is the TOF camera. The first camera and the second camera may form a camera array and be mounted on the electronic device together, for example both as front cameras mounted below the screen of the electronic device, or both as rear cameras mounted on the back of the electronic device.
In addition, in this embodiment of the present application, the TOF camera includes a transmitting module and a receiving module. The transmitting module may specifically adopt a multi-point array laser generator, and the receiving module may specifically adopt an image receiving module at 850 nm or 940 nm (nanometers). Specifically, when the TOF camera is in operation, the transmitting module may transmit a pulse (e.g., a square wave pulse) to the object to be photographed, and the receiving module may receive the reflected pulse signal and obtain the depth information (e.g., obtain the second depth image) by calculating the time interval between transmission and reception. Meanwhile, the TOF camera can also acquire a grayscale image of the photographed object (for example, acquire the second grayscale image). Since the filter of the TOF camera passes more light than the filter of an ordinary color camera (that is, the TOF camera is not equipped with a color filter, or the light transmittance of its filter is higher), the luminance of the grayscale image (e.g., the second grayscale image) acquired by the TOF camera is higher than that of the image (e.g., the first color image) acquired by the ordinary camera, so that the image contains more image information and can record more image details.
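For illustration, the time-of-flight depth calculation described above can be written as a minimal sketch; the function name and the example timing value are assumptions of this sketch, not values taken from the embodiment:

```python
# Depth from the emit-to-receive time interval of a TOF pulse.
# The pulse travels to the object and back, so the one-way distance
# is half of the round-trip distance: depth = c * dt / 2.
C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(time_interval_s: float) -> float:
    """Depth in meters for one pixel's measured time interval."""
    return C * time_interval_s / 2.0

# Example: a round trip of about 6.67 nanoseconds corresponds to
# approximately 1 meter of depth.
print(tof_depth_m(6.67e-9))
```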
Optionally, in this embodiment of the application, the specific acquisition mode in which the electronic device acquires the first color image through the first camera and the specific acquisition mode in which it acquires the second image through the second camera are not specifically limited, and may be set according to actual use requirements. For example, the first color image or the second image may be acquired in a preview mode, or may be acquired by capturing an image.
And 102, generating a target image based on the first color image and the second image by the electronic equipment.
It should be noted that, in the embodiment of the present application, the electronic device may generate a target image including more feature information, for example, image information of a grayscale image, image information of a depth image, and the like, from the first color image and the second image.
Optionally, in this embodiment of the application, for the specific method of generating a target image from the first color image and the second image, reference may be made to any one of the first to third modes below, which are not repeated here.
Illustratively, assuming that the user wants an image of an object, the user may acquire a first color image of the object through an ordinary camera (i.e., the first camera) of the electronic device, and a second grayscale image of the object through the TOF camera (i.e., the second camera). Then, the electronic device may optimize the first color image according to image information included in the second image, such as grayscale image information (i.e., the second grayscale image) or depth image information (i.e., the second depth image), so as to obtain a target image with higher definition, clearer image details and better brightness.
The embodiment of the application provides an image shooting method, in which a first color image can be acquired through a first camera of the electronic device and a second image can be acquired through a second camera of the electronic device, where the second image comprises a second grayscale image or a second depth image, the brightness of the second grayscale image is greater than that of the first color image, and a target image is generated from the first color image and the second image. In this way, the electronic device can directly generate a target image containing more image information, such as the image information of the grayscale image or of the depth image, from the first color image and the second image acquired by the two cameras. Compared with the related art, this avoids the user having to process the image again after shooting it, so an image with a better shooting effect is obtained and the user's time is saved.
Optionally, with reference to fig. 1, as shown in fig. 2, after the step 101 and before the step 102, the image capturing method provided in the embodiment of the present application further includes the following step 103, and the corresponding step 102 may be specifically implemented by the following step 102A.
And 103, the electronic equipment respectively performs image preprocessing on the first color image and the second color image.
And the number of the pixel points of the first color image and the second image after the preprocessing is the same.
Optionally, in an embodiment of the present application, the image preprocessing includes at least one of: the method comprises the steps of image noise reduction processing, dead pixel compensation processing, white balance correction processing, image scaling processing, color correction processing, pixel point alignment processing and the like. Specifically, for a certain item of image preprocessing, the electronic device may analyze the actual situation of the image and determine whether to perform the preprocessing step. Exemplarily, if the pixel points of the first color image and the second image are the same (i.e., the number of the pixel points is the same, and the corresponding position relationship of the pixel points is also the same), the electronic device may skip the preprocessing step of aligning the pixel points and continue to execute the next preprocessing step. If the pixel points of the first color image and the second image are different, the electronic equipment performs pixel point alignment processing on the pixel points of the first color image and the second image, and continues to execute the next preprocessing step after the preprocessing of the pixel point alignment processing is completed.
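As a minimal sketch of how this conditional preprocessing chain could be organized (the step functions below are illustrative stand-ins, not the embodiment's implementations):

```python
import numpy as np

# Illustrative stand-ins for the preprocessing steps named in the text;
# real implementations from the related art would go here.
def denoise(a, b): return a, b
def compensate_dead_pixels(a, b): return a, b
def correct_white_balance(a, b): return a, b

def align_pixels(a, b):
    # Crop both images to a common size as a trivial form of alignment.
    h, w = min(a.shape[0], b.shape[0]), min(a.shape[1], b.shape[1])
    return a[:h, :w], b[:h, :w]

def preprocess_pair(color_img: np.ndarray, second_img: np.ndarray):
    """Run the chain; skip pixel alignment when the pixel grids of the
    two images already match, as described above."""
    steps = []
    if color_img.shape[:2] != second_img.shape[:2]:
        steps.append(align_pixels)  # only align when the grids differ
    steps += [denoise, compensate_dead_pixels, correct_white_balance]
    for step in steps:
        color_img, second_img = step(color_img, second_img)
    return color_img, second_img
```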
It should be noted that, in the embodiment of the present application, reference may be specifically made to an image preprocessing method in the related art for performing the above preprocessing, and details are not described here again.
Step 102A, the electronic device generates a target image based on the preprocessed first color image and the preprocessed second image.
Optionally, in this embodiment of the application, the image size and the number of pixels of the first color image after the preprocessing are the same as those of the second image after the preprocessing, and the electronic device may directly perform image fusion (specifically, refer to the following first mode and second mode) or image matting (specifically, refer to the following third mode) on the first color image and the second image to generate the target image.
It should be noted that, in the following embodiments, the first color image and the second image on which the image fusion operation or the image matting operation is performed (e.g., in the following mode one, mode two, and mode three) are the preprocessed first color image and the preprocessed second image. For convenience of description, they are referred to directly as the first color image and the second image; this does not limit the present application.
In addition, in this embodiment of the present application, for a method for generating, by an electronic device, a target image according to a preprocessed first color image and a preprocessed second image, reference may be made to any one of a first mode, a second mode, or a third mode in the following embodiments, which is not described herein again.
It can be understood that, because the electronic device preprocesses the first color image and the second image, the image sizes and the pixel numbers of the two processed images are the same, and there is no image dead pixel, therefore, the electronic device can directly generate the target image according to the first color image after preprocessing and the second image after preprocessing, thereby facilitating the operation of the user, and being capable of more quickly and conveniently obtaining the target image required by the user.
Optionally, in this embodiment of the application, the above-mentioned step 102 may be specifically implemented in any one of the following manners according to a difference that the second image includes content.
Mode one
It should be noted that the following method can be applied to a scene of dark light shooting. For example, the user wants to take a night scene image of a city, or the user wants to take a starry sky image of a scene.
Alternatively, as shown in fig. 3 in conjunction with fig. 1, the second image includes a second grayscale image having a luminance greater than that of the first color image. The step 102 may be specifically realized by the following steps 102a to 102c.
Step 102a, the electronic device converts the first color image into a first gray image.
Optionally, in this embodiment of the application, the method for converting the first color image into the first grayscale image may be any one of the following: the weighting method, the averaging method, the maximum method, and the like. Specifically, the weighting method multiplies the three components (the R, G and B values) of each pixel point of the first color image by weight coefficients and adds them to obtain the gray value of that pixel point. The averaging method takes the arithmetic mean of the R, G and B values of each pixel point as its gray value. The maximum method takes the maximum of the three R, G and B values of each pixel point as its gray value.
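For illustration, the three conversion methods might be sketched as follows; the 0.299/0.587/0.114 coefficients are the common luminance weights and are an assumption of this sketch, since the embodiment does not fix specific weight values:

```python
import numpy as np

def to_gray(rgb: np.ndarray, method: str = "weighted") -> np.ndarray:
    """Convert an HxWx3 RGB image to grayscale using one of the three
    methods described above."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    if method == "weighted":   # weighted sum of the R, G, B components
        gray = 0.299 * r + 0.587 * g + 0.114 * b
    elif method == "average":  # arithmetic mean of the R, G, B components
        gray = (r + g + b) / 3.0
    elif method == "maximum":  # maximum of the three components
        gray = rgb.max(axis=-1)
    else:
        raise ValueError(method)
    return gray.clip(0, 255).astype(np.uint8)
```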
And 102b, the electronic equipment performs image fusion on the first gray level image and the second gray level image to obtain a third gray level image.
It should be noted that, in the embodiment of the present application, the third grayscale image obtained through image fusion may include more feature information than the first grayscale image (i.e., the feature information is from the second grayscale image acquired by the TOF camera), so that the detailed features of the object to be photographed in the third grayscale image are clearer. And, since the brightness of the second gray scale image before fusion is higher than the brightness of the first gray scale image (determined by the camera, i.e., the brightness of the gray scale image captured by the TOF camera is higher than the brightness of the image captured by the color camera), the brightness value of the third gray scale image after fusion is not lower than the brightness of the first gray scale image.
Optionally, in this embodiment of the application, the fusion algorithm used by the electronic device to fuse the first grayscale image and the second grayscale image may be any of the following: the gray-scale weighted average method, the contrast modulation method, the wavelet transform method, and the like. It may be determined according to actual use requirements, and the embodiment of the present application is not specifically limited.
The following embodiments are exemplified by a gray-scale weighted average method, and do not limit the present application in any way.
Optionally, in this embodiment of the application, the step 102b may be specifically implemented by the following step 102b1.
Step 102b1, for each pixel point in the third gray image, the electronic device takes the sum of the first numerical value and the second numerical value as the gray value of one pixel point in the third gray image.
The first numerical value is a numerical value obtained by weighting the gray value of a first pixel point in the first gray image, the second numerical value is a numerical value obtained by weighting the gray value of a second pixel point in the second gray image, and the first pixel point and the second pixel point are pixel points with mutually corresponding positions.
Optionally, in this embodiment of the application, the first numerical value is obtained by multiplying the gray value of the first pixel point in the first grayscale image by a first weighting coefficient, and the second numerical value is obtained by multiplying the gray value of the second pixel point in the second grayscale image by a second weighting coefficient. The first weighting coefficient and the second weighting coefficient indicate the weights given, in the gray-scale weighted average fusion, to the data of the corresponding pixel points in the first grayscale image and the second grayscale image (such as the first pixel point and the second pixel point). Specifically, a weighting coefficient may be a weight value or a weight ratio; it may be determined according to actual use requirements, and the embodiment of the present application is not particularly limited.
It should be noted that, in this embodiment of the application, the position of the first pixel point corresponding to the position of the second pixel point means that, in the process of fusing the first grayscale image and the second grayscale image into the third grayscale image, the first pixel point in the first grayscale image and the second pixel point in the second grayscale image are fused into one pixel point of the third grayscale image, for example a third pixel point; that is, corresponding positions are positions at which image fusion can be performed.
Optionally, in this embodiment of the application, the first weighting coefficient and the second weighting coefficient may be preset values. The user can also adjust the first weighting coefficient and the second weighting coefficient according to actual use requirements so as to obtain a high-quality third gray-scale image which is higher in definition, better in dynamic range, clearer in image detail characteristics and improved in brightness.
It can be understood that, the electronic device may use a sum of a weighted value of the gray value of the first pixel in the first gray image and a weighted value of the gray value of the second pixel in the second gray image as the gray value of a pixel in the third gray image. Therefore, the electronic equipment can obtain the weighted third grayscale image, so that the third grayscale image has higher definition, clearer image details and better brightness, and the image quality of the obtained third grayscale image is improved.
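A minimal sketch of the weighted fusion of step 102b1 follows; the equal default weights are an assumption, since the embodiment leaves the coefficients adjustable:

```python
import numpy as np

def fuse_gray(gray1: np.ndarray, gray2: np.ndarray,
              w1: float = 0.5, w2: float = 0.5) -> np.ndarray:
    """Gray-scale weighted average fusion: for each pair of
    position-corresponding pixel points, the fused gray value is the
    sum of the two weighted gray values."""
    # The preprocessed images have identical pixel grids.
    assert gray1.shape == gray2.shape
    fused = w1 * gray1.astype(np.float32) + w2 * gray2.astype(np.float32)
    return fused.clip(0, 255).astype(np.uint8)
```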
And 102c, converting the third gray level image into a second color image by the electronic equipment.
Wherein, the target image is a second color image.
Optionally, in this embodiment of the application, the method for converting the third grayscale image into the second color image may be either of the following. Mode A: convert the grayscale image into a pseudo-color image. The specific conversion method may be any one of the following: a gray-scale division method, a gray-scale-to-color conversion method, a filtering method, and the like. Illustratively, taking the gray-scale-to-color conversion method as an example, the electronic device may feed the third grayscale image to three R, G, B converters of different characteristics (typically, the three converters are represented using three different piecewise functions), and then feed the three converters' different outputs to the color display for display. Mode B: the electronic device may perform coloring processing on the third grayscale image with reference to the first color image, converting the third grayscale image into the second color image. Specifically, the electronic device may obtain the RGB data of each pixel point in the first color image, calculate the R sub-data, G sub-data and B sub-data corresponding to the gray value of the corresponding pixel point in the third grayscale image from that RGB data, and generate the second color image from the newly generated RGB data (each sub-datum in the RGB data has a corresponding relationship with the gray value). The selection can be made according to actual use requirements, and the embodiment of the application is not particularly limited.
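One plausible reading of mode B is a ratio-based recoloring, sketched below; the scaling rule is an assumption of this sketch, since the embodiment only states that the RGB sub-data are recomputed from the gray values:

```python
import numpy as np

def colorize(gray3: np.ndarray, color1: np.ndarray,
             gray1: np.ndarray) -> np.ndarray:
    """Recolor the fused grayscale image (gray3) using the chroma of
    the first color image, by scaling each pixel's RGB by the ratio of
    the new gray value to the old one (an illustrative assumption)."""
    ratio = gray3.astype(np.float32) / np.maximum(
        gray1.astype(np.float32), 1.0)  # avoid division by zero
    rgb = color1.astype(np.float32) * ratio[..., None]
    return rgb.clip(0, 255).astype(np.uint8)
```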
Exemplarily, assuming that a user wants to capture a night scene of a city, the user may acquire a first color image of the night scene through a general camera of the electronic device, and acquire a second gray scale image of the night scene according to the TOF camera. Then, the electronic device may convert the first color image into a first grayscale image according to a grayscale weighting method, and then, the electronic device may perform image fusion on the first grayscale image and the second grayscale image according to a weighting coefficient to obtain a third grayscale image. The electronic device can then convert the third grayscale image into a color image (i.e., a second color image) for output or display, so as to obtain a color image with higher definition, clearer image details and better brightness.
It is to be appreciated that the electronic device can convert the first color image to a first grayscale image and then image-fuse with the second grayscale image to obtain a third grayscale image containing more feature information, after which the electronic device can convert the third grayscale image to a second color image. Therefore, the electronic equipment can shoot a second color image with higher definition, clearer image details and better brightness in a dark environment.
Mode two
The following second mode can be applied to a scene in which an edge of a photographic subject is sharpened. For example, the user takes a close-up of an insect using macro mode, or the user takes a scene of multiple vehicles on an overpass.
Optionally, with reference to fig. 1, as shown in fig. 4, the second image includes a second depth image, and the step 102 may be specifically implemented by the following steps 102d to 102e.
Step 102d, the electronic device obtains the depth value of the first edge pixel point of the second depth image.
The first edge pixel point is a pixel point of which the absolute value of the difference value between the depth values of the first edge pixel point and the adjacent pixel point is greater than or equal to a target threshold value, and the adjacent pixel point is a neighborhood pixel point of the edge pixel point.
It should be noted that, in the embodiment of the present application, a neighborhood pixel point is a pixel point in the neighborhood of a given pixel point; specifically, the neighborhood may be a 4-neighborhood or an 8-neighborhood. Exemplarily, assuming that the coordinates of a pixel point are (x, y), its 4-neighborhood pixel points are: (x+1, y), (x-1, y), (x, y+1) and (x, y-1); and its 8-neighborhood pixel points are: (x+1, y), (x-1, y), (x, y+1), (x, y-1), (x-1, y+1), (x-1, y-1), (x+1, y+1) and (x+1, y-1).
Optionally, in this embodiment of the application, the manner in which the electronic device obtains the edge pixel points may specifically be: the electronic device obtains the depth values of a pixel point and of its 4- or 8-neighborhood pixel points, compares the absolute value of each difference with the target threshold, and when the absolute value of the difference is greater than or equal to the target threshold, takes the pixel point as a first edge pixel point and obtains its depth value. The target threshold may be set according to actual use requirements, and the embodiment of the present application is not particularly limited.
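A minimal sketch of this threshold test over a depth image, using the 4-neighborhood for brevity (the function name and the vectorized form are assumptions of the sketch):

```python
import numpy as np

def depth_edge_mask(depth: np.ndarray, threshold: float) -> np.ndarray:
    """Mark a pixel as a first edge pixel when the absolute depth
    difference to a 4-neighborhood pixel reaches the target threshold,
    as described in step 102d."""
    d = depth.astype(np.float32)
    edge = np.zeros(d.shape, dtype=bool)
    # Compare each pixel with its right and bottom neighbors; both
    # pixels of a large-difference pair are edge candidates.
    dx = np.abs(d[:, 1:] - d[:, :-1]) >= threshold
    dy = np.abs(d[1:, :] - d[:-1, :]) >= threshold
    edge[:, 1:] |= dx
    edge[:, :-1] |= dx
    edge[1:, :] |= dy
    edge[:-1, :] |= dy
    return edge
```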
And 102e, the electronic equipment adds the depth value of the first edge pixel point to the edge pixel point at the corresponding position in the first color image to obtain the target image.
Optionally, in this embodiment of the application, the electronic device adding the depth value of the first edge pixel point to the edge pixel point at the corresponding position in the first color image means that the electronic device sharpens the corresponding edges of the first color image according to the depth values of the first edge pixel points, so that the edge contour of the resulting sharpened color image (that is, the target image) is clear and contains more image detail features.
Specifically, in this embodiment of the present application, the electronic device may add the depth value of each first edge pixel point to the edge pixel point at the corresponding position in the first color image as follows: the electronic device optimizes the RGB values of the position-corresponding edge pixel points in the first color image according to the depth values of the first edge pixel points (an edge optimization algorithm in the related art may be used), obtains the optimized RGB values, and replaces the original RGB values with them (non-edge pixel points retain their original RGB values), so as to obtain a color image with sharpened edges (i.e., the target image).
It should be noted that, in the embodiment of the present application, the target image is an image obtained by sharpening and enhancing corresponding edge pixels in the first color image according to the depth values of the edge pixels, and non-edge pixels are not sharpened. In this way, the edge contour of the processed color image (i.e. the target image) is made to be clear, and more detailed image features are included.
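For illustration, the selective sharpening could look like the following sketch; unsharp masking is used here as a stand-in for the unspecified edge optimization algorithm of the related art, which is an assumption of this sketch:

```python
import numpy as np

def sharpen_edges(color: np.ndarray, edge_mask: np.ndarray,
                  amount: float = 0.8) -> np.ndarray:
    """Sharpen only the edge pixels of the color image; non-edge
    pixels keep their original RGB values, as the text specifies."""
    img = color.astype(np.float32)
    # 3x3 box blur via edge-padding and neighbor averaging.
    pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blur = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    sharp = img + amount * (img - blur)   # unsharp masking
    out = img.copy()
    out[edge_mask] = sharp[edge_mask]     # replace RGB only at edges
    return out.clip(0, 255).astype(np.uint8)
```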
It can be understood that the electronic device may obtain the depth values of the first edge pixel points of the second depth image and add them to the edge pixel points at the corresponding positions in the first color image, so as to obtain a color image (i.e., the target image) whose image detail features are rich and whose edge contour is clear. Thus, in scenes that need to bring out the edge features of the photographed object, such as macro shooting, the electronic device can capture a color target image with clear edge details and contours.
Mode three
In the embodiment of the present application, the following mode three may be applied to a scene in which a local region is matted out of an image captured by the electronic device. For example, a user takes a picture of a tree and needs to cut the outline of the tree out of the captured scene.
Optionally, with reference to fig. 1, as shown in fig. 5, the second image includes a second depth image, and the step 102 may be specifically implemented by the following steps 102f to 102h.
Step 102f, the electronic device determines a first edge region of the target object in the second depth image.
Optionally, in this embodiment of the application, the target object is the object on which the user needs to perform a target operation (for example, a matting operation) in the captured images (the second depth image and the first color image). The first edge region of the target object is the region formed by the edge contour of the target object; this region may be a closed region or a semi-closed region (i.e., a closed region with a locally non-closed part).
Optionally, in this embodiment of the present application, the manner of determining the first edge region of the target object from the second depth image may specifically be: the electronic device determines the target object from the second depth image according to the user input, then sequentially compares the depth values of the pixel point at the center of the target object and the surrounding pixel points (for example, using a region growing method, or traversing in sequence); when the absolute value of the difference between two adjacent pixel points is not greater than a first threshold (a preset value that can be adjusted according to actual use requirements), the adjacent pixel point whose depth value differs little from the center pixel point of the target object is determined as a first edge pixel point, and the first edge pixel points around the target object are connected in series in sequence to form the first edge region.
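A minimal region-growing sketch under these assumptions (4-neighborhood growth, with the boundary pixels of the grown region taken as the first edge pixel points):

```python
import numpy as np
from collections import deque

def grow_region(depth: np.ndarray, seed: tuple, first_threshold: float):
    """Region growing from the center pixel of the target object: a
    4-neighborhood pixel joins the region when the absolute depth
    difference to its already-included neighbor is at most the first
    threshold. Returns the region mask and its boundary."""
    h, w = depth.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if (0 <= ny < h and 0 <= nx < w and not region[ny, nx]
                    and abs(float(depth[ny, nx]) - float(depth[y, x]))
                    <= first_threshold):
                region[ny, nx] = True
                queue.append((ny, nx))
    # Boundary: region pixels with at least one non-region neighbor.
    interior = region.copy()
    interior[1:-1, 1:-1] &= (region[:-2, 1:-1] & region[2:, 1:-1]
                             & region[1:-1, :-2] & region[1:-1, 2:])
    interior[0, :] = interior[-1, :] = False
    interior[:, 0] = interior[:, -1] = False
    return region, region & ~interior
```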
Exemplarily, assuming that a user needs to extract an edge of a tree in an image, the tree in the second depth image is a target object, and a region formed by an edge contour of the tree in the second depth image is a first edge region of the target object.
In the embodiment of the present application, the target object in the second depth image is also the target object in the first color image, and both of them are used to indicate the object that the user needs to perform the target operation in the captured image.
And step 102g, the electronic equipment determines a second edge area of the target object in the first color image according to the first edge area.
Optionally, in this embodiment of the present application, the second edge area is an edge area in the first color image, which corresponds to the first edge area in the second depth image. Specifically, the electronic device may perform an alignment operation on the first color image and the second depth image according to the target object, determine a pixel point corresponding to each pixel point in the first edge region in the first color image as a target pixel point, and then determine an edge region formed by the target pixel points in the first color image as a second edge region.
It should be noted that, in the embodiment of the present application, each pixel point in the second edge region corresponds to a pixel point at the same position in the first edge region.
And 102h, extracting the image of the second edge area by the electronic equipment to obtain a target image.
Optionally, in this embodiment of the present application, the electronic device may perform matting from the first color image according to the second edge region to obtain the target image, in which case the target image is a color image. Alternatively, the electronic device may extract the contour curve of the second edge region to obtain the target image, in which case the target image is an image formed by the outer contour curve of the target object (i.e., the contour curve of the second edge region).
Alternatively, in this embodiment of the application, in the case that the target image is an image formed by the outer contour curve of the target object (i.e., the contour curve of the second edge region), the electronic device may fill the contour curve with pseudo color for display, using one or more pseudo colors. Specifically, the electronic device may add a pseudo color corresponding to the depth data of the enclosed region enveloped by the contour curve; the correspondence between depth data (0 to 255) and pseudo colors may be set by the user or preset by the electronic device. For example, depth values within the (200, 210) interval are uniformly filled with blue.
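A minimal sketch of such a depth-to-pseudo-color fill; the blue entry follows the example in the text, while the other table entries are illustrative assumptions:

```python
import numpy as np

def fill_pseudo_color(depth: np.ndarray,
                      region_mask: np.ndarray) -> np.ndarray:
    """Fill the enclosed region with pseudo colors chosen from the
    depth values, per a settable depth-to-color table."""
    table = [((200, 210), (0, 0, 255)),  # blue, as in the text
             ((0, 100), (0, 255, 0)),    # illustrative assumption
             ((100, 200), (255, 0, 0))]  # illustrative assumption
    out = np.zeros(depth.shape + (3,), dtype=np.uint8)
    for (lo, hi), color in table:
        sel = region_mask & (depth > lo) & (depth < hi)
        out[sel] = color
    return out
```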
For example, in a case where the electronic device captures a first color image including a target object using a general camera and captures a second depth image including the target object using a TOF camera, the electronic device may determine a first edge region of the target object in the second depth image using a region growing method according to the target object selected by a user. Then, the electronic device performs an alignment operation on the first color image and the second depth image with the target object, and determines a second edge area of the target object from the first color image according to the first edge area. Then, the electronic device may extract the image of the second edge area, and fill the false color to obtain the target image.
It is understood that the electronic device determines the first edge region of the target object from the second depth image, determines the second edge region of the target object from the first color image according to the correspondence between the second depth image and the first color image, and then extracts the image of the second edge region to obtain the target image. In this way, the electronic device obtains a target image that reflects more of the edge contour features of the target object, particularly in local matting scenes, and especially in scenes where the edge contour of the target object is to be extracted from the image.
Optionally, with reference to fig. 5, as shown in fig. 6, before the step 102f, the image capturing method provided in the embodiment of the present application further includes the following steps 104 and 105, and accordingly, the step 102f may be specifically implemented by the following step 102f1.
And step 104, the electronic equipment displays the second depth image.
Wherein, the second depth image comprises a target object.
Optionally, in this embodiment of the application, the electronic device may display the depth values in the second depth image using different gray colors according to the range of the depth values. Specifically, the electronic device may divide (0, 255) sequentially into a plurality of subintervals, each of which is displayed with a corresponding gray color; the gray color corresponding to each subinterval may be the median of that subinterval's range. Illustratively, the user may set: within the (0, 100) subinterval, display using a first gray color (gray value 50); within the [100, 200) subinterval, display using a second gray color (gray value 150); within the [200, 255) subinterval, display using a third gray color (gray value 225). This may be determined according to actual use requirements, and the embodiment of the present application is not specifically limited.
It should be noted that, when the electronic device divides (0, 255) sequentially into a plurality of subintervals, the subintervals may be set manually according to the user's actual use requirements, or according to a preset interval of the electronic device. In general, the gray value of a naturally captured image rarely takes the two end values of (0, 255) (i.e., pure black at gray value 0 and pure white at gray value 255), so these two extreme values are not mapped; that is, when a pixel point with a gray value of 0 or 255 appears in the image, the electronic device directly displays pure black or pure white.
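A minimal sketch of this subinterval display, using the example mapping above (gray values 50, 150 and 225 for the three subintervals):

```python
import numpy as np

def depth_to_gray_display(depth: np.ndarray) -> np.ndarray:
    """Display a depth image with one gray color per depth subinterval,
    per the example mapping in the text. Depth values of exactly 0 or
    255 are shown as pure black or pure white, as described."""
    out = np.empty(depth.shape, dtype=np.uint8)
    out[(depth > 0) & (depth < 100)] = 50       # (0, 100) subinterval
    out[(depth >= 100) & (depth < 200)] = 150   # [100, 200) subinterval
    out[(depth >= 200) & (depth < 255)] = 225   # [200, 255) subinterval
    out[depth == 0] = 0       # pure black
    out[depth >= 255] = 255   # pure white
    return out
```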
Step 105, the electronic device receives a first input of the target object by the user.
Optionally, in this embodiment of the application, in a case that the second depth image is displayed in different grayscale colors, the user may select a grayscale color corresponding to the target object through the first input to select the target object.
Optionally, in this embodiment of the application, when the second depth image is displayed in different grayscale colors, the first input may be a touch input to a grayscale color corresponding to the target object, and the first input may also be a touch click input to the target object. The touch input may be any one of the following: single click, double click, long press, circle selection, move according to a preset track, etc. The determination may be specifically performed according to actual use requirements, and the embodiment of the present application is not specifically limited.
Step 102f1, the electronic device determines a first edge area in response to the first input.
The first edge region is an edge region of the target object in the second depth image.
Optionally, in this embodiment of the application, the manner of determining the first edge region of the target object from the second depth image may specifically be: after the electronic device determines the target object from the second depth image according to the user's first input, it sequentially compares the depth values of the pixel point at the center of the target object and the surrounding pixel points (for example, using a region growing method, or traversing in sequence); when the absolute value of the difference between two adjacent pixel points is not greater than the first threshold, the adjacent pixel point whose depth value differs little from the center pixel point of the target object is determined as a first edge pixel point, and the first edge pixel points around the target object are connected in series in sequence to form the first edge region.
Optionally, in this embodiment of the application, in addition to determining the first edge area by the user input, the electronic device may further preset the first edge area. Specifically, the electronic device may determine an area corresponding to a preset depth value interval as the first edge area, and for example, assuming that the image to be processed is a picture of a starry sky, the picture includes only a luminous star and a black night sky background, and the depth value of the night sky background is located in a (K1, K2) interval, the electronic device may determine an image area corresponding to a (K1, K2) depth value interval as the first edge area.
Illustratively, after the electronic device captures a first color image including the target object using an ordinary camera and captures a second depth image including the target object using the TOF camera, the electronic device may display the second depth image in different gray colors, wherein the target object is displayed as gray color A. The user can long-press the gray color A displayed for the target object; the electronic device, in response to this long-press input (namely, the first input), uses a region growing method to determine, among the pixel points around the target object, the pixel points whose depth values differ little from the center pixel point of the target object as first edge pixel points, connects the first edge pixel points around the target object in series in sequence, and determines the first edge region.
It will be appreciated that where the electronic device may display the second depth image, the user may select the target object from the second depth image via a first input, and the electronic device may determine a first edge region of the target object in response to the first input. Therefore, the first edge area of the target object can be obtained, and a user can conveniently perform other operations on the first edge area of the target object, such as edge sharpening, edge matting and the like.
It should be noted that, in the image capturing method provided in the embodiment of the present application, the execution subject may be an image capturing apparatus, or a control module in the image capturing apparatus for executing the image capturing method. In the embodiment of the present application, an image capturing method performed by an image capturing apparatus is taken as an example to describe the apparatus provided in the embodiment of the present application.
As shown in fig. 7, an embodiment of the present application provides an image capturing apparatus 700. The image photographing apparatus 700 may include an acquisition module 701 and a processing module 702. The acquiring module 701 may be configured to acquire a first color image acquired by the first camera and a second image acquired by the second camera, where the second image includes a second grayscale image or a second depth image. A processing module 702 configured to generate a target image based on the first color image and the second image.
Optionally, in this embodiment of the application, the second image includes a second grayscale image, and the luminance of the second grayscale image is greater than the luminance of the first color image. A processing module 702, which may be specifically configured to convert the first color image into a first grayscale image; carrying out image fusion on the first gray level image and the second gray level image to obtain a third gray level image; and converting the third grayscale image into a second color image. Wherein the target image is the second color image.
Optionally, in this embodiment of the application, the processing module 702 may be specifically configured to use the sum of the first numerical value and the second numerical value as the gray value of a pixel point in the third grayscale image. The first numerical value is obtained by weighting the gray value of the first pixel point in the first grayscale image, the second numerical value is obtained by weighting the gray value of the second pixel point in the second grayscale image, and the first pixel point and the second pixel point are pixel points whose positions correspond to each other.
Optionally, in this embodiment of the application, the second image includes a second depth image. The processing module 702 is specifically configured to obtain a depth value of a first edge pixel of the second depth image, where the first edge pixel is a pixel whose absolute value of a difference between the depth values of the first edge pixel and an adjacent pixel is greater than or equal to a target threshold, and the adjacent pixel is a neighbor pixel of an edge pixel. The processing module 702 is further specifically configured to add the depth value of the first edge pixel to edge pixels corresponding to each other in the first color image, so as to obtain a target image.
Optionally, in this embodiment of the application, the second image includes a second depth image. The processing module 702 may be specifically configured to determine a first edge region of the target object in the second depth image; determining a second edge area of the target object in the first color image according to the first edge area; and extracting the image of the second edge area to obtain a target image.
Optionally, the image capturing apparatus 700 provided in the embodiment of the present application further includes: a display module 703 and a receiving module 704. The display module 703 may be configured to display the second depth image before determining the first edge region of the target object in the second depth image, where the second depth image includes the target object. The receiving module 704 may be configured to receive a first input of the target object from a user. The processing module 702 may be further configured to determine the first edge region in response to the first input received by the receiving module 704.
Optionally, in this embodiment of the application, the processing module 702 is further configured to perform image preprocessing on the first color image and the second image, respectively, before generating the target image based on the first color image and the second image. The processing module 702 may be specifically configured to generate the target image based on the preprocessed first color image and the second image. The number of the pixel points of the first color image after being preprocessed is the same as that of the pixel points of the second image after being preprocessed.
The image capturing apparatus in the embodiment of the present application may be a functional entity and/or a functional module in an electronic device, which executes an image capturing method, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. Illustratively, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiment of the present application is not particularly limited.
The image capturing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The image capturing device provided in the embodiment of the present application can implement each process implemented by the image capturing device in the method embodiments of fig. 1 to 6, and is not described herein again to avoid repetition.
The embodiment of the application provides an image shooting device, which can acquire a first color image through a first camera and a second image through a second camera, wherein the second image comprises a second grayscale image or a second depth image, and generate a target image based on the first color image and the second image. In this way, the electronic device can directly generate a target image containing more image information, such as the image information of the grayscale image or of the depth image, from the first color image and the second image acquired by the two cameras. Compared with the related art, this avoids the user having to process the image again after shooting it, so an image with a better shooting effect is obtained and the user's time is saved.
Optionally, as shown in fig. 8, an electronic device 900 is further provided in an embodiment of the present application, and includes a processor 901, a memory 902, and a program or an instruction that is stored in the memory 902 and is executable on the processor 901, where the program or the instruction is executed by the processor 901 to implement each process of the foregoing image capturing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 2000 includes, but is not limited to: a radio frequency unit 2001, a network module 2002, an audio output unit 2003, an input unit 2004, a sensor 2005, a display unit 2006, a user input unit 2007, an interface unit 2008, a memory 2009, and a processor 2010.
Among them, the input unit 2004 may include a graphic processor 20041 and a microphone 20042, the display unit 2006 may include a display panel 20061, the user input unit 2007 may include a touch panel 20071 and other input devices 20072, and the memory 2009 may be used to store software programs (e.g., an operating system, application programs required for at least one function), and various data.
Those skilled in the art will appreciate that the electronic device 2000 may further include a power supply (e.g., a battery) for supplying power to various components, and the power supply may be logically connected to the processor 2010 through a power management system, so as to manage charging, discharging, and power consumption management functions through the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation to the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The input unit 2004 may be configured to acquire a first color image collected by the first camera and a second image collected by the second camera, where the second image includes a second grayscale image or a second depth image. The processor 2010 is configured to generate a target image based on the first color image and the second image.
The embodiment of the application provides an electronic device, which can acquire a first color image collected by a first camera and a second image collected by a second camera, where the second image includes a second grayscale image or a second depth image, and generate a target image based on the first color image and the second image. In this way, the electronic device can directly generate, from the first color image and the second image collected by the two cameras, a target image containing more image information, such as the image information of a grayscale image or of a depth image. Compared with the related art, this avoids the need for the user to post-process a captured image in order to obtain an image with a better shooting effect, thereby saving the user's time.
Optionally, in this embodiment of the application, the second image includes a second grayscale image, and the brightness of the second grayscale image is greater than that of the first color image. The processor 2010 is specifically configured to: convert the first color image into a first grayscale image; perform image fusion on the first grayscale image and the second grayscale image to obtain a third grayscale image; and convert the third grayscale image into a second color image, where the target image is the second color image.
It is to be appreciated that the electronic device can convert the first color image into a first grayscale image and then fuse it with the second grayscale image to obtain a third grayscale image containing more feature information, after which the electronic device can convert the third grayscale image into a second color image. In this way, the electronic device can capture, in a dark environment, a second color image with higher definition, clearer image details, and better brightness.
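A minimal sketch of this pipeline, using OpenCV and NumPy, follows. It is illustrative only: the text does not specify how the third grayscale image is converted back into a color image, so the step that reuses the chrominance of the first color image and replaces only its luminance is an assumption, as are the fusion weights and the function name.

```python
# A minimal sketch of the grayscale-fusion pipeline, using OpenCV and NumPy.
# Assumptions (not fixed by the text): both images are pixel-aligned uint8
# arrays, the fusion weights are 0.5/0.5, and the third grayscale image is
# converted back to color by reusing the chrominance of the first color image.
import cv2
import numpy as np

def fuse_to_color(first_color_bgr: np.ndarray, second_gray: np.ndarray,
                  w1: float = 0.5, w2: float = 0.5) -> np.ndarray:
    # Convert the first color image into a first grayscale image.
    first_gray = cv2.cvtColor(first_color_bgr, cv2.COLOR_BGR2GRAY)

    # Fuse the first and second grayscale images into a third grayscale image
    # (weighted sum with saturation; see the per-pixel form further below).
    third_gray = cv2.addWeighted(first_gray, w1, second_gray, w2, 0.0)

    # Convert the third grayscale image into a second color image: replace the
    # luma channel of the first color image and keep its chroma channels.
    ycrcb = cv2.cvtColor(first_color_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[..., 0] = third_gray
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```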
Optionally, in this embodiment, the processor 2010 may be specifically configured to use the sum of a first numerical value and a second numerical value as the gray value of a pixel point in the third grayscale image. The first numerical value is obtained by weighting the gray value of a first pixel point in the first grayscale image, the second numerical value is obtained by weighting the gray value of a second pixel point in the second grayscale image, and the first pixel point and the second pixel point are pixel points at mutually corresponding positions.
It can be understood that the electronic device may use the sum of the weighted gray value of the first pixel point in the first grayscale image and the weighted gray value of the second pixel point in the second grayscale image as the gray value of a pixel point in the third grayscale image. In this way, the electronic device obtains a weighted third grayscale image with higher definition, clearer image details, and better brightness, which improves the image quality of the resulting third grayscale image.
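In per-pixel terms, the fusion computes gray3(i, j) = w1·gray1(i, j) + w2·gray2(i, j) at mutually corresponding positions (i, j). A minimal NumPy sketch follows; the weight values are illustrative assumptions, since the embodiment does not fix them.

```python
# Per-pixel form of the fusion: gray3[i, j] = w1 * gray1[i, j] + w2 * gray2[i, j]
# at mutually corresponding positions (i, j). The weights are illustrative.
import numpy as np

def weighted_fusion(gray1: np.ndarray, gray2: np.ndarray,
                    w1: float = 0.4, w2: float = 0.6) -> np.ndarray:
    assert gray1.shape == gray2.shape, "pixel points must correspond 1:1"
    fused = w1 * gray1.astype(np.float32) + w2 * gray2.astype(np.float32)
    # Clip back to the 8-bit range so the result is a valid grayscale image.
    return np.clip(fused, 0, 255).astype(np.uint8)
```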
Optionally, in this embodiment of the application, the second image includes a second depth image. The processor 2010 is specifically configured to obtain a depth value of a first edge pixel point of the second depth image, where a first edge pixel point is a pixel point whose depth value differs from that of an adjacent pixel point by an absolute value greater than or equal to a target threshold, and an adjacent pixel point is a pixel point in the neighborhood of an edge pixel point. The processor 2010 is further configured to add the depth value of the first edge pixel point to the edge pixel point at the mutually corresponding position in the first color image, so as to obtain the target image.
It can be understood that the electronic device may obtain the depth value of a first edge pixel point of the second depth image and add it to the edge pixel point at the mutually corresponding position in the first color image, so as to obtain a color image (i.e., the target image) that contains more image detail features and has a clear edge contour. In this way, in scenes that need to present the edge features of the shot object, such as macro shooting, the electronic device can capture a target image with richer edge details and a clearer edge contour.
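The following sketch illustrates this depth-edge step with NumPy. It assumes the "neighborhood" is the 4-neighborhood and that the depth value is "added" to the color image by carrying it as an extra channel at edge positions; both choices, like the threshold value, are illustrative assumptions rather than details fixed by the text.

```python
# A sketch of the depth-edge step with NumPy. Assumptions: the "neighborhood"
# is the 4-neighborhood, the threshold value is illustrative, and the depth
# value is "added" to the color image by carrying it as a fourth channel that
# is non-zero only at first edge pixel points.
import numpy as np

def attach_edge_depth(color: np.ndarray, depth: np.ndarray,
                      target_threshold: float = 10.0) -> np.ndarray:
    d = depth.astype(np.float32)
    pad = np.pad(d, 1, mode="edge")
    # Absolute depth differences to the up/down/left/right neighbours.
    diffs = np.stack([np.abs(d - pad[:-2, 1:-1]),
                      np.abs(d - pad[2:, 1:-1]),
                      np.abs(d - pad[1:-1, :-2]),
                      np.abs(d - pad[1:-1, 2:])])
    # A first edge pixel point differs from some neighbour by >= the threshold.
    edge_mask = diffs.max(axis=0) >= target_threshold

    # Keep the depth value only at edge pixel points (corresponding positions).
    edge_depth = np.where(edge_mask, d, 0.0)
    return np.dstack([color.astype(np.float32), edge_depth])  # H x W x 4
```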
Optionally, in this embodiment of the application, the second image includes a second depth image. The processor 2010 may be specifically configured to: determine a first edge region of the target object in the second depth image; determine a second edge region of the target object in the first color image according to the first edge region; and extract the image of the second edge region to obtain the target image.
It is understood that the electronic device determines a first edge region of the target object in the second depth image and, according to the correspondence between the second depth image and the first color image, determines a second edge region of the target object in the first color image; the electronic device may then extract the image of the second edge region to obtain the target image. In this way, in local matting scenes, especially scenes in which the edge contour of the target object is to be extracted from an image, the electronic device can obtain a target image that reflects more edge contour features of the target object.
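A minimal sketch of this matting variant follows. The text does not state how the first edge region is determined, so the sketch assumes the target object is selected by a depth band [near, far] in the second depth image and that the two images are pixel-aligned after preprocessing; both are assumptions for illustration.

```python
# A sketch of the matting variant with NumPy. Assumptions: the target object
# is selected by a depth band [near, far] in the second depth image, and the
# two images are pixel-aligned after preprocessing, so a region found in the
# depth image maps 1:1 onto the first color image.
import numpy as np

def extract_target_region(first_color: np.ndarray, second_depth: np.ndarray,
                          near: float, far: float) -> np.ndarray:
    # First edge region: positions belonging to the target object in depth.
    first_region = (second_depth >= near) & (second_depth <= far)
    # Second edge region: the same positions in the first color image.
    target = np.zeros_like(first_color)
    target[first_region] = first_color[first_region]
    return target
```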
Optionally, in this embodiment of the application, the display unit 2006 may be configured to display the second depth image before the first edge region of the target object is determined in the second depth image, where the second depth image includes the target object. The user input unit 2007 may be configured to receive a first input of the target object from the user. The processor 2010 may also be configured to determine the first edge region in response to the first input received by the user input unit 2007.
It is to be understood that, when the electronic device displays the second depth image, the user may select the target object from it through a first input, and the electronic device may determine the first edge region of the target object in response to that input. In this way, the first edge region of the target object can be obtained, which makes it convenient for the user to perform further operations on it, such as edge sharpening or edge matting, thereby improving the user experience.
Optionally, in this embodiment of the application, the processor 2010 is further configured to perform image preprocessing on the first color image and the second image, respectively, before generating the target image based on the first color image and the second image. The processor 2010 may be specifically configured to generate the target image based on the preprocessed first color image and the preprocessed second image. The number of the pixel points of the first color image after being preprocessed is the same as that of the pixel points of the second image after being preprocessed.
It can be understood that, because the electronic device preprocesses the first color image and the second image, the two processed images have the same size and the same number of pixel points and contain no dead pixels. The electronic device can therefore generate the target image directly from the preprocessed first color image and the preprocessed second image, which simplifies the user's operation and yields the desired target image more quickly and conveniently.
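A minimal sketch of such preprocessing with OpenCV follows. The common target size and the use of a median filter for dead-pixel suppression are illustrative assumptions; the embodiment only requires that both preprocessed images end up with the same number of pixel points and no dead pixels.

```python
# A sketch of the preprocessing step with OpenCV. Assumptions: the target
# resolution and the median filter for dead-pixel suppression are illustrative.
import cv2
import numpy as np

def preprocess_pair(first_color: np.ndarray, second: np.ndarray,
                    size: tuple = (1280, 960)) -> tuple:
    # Resize both images to the same resolution (width, height) so that the
    # preprocessed images have the same number of pixel points.
    color = cv2.resize(first_color, size, interpolation=cv2.INTER_LINEAR)
    other = cv2.resize(second, size, interpolation=cv2.INTER_NEAREST)
    # Suppress isolated dead pixels in the second image with a median filter.
    other = cv2.medianBlur(other, 3)
    return color, other
```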
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned image capturing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the image capturing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may also include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present application or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, and an optical disk), and includes several instructions for enabling a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to execute the method described in the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (5)

1. An image capturing method, characterized in that the method comprises:
acquiring a first color image acquired by a first camera and a second image acquired by a second camera, wherein the second camera is a time-of-flight (TOF) camera;
generating a target image based on the first color image and the second image;
the second image comprises a second grayscale image, and the brightness of the second grayscale image is greater than that of the first color image;
the generating a target image based on the first color image and the second image comprises:
converting the first color image into a first grayscale image;
performing image fusion on the first grayscale image and the second grayscale image to obtain a third grayscale image;
converting the third grayscale image into a second color image;
wherein the target image is the second color image;
the second image comprises a second depth image;
the generating a target image based on the first color image and the second image comprises:
acquiring the depth value of a first edge pixel point of the second depth image, wherein the first edge pixel point is a pixel point of which the absolute value of the difference value between the depth value of the first edge pixel point and the depth value of an adjacent pixel point is greater than or equal to a target threshold value, and the adjacent pixel point is a neighborhood pixel point of an edge pixel point;
adding the depth value of the first edge pixel point to the edge pixel point with the mutually corresponding position in the first color image to obtain the target image;
the image fusion of the first gray level image and the second gray level image to obtain a third gray level image includes:
taking the sum of the first numerical value and the second numerical value as the gray value of one pixel point in the third gray image;
the first numerical value is a numerical value obtained by weighting the gray value of a first pixel point in the first gray image, the second numerical value is a numerical value obtained by weighting the gray value of a second pixel point in the second gray image, and the first pixel point and the second pixel point are pixel points with mutually corresponding positions.
2. The method of claim 1, wherein the second image comprises a second depth image;
generating a target image based on the first color image and the second image, comprising:
determining a first edge region of a target object in the second depth image;
determining a second edge region of the target object in the first color image according to the first edge region;
and extracting the image of the second edge area to obtain the target image.
3. The method of claim 2, wherein prior to determining the first edge region of the target object in the second depth image, the method further comprises:
displaying the second depth image, the second depth image including the target object;
receiving a first input of the target object by a user;
the determining a first edge region of a target object in the second depth image includes:
in response to the first input, the first edge region is determined.
4. The method of any of claims 1-3, wherein prior to generating a target image based on the first color image and the second image, the method further comprises:
respectively carrying out image preprocessing on the first color image and the second image;
generating a target image based on the first color image and the second image, comprising:
generating the target image based on the preprocessed first color image and the second image;
and the number of the pixel points of the first color image and the second image after the preprocessing is the same.
5. An image capturing apparatus, characterized in that the apparatus comprises: the device comprises an acquisition module and a processing module;
the acquisition module is used for acquiring a first color image acquired by a first camera and a second image acquired by a second camera, and the second camera is a TOF camera;
the processing module is used for generating a target image based on the first color image and the second image;
the second image comprises a second grayscale image, and the brightness of the second grayscale image is greater than that of the first color image;
the processing module is specifically configured to convert the first color image into a first grayscale image, perform image fusion on the first grayscale image and the second grayscale image to obtain a third grayscale image, and convert the third grayscale image into a second color image;
wherein the target image is the second color image; the second image comprises a second depth image;
the processing module is specifically configured to obtain a depth value of a first edge pixel point of the second depth image, where the first edge pixel point is a pixel point whose absolute value of a difference between the depth values of the first edge pixel point and an adjacent pixel point is greater than or equal to a target threshold, and the adjacent pixel point is a neighboring pixel point of an edge pixel point;
the processing module is further specifically configured to add the depth value of the first edge pixel to edge pixels corresponding to each other in the first color image, so as to obtain the target image;
the processing module is specifically configured to take the sum of the first numerical value and the second numerical value as the gray value of one pixel point in the third grayscale image;
wherein the first numerical value is a numerical value obtained by weighting the gray value of a first pixel point in the first grayscale image, the second numerical value is a numerical value obtained by weighting the gray value of a second pixel point in the second grayscale image, and the first pixel point and the second pixel point are pixel points at mutually corresponding positions.
CN202010899232.0A 2020-08-31 2020-08-31 Image shooting method and device and electronic equipment Active CN111866476B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010899232.0A CN111866476B (en) 2020-08-31 2020-08-31 Image shooting method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010899232.0A CN111866476B (en) 2020-08-31 2020-08-31 Image shooting method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111866476A CN111866476A (en) 2020-10-30
CN111866476B true CN111866476B (en) 2023-04-07

Family

ID=72967637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010899232.0A Active CN111866476B (en) 2020-08-31 2020-08-31 Image shooting method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111866476B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113599066B (en) * 2021-09-14 2023-05-09 广州蓝仕威克医疗科技有限公司 Device for temperature regulation in life

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107534735B (en) * 2016-03-09 2019-05-03 华为技术有限公司 Image processing method, device and the terminal of terminal
CN106447677A (en) * 2016-10-12 2017-02-22 广州视源电子科技股份有限公司 Image processing method and apparatus thereof
CN107622480B (en) * 2017-09-25 2020-11-24 长春理工大学 Kinect depth image enhancement method
CN108198161A (en) * 2017-12-29 2018-06-22 深圳开立生物医疗科技股份有限公司 A kind of fusion method, device and the equipment of dual camera image
CN108717691B (en) * 2018-06-06 2022-04-15 成都西纬科技有限公司 Image fusion method and device, electronic equipment and medium
CN110545375B (en) * 2019-08-08 2021-03-02 RealMe重庆移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN111866476A (en) 2020-10-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant