WO2015196802A1 - Photographing method and apparatus, and electronic device - Google Patents

Photographing method and apparatus, and electronic device

Info

Publication number
WO2015196802A1
WO2015196802A1 (application PCT/CN2015/071663)
Authority
WO
WIPO (PCT)
Prior art keywords
image
area
focus
camera
user
Prior art date
Application number
PCT/CN2015/071663
Other languages
French (fr)
Chinese (zh)
Inventor
蔡志刚 (Cai Zhigang)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from Chinese application CN201410291156.X (published as CN104104869A)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2015196802A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment ; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, TV cameras, video cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
    • H04N5/225Television cameras ; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, camcorders, webcams, camera modules specially adapted for being embedded in other devices, e.g. mobile phones, computers or vehicles
    • H04N5/232Devices for controlling television cameras, e.g. remote control ; Control of cameras comprising an electronic image sensor
    • H04N5/23212Focusing based on image signals provided by the electronic image sensor
    • H04N5/232133Bracketing relating to the capture of varying focusing conditions

Abstract

A photographing method is disclosed in the present invention, which relates to the field of data processing and solves the problem of cumbersome photographing operations. According to the technical solution provided by the present invention, multiple images are obtained in a single photographing pass and a depth map is calculated; based on the sharpness of the multiple images, with only one photographing operation by the user, the image with the highest sharpness in a focus area can be obtained, and an image with a blurring effect can also be generated. Embodiments of the present invention can be applied to scenarios in which an electronic terminal takes a photograph.

Description

Photographing method and apparatus, and electronic device

This application claims priority to Chinese Patent Application No. 201410291156.X, entitled "A Photographing Method, Apparatus, and Electronic Device", filed on June 25, 2014, the entire contents of which are incorporated herein by reference.

Technical field

The present invention relates to the field of data processing, and in particular, to a photographing method, device, and electronic device.

Background technique

With the development of electronic devices and image processing, many electronic devices now provide a camera function. Users can photograph the scenes they want to record even when they are not carrying a dedicated camera, bringing more fun to life. Moreover, the user can also perform some simple processing on a photo to achieve a better effect.

In the process of implementing the existing shooting method, the inventors have found at least the following problems in the prior art: when the electronic device takes a photograph, the user is required to select a focus area, and the electronic device then shoots based on the selected focus area to obtain an image. If the user wants to use another area as the focus area, the user must reselect a focus area, and the mobile terminal must shoot again based on the new focus area, which is cumbersome and inconvenient. It is also possible that the electronic device does not support user selection of the focus area, so that the user can only take pictures with the default focus area and cannot obtain the desired image effect.

Summary of the invention

Embodiments of the present invention provide a photographing method capable of simplifying a photographing operation.

In order to achieve the above object, embodiments of the present invention adopt the following technical solutions:

In a first aspect, an embodiment of the present invention provides a photographing method, which is applied to an electronic terminal with a single camera, and includes:

Collecting images at a plurality of focus points along the shooting direction of the single camera;

Performing calculation according to the collected images, to obtain a depth map corresponding to each image;

Determining the focus area selected by the user in the finder frame;

By comparing the local sharpness of the focus area in each image, selecting the image with the highest local sharpness as the image to be processed;

Blurring, according to a depth map of the image to be processed, the image other than the focus area in the image to be processed, to obtain a target image;

Outputting the target image for viewing by the user.

In conjunction with the first aspect, in a first possible implementation manner of the first aspect, the single camera is capable of moving its lens under the driving of an electric motor; the collecting of images at a plurality of focus points along the shooting direction of the single camera then includes:

Configuring different current values for the electric motor, to cause the lens of the single camera to focus at a plurality of focus points respectively;

Acquiring an image while the single camera stays at any of the focus points.

In conjunction with the first possible implementation of the first aspect, in a second possible implementation, the plurality of focus points include at least two focus points, and the object-distance difference between any two focus points is not less than a preset threshold.

In conjunction with the first aspect, in a third possible implementation manner of the first aspect, before the determining the focus area selected by the user in the finder frame, the method further includes:

Performing mesh area division on the collected images, and calculating the regional sharpness of each mesh area;

Wherein, the mesh area division manner used for each acquired image is consistent;

Then, before the comparing of the local sharpness of the focus area in each image and the selecting of the image with the highest local sharpness as the image to be processed, the method further includes:

Determining the position, shape, and area of the focus area selected by the user in the finder frame;

Determining a target focus area according to the position, shape, and area, and the width ratio and height ratio between the acquired image and the finder frame;

Determining one grid area or a plurality of grid areas in each image that can cover the target focus area;

Using the regional sharpness of the one determined grid area, or the sum of the regional sharpness of the determined plurality of grid areas, as the local sharpness of the focus area in the image.

In combination with the first aspect or any one of the first three possible implementation manners of the first aspect, in a fourth possible implementation manner, after the collecting of the images at the plurality of focus points along the shooting direction of the single camera, the method further includes:

Assign the same identification information to each captured image, and store the assigned identification information and images in a unified manner.

The photographing method provided by the embodiments of the present invention collects images at a plurality of focus points in the finder frame along the shooting direction of the camera, performs calculation, and obtains a depth map corresponding to each image. When the user selects a focus area in the finder frame, the sharpness of the focus area in each image is compared, the image with the highest sharpness is selected as the image to be processed, and the image outside the focus area in the image to be processed is blurred according to the depth map of the image to be processed, so as to obtain the picture the user wants. Compared with a method in which the user must repeatedly operate the electronic device to shoot again for each new focus area, the method of the present invention only needs to capture multiple images once, after which the user selects a focus area for subsequent blurring. In this process, the user only needs to complete one shooting action to be able to select any focus area in the image, which is relatively easy to operate.

In a second aspect, the present invention further provides a photographing apparatus, which is provided with a single camera and comprises:

a collecting unit, configured to collect images on a plurality of focus points along a shooting direction of the single camera;

a first calculating unit, configured to perform calculation according to the collected images, to obtain a depth map corresponding to each image;

a first determining unit, configured to determine a focus area selected by the user in the finder frame;

a selecting unit, configured to select the image with the highest local sharpness as the image to be processed by comparing the local sharpness of the focus area in each image;

a blurring processing unit, configured to perform a blurring process on an image other than the focus area in the image to be processed according to a depth map of the image to be processed, to obtain a target image;

An output unit, configured to output the target picture for viewing by a user.

In conjunction with the second aspect, in a first possible implementation manner, the apparatus further includes an electric motor, wherein the single camera is capable of moving the lens under the driving of the electric motor;

The collecting unit is further configured to configure different current values for the electric motor, so that the lens of the single camera focuses at a plurality of focus points respectively, and to acquire an image when the single camera stays at any focus point.

With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner, the plurality of focus points include at least two focus points, and the object-distance difference between any two focus points is not less than a preset threshold.

In combination with the second aspect, in a third possible implementation manner, the apparatus further includes:

a dividing unit, configured to perform mesh area division on the collected images; wherein the mesh area division manner used for each acquired image is consistent;

a second calculating unit, configured to calculate the regional sharpness of each of the mesh areas;

a second determining unit, configured to determine a position, a shape, and an area of the focus area selected by the user in the finder frame;

a third determining unit, configured to determine a target focus area according to the position, shape, and area, and the width ratio and height ratio between the captured image and the finder frame;

a fourth determining unit, configured to determine one mesh area or multiple mesh areas in each image that can cover the target focus area;

And a fifth determining unit, configured to use the regional sharpness of the one determined grid area, or the sum of the regional sharpness of the determined plurality of grid areas, as the local sharpness of the focus area in the image.

With reference to the second aspect, and any one of the first three possible implementation manners of the second aspect, in a fourth possible implementation manner, the apparatus further includes:

An identifier allocation unit, configured to allocate the same identification information for each captured image;

a storage unit configured to store the assigned identification information and images in a unified manner.

In a third aspect, an embodiment of the present invention further provides an electronic device, which is provided with a single camera, a processor, a memory, and an input/output interface. The memory stores a computer program, and the processor calls the computer program to control the single camera and the input/output interface;

The single camera is configured to collect images on a plurality of focus points along a shooting direction;

The processor is configured to perform calculation according to the collected images to obtain a depth map corresponding to each image; determine the focus area selected by the user in the finder frame; compare the local sharpness of the focus area in each image and select the image with the highest local sharpness as the image to be processed; and blur, according to the depth map of the image to be processed, the image other than the focus area in the image to be processed, to obtain a target image;

An input/output interface, configured to output the target image for viewing by a user;

The memory is configured to store an image of a plurality of focus points, a depth map corresponding to each image, and a target picture.

In combination with the third aspect, in a first possible implementation manner, an electric motor is further provided;

The processor is further configured to configure different current values for the electric motor, to cause the lens of the single camera to focus at a plurality of focus points respectively;

The single camera is further configured to acquire an image when it stays at any of the focus points.

With reference to the first possible implementation manner of the third aspect, in a second possible implementation manner, the plurality of focus points include at least two focus points, and the object-distance difference between any two focus points is not less than a preset threshold.

With reference to the third aspect, in a third possible implementation manner, the processor is further configured to: perform mesh area division on the collected images and calculate the regional sharpness of each mesh area, wherein the mesh area division manner used for each acquired image is consistent; determine the position, shape, and area of the focus area selected by the user in the finder frame; determine a target focus area according to the position, shape, and area, and the width ratio and height ratio between the captured image and the finder frame; determine one grid area or a plurality of grid areas in each image that can cover the target focus area; and use the regional sharpness of the one determined grid area, or the sum of the regional sharpness of the determined plurality of grid areas, as the local sharpness of the focus area in the image;

The memory is further configured to store the regional sharpness of each grid area.

In combination with the third aspect and any one of the first three possible implementation manners of the third aspect, in a fourth possible implementation manner, the processor is further configured to assign the same identification information to each collected image;

The memory is further configured to uniformly store the allocated identification information and images.

The photographing method provided by the embodiments of the present invention collects images at a plurality of focus points along the shooting direction of the camera, performs calculation, and obtains a depth map corresponding to each image. When the user selects a focus area in the finder frame, the sharpness of the focus area in each image is compared, the image with the highest sharpness is selected as the image to be processed, and the image outside the focus area in the image to be processed is blurred according to the depth map of the image to be processed, so as to obtain the picture the user wants. Compared with a method in which the user must repeatedly operate the electronic device to shoot again for each new focus area, the method of the present invention only needs to capture multiple images once, after which the user selects a focus area for subsequent blurring. In this process, the user only needs to complete one shooting action to be able to select any focus area in the image, which is relatively easy to operate. Moreover, the focus area can be determined according to the user's selection, which improves the user experience.

DRAWINGS

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from these drawings without any creative work.

FIG. 1 is a flowchart of a photographing method according to an embodiment of the present invention;

FIG. 2 is a flowchart of another photographing method according to an embodiment of the present invention;

FIG. 3 is a flowchart of another photographing method according to an embodiment of the present invention;

FIG. 4 is a schematic structural diagram of area division according to an embodiment of the present invention;

FIG. 5 is a flowchart of another photographing method according to an embodiment of the present invention;

FIG. 6 is a schematic diagram of comparison of a focus area and a mesh division according to an embodiment of the present invention;

FIG. 7 is a schematic diagram of another focus area and mesh division according to an embodiment of the present invention;

FIG. 8 is a schematic diagram of comparison of another focus area and mesh division according to an embodiment of the present invention;

FIG. 9 is a flowchart of a photographing method according to an embodiment of the present invention;

FIG. 10 is a structural block diagram of a camera device according to an embodiment of the present invention;

FIG. 11 is a structural block diagram of another imaging apparatus according to an embodiment of the present invention;

FIG. 12 is a structural block diagram of another imaging apparatus according to an embodiment of the present invention;

FIG. 13 is a structural block diagram of an electronic terminal according to an embodiment of the present invention.

Detailed description

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, rather than all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative efforts fall within the protection scope of the present invention.

The embodiment of the present invention provides a photographing method, which can be applied to a single-camera mobile electronic device with a camera function. The method is specifically implemented by a specific application program or service in the operating system of the electronic device, and its flow, as shown in FIG. 1, includes:

101. Acquire images at a plurality of focus points along the shooting direction of the single camera.

Here, the shooting direction is the lens orientation of the single camera. The operating principle of acquiring an image is to project the optical signal collected by the lens onto a light-collecting unit (generally a light-sensing array) in the electronic device; the light-collecting unit converts the optical signal into an electrical signal, which is then digitally quantized into raw image data. In order to eliminate image noise and unrealistic colors in the raw image data, ISP (Image Signal Processing) must be performed on it, finally yielding YUV data that meets certain quality requirements.

In addition, it should be noted that the plurality of focus points include at least two focus points, and the object-distance difference between any two focus points is not less than a preset threshold. The preset threshold is generally an empirical value, set according to the principle that a depth map can be generated from the acquired images. The depth of field of images acquired at different focus points differs.

102. Perform calculation according to the collected images to obtain a depth map corresponding to each image.

After step 101 is performed, a plurality of images corresponding to the focus points, that is, images with different depths of field, are obtained. The principle of calculating a depth map based on the obtained images is to select two images that can clearly distinguish the foreground and background to calculate the depth map of the near image. For example, suppose pic0 to pic7 are captured at eight focus points, where pic0 has the nearest focus point and pic7 the farthest; when calculating depth maps, pic0 and pic6 may be selected to calculate the depth map of pic0, pic1 and pic7 may be selected to calculate the depth map of pic1, and so on.
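The pairing rule in this example can be sketched as follows. This is a hypothetical illustration rather than code from the patent: the fixed offset of 6 merely reproduces the pic0/pic6 and pic1/pic7 pairings above, and a real implementation would tune it to its own focus bracket.

```python
def depth_pairs(num_images, offset=6):
    """Return (near, far) image-index pairs used to compute each depth map.

    Pairing image i with image i + offset gives two frames whose focus
    points are far enough apart to separate foreground from background.
    """
    pairs = []
    for i in range(num_images - offset):
        pairs.append((i, i + offset))
    return pairs

print(depth_pairs(8))  # [(0, 6), (1, 7)]
```

With eight bracketed frames this yields exactly the two pairings named in the text; each remaining frame would reuse the nearest computed depth map or a smaller offset.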

103. Determine a focus area selected by the user in the finder frame.

Here, the user can manually tap an area in the finder frame, and the system sets the tapped area as the focus area. When the user does not tap, the system defaults to the center area as the focus area.

104. By comparing the local sharpness of the focus area in each image, select the image with the highest local sharpness as the image to be processed.

105. Blur, according to the depth map of the image to be processed, the image other than the focus area in the image to be processed, to obtain a target image.

In this embodiment, a fast Gaussian blurring algorithm calculated pixel by pixel based on the depth map may be used for the blurring, that is, the degree of blurring of each point is controlled according to the value of the depth map, yielding a picture that is sharpest around the focus area and increasingly blurred toward the outside. Of course, the embodiment of the present invention is not limited to this blurring mode; any existing publicly disclosed method of implementing blur based on a depth map can be applied to the embodiment of the present invention.
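The depth-controlled blur of step 105 can be sketched as follows. This is a minimal numpy illustration under stated assumptions: a naive box blur stands in for the fast Gaussian blur named above, and the mapping from depth difference to blur radius is an illustrative choice, not the patent's formula.

```python
import numpy as np

def box_blur(img, radius):
    """Naive box blur with edge clamping; a stand-in for Gaussian blur."""
    if radius == 0:
        return img.astype(float)
    h, w = img.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

def blur_by_depth(img, depth, focus_depth, max_radius=3):
    """Blur each pixel more strongly the farther its depth value lies
    from the depth of the focus area (illustrative mapping)."""
    spread = np.abs(depth.astype(float) - focus_depth)
    # quantize the depth difference into blur levels 0..max_radius
    levels = np.minimum(
        np.round(spread / (spread.max() + 1e-9) * max_radius).astype(int),
        max_radius)
    blurred = [box_blur(img, r) for r in range(max_radius + 1)]
    return np.choose(levels, blurred)  # pick per-pixel blur level
```

Pixels whose depth matches the focus depth get level 0 (no blur), so the result is sharpest around the focus area and fades outward, as described above.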

106. Output the target image for viewing by a user.

The output target image is an image that has been blurred.

The photographing method provided by the embodiment of the present invention collects images at a plurality of focus points in the finder frame along the shooting direction of the camera, performs calculation, and obtains a depth map corresponding to each image. When the user selects a focus area in the finder frame, the sharpness of the focus area in each image is compared, the image with the highest sharpness is selected as the image to be processed, and the image outside the focus area in the image to be processed is blurred according to the depth map of the image to be processed, so as to obtain the picture the user wants. Compared with a prior-art method in which the user must repeatedly operate the electronic device to shoot again for each focus area, the technical solution only needs to capture multiple images once, after which the user selects the focus area for subsequent blurring. In this process, the user only needs to complete one shooting action to be able to select any focus area in the image, and the operation is relatively simple. Moreover, the focus area can be determined according to the user's selection, which improves the user experience.

In another implementation manner of the embodiment of the present invention, a specific method flow for implementing the foregoing step 101 is provided. This flow requires that the single camera of the electronic device can move its lens under the driving of an electric motor; the flow, as shown in FIG. 2, includes:

201. Configure different current values for the electric motor, to cause the lens of the single camera to focus at a plurality of focus points respectively.

Here, the motor that controls the movement of the camera moves according to the different current values. Each time the motor moves, the lens moves a small distance, on the order of millimeters or micrometers, and then stays in a new position; the focus point of the lens thereby changes, so that the lens can be made to focus at several focus points in turn.

202. Acquire an image when the single camera stays at any focus point.

Here, when the motor stays at a new focus point, the camera captures an image at the new position. For the method of collecting the image, refer to the detailed description of step 101 above; details are not described herein again.
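Steps 201 and 202 together form a simple bracketing loop, sketched below. The current values and the `set_motor_current`/`capture_frame` callables are hypothetical stand-ins for the device's actual motor driver and camera API.

```python
# Assumed drive currents (mA); real values depend on the voice-coil motor.
FOCUS_CURRENTS_MA = [20, 40, 60, 80, 100, 120, 140, 160]

def capture_focus_bracket(set_motor_current, capture_frame,
                          currents=FOCUS_CURRENTS_MA):
    """Park the lens at each focus point (step 201) and grab one frame
    per stop (step 202)."""
    frames = []
    for current in currents:
        set_motor_current(current)      # lens moves a tiny step (mm/um scale)
        frames.append(capture_frame())  # image acquired at this focus point
    return frames
```

One pass through the loop produces the whole focus-bracketed stack from a single user shutter press.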

In order to enable the comparison of local sharpness, the acquired images must also be pre-processed. Another implementation manner of the embodiment of the present invention provides the following method flow for implementing the pre-processing, which must be performed before step 103. As shown in FIG. 3, the method further includes:

301. Perform mesh area division on the collected images, and calculate the regional sharpness of each mesh area.

Here, the mesh area division manner used for each acquired image is the same. This embodiment provides an example as shown in FIG. 4, in which the actual width and height (in pixels) of the acquired image are divided into multiple interval regions in the horizontal and vertical directions (rectangles are taken as an example in FIG. 4), and the intersection positions between every two rows or two columns of interval regions also form interval regions (drawn as circles in FIG. 4 for ease of distinction; in actual use of the technical solution they should be set as rectangles like the other interval regions). The content shown in FIG. 4 is only an example; the shape and number of the interval regions are not specifically limited in the embodiment of the present invention.

Here, the regional sharpness can be calculated by the adjacent-pixel gray-scale variance method, or by other existing publicly disclosed image sharpness calculation methods.
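A minimal sketch of the grid-and-variance idea, assuming a simple uniform grid (the extra intersection cells of FIG. 4 are not reproduced) and using the variance of adjacent-pixel gray differences as the sharpness score:

```python
import numpy as np

def grid_sharpness(gray, rows, cols):
    """Split a grayscale image into rows x cols cells and score each cell
    by the variance of differences between adjacent pixels, a common
    proxy for local sharpness."""
    h, w = gray.shape
    scores = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            cell = gray[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols].astype(float)
            dx = np.diff(cell, axis=1)  # horizontal neighbor differences
            dy = np.diff(cell, axis=0)  # vertical neighbor differences
            scores[r, c] = dx.var() + dy.var()
    return scores
```

Cells with strong edges (in-focus regions) produce large neighbor differences and therefore high scores; flat or defocused cells score near zero.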

In this embodiment, by performing mesh division on the image, the sharpness of each part of the image is calculated separately, so that when the user selects a focus area, the electronic device can directly obtain the corresponding sharpness for that focus area; the operation is quick and easy.

After the foregoing step 301 is performed, another implementation manner of the embodiment of the present invention further provides a method for matching the focus area selected by the user with the mesh division areas. Before step 104 is performed, as shown in FIG. 5, the method includes:

401. Determine the position, shape, and area of the focus area selected by the user in the finder frame.

Here, the position of the focus area in the finder frame can be selected manually by the user, the shape is whatever the user circles, and the size of the area is determined by the circled range. Of course, the user can also directly use the system's default focus area. Electronic devices typically use the center coordinates and edge coordinates of the focus area to determine the focus area.

402. Determine a target focus area according to the position, shape, and area, and the width ratio and height ratio between the captured image and the finder frame.

In general, the finder frame has the same shape as the captured image, but because the resolution of the captured image differs from that of the finder frame, the numbers of pixels they contain differ, so there is a width ratio relationship and a height ratio relationship between them. For example, 8M YUV image data is 3264 pixels wide and 2448 pixels high, while the finder frame is 1440 pixels wide and 1080 pixels high; both the width ratio and the height ratio are then 102 to 45. That is, when the user selects in the finder frame the coordinates of the 90th pixel in width and the 180th pixel in height, the corresponding position in the image is the 204th pixel in width and the 408th pixel in height, i.e., the position selected by the user in the finder frame and the position in the image correspond to the same point; the correspondences of the other edge coordinates are handled in the same way. It should be noted that the foregoing is only one possible pixel setting in the embodiment of the present invention; of course, other pixel settings may be adopted, which is not limited by the embodiment of the present invention.
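The coordinate mapping in this example can be written directly from the two ratios; the sizes below are the 8M figures quoted above, and exact fractions avoid rounding drift.

```python
from fractions import Fraction

def map_finder_to_image(x, y, finder_size=(1440, 1080),
                        image_size=(3264, 2448)):
    """Map a coordinate selected in the finder frame to the captured image
    using the width and height ratios (both 102/45 in the 8M example)."""
    sx = Fraction(image_size[0], finder_size[0])  # width ratio
    sy = Fraction(image_size[1], finder_size[1])  # height ratio
    return int(x * sx), int(y * sy)

print(map_finder_to_image(90, 180))  # (204, 408)
```

The same scaling applied to every edge coordinate of the selected focus area yields the target focus area in image coordinates.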

403. Determine one grid area or a plurality of grid areas in each image that can cover the target focus area.

Here, comparison diagrams of the focus area and the mesh division are shown in FIG. 6 to FIG. 8. FIG. 6 shows the target focus area covered by only one mesh area, FIG. 7 shows the target focus area covered by a plurality of mesh areas, and FIG. 8 shows the target focus area lying inside a single grid area.

404. Use the regional sharpness of the one determined grid area, or the sum of the regional sharpness of the determined plurality of grid areas, as the local sharpness of the focus area in the image.

If only one grid area is determined to cover the focus area, the regional sharpness of that grid area is the local sharpness of the focus area; if a plurality of grid areas are determined to cover the focus area, the sum of the regional sharpness of the plurality of grid areas can be calculated by direct summation or weighted summation, and that sum is taken as the local sharpness of the focus area.

In the weighted calculation, each grid area has its own weight. The weights are set by taking the center of the focus area as a reference and calculating the distance between the center of each grid area and the center of the focus area; the closer the distance, the higher the weight.
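One way to realize such distance-based weighting is sketched below; the inverse-distance weight is an illustrative choice, since the text fixes only the rule "closer center, higher weight", not a formula.

```python
import math

def local_sharpness(cells, focus_center):
    """Weighted combination of grid-cell sharpness scores.

    cells: list of ((cx, cy), sharpness) for the grid cells covering the
    focus area; cells whose centers lie nearer the focus-area center
    receive higher weight.
    """
    if len(cells) == 1:
        return cells[0][1]  # single covering cell: use its score directly
    weights = []
    for (cx, cy), _ in cells:
        d = math.hypot(cx - focus_center[0], cy - focus_center[1])
        weights.append(1.0 / (1.0 + d))  # closer => larger weight
    total = sum(weights)
    return sum(w / total * s for w, (_, s) in zip(weights, cells))
```

Normalizing the weights keeps the combined score on the same scale as the per-cell scores, so images remain directly comparable in step 104.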

In addition, in the embodiment of the present invention, so that the electronic terminal can conveniently recall the images when the user performs subsequent operations, the captured images are stored in a unified manner. After step 101 is performed, as shown in FIG. 9, the method further includes:

501. Assign the same identification information to all the collected images, and store the assigned identification information and images in a unified manner.

In this embodiment, the same identifier can be set for a group of captured images. When the user selects one of the images, the set of images with the same identifier is recalled, which is convenient when the user uses the images again.
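A minimal sketch of step 501: all frames from one shot share one identifier, so selecting any frame lets the device recall the whole group. The in-memory dict is an illustrative stand-in for the device's actual storage.

```python
import uuid

class BurstStore:
    """Stores a group of images under one shared identifier."""

    def __init__(self):
        self._groups = {}

    def store_burst(self, images):
        burst_id = uuid.uuid4().hex  # same identification info for the group
        self._groups[burst_id] = list(images)
        return burst_id

    def recall(self, burst_id):
        """Recall every image captured in the same shot."""
        return self._groups[burst_id]
```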

The present invention further provides a photographing device, which is provided with a single camera and can be used to implement the method flows shown in FIG. 1 to FIG. 9. The device includes:

The collecting unit 11 is configured to collect images on a plurality of focus points along the shooting direction of the single camera.

The first calculating unit 12 is configured to perform calculation according to the collected images, to obtain a depth map corresponding to each image.

The first determining unit 13 is configured to determine a focus area selected by the user in the finder frame.

The selecting unit 14 is configured to select one image with the highest local definition as the image to be processed by comparing the local sharpness of the focus area in each image.

The blurring processing unit 15 is configured to perform a blurring process on the image other than the focus area in the image to be processed according to the depth map of the image to be processed, to obtain a target image.

The output unit 16 is configured to output the target picture for viewing by a user.

Optionally, the device further includes an electric motor, and the single camera is capable of moving its lens under the drive of the electric motor.

The collecting unit 11 is further configured to configure different current values for the electric motor, so that the lens of the single camera images at a plurality of focus points respectively, and to capture an image when the single camera stays at any focus point.
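The focus sweep performed by the collecting unit can be sketched as below; `set_motor_current` and `capture` stand in for device-specific motor and sensor driver calls, which the patent does not name.

```python
def focus_sweep(currents, set_motor_current, capture):
    """Drive the motor with a different current value for each focus
    point and capture one frame after the lens settles at each."""
    frames = []
    for current in currents:
        set_motor_current(current)   # lens moves to this focus point
        frames.append(capture())     # camera stays here while capturing
    return frames

# Simulated driver: each "frame" just records the current that produced it.
log = []
frames = focus_sweep([10, 20, 40], log.append, lambda: f"frame@{log[-1]}mA")
print(frames)
```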

Optionally, the plurality of focus points include at least two focus points, and the object distance difference between the two focus points is not less than a predetermined threshold.

Optionally, as shown in FIG. 11, the device further includes:

The dividing unit 21 is configured to perform mesh area division on the collected images, where the mesh areas of all captured images are divided in a consistent manner.

The second calculating unit 22 is configured to calculate the regional definition of each mesh area.

The second determining unit 23 is configured to determine a position, a shape, and an area of the focus area selected by the user in the finder frame.

The third determining unit 24 is configured to determine the target focus area according to the position, the shape, and the area, and according to the width ratio and the height ratio between the captured image and the finder frame.

The fourth determining unit 25 is configured to determine one mesh area or a plurality of mesh areas in each image that can cover the target focus area.

The fifth determining unit 26 is configured to use the area definition of the one determined grid area, or the sum of the area definitions of the determined plurality of grid areas, as the local definition of the focus area in the image.

Optionally, as shown in FIG. 12, the device further includes:

The identifier assigning unit 31 is configured to allocate the same identification information for each captured image.

The storage unit 32 is configured to store the allocated identification information and the image in a unified manner.

According to this embodiment of the present invention, the photographing device collects images at a plurality of focus points along the shooting direction of the camera and performs calculation on them to obtain a depth map corresponding to each image. When the user selects a focus area in the finder frame, the sharpness of the focus area in each image is compared, the image with the highest sharpness is selected as the image to be processed, and, according to the depth map of the image to be processed, the image outside the focus area is blurred to obtain the picture the user wants. Compared with a method in which the user must repeatedly operate the electronic device to photograph once for each of multiple focus areas, the solution of the present invention needs only one capture of multiple images, after which the user selects a focus area for subsequent blurring. In this process the user completes a single shooting action and can then select any focus area in the image, so the operation is relatively simple. Moreover, because the focus area is determined according to the user's selection, the user experience is improved.
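The flow summarised above can be condensed into one selection-then-blur step. In this sketch, `sharpness` and `blur` are hypothetical stand-ins for the local-definition metric and the depth-map-driven blurring filter, neither of which the patent pins down.

```python
def render_target_image(images, depth_maps, focus_rect, sharpness, blur):
    """Pick the capture whose focus area is sharpest, then blur
    everything outside that area using that capture's depth map."""
    best = max(range(len(images)),
               key=lambda i: sharpness(images[i], focus_rect))
    return blur(images[best], depth_maps[best], focus_rect)

# Toy run: the second capture has the highest focus-area sharpness score.
images = [1, 5, 3]                    # stand-ins for captured frames
depth_maps = ["d0", "d1", "d2"]       # one depth map per frame
result = render_target_image(images, depth_maps, None,
                             sharpness=lambda img, rect: img,
                             blur=lambda img, dmap, rect: (img, dmap))
print(result)
```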

An embodiment of the present invention further provides an electronic device. As shown in FIG. 13, the device is provided with a single camera 41, a processor 42, a memory 43, and an input/output interface 44. The memory 43 stores a computer program, and the processor 42 invokes the computer program to control the single camera 41 and the input/output interface 44. The modules communicate over a bus. The above modules can be used to implement the method flows shown in FIGS. 1 to 9.

The single camera 41 is used to collect images on a plurality of focus points along the shooting direction.

The processor 42 is configured to perform calculation according to the collected images to obtain a depth map corresponding to each image; determine the focus area selected by the user in the finder frame; select, by comparing the local definition of the focus area in each image, the image with the highest local definition as the image to be processed; and blur, according to the depth map of the image to be processed, the image other than the focus area in the image to be processed to obtain a target image.

The input and output interface 44 is configured to output the target picture for the user to view.

The memory 43 is configured to store the images collected at the plurality of focus points, the depth map corresponding to each image, and the target picture.

Optionally, the electronic device is further provided with an electric motor.

The processor 42 is further configured to configure different current values for the electric motor, so that the lens of the single camera images at a plurality of focus points respectively.

The single camera 41 is further configured to capture an image when it stays at any of the focus points.

Optionally, the plurality of focus points include at least two focus points, and the object distance difference between the two focus points is not less than a predetermined threshold.

Optionally, the processor 42 is further configured to: perform mesh area division on the collected images and calculate the area definition of each mesh area, where the mesh areas of all captured images are divided in a consistent manner; determine the position, shape, and area of the focus area selected by the user in the finder frame; determine a target focus area according to the position, the shape, and the area, and according to the width ratio and the height ratio between the captured image and the finder frame; determine one mesh area or a plurality of mesh areas in each image that can cover the target focus area; and use the area definition of the one determined mesh area, or the sum of the area definitions of the determined plurality of mesh areas, as the local definition of the focus area in the image.

The memory 43 is further configured to store the area definition of each mesh area.

Optionally, the processor 42 is further configured to allocate the same identification information for each captured image.

The memory 43 is further configured to uniformly store the allocated identification information and images.

The electronic terminal for photographing according to this embodiment of the present invention collects images at a plurality of focus points along the shooting direction of the camera and performs calculation on them to obtain a depth map corresponding to each image. When the user selects a focus area in the finder frame, the sharpness of the focus area in each image is compared, the image with the highest sharpness is selected as the image to be processed, and, according to the depth map of the image to be processed, the image outside the focus area is blurred to obtain the picture the user wants. Compared with the prior art, in which the user must repeatedly operate the electronic device to photograph once for each of multiple focus areas, the technical solution of the present invention needs only one capture of multiple images, after which the user selects the focus area for subsequent blurring. In this process the user completes a single shooting action and can then select any focus area in the image, so the operation is relatively simple. Moreover, because the focus area is determined according to the user's selection, the user experience is improved.

Through the description of the foregoing embodiments, those skilled in the art can clearly understand that the present invention may be implemented by software plus necessary general-purpose hardware, and certainly may also be implemented by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present invention, or the part that contributes to the prior art, may be embodied in the form of a software product stored in a readable storage medium, such as a floppy disk, a hard disk, or an optical disc of a computer. The software product includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.

The foregoing is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (15)

  1. A photographing method, characterized in that the method is applied to an electronic terminal provided with a single camera and comprises:
    Collecting images on a plurality of focus points along the shooting direction of the single camera;
    Calculating according to the collected images, and obtaining a depth map corresponding to each image;
    Determining the focus area selected by the user in the finder frame;
    By comparing the local sharpness of the focus area in each image, one image with the highest local definition is selected as the image to be processed;
    Blurring, according to a depth map of the image to be processed, an image other than the focus area in the image to be processed, to obtain a target image;
    The target picture is output for viewing by the user.
  2. The method according to claim 1, wherein said single camera is capable of moving a lens under driving of an electric motor;
    Then, the collecting images on a plurality of focus points along the shooting direction of the single camera comprises:
    Configuring different current values for the electric motor to cause the lenses of the single camera to image on a plurality of focus points respectively;
    An image is captured when the single camera stays at any of the focus points.
  3. The method according to claim 2, wherein the plurality of focus points comprise at least two focus points, and the object distance difference between the two focus points is not less than a predetermined threshold.
  4. The method according to claim 1, wherein before the determining the focus area selected by the user in the finder frame, the method further comprises:
    Perform mesh area division on the collected images, and calculate the regional definition of each mesh area;
    Wherein, the grid area used for each acquired image is divided in a consistent manner;
    Then, before comparing the local sharpness of the focus area in each image, and selecting an image with the highest local sharpness as the image to be processed, the method further includes:
    Determining the position, shape and area of the focus area selected by the user in the finder frame;
    Determining a target focus area according to the position, the shape, and the area, and according to the width ratio and the height ratio between the captured image and the finder frame;
    Determining one grid area or a plurality of grid areas in each image that can cover the target focus area;
    Using the area definition of the one determined grid area, or the sum of the area definitions of the determined plurality of grid areas, as the local definition of the focus area in the image.
  5. The method according to any one of claims 1 to 4, wherein after the collecting images on the plurality of focus points along the shooting direction of the single camera, the method further comprises:
    Assign the same identification information to each captured image, and store the assigned identification information and images in a unified manner.
  6. A photographing device, characterized in that a single camera is provided, comprising:
    a collecting unit, configured to collect images on a plurality of focus points along a shooting direction of the single camera;
    a first calculating unit, configured to perform calculation according to the collected images, to obtain a depth map corresponding to each image;
    a first determining unit, configured to determine a focus area selected by the user in the finder frame;
    a selecting unit for selecting an image with the highest local definition as the image to be processed by comparing the local sharpness of the focus area in each image;
    a blurring processing unit, configured to perform a blurring process on an image other than the focus area in the image to be processed according to a depth map of the image to be processed, to obtain a target image;
    An output unit, configured to output the target picture for viewing by a user.
  7. The device according to claim 6, wherein said device further comprises an electric motor, wherein said single camera is capable of moving the lens under the drive of the electric motor;
    The collecting unit is further configured to configure different current values for the electric motor, so that the lenses of the single camera are respectively imaged on a plurality of focus points; when the single camera stays at any focus point, the collection is performed. image.
  8. The apparatus according to claim 7, wherein the plurality of focus points comprise at least two focus points, and the object distance difference between the two focus points is not less than a predetermined threshold.
  9. The device according to claim 6, further comprising:
    a dividing unit, configured to perform mesh area division on the collected image; wherein the mesh area used by each acquired image is divided in a consistent manner;
    a second calculating unit, configured to calculate a regional definition of each of the mesh regions;
    a second determining unit, configured to determine a position, a shape, and an area of the focus area selected by the user in the finder frame;
    a third determining unit, configured to determine a target focus area according to the position, the shape, and the area, and according to the width ratio and the height ratio between the captured image and the finder frame;
    a fourth determining unit, configured to determine one mesh area or multiple mesh areas in each image that can cover the target focus area;
    And a fifth determining unit, configured to use the area definition of the one determined grid area, or the sum of the area definitions of the determined plurality of grid areas, as the local definition of the focus area in the image.
  10. The device according to any one of claims 6 to 9, further comprising:
    An identifier allocation unit, configured to allocate the same identification information for each captured image;
    a storage unit configured to store the assigned identification information and images in a unified manner.
  11. An electronic device, characterized by being provided with a single camera, a processor, a memory, and an input/output interface, wherein the memory stores a computer program, and the processor invokes the computer program to control the single camera and the input/output interface;
    The single camera is configured to collect images on a plurality of focus points along a shooting direction;
    The processor is configured to perform calculation according to the collected images to obtain a depth map corresponding to each image; determine the focus area selected by the user in the finder frame; select, by comparing the local definition of the focus area in each image, the image with the highest local definition as the image to be processed; and blur, according to the depth map of the image to be processed, the image other than the focus area in the image to be processed to obtain a target image;
    An input/output interface, configured to output the target image for viewing by a user;
    The memory is configured to store the images collected at the plurality of focus points, the depth map corresponding to each image, and the target picture.
  12. The electronic terminal according to claim 11, further comprising an electric motor;
    The processor is further configured to configure different current values for the electric motor to cause the lenses of the single camera to image on a plurality of focus points respectively;
    The single camera is further configured to capture an image when it stays at any of the focus points.
  13. The electronic terminal according to claim 12, wherein the plurality of focus points comprise at least two focus points, and the object distance difference between the two focus points is not less than a predetermined threshold.
  14. The electronic terminal according to claim 11, wherein the processor is further configured to: perform mesh area division on the collected images and calculate the area definition of each mesh area, wherein the mesh areas of all captured images are divided in a consistent manner; determine the position, shape, and area of the focus area selected by the user in the finder frame; determine a target focus area according to the position, the shape, and the area, and according to the width ratio and the height ratio between the captured image and the finder frame; determine one mesh area or a plurality of mesh areas in each image that can cover the target focus area; and use the area definition of the one determined mesh area, or the sum of the area definitions of the determined plurality of mesh areas, as the local definition of the focus area in the image;
    The memory is further configured to store the area definition of each mesh area.
  15. The electronic terminal according to any one of claims 11 to 14, wherein the processor is further configured to allocate the same identification information for each of the collected images;
    The memory is further configured to uniformly store the allocated identification information and images.
PCT/CN2015/071663 2014-06-25 2015-01-27 Photographing method and apparatus, and electronic device WO2015196802A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201410291156.X 2014-06-25
CN201410291156.XA CN104104869A (en) 2014-06-25 2014-06-25 Photographing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
WO2015196802A1 true WO2015196802A1 (en) 2015-12-30

Family

ID=51672638

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/071663 WO2015196802A1 (en) 2014-06-25 2015-01-27 Photographing method and apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN104104869A (en)
WO (1) WO2015196802A1 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104104869A (en) * 2014-06-25 2014-10-15 华为技术有限公司 Photographing method and device and electronic equipment
CN104408687B (en) * 2014-10-31 2018-07-27 酷派软件技术(深圳)有限公司 A kind of method and device of picture playing
CN104680563B (en) 2015-02-15 2018-06-01 青岛海信移动通信技术股份有限公司 The generation method and device of a kind of image data
CN104680478B (en) 2015-02-15 2018-08-21 青岛海信移动通信技术股份有限公司 A kind of choosing method and device of destination image data
CN105160695B (en) * 2015-06-30 2019-02-01 Oppo广东移动通信有限公司 A kind of photo processing method and mobile terminal
CN106550184B (en) * 2015-09-18 2020-04-03 中兴通讯股份有限公司 Photo processing method and device
CN105654463A (en) * 2015-11-06 2016-06-08 乐视移动智能信息技术(北京)有限公司 Image processing method applied to continuous shooting process and apparatus thereof
CN105827980B (en) * 2016-05-04 2018-01-19 广东欧珀移动通信有限公司 Focusing control method and device, image formation control method and device, electronic installation
CN106339476B (en) * 2016-08-30 2019-10-29 北京寺库商贸有限公司 A kind of image processing method and system
CN108496352A (en) * 2017-05-24 2018-09-04 深圳市大疆创新科技有限公司 Image pickup method and device, image processing method and device
CN107277354B (en) * 2017-07-03 2020-04-28 瑞安市智造科技有限公司 Virtual photographing method, virtual photographing terminal and computer readable storage medium
CN109474780A (en) * 2017-09-07 2019-03-15 虹软科技股份有限公司 A kind of method and apparatus for image procossing
CN107566723B (en) * 2017-09-13 2019-11-19 维沃移动通信有限公司 A kind of image pickup method, mobile terminal and computer readable storage medium
CN107809583A (en) * 2017-10-25 2018-03-16 努比亚技术有限公司 Take pictures processing method, mobile terminal and computer-readable recording medium
CN108234865A (en) * 2017-12-20 2018-06-29 深圳市商汤科技有限公司 Image processing method, device, computer readable storage medium and electronic equipment
CN110602397A (en) * 2019-09-16 2019-12-20 RealMe重庆移动通信有限公司 Image processing method, device, terminal and storage medium
CN110691191A (en) * 2019-09-16 2020-01-14 RealMe重庆移动通信有限公司 Image blurring method and device, computer storage medium and terminal equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8284258B1 (en) * 2008-09-18 2012-10-09 Grandeye, Ltd. Unusual event detection in wide-angle video (based on moving object trajectories)
CN103702032A (en) * 2013-12-31 2014-04-02 华为技术有限公司 Image processing method, device and terminal equipment
CN103826064A (en) * 2014-03-06 2014-05-28 华为技术有限公司 Image processing method, device and handheld electronic equipment
CN104104869A (en) * 2014-06-25 2014-10-15 华为技术有限公司 Photographing method and device and electronic equipment

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2075802B (en) * 1980-05-12 1984-05-31 Control Data Corp Network access device
JPH11125522A (en) * 1997-10-21 1999-05-11 Sony Corp Image processor and method
CN1327681C (en) * 2005-08-08 2007-07-18 华为技术有限公司 Method for realizing initial Internet protocol multimedia subsystem registration
US8553093B2 (en) * 2008-09-30 2013-10-08 Sony Corporation Method and apparatus for super-resolution imaging using digital imaging devices
TWI394085B (en) * 2008-10-28 2013-04-21 Asustek Comp Inc Method of identifying the dimension of a shot subject
CN101500133A (en) * 2009-02-27 2009-08-05 中兴通讯股份有限公司 Camera control method and system for visual user equipment
CN102025695A (en) * 2009-09-11 2011-04-20 中兴通讯股份有限公司 Method, equipment and system for recognizing PUI (average power utilization index) type
JP4779041B2 (en) * 2009-11-26 2011-09-21 株式会社日立製作所 Image photographing system, image photographing method, and image photographing program
CN103207664B (en) * 2012-01-16 2016-04-27 联想(北京)有限公司 A kind of image processing method and equipment
CN103634588A (en) * 2012-08-27 2014-03-12 联想(北京)有限公司 Image composition method and electronic apparatus
CN102891966B (en) * 2012-10-29 2015-07-01 珠海全志科技股份有限公司 Focusing method and device for digital imaging device
CN103491309B (en) * 2013-10-10 2017-12-22 魅族科技(中国)有限公司 The acquisition method and terminal of view data


Also Published As

Publication number Publication date
CN104104869A (en) 2014-10-15

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 15810946; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 15810946; Country of ref document: EP; Kind code of ref document: A1)