CN107590828B - Blurring processing method and device for shot image - Google Patents

Blurring processing method and device for shot image

Info

Publication number
CN107590828B
CN107590828B (application CN201710677493.6A)
Authority
CN
China
Prior art keywords
information
blurring
area
irrelevant
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710677493.6A
Other languages
Chinese (zh)
Other versions
CN107590828A (en)
Inventor
周意保
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710677493.6A priority Critical patent/CN107590828B/en
Publication of CN107590828A publication Critical patent/CN107590828A/en
Application granted granted Critical
Publication of CN107590828B publication Critical patent/CN107590828B/en

Abstract

The invention discloses a blurring processing method and a blurring processing device for a shot image, wherein the blurring processing method comprises the following steps: projecting a structured light source to a foreground area of the preview image, and shooting a structured light image of the structured light source modulated by the foreground area; demodulating a phase corresponding to a deformed-position pixel in the structured light image, and generating first depth-of-field information of the foreground area according to the phase; generating 3D information of irrelevant persons in the foreground area according to the first depth-of-field information, and determining a target area corresponding to the irrelevant persons according to that 3D information; and deleting the irrelevant persons, and blurring the background area and the target area corresponding to the irrelevant persons. In this way, the person images to be kept in the shot can be actively selected, irrelevant persons are intelligently removed, and the vacated areas are blurred so that they join the rest of the image seamlessly, giving the picture a good visual effect.

Description

Blurring processing method and device for shot image
Technical Field
The invention relates to the technical field of terminal equipment photographing, in particular to a blurring processing method and device for a photographed image.
Background
With the continuous development of terminal device technology, the shooting functions of terminal devices have become more and more abundant, and users have grown increasingly accustomed to shooting with them. However, the scene captured by a terminal device is often rich and includes images of various people; for example, when a user takes a commemorative photo at a scenic spot, other visitors are often captured in the shot as well. A shooting mode that can effectively mask irrelevant persons is therefore urgently needed.
Disclosure of Invention
The invention provides a blurring processing method and device for a shot image, and aims to solve the technical problem that irrelevant people cannot be removed from a shot picture in the prior art.
The embodiment of the invention provides a blurring processing method for a shot image, which comprises the following steps: projecting a structural light source to a foreground area of a preview image, and shooting a structural light image of the structural light source modulated by the foreground area; demodulating a phase corresponding to a pixel at a deformation position in the structured light image, generating first depth-of-field information of the foreground area according to the phase, and generating 3D information of the foreground area according to the first depth-of-field information; matching the 3D information of the foreground area with a pre-stored database, determining the 3D information of irrelevant figures according to the matching result, and determining a target area corresponding to the irrelevant figures according to the 3D information of the irrelevant figures; and deleting the irrelevant people, and blurring the background area and the target area corresponding to the irrelevant people.
Another embodiment of the present invention provides a blurring processing apparatus for a captured image, including: the shooting module is used for projecting a structural light source to a foreground area of a preview image and shooting a structural light image of the structural light source modulated by the foreground area; the generating module is used for demodulating a phase corresponding to a deformed position pixel in the structured light image, generating first depth-of-field information of the foreground area according to the phase, and generating 3D information of the foreground area according to the first depth-of-field information; the matching module is used for matching the 3D information of the foreground area with a pre-stored database and determining the 3D information of irrelevant people according to the matching result; the determining module is used for determining a target area corresponding to an irrelevant person according to the irrelevant person 3D information; and the processing module is used for deleting the irrelevant people and blurring the background area and the target area corresponding to the irrelevant people.
Yet another embodiment of the present invention provides a terminal device, including a memory and a processor, where the memory stores computer readable instructions, and the instructions, when executed by the processor, cause the processor to execute the blurring processing method for captured images according to the above embodiments of the present invention.
Yet another embodiment of the present invention provides a non-transitory computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the blurring processing method for captured images according to the above-described embodiment of the present invention.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
projecting a structured light source to a foreground area of a preview image, shooting a structured light image of the structured light source modulated by the foreground area, demodulating a phase corresponding to a deformed-position pixel in the structured light image, generating first depth-of-field information of the foreground area according to the phase, generating 3D information of the foreground area according to the first depth-of-field information, matching the 3D information of the foreground area with a pre-stored database, determining 3D information of irrelevant persons according to the matching result, determining a target area corresponding to the irrelevant persons according to that 3D information, deleting the irrelevant persons, and blurring the background area and the target area corresponding to the irrelevant persons. In this way, the person images to be kept in the shot can be actively selected, irrelevant persons are intelligently removed, and the vacated areas are blurred so that they join the rest of the image seamlessly, giving the picture a good visual effect.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart of a blurring processing method of a captured image according to an embodiment of the present invention;
FIG. 2(a) is a first view of a scene of structured light measurements according to one embodiment of the present invention;
FIG. 2(b) is a diagram of a second scenario of structured light measurements, in accordance with one embodiment of the present invention;
FIG. 2(c) is a schematic view of a scene three of structured light measurements according to one embodiment of the present invention;
FIG. 2(d) is a diagram of a fourth scenario of structured light measurements, in accordance with one embodiment of the present invention;
FIG. 2(e) is a fifth view of a scene for structured light measurement according to one embodiment of the present invention;
FIG. 3(a) is a schematic diagram of a partial diffractive structure of a collimating beam splitting element according to one embodiment of the present invention;
FIG. 3(b) is a schematic diagram of a partial diffractive structure of a collimating beam splitting element according to another embodiment of the present invention;
FIG. 4 is a diagram of a captured image processing scene according to one embodiment of the present invention;
fig. 5 is a block diagram of a blurring processing apparatus of a captured image according to an embodiment of the present invention;
fig. 6 is a block diagram of a blurring processing apparatus of a captured image according to another embodiment of the present invention;
fig. 7 is a block diagram of a blurring processing apparatus of a captured image according to still another embodiment of the present invention;
and
fig. 8 is a schematic structural diagram of an image processing circuit in a terminal device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
It should be understood that in many application scenarios, the picture a user wants to take contains persons other than the intended subjects. Their presence can spoil the appearance of the picture and, when the picture is used, may even cause unnecessary disputes, such as infringing another person's portrait rights.
To solve this technical problem, in the embodiment of the invention the persons appearing in the shot picture can be actively selected: while ensuring that the person information of the intended subject is retained, the information of other persons is masked, better meeting the user's shooting requirements.
A blurring processing method and apparatus of a captured image according to an embodiment of the present invention will be described below with reference to the drawings.
Fig. 1 is a flowchart of a blurring processing method of a captured image according to an embodiment of the present invention.
As shown in fig. 1, the blurring processing method for a captured image includes:
Step 101, projecting a structured light source to a foreground area of a preview image, and shooting a structured light image of the structured light source modulated by the foreground area.

Step 102, demodulating a phase corresponding to a deformed-position pixel in the structured light image, generating first depth-of-field information of the foreground area according to the phase, and generating 3D information of the foreground area according to the first depth-of-field information.
The 3D information of the embodiment of the present invention includes depth information of an image, pixel position relationship information, and the like.
In addition, the depth-of-field information refers to the distance between the nearest and farthest points of the scene that still produce an acceptably sharp image, i.e., the spatial depth range within which an object can be clearly imaged.
It can be understood that irrelevant persons usually differ noticeably in depth of field from the subject user; therefore, in this example, the area where the irrelevant persons are located is determined based on the depth-of-field information of the foreground area.
It should be noted that, according to different application scenarios, different embodiments may be adopted to obtain the first depth-of-field information of the foreground region.
As a possible implementation manner, in order to further improve the accuracy of the determined target area, the first depth-of-field information of the foreground area is acquired based on structured light, wherein the structural feature of the structured light source includes laser stripes, gray codes, sinusoidal stripes, uniform speckle, or non-uniform speckle.
In this embodiment, in order to make it clear to those skilled in the art how to obtain the first depth-of-field information of the foreground region according to the structured light, a widely-applied grating projection technology (fringe projection technology) is taken as an example to describe its specific principle, wherein the grating projection technology belongs to a broad-sense planar structured light.
When surface structured light projection is used, as shown in fig. 2(a), a sinusoidal stripe pattern is generated by computer programming and projected onto the measured object through a projection device; a CCD camera photographs the bending of the stripes as modulated by the object, the bent stripes are demodulated to obtain a phase, and the phase is converted into height over the full field. Of course, the most important point is the calibration of the system, including calibration of the system geometry and of the internal parameters of the CCD camera and the projection device; otherwise errors or error coupling may result, since without calibrated external system parameters the correct height information cannot be calculated from the phase.
Specifically, in the first step, a sinusoidal fringe pattern is programmed. Because the phase will subsequently be acquired from the deformed fringe patterns, for example by the four-step phase-shifting method, four fringe patterns with a phase difference of π/2 are generated and projected onto the object to be measured in a time-shared manner; the pattern on the left side of fig. 2(b) is then acquired, along with the fringes on the reference plane shown on the right side of fig. 2(b).
In the second step, phase recovery is performed: the modulated phase is calculated from the four acquired modulated fringe patterns. The resulting phase map is a truncated phase map, because the four-step phase-shifting algorithm is computed with the arctangent function and is therefore limited to [-π, π], starting over whenever its value exceeds this range. The resulting principal phase value is shown in fig. 2(c).

Still within the second step, the phase jumps must be removed, i.e., the truncated phase is unwrapped into a continuous phase. As shown in fig. 2(d), the modulated continuous phase is on the left and the reference continuous phase on the right.
In the third step, the reference continuous phase is subtracted from the modulated continuous phase to obtain a phase difference that represents the height information of the measured object relative to the reference surface. Substituting the phase difference into the phase-to-height conversion formula (whose parameters have been calibrated) yields the three-dimensional model of the object to be measured shown in fig. 2(e).
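For a concrete illustration of these three steps, the following minimal NumPy sketch demodulates four π/2-shifted fringe images into a wrapped phase, unwraps it, and converts the phase difference to height. It is an illustration only: the synthetic fringes and the phase_to_height constant are hypothetical stand-ins for the calibrated system quantities the text mentions, not the patent's actual implementation.

```python
import numpy as np

def demodulate_four_step(i1, i2, i3, i4):
    """Wrapped phase from four fringe images with pi/2 phase shifts.
    The arctangent limits the result to (-pi, pi], i.e. the truncated
    phase of fig. 2(c)."""
    return np.arctan2(i4 - i2, i1 - i3)

def unwrap_rows(wrapped):
    """Remove the 2*pi jumps row by row, restoring the continuous
    phase described in the second step."""
    return np.unwrap(wrapped, axis=1)

def height_map(object_phase, reference_phase, phase_to_height=0.05):
    """Phase difference -> height relative to the reference plane.
    phase_to_height stands in for the calibrated conversion factor."""
    return (object_phase - reference_phase) * phase_to_height

# Synthetic demonstration: a Gaussian bump modulating the fringes.
h, w = 240, 320
x = np.linspace(0, 16 * np.pi, w)
bump = 3.0 * np.exp(-(((np.arange(w) - w // 2) ** 2)[None, :] +
                      ((np.arange(h) - h // 2) ** 2)[:, None]) / 2000.0)
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
frames = [np.cos(x[None, :] + bump + p) for p in shifts]
ref_frames = [np.cos(x[None, :] + p) for p in shifts]

obj_phase = unwrap_rows(demodulate_four_step(*frames))
ref_phase = unwrap_rows(demodulate_four_step(*ref_frames))
print(height_map(obj_phase, ref_phase).max())  # peak height of the bump
```

Note that np.unwrap performs exactly the jump-cancelling of the second step, and the final subtraction mirrors the third step; a real system would substitute the calibrated phase-to-height formula for the single scale factor used here.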
It should be understood that, in practical applications, the structured light used in the embodiments of the present invention may be any pattern other than the grating, according to different application scenarios.
In this embodiment, a substantially flat diffraction element may be used whose relief diffraction structure has a specific phase distribution, with a cross section of two or more stepped concavities and convexities. The thickness of the substrate is approximately 1 micrometer, and the heights of the steps are non-uniform, ranging from 0.7 micrometer to 0.9 micrometer. Fig. 3(a) shows a partial diffraction structure of the collimating beam-splitting element of this embodiment, and fig. 3(b) is a cross-sectional side view along section A-A, with both abscissa and ordinate in micrometers.
By contrast, since a general diffraction element diffracts a beam into a number of diffracted beams with large differences in light intensity, there is a considerable risk of injury to human eyes.
The collimating beam-splitting element in this embodiment not only collimates the non-collimated beam but also splits it: the non-collimated light reflected by the mirror exits the element as multiple collimated beams at different angles, and the emergent beams have approximately equal cross-sectional areas and approximately equal energy fluxes. Image processing or projection using the scattered points produced after this diffraction therefore works better. At the same time, because the laser output is dispersed over the individual beams, the risk of injuring human eyes is further reduced; and, compared with structured light arranged in other uniform patterns, speckle structured light consumes less electric energy for the same acquisition effect.
Based on the above description, in this embodiment, the structured light source is projected to the foreground region of the preview image, the structured light image modulated by the structured light source through the foreground region is captured, the phase corresponding to the deformed position pixel in the structured light image is demodulated, the first depth-of-field information of the foreground region is calculated according to the phase distortion, and then the 3D information of the foreground region is generated according to the first depth-of-field information.
Step 103, matching the 3D information of the foreground region with a pre-stored database, determining the 3D information of irrelevant people according to the matching result, and determining a target region corresponding to the irrelevant people according to the 3D information of the irrelevant people.
It is understood that, in practical applications, a user using the terminal device is the user himself or a family or a friend of the user, and therefore, a user who is a subject of shooting in a picture taken by using the terminal device is generally the user himself or a family or a friend of the user.
Therefore, in the embodiment of the present invention, the related-person information stored in the terminal device's database includes the user himself, the user's family or friends, and the like; after the 3D information of the foreground area is acquired, it is matched against the pre-stored database of the terminal device.
In an embodiment of the present invention, if the matching degree between the 3D information of the foreground area and the related-person information pre-stored in the terminal device's database is not greater than a preset threshold, it is determined that the current preview image is not a portrait and may be merely a landscape shot; in that case, all persons in the image are treated as unrelated persons, and the target area corresponding to them is determined according to their 3D information.
In another embodiment of the present invention, if the 3D information of the foreground region contains person information whose matching degree with the pre-stored related-person information exceeds the preset threshold, it is determined that the current preview image includes a related person; all persons in the image other than the related person are then treated as unrelated persons, and the target region corresponding to the unrelated persons is determined according to their 3D information.
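The thresholded matching logic of these two embodiments can be sketched as follows. The cosine-similarity metric, the MATCH_THRESHOLD value, and the flattened-descriptor layout are assumptions for illustration; the patent does not fix a particular matching measure.

```python
import numpy as np

MATCH_THRESHOLD = 0.8  # hypothetical preset threshold

def similarity(person_3d, stored_3d):
    """Placeholder matching degree between two 3D descriptors
    (e.g. flattened depth/position features)."""
    a, b = person_3d.ravel(), stored_3d.ravel()
    return float(np.dot(a, b) /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def split_relevant(persons_3d, database_3d):
    """Return (relevant, irrelevant) person indices.

    A person is 'relevant' if some database entry exceeds the
    threshold; if nobody matches, the preview is treated as a
    landscape shot and every detected person is irrelevant.
    """
    relevant, irrelevant = [], []
    for idx, p in enumerate(persons_3d):
        best = max((similarity(p, d) for d in database_3d), default=0.0)
        (relevant if best > MATCH_THRESHOLD else irrelevant).append(idx)
    return relevant, irrelevant
```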
It should be noted that, according to different application scenarios, different manners may be adopted to determine the target area corresponding to the irrelevant person according to the 3D information of the irrelevant person:
in the first example, since the 3D information of the person and the other objects have a large difference, the region corresponding to the person is identified from the 3D information of the foreground region, and the region corresponding to the other persons except the related person is taken as the target region.
In a second example, the region where the person is located is determined by combining other person identification means, for example, contour information of the foreground region is established according to 3D information of the foreground region, the contour information of the foreground region is compared with contour information of the person, a region covered by a human body contour is taken as a region corresponding to the person, and further, a region corresponding to another person except for the related person is taken as a target region.
For another example, living-body recognition techniques are applied to the foreground region to identify the areas where persons are located; these areas are pixel-matched against the 3D information of the foreground region to obtain the 3D information corresponding to each person, the 3D information of all persons other than the related persons is taken as the unrelated-person 3D information, and the target region of the unrelated persons is determined from it, as in the sketch below.
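A minimal sketch of that pixel-matching step, assuming the person detector yields an integer label map (an assumed representation; the patent does not prescribe one):

```python
import numpy as np

def target_mask(person_labels, relevant_ids):
    """Boolean mask of the target region (irrelevant persons).

    person_labels: int array, 0 = no person, k > 0 = person id k,
                   as produced by some person/living-body detector.
    relevant_ids:  ids that matched the pre-stored database.
    """
    is_person = person_labels > 0
    is_relevant = np.isin(person_labels, list(relevant_ids))
    return is_person & ~is_relevant

labels = np.array([[0, 1, 1, 0],
                   [0, 1, 2, 2],
                   [0, 0, 2, 2]])
print(target_mask(labels, relevant_ids={1}))  # True only where person 2 is
```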
It should be emphasized that the above embodiment is described using only the user of the terminal device, or the user's family or friends, as an example. In practical applications, the related person may be anyone customized by the user, such as a star the user is interested in, or a person whose characteristics satisfy preset conditions, such as a user whose height exceeds a or a user making a specified action. Whichever kind of person information is pre-stored in the database, the implementation principle is similar and is not repeated here.
Step 104, deleting the irrelevant people, and blurring the background area and the target area corresponding to the irrelevant people.
It can be understood that after the target area corresponding to an irrelevant person has been determined, the person is deleted; but deletion alone would make the vacated target area join the rest of the captured image abruptly. As shown in the left diagram of fig. 4, when the preview picture contains an irrelevant person A who blocks part of building B, directly deleting the area where A is located leaves building B incompletely displayed, as shown in the right diagram of fig. 4, so that it connects abruptly with the whole image and spoils its overall appearance.
Therefore, in the embodiment of the present invention, in order to achieve seamless connection between the background area corresponding to the irrelevant person and the entire image, after the irrelevant person is deleted, blurring processing is performed on the background area corresponding to the irrelevant person and the target area where the irrelevant person is located.
It should be noted that, according to different application scenarios, different manners may be adopted to perform blurring processing on the background area and the target area corresponding to unrelated people, which is illustrated as follows:
as a possible implementation manner, the second depth-of-field information of the background area is determined, the basic value of the blurring degree is obtained according to the first depth-of-field information and the second depth-of-field information, and the target area are blurred according to the basic value of the blurring degree.
The basic value of the blurring degree is a reference value of the blurring degree; a blurring coefficient can be obtained by performing an operation on the basis of this basic value, and the target region and the background region are blurred according to the blurring coefficient.
In this embodiment, obtaining the basic value of the blurring degree according to the first depth of field information and the second depth of field information may be implemented in various ways. For example, the representative value of the first depth information and the representative value of the second depth information may be determined separately, and then an operation may be performed based on the representative value of the first depth information and the representative value of the second depth information to obtain a base value of the degree of blurring. The representative value may include, but is not limited to, an average value, a sampled value, and the like. The calculation method used to obtain the basic value of the blurring degree may include, but is not limited to, calculating a ratio, a difference, or further multiplying or adding a preset value based on the ratio or the difference.
For example, a first average value of the first depth information and a second average value of the second depth information are obtained, and a ratio of the first average value to the second average value is calculated to obtain a base value of the blurring degree. Wherein, the larger the ratio of the first average value to the second average value is, the larger the basic value of the blurring degree is.
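As a sketch of this example, assuming per-pixel depth maps for the foreground and background regions (the mean as representative value and the small epsilon guard are illustrative choices):

```python
import numpy as np

def blur_base_value(foreground_depth, background_depth):
    """Base value of the blurring degree: ratio of the mean
    foreground depth (first depth-of-field information) to the mean
    background depth (second depth-of-field information)."""
    first_avg = float(np.mean(foreground_depth))
    second_avg = float(np.mean(background_depth))
    return first_avg / (second_avg + 1e-9)
```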
In different application scenarios, different manners may be adopted to blur the background region and the target region corresponding to an unrelated person according to the basic value of the blurring degree. As a possible implementation manner, the blurring coefficient of each pixel in the background region is determined from the basic value of the blurring degree and the second depth-of-field information of the background region: for example, the product of the basic value and each background pixel's second depth-of-field information gives that pixel's blurring coefficient, and likewise the product of the basic value and each target-region pixel's depth information gives that pixel's blurring coefficient.
Gaussian blur processing is then performed on the background area and the target area according to the blurring coefficient of each pixel in those areas, as sketched below.
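One possible rendering of this step is sketched with OpenCV below. The mapping from blurring coefficient to Gaussian sigma and the four-level blur bank are assumptions, since the text only requires that larger coefficients produce stronger blur.

```python
import numpy as np
import cv2

def blur_regions(image, depth, region_mask, base_value, max_sigma=8.0):
    """Per-pixel blurring coefficient = base value * depth, then a
    depth-graded Gaussian blur inside the given region.

    A small bank of pre-blurred images is used so that stronger
    coefficients pick a stronger blur; the coefficient-to-sigma
    mapping and the bank size are illustrative choices.
    """
    coeff = base_value * depth
    norm = coeff / (coeff.max() + 1e-9)          # 0..1 blur strength
    sigmas = np.linspace(0.5, max_sigma, 4)
    bank = [cv2.GaussianBlur(image, (0, 0), s) for s in sigmas]
    levels = np.minimum((norm * len(sigmas)).astype(int), len(sigmas) - 1)
    out = image.copy()
    for lvl, blurred in enumerate(bank):
        sel = region_mask & (levels == lvl)      # pixels at this strength
        out[sel] = blurred[sel]
    return out
```

In use, region_mask would be the union of the background area and the target mask of the deleted persons, so both regions receive a blur that grows with their depth.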
In order to describe the blurring processing method of the captured image according to the embodiment of the present invention more clearly, an example is given below in combination with a specific application scenario:
when the current preview image contains a person A and a person B, projecting a structural light source to a foreground area of the preview image, shooting a structural light image of the structural light source modulated by the foreground area, demodulating a phase corresponding to a deformed position pixel in the structural light image, generating first depth-of-field information of the foreground area according to the phase, and generating 3D information of the foreground area according to the first depth-of-field information.
Further, the 3D information of the foreground area is matched with the pre-stored database, identifying A as the related person; the target area where user B is located is determined according to B's 3D information, the area where B is located is deleted, and the target area where B was located and the background area are blurred. In the processed image, the target area and background area originally occupied by the unrelated person thus join the other areas seamlessly, and the result has a better visual effect.
In summary, in the blurring processing method for a captured image according to the embodiment of the present invention, a structured light source is projected to a foreground region of a preview image, a structured light image of the structured light source modulated by the foreground region is captured, a phase corresponding to a deformed-position pixel in the structured light image is demodulated, first depth-of-field information of the foreground region is generated according to the phase, 3D information of the foreground region is generated according to the first depth-of-field information, the 3D information of the foreground region is matched with a pre-stored database, unrelated-person 3D information is determined according to the matching result, a target region corresponding to the unrelated person is determined according to that 3D information, the unrelated person is deleted, and the background region and the target region corresponding to the unrelated person are blurred. In this way, the person images to be kept in the shot can be actively selected, irrelevant persons are intelligently removed, and the vacated areas are blurred so that they join the rest of the image seamlessly, giving the picture a good visual effect.
In order to implement the above embodiment, the present invention further provides a blurring processing apparatus for a captured image, and fig. 5 is a block diagram illustrating a configuration of the blurring processing apparatus for a captured image according to an embodiment of the present invention, and as shown in fig. 5, the blurring processing apparatus for a captured image includes a capturing module 100, a generating module 200, a matching module 300, a determining module 400, and a processing module 500.
The shooting module 100 is configured to project a structured light source to a foreground region of the preview image, and shoot a structured light image of the structured light source modulated by the foreground region.
The generating module 200 is configured to demodulate a phase corresponding to a deformed pixel in the structured light image, generate first depth-of-field information of the foreground area according to the phase, and generate 3D information of the foreground area according to the first depth-of-field information.
And the matching module 300 is configured to match the 3D information of the foreground region with a pre-stored database, and determine the 3D information of an irrelevant person according to a matching result.
The determining module 400 is configured to determine a target area corresponding to an irrelevant person according to the irrelevant person 3D information.
The processing module 500 is configured to delete an irrelevant person and perform blurring processing on a background area and a target area corresponding to the irrelevant person.
It should be noted that, according to different application scenarios, the processing module 500 may perform blurring processing on the background area and the target area corresponding to unrelated people in different manners.
In one embodiment of the present invention, as shown in fig. 6, the processing module 500 further includes, on the basis of fig. 5, a determining unit 510, an obtaining unit 520 and a processing unit 530.
The determining unit 510 is configured to determine second depth information of the background area.
An obtaining unit 520, configured to obtain a basic value of the blurring degree according to the first depth of field information and the second depth of field information.
The processing unit 530 is configured to perform blurring processing on the background area and the target area according to the basic value of the blurring degree.
Specifically, in different application scenarios the processing unit 530 may also blur the background area and the target area according to the basic value of the blurring degree in different manners.
As a possible implementation, as shown in fig. 7, on the basis of fig. 6, the processing unit 530 includes a first determining subunit 531, a second determining subunit 532 and a processing subunit 533.
The first determining subunit 531 is configured to determine a blurring coefficient of each pixel in the background region according to the basic value of the blurring degree and the second depth information of the background region.
The second determining subunit 532 is configured to determine a blurring coefficient of each pixel in the target area according to the basic value of the blurring degree and the depth information of the target area.
the processing subunit 533 is configured to perform gaussian blurring processing on the background region and the target region according to the blurring coefficient of each pixel in the background region and the target region.
The division of each module in the captured image blurring processing device is only for illustration, and in other embodiments, the captured image blurring processing device may be divided into different modules as needed to complete all or part of the functions of the captured image blurring processing device.
It should be noted that the blurring processing method for the captured image according to the embodiment of the present invention is also applicable to the blurring processing device for the captured image according to the embodiment of the present invention, and the implementation principles thereof are similar and will not be described herein again.
To sum up, the blurring processing apparatus for a captured image according to the embodiments of the present invention projects a structured light source to a foreground region of a preview image, captures a structured light image of the structured light source modulated by the foreground region, demodulates a phase corresponding to a deformed-position pixel in the structured light image, generates first depth-of-field information of the foreground region according to the phase, generates 3D information of the foreground region according to the first depth-of-field information, matches the 3D information of the foreground region with a pre-stored database, determines the 3D information of an irrelevant person according to the matching result, determines a target region corresponding to the irrelevant person according to that 3D information, deletes the irrelevant person, and blurs the background region and the target region corresponding to the irrelevant person. In this way, the person images to be kept in the shot can be actively selected, irrelevant persons are intelligently removed, and the vacated areas are blurred so that they join the rest of the image seamlessly, giving the picture a good visual effect.
To this end, the invention further provides a terminal device. The terminal includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 8 is a schematic structural diagram of an image processing circuit in a terminal device according to an embodiment of the present invention. As shown in fig. 8, for ease of explanation, only the aspects of the image processing technique related to the embodiments of the present invention are shown.
As shown in fig. 8, image processing circuit 110 includes an imaging device 1110, an ISP processor 1130, and control logic 1140. The imaging device 1110 may include a camera with one or more lenses 1112, an image sensor 1114, and a structured light projector 1116. The structured light projector 1116 projects structured light onto an object to be measured; the structured light pattern may be laser stripes, gray codes, sinusoidal stripes, or a randomly arranged speckle pattern. The image sensor 1114 captures the structured light image projected onto the object and transmits it to the ISP processor 1130, which demodulates the structured light image to obtain the depth information of the object. Meanwhile, the image sensor 1114 can also capture color information of the object; alternatively, the structured light image and the color information may be captured by two separate image sensors 1114. Taking speckle structured light as an example, the ISP processor 1130 demodulates the structured light image by acquiring the speckle image of the measured object from it, performing image-data computation between that speckle image and a reference speckle image according to a predetermined algorithm, and obtaining the displacement of each speckle spot on the measured object relative to the corresponding reference spot in the reference speckle image. The depth value of each speckle spot is then obtained by triangulation, and the depth information of the measured object is obtained from these depth values. Of course, the depth image information may also be acquired by binocular vision or by a time-of-flight (TOF) based method; the approach is not limited, as long as the depth information of the object can be acquired or calculated, and all such methods fall within the scope of this embodiment.
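In its simplest form, the triangulation conversion mentioned here reduces to the familiar disparity relation sketched below. The focal length and camera-projector baseline are hypothetical values, and a real ISP pipeline would apply per-spot calibration rather than one global formula.

```python
def speckle_depth(disparity_px, focal_px=600.0, baseline_mm=40.0):
    """Triangulation sketch: depth from the measured shift of a
    speckle spot relative to its position in the reference pattern.

    disparity_px: spot displacement in pixels (from matching the
                  captured speckle image against the reference one).
    focal_px, baseline_mm: hypothetical camera/projector geometry.
    Returns depth in millimetres (infinite for zero disparity).
    """
    if disparity_px <= 0:
        return float("inf")
    return focal_px * baseline_mm / disparity_px

# A spot shifted by 12 px under this geometry sits ~2 m away.
print(speckle_depth(12.0))  # -> 2000.0
```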
After the ISP processor 1130 receives the color information of the object to be measured captured by the image sensor 1114, image data corresponding to the color information of the object to be measured may be processed. ISP processor 1130 analyzes the image data to obtain image statistics that may be used to determine one or more control parameters of imaging device 1110. The image sensor 1114 may include an array of color filters (e.g., Bayer filters), and the image sensor 1114 may acquire light intensity and wavelength information captured with each imaging pixel of the image sensor 1114 and provide a set of raw image data that may be processed by the ISP processor 1130.
ISP processor 1130 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and ISP processor 1130 may perform one or more image processing operations on the raw image data, collecting image statistics about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
ISP processor 1130 may also receive pixel data from image memory 1120. The image memory 1120 may be a portion of a memory device, a storage device, or a separate dedicated memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving the raw image data, ISP processor 1130 may perform one or more image processing operations.
After the ISP processor 1130 obtains the color information and the depth information of the object to be measured, they may be fused to obtain a three-dimensional image. The features of the measured object can be extracted by at least one of an appearance-contour extraction method or a contour-feature extraction method, for example by the active shape model (ASM), active appearance model (AAM), principal component analysis (PCA), or discrete cosine transform (DCT) methods, without limitation here. The features of the measured object extracted from the depth information and those extracted from the color information are then registered and fused. The fusion processing may directly combine the features extracted from the depth and color information, may combine the same features from the different images after setting weights, or may generate the three-dimensional image from the fused features in another fusion mode.
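As one illustration of the fusion step, the sketch below performs the weighted-combination variant on two already-registered feature vectors; the normalization and the equal default weights are assumptions for illustration, not the patent's prescribed procedure.

```python
import numpy as np

def fuse_features(depth_feat, color_feat, w_depth=0.5, w_color=0.5):
    """Weighted fusion of features extracted from the depth map and
    from the color image (registration is assumed already done, so
    both vectors describe the same measured object).

    Weighted concatenation is one of the fusion modes the text
    allows; the weights here are illustrative.
    """
    d = depth_feat / (np.linalg.norm(depth_feat) + 1e-9)
    c = color_feat / (np.linalg.norm(color_feat) + 1e-9)
    return np.concatenate([w_depth * d, w_color * c])
```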
Image data for a three-dimensional image may be sent to image memory 1120 for additional processing before being displayed. ISP processor 1130 receives processed data from image memory 1120 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. Image data for a three-dimensional image may be output to a display 1160 for viewing by a user and/or for further processing by a Graphics Processing Unit (GPU). Further, the output of ISP processor 1130 may also be sent to image memory 1120, and display 1160 may read image data from image memory 1120. In one embodiment, image memory 1120 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 1130 may be transmitted to an encoder/decoder 1150 for encoding/decoding image data. The encoded image data may be saved and decompressed before being displayed on the display 1160. The encoder/decoder 1150 may be implemented by a CPU, GPU, or coprocessor.
The image statistics determined by the ISP processor 1130 may be sent to the control logic processor 1140 unit. Control logic 1140 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters for imaging device 1110 based on the received image statistics.
The following steps implement the blurring processing method for the shot image using the image processing technique of fig. 8:
Step 101', projecting a structured light source to a foreground region of a preview image, and shooting a structured light image of the structured light source modulated by the foreground region.

Step 102', demodulating a phase corresponding to a deformed-position pixel in the structured light image, generating first depth-of-field information of the foreground area according to the phase, and generating 3D information of the foreground area according to the first depth-of-field information.

Step 103', matching the 3D information of the foreground area with a pre-stored database, determining unrelated-person 3D information according to the matching result, and determining a target area corresponding to an unrelated person according to that 3D information.

Step 104', deleting the irrelevant people, and blurring the background area and the target area corresponding to the irrelevant people.
It should be noted that the foregoing explanation of the embodiment of the blurring processing method for the captured image is also applicable to the terminal device of this embodiment, and the implementation principle is similar, and is not described herein again.
In summary, the terminal device according to the embodiment of the present invention projects a structured light source to a foreground region of a preview image, shoots a structured light image of the structured light source modulated by the foreground region, demodulates a phase corresponding to a deformed-position pixel in the structured light image, generates first depth-of-field information of the foreground region according to the phase, generates 3D information of the foreground region according to the first depth-of-field information, matches the 3D information of the foreground region with a pre-stored database, determines the 3D information of an unrelated person according to the matching result, determines a target region corresponding to the unrelated person according to that 3D information, deletes the unrelated person, and blurs the background region and the target region corresponding to the unrelated person. In this way, the person images to be kept in the shot can be actively selected, irrelevant persons are intelligently removed, and the vacated areas are blurred so that they join the rest of the image seamlessly, giving the picture a good visual effect.
An embodiment of the present invention further provides a non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, can implement the blurring processing method for captured images according to the foregoing embodiment.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention; variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A blurring processing method for a shot image is characterized by comprising the following steps:
projecting a structural light source to a foreground area of a preview image, and shooting a structural light image of the structural light source modulated by the foreground area;
demodulating a phase corresponding to a pixel at a deformation position in the structured light image, generating first depth-of-field information of the foreground area according to the phase, and generating 3D information of the foreground area according to the first depth-of-field information;
matching the 3D information of the foreground area with a pre-stored database, determining irrelevant figure 3D information according to a matching result, and determining a target area corresponding to the irrelevant figure according to the irrelevant figure 3D information;
deleting the irrelevant figures, and blurring the background area and the target area corresponding to the irrelevant figures;
wherein, the determining the target area corresponding to the irrelevant person according to the irrelevant person 3D information comprises:
the method comprises the steps of adopting a relevant living body identification technical means to identify a living body of a foreground area, identifying an area where a person is located, carrying out pixel matching on the area where the person is located and 3D information of the foreground area, matching 3D information corresponding to the area where the person is located, and further taking 3D information corresponding to other persons except for relevant persons as 3D information of irrelevant persons so as to determine a target area of the irrelevant persons according to the 3D information of the irrelevant persons.
2. The method of claim 1, wherein the structural features of the structured light source comprise:
laser stripes, gray codes, sinusoidal stripes, uniform speckles, or non-uniform speckles.
3. The method of claim 1, wherein blurring the background region and the target region corresponding to the unrelated person comprises:
determining second depth information of the background area;
acquiring a basic value of the blurring degree according to the first depth of field information and the second depth of field information;
and blurring the background area and the target area corresponding to the irrelevant person according to the basic value of the blurring degree.
4. The method of claim 3, wherein blurring the background region and the target region corresponding to the irrelevant person according to the basic value of the blurring degree comprises:
determining a blurring coefficient of each pixel in the background area according to the basic value of the blurring degree and the second depth information of the background area;
determining a blurring coefficient of each pixel in the target area according to the basic value of the blurring degree and the depth information of the target area;
and performing Gaussian blur processing on the background area and the target area according to the blurring coefficient of each pixel in the background area and the target area.
5. The method of claim 4, wherein determining the blurring coefficient for each pixel in the background region according to the base value of the blurring degree and the second depth information of the background region comprises:
calculating the product of the basic value of the blurring degree and the second depth information of each pixel in the background area, and acquiring the blurring coefficient of each pixel in the background area;
determining a blurring coefficient of each pixel in the target area according to the basic value of the blurring degree and the depth information of the target area, including:
and calculating the product of the basic value of the virtualization degree and the depth information of each pixel in the target area, and acquiring the virtualization coefficient of each pixel in the target area.
6. A blurring processing apparatus for a captured image, comprising:
the shooting module is used for projecting a structural light source to a foreground area of a preview image and shooting a structural light image of the structural light source modulated by the foreground area;
the generating module is used for demodulating a phase corresponding to a deformed position pixel in the structured light image, generating first depth-of-field information of the foreground area according to the phase, and generating 3D information of the foreground area according to the first depth-of-field information;
the matching module is used for matching the 3D information of the foreground area with a pre-stored database and determining the 3D information of irrelevant people according to the matching result;
the determining module is used for determining a target area corresponding to an irrelevant person according to the irrelevant person 3D information;
the processing module is used for deleting the irrelevant persons and blurring the background area and the target area corresponding to the irrelevant persons;
wherein determining the target area corresponding to the irrelevant person according to the irrelevant-person 3D information comprises:
performing living-body recognition on the foreground area to identify the area where each person is located; performing pixel matching between the area where each person is located and the 3D information of the foreground area to obtain the 3D information corresponding to each person; and taking the 3D information corresponding to persons other than the related persons as the irrelevant-person 3D information, so that the target area of the irrelevant persons is determined according to the irrelevant-person 3D information.
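For context on the generating module's phase demodulation, classical structured-light systems often use four-step phase shifting: four sinusoidal stripe patterns offset by π/2 are projected, and the wrapped phase at each deformed pixel follows directly from the four recorded intensities. The snippet below sketches that standard textbook technique, not the patent's specific algorithm; depth then follows from triangulating the unwrapped phase against an undeformed reference pattern.

import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    # With I_n = A + B*cos(phi + (n-1)*pi/2), the differences give
    # i4 - i2 = 2B*sin(phi) and i1 - i3 = 2B*cos(phi), so the wrapped
    # phase of the deformed fringes at each pixel is:
    return np.arctan2(i4 - i2, i1 - i3)

Comparing the unwrapped phase with that of the reference pattern yields the per-pixel phase shift from which the first depth-of-field information of the foreground area can be computed.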
7. The apparatus of claim 6, wherein the processing module comprises:
a determining unit configured to determine second depth-of-field information of the background area;
an obtaining unit configured to obtain a basic value of a blurring degree according to the first depth-of-field information and the second depth-of-field information;
and the processing unit is used for blurring the background area and the target area according to the basic value of the blurring degree.
8. The apparatus of claim 7, wherein the processing unit comprises:
a first determining subunit configured to determine a blurring coefficient of each pixel in the background area according to the basic value of the blurring degree and the second depth-of-field information of the background area;
a second determining subunit configured to determine a blurring coefficient of each pixel in the target area according to the basic value of the blurring degree and the depth-of-field information of the target area;
and the processing subunit is used for performing Gaussian blur processing on the background area and the target area according to the blurring coefficient of each pixel in the background area and the target area.
9. A terminal device comprising a memory and a processor, wherein the memory stores computer-readable instructions which, when executed by the processor, cause the processor to perform the blurring processing method for a shot image according to any one of claims 1 to 5.
10. A non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the blurring processing method for a shot image according to any one of claims 1 to 5.
CN201710677493.6A 2017-08-09 2017-08-09 Blurring processing method and device for shot image Expired - Fee Related CN107590828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710677493.6A CN107590828B (en) 2017-08-09 2017-08-09 Blurring processing method and device for shot image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710677493.6A CN107590828B (en) 2017-08-09 2017-08-09 Blurring processing method and device for shot image

Publications (2)

Publication Number Publication Date
CN107590828A (en) 2018-01-16
CN107590828B (en) 2020-01-10

Family

ID=61042065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710677493.6A Expired - Fee Related CN107590828B (en) 2017-08-09 2017-08-09 Blurring processing method and device for shot image

Country Status (1)

Country Link
CN (1) CN107590828B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110175950A (en) * 2018-10-24 2019-08-27 广东小天才科技有限公司 Privacy protection method based on a wearable device, and wearable device
CN110022392B (en) * 2019-05-27 2021-06-15 Oppo广东移动通信有限公司 Photographing control method and related product
CN110809152A (en) * 2019-11-06 2020-02-18 Oppo广东移动通信有限公司 Information processing method, encoding device, decoding device, system, and storage medium
CN111047634B (en) * 2019-11-13 2023-08-08 杭州飞步科技有限公司 Scene depth determination method, device, equipment and storage medium
CN114302057A (en) * 2021-12-24 2022-04-08 维沃移动通信有限公司 Image parameter determination method and device, electronic equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103152521A (en) * 2013-01-30 2013-06-12 广东欧珀移动通信有限公司 Method for achieving a depth-of-field effect on a mobile terminal, and mobile terminal
CN103886085A (en) * 2014-03-28 2014-06-25 浪潮软件集团有限公司 Universal method for transforming cross report form through columns
CN104333700A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Image blurring method and image blurring device
CN105979165A (en) * 2016-06-02 2016-09-28 广东欧珀移动通信有限公司 Blurred photos generation method, blurred photos generation device and mobile terminal
CN106157262A (en) * 2016-06-28 2016-11-23 广东欧珀移动通信有限公司 Augmented reality processing method and device, and mobile terminal
CN106993112A (en) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 Depth-of-field-based background blurring method and device, and electronic device

Also Published As

Publication number Publication date
CN107590828A (en) 2018-01-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp., Ltd.

Address before: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp., Ltd.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200110