CN105933589B - Image processing method and terminal - Google Patents

Image processing method and terminal

Info

Publication number
CN105933589B
CN105933589B
Authority
CN
China
Prior art keywords
focusing area
preview image
area
terminal
focusing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610503757.1A
Other languages
Chinese (zh)
Other versions
CN105933589A (en)
Inventor
张海平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201610503757.1A priority Critical patent/CN105933589B/en
Publication of CN105933589A publication Critical patent/CN105933589A/en
Application granted granted Critical
Publication of CN105933589B publication Critical patent/CN105933589B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624 Studio circuits for obtaining an image which is composed of whole input images, e.g. splitscreen
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the invention disclose an image processing method, including: determining the focusing area and non-focusing area of a preview image captured by a camera of a terminal; determining, by using a laser range sensor of the terminal, the distance between the camera and the spatial position indicated by each pixel in the preview image, to obtain N distance values, where N is the number of pixels of the preview image and N is a positive integer; and, while keeping the focusing area sharp, blurring the non-focusing area according to the N distance values. The embodiments of the invention also provide a terminal. Through the embodiments of the present invention, the operating process of background blurring can be simplified, and because the non-focusing area is blurred according to distance, the resulting background-blurred image is more lifelike.

Description

Image processing method and terminal
Technical field
The present invention relates to the technical field of electronic devices, and in particular to an image processing method and a terminal.
Background technique
With the rapid development of information technology, terminals (such as mobile phones and tablet computers) are used more and more frequently, and the functions integrated into terminals keep increasing. Photography has become an important selling point for every handset manufacturer; how to improve photo quality and offer more differentiated features has become the focus of competition among manufacturers.
At present, terminal photography cannot easily achieve a sharp close view with a blurred distant view. The dual-camera phones popular on the market exist mainly to achieve this effect: the two cameras each observe regions A and B, and the difference in distance between A and B is estimated from the two views. If the user manually selects region A as the focusing area, the image of region A is kept sharp while the corresponding region B is blurred, and the degree of blurring usually has to be controlled by the user with a slider. This way of achieving a sharp close view and a blurred distant view has two drawbacks. First, the operation is not smart enough, since the user must manually select the focusing area. Second, the dual cameras only estimate the relative distance between A and B (the distance A-B), not the distance of region A or B from the lens, so the selection of the focus point within a region is not accurate enough, and the estimate of the relative distance between A and B carries a large error.
Summary of the invention
The embodiments of the present invention provide an image processing method and a terminal, so as to simplify the operating process of background blurring and make the background-blurred image more lifelike.
A first aspect of the embodiments of the present invention provides an image processing method, including:
determining a focusing area and a non-focusing area of a preview image captured by a camera of a terminal;
determining, by using a laser range sensor of the terminal, the distance between the camera and the spatial position indicated by each pixel in the preview image, to obtain N distance values, where N is the number of pixels of the preview image and N is a positive integer;
while keeping the focusing area sharp, blurring the non-focusing area according to the N distance values.
A second aspect of the embodiments of the present invention provides a terminal, including:
a first determination unit, configured to determine a focusing area and a non-focusing area of a preview image captured by a camera of the terminal;
a second determination unit, configured to determine, by using a laser range sensor of the terminal, the distance between the camera and the spatial position indicated by each pixel in the preview image, to obtain N distance values, where N is the number of pixels of the preview image and N is a positive integer;
a processing unit, configured to, while keeping the focusing area sharp, blur the non-focusing area determined by the first determination unit according to the N distance values determined by the second determination unit.
A third aspect of the embodiments of the present invention provides a terminal, including:
a processor and a memory;
where the processor is configured to call executable program code in the memory to perform some or all of the steps of the first aspect.
Implementing the embodiments of the present invention has the following beneficial effects:
As can be seen, according to the embodiments of the present invention, the focusing area and non-focusing area of the preview image captured by the camera of the terminal are determined; the laser range sensor of the terminal is used to determine the distance between the camera and the spatial position indicated by each pixel in the preview image, obtaining N distance values, where N is the number of pixels of the preview image and a positive integer; and, while keeping the focusing area sharp, the non-focusing area is blurred according to the N distance values. Therefore, once the focusing area and non-focusing area have been determined, the non-focusing area can be blurred according to the distance value between the terminal and each pixel. This simplifies the operating process of background blurring in the prior art, and because the blurring of the non-focusing area is based on distance, the resulting background-blurred image is more lifelike.
Detailed description of the invention
In order to describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a first embodiment of an image processing method disclosed in an embodiment of the present invention;
Fig. 1a is a schematic diagram of the division of a preview image into independent areas disclosed in an embodiment of the present invention;
Fig. 1b is a schematic diagram of ranging by a laser range sensor disclosed in an embodiment of the present invention;
Fig. 1c is a plan-view illustration of the laser range sensor ranging of Fig. 1b disclosed in an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a second embodiment of an image processing method disclosed in an embodiment of the present invention;
Fig. 3a is a schematic structural diagram of a first embodiment of a terminal disclosed in an embodiment of the present invention;
Fig. 3b is a schematic structural diagram of the first determination unit of the terminal described in Fig. 3a;
Fig. 3c is a schematic structural diagram of the processing unit of the terminal described in Fig. 3a;
Fig. 4 is a schematic structural diagram of a second embodiment of a terminal disclosed in an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms "first", "second", "third", "fourth", and so on in the specification, claims, and accompanying drawings are used to distinguish different objects, not to describe a particular order. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that contains a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units that are not listed, or optionally also includes other steps or units inherent to the process, method, product, or device.
"Embodiment" mentioned herein means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present invention. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, both explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The terminal described in the embodiments of the present invention may include a smart phone (such as an Android phone, an iOS phone, or a Windows Phone), a tablet computer, a palmtop computer, a laptop, a mobile internet device (MID, Mobile Internet Devices), or a wearable device. The above terminals are merely examples, not an exhaustive list; the terminal includes but is not limited to the above.
It should be noted that the principle of the laser range sensor is as follows: the laser range sensor emits modulated near-infrared light, which is reflected back when it meets an object; the sensor converts the time difference or phase difference between light emission and reflection into the distance between the photographed scene and the lens. For example, a laser diode may first emit a laser pulse aimed at the target; after being reflected by the target, the laser is scattered in all directions, and some of the scattered light returns to the sensor's receiver, where, after passing through an optical system, it is imaged onto an avalanche photodiode. An avalanche photodiode is an optical sensor with internal amplification, so it can detect extremely weak optical signals. By recording and processing the time elapsed from when the light pulse is emitted until the return light is received, the target distance can be measured.
Optionally, by the above means, the distance between the camera and the spatial position indicated by each pixel in the preview image can be measured. Each pixel in the preview image corresponds to a certain position on some object in the photographed scene; this position is called the spatial position indicated by the pixel. The preview image is two-dimensional data while a spatial position is three-dimensional data; the relationship between the two is that, during shooting, when an object in three-dimensional space is photographed, the object is imaged in the camera's preview image, so each pixel corresponds to a spatial position in three-dimensional space. Assuming the preview image contains N pixels, N distance values can be obtained by the above method, where each distance value indicates the distance between the camera and the spatial position indicated by the corresponding pixel, and N is a positive integer.
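The time-of-flight measurement underlying these distance values can be sketched as follows (a minimal illustration, not from the patent; the function name and the 20 ns delay are assumptions):

```python
# Time-of-flight ranging: the sensor times the round trip of a reflected
# laser pulse, so the one-way distance is c * dt / 2.
SPEED_OF_LIGHT_M_S = 299_792_458.0

def distance_from_round_trip(dt_seconds: float) -> float:
    """Convert a measured emit-to-receive delay (seconds) into a target distance (metres)."""
    return SPEED_OF_LIGHT_M_S * dt_seconds / 2.0

# A 20 ns round trip corresponds to roughly 3 metres.
print(round(distance_from_round_trip(20e-9), 3))  # prints 2.998
```

In practice the sensor may measure a phase difference rather than a raw delay, as the paragraph above notes, but the conversion to distance follows the same proportionality.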
Referring to Fig. 1, which is a schematic flowchart of a first embodiment of an image processing method provided by an embodiment of the present invention. The image processing method described in this embodiment includes the following steps:
101. Determine the focusing area and non-focusing area of the preview image captured by the camera of the terminal.
The camera of the terminal may be at least one of a visible-light camera, an infrared camera, or an ultraviolet camera.
Optionally, a target area in the preview image captured by the camera of the terminal may be selected, the target area taken as the focusing area, and the area other than the focusing area taken as the non-focusing area, where the preview image includes multiple independent areas and the target area is at least one of the multiple independent areas. As shown in Fig. 1a, the preview image is divided into 9 independent areas (the areas formed by the dotted lines in the figure), and the target area may be any of the 9 independent areas of the preview image. Of course, the preview image may also be divided into 2, 3, 4, or more independent areas, which are not enumerated here.
Further optionally, selecting the target area in the preview image captured by the camera of the terminal may include the following steps:
1) receiving a selection instruction;
2) taking the independent area indicated by the selection instruction in the preview image captured by the camera of the terminal as the focusing area, and taking the area other than the focusing area as the non-focusing area.
In step 1), the user may select at least one of the multiple independent areas; after it is determined that the at least one independent area has been selected, a selection instruction is generated. In step 2), the at least one independent area may then be taken as the focusing area. For example, a selection instruction from the user for at least one of the above 9 independent areas may be received; assuming independent area i is one of the 9 independent areas, after that area has been selected, independent area i is taken as the focusing area, and the area other than the focusing area is taken as the non-focusing area.
Optionally, the laser range sensor has N built-in receiving areas, each of which is independent and can receive external laser energy. At the same time, a two-lens design ensures that the laser range sensor can receive distance signals from N regions, and that the N receiving areas of the laser range sensor correspond to the N regions into which the camera's preview image is divided in advance. For example, in Fig. 1b, when the camera is turned on, the photographed scene forms a preview image as shown in Fig. 1b and Fig. 1c. The preview image is divided into 9 regions, i.e., the photographed scene is divided into 9 regions, and the laser range sensor can separately detect the distance value between each of these 9 regions and the camera. Specifically, the laser range sensor emits modulated near-infrared light (indicated by the dotted lines issuing from the laser range sensor in the figure), which is reflected by objects in the photographed scene and received by the laser range sensor; the distance of the photographed scene is then converted from the emission/reflection time difference or phase difference. Assuming the first detection unit detects a distance value first, that distance value is fed back to the terminal. In Fig. 1c, the preview image can be obtained from the camera's photographed scene, and the detection areas of the laser range sensor (including P1, P2, P3, P4, P5, P6, P7, P8, and P9) correspond one-to-one with the 9 regions into which the preview image is divided. The distance between the P1 detection area corresponding to the first detection unit and the camera is taken as the distance between the corresponding preview-image region and the camera. In this way, the distance value between each detection area and the camera can be obtained, i.e., the distance between the camera and the spatial position of each of the N pre-divided regions in the preview image can be determined. The region with the lowest distance value can thus be taken as the focusing area. Of course, the laser range sensor can also be used to measure the distance between the camera and the spatial position of each pixel in the preview image.
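The zone-to-region correspondence and nearest-region selection described above can be sketched as follows (the zone names follow P1..P9 from the figure; the distance values are made up for illustration):

```python
# Each detection zone P1..P9 maps one-to-one onto a preview-image region;
# the region whose zone reports the smallest distance becomes the focusing area.

def pick_focusing_region(zone_distances: dict) -> str:
    """Return the name of the detection zone with the lowest distance value."""
    return min(zone_distances, key=zone_distances.get)

distances = {
    "P1": 4.2, "P2": 4.0, "P3": 4.1,
    "P4": 1.3, "P5": 0.9, "P6": 1.5,   # centre row: a nearby subject
    "P7": 3.0, "P8": 2.8, "P9": 3.2,
}
print(pick_focusing_region(distances))  # prints P5
```

The remaining eight regions would then form the non-focusing area to be blurred in the later steps.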
Further, selecting the target area in the preview image captured by the camera of the terminal may also be:
taking a specified area in the preview image as the focusing area, and taking the area other than the focusing area as the non-focusing area. For example, a certain area in the preview screen may be specified in advance as the specified area; the area of the preview image other than the specified area is then the non-focusing area.
Optionally, step 101 may also include the following steps:
3) performing target detection on the preview image captured by the camera of the terminal;
4) taking the area where the detected target is located as the focusing area.
The target in step 3) may be a person, a vehicle, a cat, a dog, etc., and is not specifically limited here. The above target detection may use an infrared sensor for temperature detection, or an image recognition algorithm. For example, when an infrared camera is used for shooting, the target in the preview image can be identified according to the distribution of different temperatures.
Further, the above step 3) may include the following steps:
31) performing binarization on the preview image captured by the camera of the terminal to obtain a binarized preview image;
32) extracting the contour of the binarized preview image;
33) performing image recognition on the contour to identify the target within the contour.
The preview image captured by the camera may be binarized; the binarized preview image obtained after binarization has only two gray levels, i.e., each pixel value is 0 or 255. The threshold for binarization may be the average brightness value of the preview image, or the average brightness value of the focusing area. Optionally, multiple pixel values may be chosen from each of the above multiple independent areas, and the mean of all chosen pixel values used as the binarization threshold. Since only some of the preview image's pixels are chosen, the calculation is faster; moreover, because the samples are drawn from multiple independent areas, they are more representative, so the chosen binarization threshold is both accurate and fast to obtain.
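The sampling-based threshold described above can be sketched as follows (the sample values are made up for illustration; the helper names are not from the patent):

```python
# A few pixel values are drawn from each independent area; their overall mean
# becomes the binarization threshold, and binarization maps each gray value
# to one of the two levels 0 / 255.

def binarization_threshold(samples_per_area):
    """Mean of all sampled pixel values across the independent areas."""
    values = [v for area in samples_per_area for v in area]
    return sum(values) / len(values)

def binarize(row, threshold):
    """Map one row of gray values to the two levels 0 / 255."""
    return [255 if v >= threshold else 0 for v in row]

samples = [[10, 20], [200, 220], [90, 60]]   # two samples from each of 3 areas
t = binarization_threshold(samples)           # (10+20+200+220+90+60)/6 = 100.0
print(t, binarize([30, 150, 100], t))         # prints 100.0 [0, 255, 255]
```

Sampling from every independent area, rather than one region, is what keeps the threshold representative of the whole frame.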
Optionally, when the preview image is a color image, the luminance component image of the preview image may be extracted, and target detection performed on this luminance component image according to steps 31), 32), and 33): first binarization, then contour extraction, and finally image recognition. Since only a single luminance component image is processed, the complexity of the preview image is reduced and the processing speed can be improved.
Optionally, when the camera of the terminal shoots, the captured image may be focused; the focal zone generated during this focusing may be used as the focusing area.
102. Determine, by using the laser range sensor of the terminal, the distance between the camera and the spatial position indicated by each pixel in the preview image, to obtain N distance values, where N is the number of pixels of the preview image and N is a positive integer.
103. While keeping the focusing area sharp, blur the non-focusing area according to the N distance values.
Optionally, the focusing area may be kept sharp while the non-focusing area is blurred according to the N distance values, so that the resulting image has a sharp focusing area and a blurred non-focusing area. In the embodiments of the present invention, the main means of blurring may be a Gaussian blur algorithm; of course, other algorithms may also be used, and no specific limitation is made here.
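As one concrete illustration of the Gaussian blur mentioned above, a normalised 1-D Gaussian kernel can be built as follows (the kernel size and sigma are assumed values, not from the patent):

```python
import math

def gaussian_kernel(size: int, sigma: float):
    """Return a normalised 1-D Gaussian kernel of odd length `size`."""
    half = size // 2
    raw = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-half, half + 1)]
    total = sum(raw)
    return [v / total for v in raw]  # weights sum to 1, so brightness is preserved

k = gaussian_kernel(5, 1.0)
print(round(sum(k), 6))  # prints 1.0
```

Convolving the non-focusing pixels with such a kernel (horizontally then vertically) produces the blur; a larger sigma gives a stronger blur, which is how a per-region blurring coefficient can be applied.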
It should be noted that keeping the focusing area sharp may be understood as: performing no processing at all on the focusing area; or performing certain image enhancement processing on the focusing area; or performing certain beautification processing on the focusing area. The image enhancement processing may use histogram equalization, grayscale stretching, white balance processing, color temperature adjustment, image restoration, and so on, without limitation here. In short, keeping the focusing area sharp may mean: ensuring that the sharpness of the focusing area is not lower than its sharpness when no processing is performed on it.
Further optionally, while keeping the focusing area sharp, the non-focusing area may be blurred as follows:
first, determine the mean of all distance values corresponding to the focusing area, obtaining a first mean;
second, determine, for each independent area in the non-focusing area, the mean of all distance values corresponding to that area, obtaining multiple second means;
then, determine the blurring coefficient of each independent area in the non-focusing area according to the first mean and the multiple second means;
finally, blur each independent area according to its blurring coefficient.
Specifically, the mean of the distance values corresponding to each of the above multiple independent areas is calculated separately; then, for each independent area in the non-focusing area, the ratio of its distance mean to the focusing area's distance mean is used as its blurring coefficient. Assuming the distance mean of the focusing area is A and the distance mean of a certain independent area is B, the blurring coefficient = B/A: the larger B is, the larger the blurring coefficient, i.e., the higher the degree of blur; the smaller B is, the smaller the blurring coefficient, i.e., the lower the degree of blur.
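The B/A coefficient rule above can be sketched as follows (the region names and distance means are made up for illustration):

```python
# Each non-focusing independent area gets blurring coefficient B/A, where A is
# the focusing area's mean distance and B is that area's mean distance.

def fuzzy_coefficients(focus_mean: float, region_means: dict) -> dict:
    """coefficient = region mean distance / focusing-area mean distance."""
    return {name: b / focus_mean for name, b in region_means.items()}

coeffs = fuzzy_coefficients(1.5, {"wall": 3.0, "sky": 6.0})
print(coeffs)  # prints {'wall': 2.0, 'sky': 4.0}
# The farther region ("sky") gets the larger coefficient, i.e. heavier blur.
```

The coefficient would then scale the blur strength (for instance the Gaussian sigma) applied to each independent area.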
Optionally, of course, the focusing area may instead be blurred while the non-focusing area is kept sharp, yielding the effect of a sharp distant view and a blurred close view.
As can be seen, according to the embodiments of the present invention, the focusing area and non-focusing area of the preview image captured by the camera of the terminal are determined; the laser range sensor of the terminal is used to determine the distance between the camera and the spatial position indicated by each pixel in the preview image, obtaining N distance values, where N is the number of pixels of the preview image and a positive integer; and, while keeping the focusing area sharp, the non-focusing area is blurred according to the N distance values. Therefore, once the focusing area and non-focusing area have been determined, the non-focusing area can be blurred according to the distance value between the terminal and each pixel. This simplifies the operating process of background blurring in the prior art, and because the blurring of the non-focusing area is based on distance, the resulting background-blurred image is more lifelike.
Consistent with the above embodiment, referring to Fig. 2, which is a schematic flowchart of a second embodiment of an image processing method provided by an embodiment of the present invention. The image processing method described in this embodiment includes the following steps:
201. Determine the focusing area and non-focusing area of the preview image captured by the camera of the terminal.
202. Determine, by using the laser range sensor of the terminal, the distance between the camera and the spatial position indicated by each pixel in the preview image, to obtain N distance values, where N is the number of pixels of the preview image and N is a positive integer.
203. Determine an average distance value according to M distance values, where the M distance values are the distance values between the camera and all or some of the pixels in the focusing area, and M is a positive integer less than the N.
Optionally, the mean of all the M distance values may be taken. Assuming the focusing area contains J pixels: the M distance values may be the distance values between the camera and some of the pixels in the focusing area, in which case M is less than J; or, of course, the M distance values may be the distance values between the camera and all of the pixels in the focusing area, in which case M equals J. As can be seen, M is less than or equal to J, and J is less than N.
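Step 203 can be sketched as follows; the helper name and the choice of "the first M" pixels are assumptions for illustration (the patent does not specify how the M pixels are selected):

```python
# Average over M of the focusing area's J pixel distances; with m=None, all
# J distances are used (the M = J case described above).

def average_focus_distance(focus_distances, m=None):
    sample = focus_distances if m is None else focus_distances[:m]
    return sum(sample) / len(sample)

d = [1.4, 1.5, 1.6, 1.5]                 # J = 4 pixel distances in the focusing area
print(average_focus_distance(d, m=2))    # (1.4 + 1.5) / 2 = 1.45
print(average_focus_distance(d))         # all J pixels: 1.5
```

Sampling only M < J pixels trades a little accuracy for speed, since the average is recomputed for every preview frame.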
204. Calculate the difference between the distance value corresponding to each pixel in the non-focusing area and the average distance value, obtaining multiple differences.
Optionally, the difference between each non-focusing-area pixel's distance value and the average distance value may be calculated; the resulting difference may be greater than 0, less than 0, or equal to 0, and the different differences serve as the basis for blurring the corresponding pixels to different degrees.
205. Blur the non-focusing area according to the multiple differences.
Step 205 may include two different blurring modes, as follows:
In the first blurring mode, the absolute values of the above multiple differences may be taken, obtaining multiple absolute values, and the non-focusing area blurred according to these absolute values: the larger the absolute value, the higher the degree of blur; the smaller the absolute value, the lower the degree of blur.
Further, of course, the multiple absolute values may be sorted in ascending order and divided into multiple levels. Suppose the levels are A, B, and C, with corresponding Gaussian coefficients a, b, and c; the absolute values in level A are smaller than those in level B, and those in level B are smaller than those in level C, so the degree of blur increases in the order A < B < C. Then each pixel whose absolute value falls in level A is blurred with Gaussian coefficient a, each pixel in level B with Gaussian coefficient b, and each pixel in level C with Gaussian coefficient c.
In the second blurring mode, the above multiple differences may be judged: when a difference is less than or equal to 0, the corresponding pixel is not blurred, the reason being that a difference less than or equal to 0 means the spatial position corresponding to that pixel is a close view and should be kept sharp. If a difference is greater than 0, the spatial position corresponding to that pixel is considered a distant view and is blurred: the larger the difference, the higher the degree of blur; the smaller the difference, the lower the degree of blur. Of course, the second blurring mode may also adopt the level-based grading of the first blurring mode: the differences greater than 0 are divided into multiple levels, and each level is blurred with a corresponding Gaussian coefficient.
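The difference-based blurring rule described above (keep pixels whose difference is at most 0 sharp, blur the rest more heavily as the difference grows) can be sketched as follows; the grading thresholds and level names are assumptions for illustration:

```python
# Difference <= 0: the pixel is no farther than the focusing area's average
# distance, so it stays sharp. Difference > 0: blur in proportion, here graded
# into three assumed levels A < B < C of increasing Gaussian coefficient.

def blur_strength(pixel_distance: float, avg_focus_distance: float) -> float:
    """0.0 means keep sharp; larger values mean heavier blur."""
    diff = pixel_distance - avg_focus_distance
    return max(diff, 0.0)

def grade(strength: float) -> str:
    """Assumed three-level grading of positive differences."""
    if strength == 0.0:
        return "sharp"
    if strength < 1.0:
        return "A"      # lightest Gaussian coefficient
    if strength < 3.0:
        return "B"
    return "C"          # heaviest Gaussian coefficient

avg = 1.5
for d in (1.0, 2.0, 3.5, 6.0):
    print(d, grade(blur_strength(d, avg)))
# prints: 1.0 sharp / 2.0 A / 3.5 B / 6.0 C
```

The thresholds 1.0 and 3.0 are arbitrary; in practice they would be derived from the sorted absolute values, as the level-based grading above describes.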
For the specific descriptions of the other steps in the embodiment described in Fig. 2, refer to the specific descriptions of the image processing method described in Fig. 1; details are not repeated here.
It is specific as follows the following are the device for implementing image processing method described in Fig. 1 or Fig. 2:
Referring to Fig. 3a, it is a schematic structural diagram of a first embodiment of a terminal provided by an embodiment of the present invention. The terminal described in this embodiment comprises: a first determination unit 301, a second determination unit 302, and a processing unit 303, specifically as follows:
the first determination unit 301, configured to determine the focusing area and the non-focusing area of a preview image shot by the camera of the terminal;
the second determination unit 302, configured to determine, using the laser range sensor of the terminal, the distance between the camera and the spatial position indicated by each pixel in the preview image, obtaining N distance values, wherein the number of pixels of the preview image is N and N is a positive integer;
the processing unit 303, configured to, on the premise of keeping the focusing area sharp, perform blurring processing on the non-focusing area determined by the first determination unit 301 according to the N distance values determined by the second determination unit 302.
Optionally, the first determination unit 301 is specifically configured to:
choose a target area in the preview image shot by the camera of the terminal, take the target area as the focusing area, and take the area other than the focusing area as the non-focusing area, wherein the preview image includes multiple isolated areas and the target area is at least one of the multiple isolated areas.
Optionally, the first determination unit 301 is also specifically configured to:
receive a choosing instruction, take the isolated area indicated by the choosing instruction in the preview image shot by the camera of the terminal as the focusing area, and take the area other than the focusing area as the non-focusing area;
Alternatively,
take a specified area in the preview image as the focusing area, and take the area other than the focusing area as the non-focusing area.
Optionally, as shown in Fig. 3b, the first determination unit 301 of the terminal of Fig. 3a includes:
a detection module 3011, configured to perform target detection on the preview image shot by the camera of the terminal;
a first determining module 3012, configured to take the area where the detected target is located as the focusing area.
Further, the detection module 3011 includes:
a binarization module (not marked in the figure), configured to binarize the preview image shot by the camera of the terminal to obtain a binarized preview image;
an extraction module (not marked in the figure), configured to extract the profile of the binarized preview image;
an identification module (not marked in the figure), configured to perform image recognition on the profile so as to identify the target in the profile.
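The binarization and profile-extraction steps performed by these modules can be sketched as follows. The fixed threshold and the 4-neighbour edge test are simplifying assumptions; a real implementation would likely use adaptive thresholding and a proper contour tracer:

```python
import numpy as np

def binarize(gray, threshold=128):
    """Binarize a grayscale preview image: 1 where intensity >= threshold.
    The fixed threshold of 128 is an assumption for illustration."""
    return (np.asarray(gray) >= threshold).astype(np.uint8)

def extract_profile(binary):
    """Mark as profile pixels those whose 4-neighbourhood is not uniform,
    i.e. pixels on the boundary between foreground and background."""
    b = np.pad(binary, 1, mode="edge")
    center = b[1:-1, 1:-1]
    edge = ((b[:-2, 1:-1] != center) | (b[2:, 1:-1] != center) |
            (b[1:-1, :-2] != center) | (b[1:-1, 2:] != center))
    return edge.astype(np.uint8)
```

The extracted profile would then be passed to a recognizer to identify the target within it.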
Optionally, as shown in Fig. 3c, the processing unit 303 of the terminal of Fig. 3a includes:
a second determining module 3031, configured to determine an average distance value according to M distance values, wherein the M distance values are the distances between the camera and all or part of the pixels in the focusing area, and M is a positive integer less than N;
a computing module 3032, configured to calculate the difference between the distance value corresponding to each pixel in the non-focusing area and the average distance value, obtaining multiple differences;
a blurring module 3033, configured to perform blurring processing on the non-focusing area according to the multiple differences.
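The work of modules 3031 and 3032 can be sketched as follows. Representing the preview image's distance values as an array with a boolean focusing-area mask is an assumption for illustration:

```python
import numpy as np

def non_focus_differences(distances, focus_mask):
    """Average the M distance values of the focusing area, then return the
    difference between each non-focusing pixel's distance value and that
    average (positive = farther than the focused subject)."""
    d = np.asarray(distances, dtype=float)
    m = np.asarray(focus_mask, dtype=bool)
    average = d[m].mean()    # average distance over the focusing area
    return d[~m] - average   # one difference per non-focusing pixel
```

The resulting differences would then drive the blurring module, with larger differences producing stronger blur.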
It can be seen that, through the terminal described in the embodiment of the present invention, the focusing area and the non-focusing area of the preview image shot by the camera of the terminal may be determined; the laser range sensor of the terminal may be used to determine the distance between the camera and the spatial position indicated by each pixel in the preview image, obtaining N distance values, wherein the number of pixels of the preview image is N and N is a positive integer; and, on the premise of keeping the focusing area sharp, the non-focusing area may be blurred according to the N distance values. Therefore, once the focusing area and the non-focusing area have been determined, the non-focusing area can be blurred according to the distance values between the terminal and each pixel. This simplifies the operation process by which background blurring is realized in the prior art, and because the background blurring of the non-focusing area is performed according to distance, the resulting background-blurred image is more lifelike.
Referring to Fig. 4, it is a schematic structural diagram of a second embodiment of a terminal provided by an embodiment of the present invention. The terminal described in this embodiment comprises: at least one input device 1000; at least one output device 2000; at least one processor 3000, such as a CPU; and a memory 4000. The input device 1000, output device 2000, processor 3000, and memory 4000 are connected by a bus 5000.
The input device 1000 may specifically be a touch panel, a physical button, a mouse, a fingerprint recognition module, or the like.
The output device 2000 may specifically be a display screen.
The memory 4000 may be a high-speed RAM memory or a non-volatile memory, such as a magnetic disk memory. The memory 4000 is used to store a set of program code, and the input device 1000, output device 2000, and processor 3000 are used to call the program code stored in the memory 4000 to perform the following operations:
The processor 3000 is configured to:
determine the focusing area and the non-focusing area of the preview image shot by the camera of the terminal;
determine, using the laser range sensor of the terminal, the distance between the camera and the spatial position indicated by each pixel in the preview image, obtaining N distance values, wherein the number of pixels of the preview image is N and N is a positive integer;
on the premise of keeping the focusing area sharp, perform blurring processing on the non-focusing area according to the N distance values.
Optionally, the processor 3000 determining the focusing area and the non-focusing area of the preview image shot by the camera of the terminal includes:
choosing a target area in the preview image shot by the camera of the terminal, taking the target area as the focusing area, and taking the area other than the focusing area as the non-focusing area, wherein the preview image includes multiple isolated areas and the target area is at least one of the multiple isolated areas.
Optionally, the processor 3000 choosing the target area in the preview image shot by the camera of the terminal includes:
receiving a choosing instruction;
taking the isolated area indicated by the choosing instruction in the preview image shot by the camera of the terminal as the focusing area, and taking the area other than the focusing area as the non-focusing area;
Alternatively,
taking a specified area in the preview image as the focusing area, and taking the area other than the focusing area as the non-focusing area.
Optionally, the processor 3000 determining the focusing area of the preview image shot by the camera of the terminal includes:
performing target detection on the preview image shot by the camera of the terminal;
taking the area where the detected target is located as the focusing area.
Optionally, the processor 3000 performing target detection on the preview image shot by the camera of the terminal includes:
binarizing the preview image shot by the camera of the terminal to obtain a binarized preview image;
extracting the profile of the binarized preview image;
performing image recognition on the profile so as to identify the target in the profile.
Optionally, the processor 3000 performing blurring processing on the non-focusing area according to the N distance values includes:
determining an average distance value according to M distance values, wherein the M distance values are the distances between the camera and all or part of the pixels in the focusing area, and M is a positive integer less than N;
calculating the difference between the distance value corresponding to each pixel in the non-focusing area and the average distance value, obtaining multiple differences;
performing blurring processing on the non-focusing area according to the multiple differences.
An embodiment of the present invention also provides a computer storage medium, wherein the computer storage medium may store a program which, when executed, includes some or all of the steps of any image processing method recorded in the above method embodiments.
Although the invention has been described herein in conjunction with various embodiments, those skilled in the art, in implementing the claimed invention, will understand and realize other variations of the disclosed embodiments by examining the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other components or steps, and "a" or "an" does not exclude the plural. A single processor or other unit may fulfill several functions enumerated in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Those skilled in the art will understand that embodiments of the present invention may be provided as a method, an apparatus (device), or a computer program product. Therefore, the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk memory, CD-ROM, optical memory, and the like) that contain computer-usable program code. The computer program is stored/distributed on a suitable medium, provided together with other hardware or as a part of the hardware, and may also adopt other distribution forms, such as through the Internet or other wired or wireless telecommunication systems.
The present invention is described with reference to flowcharts and/or block diagrams of methods, apparatuses (devices), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of guiding a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce a manufacture including an instruction apparatus, and the instruction apparatus realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the invention has been described in conjunction with specific features and embodiments thereof, it is clear that various modifications and combinations may be made without departing from the spirit and scope of the invention. Correspondingly, the specification and drawings are only exemplary illustrations of the invention defined by the appended claims and are considered to cover any and all modifications, changes, combinations, or equivalents falling within the scope of the invention. Obviously, those skilled in the art can make various changes and modifications to the invention without departing from the spirit and scope of the invention. Thus, if these modifications and changes of the invention fall within the scope of the claims of the invention and their technical equivalents, the invention is also intended to include them.

Claims (13)

1. An image processing method, characterized by comprising:
determining a focusing area and a non-focusing area of a preview image shot by a camera of a terminal;
determining, using a laser range sensor of the terminal, the distance between the camera and the spatial position indicated by each pixel in the preview image, obtaining N distance values, wherein the number of pixels of the preview image is N and N is a positive integer;
on the premise of keeping the focusing area sharp, performing blurring processing on the non-focusing area according to the N distance values, comprising: determining the mean of all distance values corresponding to the focusing area to obtain a first mean; determining, respectively, the mean of all distance values corresponding to each isolated area in the non-focusing area to obtain multiple second means; then determining the fuzzy coefficient of each isolated area in the non-focusing area according to the first mean and the multiple second means; and performing blurring processing on each isolated area according to its respective fuzzy coefficient.
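The per-isolated-area scheme recited in claim 1 can be sketched as follows. How the fuzzy coefficient is derived from the first and second means is not specified above, so the proportional rule used here is purely an assumption:

```python
import numpy as np

def fuzzy_coefficients(focus_distances, area_distances, scale=0.1):
    """First mean: over the focusing area's distance values.
    Second means: one per isolated area of the non-focusing area.
    Assumed rule: the coefficient grows with how far the area's mean
    lies beyond the focusing area's mean (0 for nearer areas)."""
    first_mean = float(np.mean(focus_distances))
    second_means = [float(np.mean(a)) for a in area_distances]
    return [max(0.0, (m2 - first_mean) * scale) for m2 in second_means]
```

Each isolated area would then be blurred with a strength given by its coefficient, so areas farther behind the focused subject are blurred more.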
2. The method according to claim 1, characterized in that determining the focusing area and the non-focusing area of the preview image shot by the camera of the terminal comprises:
choosing a target area in the preview image shot by the camera of the terminal, taking the target area as the focusing area, and taking the area other than the focusing area as the non-focusing area, wherein the preview image includes multiple isolated areas and the target area is at least one of the multiple isolated areas.
3. The method according to claim 2, characterized in that choosing the target area in the preview image shot by the camera of the terminal comprises:
receiving a choosing instruction;
taking the isolated area indicated by the choosing instruction in the preview image shot by the camera of the terminal as the focusing area, and taking the area other than the focusing area as the non-focusing area;
Alternatively,
taking a specified area in the preview image as the focusing area, and taking the area other than the focusing area as the non-focusing area.
4. The method according to claim 1, characterized in that determining the focusing area of the preview image shot by the camera of the terminal comprises:
performing target detection on the preview image shot by the camera of the terminal;
taking the area where the detected target is located as the focusing area.
5. The method according to claim 4, characterized in that performing target detection on the preview image shot by the camera of the terminal comprises:
binarizing the preview image shot by the camera of the terminal to obtain a binarized preview image;
extracting the profile of the binarized preview image;
performing image recognition on the profile so as to identify the target in the profile.
6. The method according to any one of claims 1 to 5, characterized in that performing blurring processing on the non-focusing area according to the N distance values further comprises:
determining an average distance value according to M distance values, wherein the M distance values are the distances between the camera and all or part of the pixels in the focusing area, and M is a positive integer less than N;
calculating the difference between the distance value corresponding to each pixel in the non-focusing area and the average distance value, obtaining multiple differences;
performing blurring processing on the non-focusing area according to the multiple differences.
7. A terminal, characterized by comprising:
a first determination unit, configured to determine a focusing area and a non-focusing area of a preview image shot by a camera of the terminal;
a second determination unit, configured to determine, using a laser range sensor of the terminal, the distance between the camera and the spatial position indicated by each pixel in the preview image, obtaining N distance values, wherein the number of pixels of the preview image is N and N is a positive integer;
a processing unit, configured to, on the premise of keeping the focusing area sharp, perform blurring processing on the non-focusing area determined by the first determination unit according to the N distance values determined by the second determination unit, comprising: determining the mean of all distance values corresponding to the focusing area to obtain a first mean; determining, respectively, the mean of all distance values corresponding to each isolated area in the non-focusing area to obtain multiple second means; then determining the fuzzy coefficient of each isolated area in the non-focusing area according to the first mean and the multiple second means; and performing blurring processing on each isolated area according to its respective fuzzy coefficient.
8. The terminal according to claim 7, characterized in that the first determination unit is specifically configured to:
choose a target area in the preview image shot by the camera of the terminal, take the target area as the focusing area, and take the area other than the focusing area as the non-focusing area, wherein the preview image includes multiple isolated areas and the target area is at least one of the multiple isolated areas.
9. The terminal according to claim 8, characterized in that the first determination unit is also specifically configured to:
receive a choosing instruction, take the isolated area indicated by the choosing instruction in the preview image shot by the camera of the terminal as the focusing area, and take the area other than the focusing area as the non-focusing area;
Alternatively,
take a specified area in the preview image as the focusing area, and take the area other than the focusing area as the non-focusing area.
10. The terminal according to claim 7, characterized in that the first determination unit comprises:
a detection module, configured to perform target detection on the preview image shot by the camera of the terminal;
a first determining module, configured to take the area where the detected target is located as the focusing area.
11. The terminal according to claim 10, characterized in that the detection module comprises:
a binarization module, configured to binarize the preview image shot by the camera of the terminal to obtain a binarized preview image;
an extraction module, configured to extract the profile of the binarized preview image;
an identification module, configured to perform image recognition on the profile so as to identify the target in the profile.
12. The terminal according to any one of claims 7 to 11, characterized in that the processing unit further comprises:
a second determining module, configured to determine an average distance value according to M distance values, wherein the M distance values are the distances between the camera and all or part of the pixels in the focusing area, and M is a positive integer less than N;
a computing module, configured to calculate the difference between the distance value corresponding to each pixel in the non-focusing area and the average distance value, obtaining multiple differences;
a blurring module, configured to perform blurring processing on the non-focusing area according to the multiple differences.
13. A terminal, characterized by comprising:
a processor and a memory; wherein the processor executes the method according to any one of claims 1 to 6 by calling code or instructions in the memory.
CN201610503757.1A 2016-06-28 2016-06-28 A kind of image processing method and terminal Active CN105933589B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610503757.1A CN105933589B (en) 2016-06-28 2016-06-28 A kind of image processing method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610503757.1A CN105933589B (en) 2016-06-28 2016-06-28 A kind of image processing method and terminal

Publications (2)

Publication Number Publication Date
CN105933589A CN105933589A (en) 2016-09-07
CN105933589B true CN105933589B (en) 2019-05-28

Family

ID=56828711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610503757.1A Active CN105933589B (en) 2016-06-28 2016-06-28 A kind of image processing method and terminal

Country Status (1)

Country Link
CN (1) CN105933589B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110881103A (en) * 2019-09-19 2020-03-13 Oppo广东移动通信有限公司 Focusing control method and device, electronic equipment and computer readable storage medium

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106485790A (en) * 2016-09-30 2017-03-08 珠海市魅族科技有限公司 Method and device that a kind of picture shows
CN106454123B (en) * 2016-11-25 2019-02-22 盐城丝凯文化传播有限公司 A kind of method and mobile terminal of focusing of taking pictures
CN106775238A (en) * 2016-12-14 2017-05-31 深圳市金立通信设备有限公司 A kind of photographic method and terminal
CN106657782B (en) * 2016-12-21 2020-02-18 努比亚技术有限公司 Picture processing method and terminal
CN106993091B (en) * 2017-03-29 2020-05-12 维沃移动通信有限公司 Image blurring method and mobile terminal
CN107426493A (en) * 2017-05-23 2017-12-01 深圳市金立通信设备有限公司 A kind of image pickup method and terminal for blurring background
CN108933890A (en) * 2017-05-24 2018-12-04 中兴通讯股份有限公司 A kind of background-blurring method, equipment and terminal
CN107395965B (en) * 2017-07-14 2019-11-29 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN107277372B (en) * 2017-07-27 2021-04-23 Oppo广东移动通信有限公司 Focusing method, focusing device, computer readable storage medium and mobile terminal
CN107295262B (en) * 2017-07-28 2021-03-26 努比亚技术有限公司 Image processing method, mobile terminal and computer storage medium
CN107592466B (en) * 2017-10-13 2020-04-24 维沃移动通信有限公司 Photographing method and mobile terminal
CN108174085A (en) * 2017-12-19 2018-06-15 信利光电股份有限公司 A kind of image pickup method of multi-cam, filming apparatus, mobile terminal and readable storage medium storing program for executing
CN109696788B (en) * 2019-01-08 2021-12-14 武汉精立电子技术有限公司 Quick automatic focusing method based on display panel
CN113126111B (en) * 2019-12-30 2024-02-09 Oppo广东移动通信有限公司 Time-of-flight module and electronic device
CN111182211B (en) * 2019-12-31 2021-09-24 维沃移动通信有限公司 Shooting method, image processing method and electronic equipment
CN111246092B (en) * 2020-01-16 2021-07-20 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN113138387B (en) * 2020-01-17 2024-03-08 北京小米移动软件有限公司 Image acquisition method and device, mobile terminal and storage medium
CN112733346B (en) * 2020-12-31 2022-08-09 博迈科海洋工程股份有限公司 Method for planning delightful area in electrical operation room

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101764925A (en) * 2008-12-25 2010-06-30 华晶科技股份有限公司 Simulation method for shallow field depth of digital image
CN101933040A (en) * 2007-06-06 2010-12-29 索尼株式会社 Image processing device, image processing method, and image processing program
CN105025226A (en) * 2015-07-07 2015-11-04 广东欧珀移动通信有限公司 Shooting control method and user terminal
CN105227838A (en) * 2015-09-28 2016-01-06 广东欧珀移动通信有限公司 A kind of image processing method and mobile terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101472064A (en) * 2007-12-25 2009-07-01 鸿富锦精密工业(深圳)有限公司 Filming system and method for processing scene depth



Also Published As

Publication number Publication date
CN105933589A (en) 2016-09-07

Similar Documents

Publication Publication Date Title
CN105933589B (en) A kind of image processing method and terminal
CN109583285B (en) Object recognition method
US8879847B2 (en) Image processing device, method of controlling image processing device, and program for enabling computer to execute same method
EP3496383A1 (en) Image processing method, apparatus and device
US10304164B2 (en) Image processing apparatus, image processing method, and storage medium for performing lighting processing for image data
CN105611275B (en) The multi-camera for executing electronic device captures the method and its equipment of control
JP5464244B2 (en) Image processing apparatus, program, and image processing system
KR101303877B1 (en) Method and apparatus for serving prefer color conversion of skin color applying face detection and skin area detection
JP6553624B2 (en) Measurement equipment and system
US20140037135A1 (en) Context-driven adjustment of camera parameters
CN103905737B (en) Backlighting detecting and device
CN106991654A (en) Human body beautification method and apparatus and electronic installation based on depth
JP2011508289A (en) Binocular detection and tracking method and apparatus
US20170076428A1 (en) Information processing apparatus
US9280209B2 (en) Method for generating 3D coordinates and mobile terminal for generating 3D coordinates
CN113052066B (en) Multi-mode fusion method based on multi-view and image segmentation in three-dimensional target detection
CN111598065A (en) Depth image acquisition method, living body identification method, apparatus, circuit, and medium
CN108200335A (en) Photographic method, terminal and computer readable storage medium based on dual camera
CN110213491B (en) Focusing method, device and storage medium
JP6157165B2 (en) Gaze detection device and imaging device
CN112969023A (en) Image capturing method, apparatus, storage medium, and computer program product
CN109816628A (en) Face evaluation method and Related product
CN106101542B (en) A kind of image processing method and terminal
CN106874835B (en) A kind of image processing method and device
CN109785439A (en) Human face sketch image generating method and Related product

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.

Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant before: Guangdong OPPO Mobile Communications Co., Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant