CN107948516A - Image processing method, device and mobile terminal - Google Patents
Image processing method, device and mobile terminal Download PDF Info
- Publication number
- CN107948516A CN107948516A CN201711241258.0A CN201711241258A CN107948516A CN 107948516 A CN107948516 A CN 107948516A CN 201711241258 A CN201711241258 A CN 201711241258A CN 107948516 A CN107948516 A CN 107948516A
- Authority
- CN
- China
- Prior art keywords
- image
- camera device
- distance
- target scene
- main object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2621—Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
The invention discloses an image processing method, a device and a mobile terminal, applied to a mobile terminal comprising a first camera device and a second camera device. The method comprises: obtaining a first image of a target scene captured by the first camera device; determining the shooting main object in the first image; determining the distance between each region of the target scene and a preset reference according to the centre distance and focal length of the first camera device and the second camera device, together with the first image and a second image, where the second image is an image of the target scene captured by the second camera device; and, according to the relation between a first distance and a second distance, blurring the background area of the shooting main object in the first image to obtain the processed image. In this way, a user can use the mobile terminal to capture images, dynamic images and videos in which the main object is clear and the background is blurred.
Description
Technical field
The present invention relates to the field of terminals, and in particular to an image processing method, a device and a mobile terminal.
Background technology
With the continuous development of mobile terminal technology, the functions of mobile terminals have become ever more powerful. Taking photos and recording video with a mobile terminal can be seen everywhere, and producing dynamic images with a mobile terminal is also commonplace.
With rising living standards, people's pursuit of beauty in art has also grown ever stronger. People hope to use a mobile terminal to capture images, dynamic images or videos with a greater sense of aesthetics and artistry; for example, they wish to capture, as an optical camera would, images in which the main object is clear, the background is blurred and the picture is rich in layers.
However, for ease of carrying, existing mobile terminals are usually made small. Limited by this size constraint, the cameras in existing mobile terminals are mostly digital camera heads that do not possess an optical zoom function, and therefore cannot capture images, dynamic images or videos with a background-blurring effect the way an optical camera can.
Summary of the invention
Embodiments of the present invention provide an image processing method, a device and a mobile terminal, to solve the problem that existing mobile terminals cannot capture images, dynamic images or videos with a background-blurring effect.
To solve the above technical problem, the present invention is realized as follows:
In a first aspect, an image processing method is provided, applied to a mobile terminal comprising a first camera device and a second camera device, the method comprising:
obtaining a first image of a target scene captured by the first camera device;
determining the shooting main object in the first image;
determining the distance between each region of the target scene and a preset reference according to the centre distance and focal length of the first camera device and the second camera device, together with the first image and a second image; wherein the second image is an image of the target scene captured by the second camera device, and the preset reference is the line connecting the centers of the first camera device and the second camera device;
blurring the background area of the shooting main object in the first image according to the relation between a first distance and a second distance, to obtain the processed image;
wherein the first distance is the distance between the background area of the shooting main object in the target scene and the preset reference, and the second distance is the distance between the shooting main object in the target scene and the preset reference.
In a second aspect, an image processing device is provided, applied to a mobile terminal comprising a first camera device and a second camera device, the device comprising:
a first acquisition module, configured to obtain a first image of a target scene captured by the first camera device;
a main body determining module, configured to determine the shooting main object in the first image;
a distance determining module, configured to determine the distance between each region of the target scene and a preset reference according to the centre distance and focal length of the first camera device and the second camera device, together with the first image and a second image; wherein the second image is an image of the target scene captured by the second camera device, and the preset reference is the line connecting the centers of the first camera device and the second camera device;
a background blurring processing module, configured to blur the background area of the shooting main object in the first image according to the relation between a first distance and a second distance, to obtain the processed image;
wherein the first distance is the distance between the background area of the shooting main object in the target scene and the preset reference, and the second distance is the distance between the shooting main object in the target scene and the preset reference.
In a third aspect, a mobile terminal is provided, comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program implementing the steps of the method of the first aspect when executed by the processor.
In a fourth aspect, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the method of the first aspect.
In embodiments of the present invention, a first image of a target scene captured by the first camera device is obtained; the shooting main object in the first image is determined; the distance between each region of the target scene and a preset reference is determined according to the centre distance and focal length of the first camera device and the second camera device, together with the first image and a second image, where the second image is an image of the target scene captured by the second camera device and the preset reference is the line connecting the centers of the two camera devices; and the background area of the shooting main object in the first image is blurred according to the relation between a first distance and a second distance, where the first distance is the distance between the background area of the shooting main object in the target scene and the preset reference, and the second distance is the distance between the shooting main object in the target scene and the preset reference. Therefore, relative to the prior art, a user can use a mobile terminal to capture images, dynamic images or videos in which the main object is clear and the background is blurred, rich in layers, aesthetics and artistry, improving the user's shooting experience.
Brief description of the drawings
The accompanying drawings described herein are provided for further understanding of the present invention and form a part of it; the schematic embodiments of the present invention and their description are used to explain the invention and do not constitute an improper limitation of it. In the drawings:
Figure 1A is a flow diagram of one embodiment of an image processing method provided by an embodiment of the present invention;
Figure 1B is a principle schematic of the image processing method of the embodiment shown in Figure 1A;
Fig. 1C is a schematic diagram of one application effect of the image processing method of the embodiment shown in Figure 1A;
Fig. 1D is a schematic diagram of another application effect of the image processing method of the embodiment shown in Figure 1A;
Fig. 2A is a flow diagram of another embodiment of an image processing method provided by an embodiment of the present invention;
Fig. 2B is a schematic diagram of an application effect of the image processing method of the embodiment shown in Fig. 2A;
Fig. 3 is a schematic structural diagram of an image processing device provided by an embodiment of the present invention;
Fig. 4 is another schematic structural diagram of an image processing device provided by an embodiment of the present invention;
Fig. 5 is a hardware structure diagram of a mobile terminal provided by an embodiment of the present invention.
Embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present invention.
To solve the problem that existing mobile terminals cannot capture images, dynamic images or videos with a background-blurring effect, the present invention provides an image processing method. The executing agent of the method may be, but is not limited to, a mobile phone, an iPad, a wearable device or any other mobile terminal that can be configured to perform the method provided by the embodiments of the present invention; alternatively, the executing agent may be a client capable of performing the method.
For ease of description, the embodiments of the method are introduced below taking a mobile phone capable of performing the method as the executing agent. It should be understood that a mobile phone as the executing agent is an exemplary illustration and should not be interpreted as a limitation of the method.
As shown in Figure 1A, an image processing method provided by an embodiment of the present invention is applied to a mobile terminal comprising a first camera device and a second camera device. Taking a mobile phone as an example, the first camera device and the second camera device can be two cameras arranged in the phone. The method mainly comprises the following steps:
Step 101: obtain a first image of a target scene captured by the first camera device.
The target scene can be the actual scene for which the user wants to obtain a first image with a background-blurring effect.
Step 102: determine the shooting main object in the first image.
The shooting main object can be the thing in the target scene the user wants to show emphatically, for example, a person or a tree in the target scene.
In one embodiment, step 102 can specifically include: displaying the first image on the display screen of the mobile terminal; receiving an operation in which the user selects the shooting main object according to the content shown on the display screen; and determining the shooting main object in the first image according to the operation. That is, the shooting main object in the first image can be quickly determined by the user's selection.
In another embodiment, step 102 can automatically identify the shooting main object in the first image through an existing main-object recognition method. For example, if the thing the user wants to show emphatically in the target scene is a person, the shooting main object can be identified from the first image by an existing face or human body recognition method.
Of course, the method of determining the shooting main object in the first image is not limited to the above two; other existing methods of identifying a main object in an image are also applicable to the embodiments of the present invention.
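The patent gives no code for the user-selection embodiment. As a minimal sketch, a tap on the display screen could be mapped to one of several candidate regions (the function name and the (x, y, w, h) box representation are assumptions for illustration, not part of the patent):

```python
def select_main_object(tap_xy, candidate_regions):
    """Sketch of the user-selection embodiment: the first image is shown on
    the display screen, the user taps it, and the candidate region that
    contains the tap becomes the shooting main object.
    Regions are hypothetical (x, y, w, h) boxes in image coordinates."""
    tx, ty = tap_xy
    for (x, y, w, h) in candidate_regions:
        if x <= tx < x + w and y <= ty < y + h:
            return (x, y, w, h)
    return None  # tap fell outside every candidate; keep waiting for input
```

In practice the candidate regions would come from whatever existing face or human body recognition method the terminal uses; the sketch only shows how a tap resolves to a region.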
Step 103: determine the distance between each region of the target scene and a preset reference according to the centre distance and focal length of the first camera device and the second camera device, together with the first image and a second image.
Here the second image is an image of the target scene captured by the second camera device. It should be understood that when the target scene is static, the moments at which the first and second camera devices capture the first and second images may be the same or different; when the target scene is dynamic, those moments may be the same, or may differ by less than a preset time interval.
In practical applications, the first camera device can be opened first to capture the target scene and display it on the mobile terminal; after the user selects and determines the shooting main object, the second camera device is opened, and the first and second images are then obtained with the two camera devices simultaneously.
The preset reference is the line connecting the centers of the first camera device and the second camera device, where the center of each camera device can specifically be the optical center of its lens. Correspondingly, the centre distance of the first and second camera devices can be the distance between the optical centers of their lenses.
In one embodiment, step 103 can specifically include: determining a first coordinate of a region of the target scene in the first image and a second coordinate of that region in the second image; and determining the distance between the region of the target scene and the preset reference according to the centre distance and focal length of the first and second camera devices, together with the first coordinate and the second coordinate.
More specifically, assuming the focal lengths of the first camera device and the second camera device are equal, this determination can include: calculating the distance between the region of the target scene and the preset reference according to the centre distance and focal length of the two camera devices, the first coordinate, the second coordinate and a preset formula.
The preset formula can specifically be:
z = fB / (X_R − X_T)
where z is the distance between the region of the target scene and the preset reference, f is the focal length of the first and second camera devices, B is the centre distance of the first and second camera devices, X_R is the first coordinate and X_T is the second coordinate.
The derivation of the above preset formula is illustrated with reference to Figure 1B.
As shown in Figure 1B, assume: O_R and O_T are the optical centers of the lenses of the first and second camera devices respectively; P is a region in the target scene; z is the distance between P and the preset reference (also called depth); P′ is the imaging point of P on the imaging plane of the first camera device and P″ is the imaging point of P on the imaging plane of the second camera device, i.e. P′ is the imaging point of P in the first image and P″ is the imaging point of P in the second image; f is the focal length of the two camera devices; B is their centre distance; X_R is the coordinate of P′ in the first image (the first coordinate); X_T is the coordinate of P″ in the second image (the second coordinate); the distance from P′ to P″ is dis; and the width of each of the first image and the second image is d.
Then:
dis = B − (X_R − X_T)
From similar triangles:
(z − f) / z = dis / B = (B − (X_R − X_T)) / B
which simplifies to:
z = fB / (X_R − X_T)
It can be seen that, since f and B are known constants, and the first coordinate X_R and the second coordinate X_T reflect the parallax of region P between the first camera device and the second camera device, the distance z between a region of the target scene and the preset reference can be calculated as long as that parallax has been calculated.
It should be noted that, since in practical applications the target scene is represented in the first or second image by pixels, different regions of the target scene correspond to different pixels in the first or second image, and the distances between different regions of the target scene and the preset reference are also likely to be different. Therefore, determining the distance between a region of the target scene and the preset reference can specifically mean determining, for each pixel in the first or second image, the distance between the corresponding region of the target scene and the preset reference.
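The per-pixel distance determination of step 103 follows directly from the preset formula z = fB / (X_R − X_T). A minimal sketch, with purely hypothetical numeric values chosen only to illustrate the calculation (the patent does not specify units or magnitudes):

```python
def depth_from_disparity(x_r, x_t, f, b):
    """z = f * B / (X_R - X_T): distance between a region of the target
    scene and the preset reference (the line through the optical centers
    of the two camera lenses). x_r, x_t are the first and second
    coordinates; f is the shared focal length; b is the centre distance."""
    disparity = x_r - x_t
    if disparity <= 0:
        raise ValueError("region must have positive disparity")
    return f * b / disparity

# Hypothetical example: f = 3.5 mm, B = 12 mm, disparity 0.021 mm on the
# sensor, giving z = 3.5 * 12 / 0.021, i.e. about 2000 mm (2 m).
z = depth_from_disparity(x_r=0.321, x_t=0.300, f=3.5, b=12.0)
```

Run once per pixel of the first (or second) image, this yields the per-region depth map the note above describes.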
Step 104: blur the background area of the shooting main object in the first image according to the relation between a first distance and a second distance, to obtain the processed image.
Here the first distance is the distance between the background area of the shooting main object in the target scene and the preset reference, and the second distance is the distance between the shooting main object in the target scene and the preset reference. The background area can be the region of the target scene other than the region of the shooting main object.
Since the background area of the shooting main object is usually made up of multiple pixels in the first or second image, corresponding to multiple first or second coordinates, the first distance can specifically be, for each background pixel in the first image (or second image), the distance between the corresponding region of the target scene and the preset reference. A background pixel is an image point formed in the first or second image by the background area of the target scene.
Likewise, since the shooting main object is generally also made up of multiple pixels in the first or second image, corresponding to multiple first or second coordinates, the distance between the shooting main object and the preset reference calculated as in step 103 is probably a range of values. In one embodiment of the present invention, for convenience of calculation, the distance between the center of the shooting main object in the target scene and the preset reference can be taken as the second distance. Specifically, in one case, the center of the shooting main object can be the geometric center in the depth (z) direction, and the corresponding second distance can be the average of the maximum and minimum distances between the shooting main object and the preset reference; in another case, the center can be the mean center in the depth direction, and the corresponding second distance can be the average of the distances between each region of the shooting main object and the preset reference.
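The two ways of taking the second distance described above (geometric center versus mean center in the depth direction) can be sketched as follows; the function name and the input format (a list of per-region subject depths from step 103) are assumptions for illustration:

```python
def second_distance(subject_depths, mode="geometric"):
    """Distance between the center of the shooting main object and the
    preset reference, per the two cases described in the text.
    subject_depths: per-region depths z of the shooting main object."""
    if mode == "geometric":
        # geometric center in the depth (z) direction:
        # average of the maximum and minimum subject distances
        return (max(subject_depths) + min(subject_depths)) / 2
    if mode == "mean":
        # mean center in the depth direction:
        # average of the distances of each region of the subject
        return sum(subject_depths) / len(subject_depths)
    raise ValueError(mode)
```

Either value then serves as the single second distance used in the sub-steps of step 104.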
In one specific implementation, step 104 can include the following sub-steps:
Sub-step 1: calculate the absolute value of the difference between the first distance and the second distance.
Specifically, for each background pixel in the first image, calculate the first distance of the corresponding region of the target scene and the absolute value of its difference from the second distance.
Sub-step 2: determine as the target area the background area of the shooting main object in the target scene for which the absolute value is greater than a preset value.
That is, the background area located before or behind the shooting main object in the target scene, whose distance from the shooting main object is greater than the preset value, is determined as the target area.
The preset value can be set according to the thickness of the shooting main object in the depth (z) direction. For example, if the shooting main object is an adult, whose thickness is usually between 20 and 30 cm, the preset value could be set to a value between 20 and 30 cm.
Sub-step 3: blur the target area in the first image.
This can specifically be blurring the background pixels corresponding to the target area in the first image. So-called virtualization processing can be understood as fuzzy (blurring) processing, to obtain a first image in which the shooting main object is clear and the background is blurred with a sense of aesthetics and artistry.
In a more specific embodiment, sub-step 3 can include: blurring the pixels of the target area in the first image according to the size of the first distance corresponding to each pixel in the target area. That is, the degree of blurring differs for different background pixels according to their first distances: under normal circumstances, the larger the first distance of a background pixel, the greater the degree of blurring, and the smaller the first distance, the lesser the degree of blurring. It should be understood that blurring different background pixels to different degrees according to their first distances produces a first image in which the shooting main object is clear and the background blur is rich in levels, making the processed first image more aesthetic and artistic and further improving the user's shooting experience.
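Sub-steps 1 to 3, together with the distance-dependent degree of blurring, can be sketched as below. A simple box blur stands in for whatever virtualization filter an implementation would actually use, and the radius rule is only one plausible way of making blur grow with the first distance; both are assumptions, not the patent's method:

```python
def blur_background(image, depth, z2, preset_value, max_radius=3):
    """Sketch of sub-steps 1-3: pixels whose |first distance - second
    distance| exceeds the preset value form the target area and are
    box-blurred, with a radius that grows with the first distance.
    image: 2-D grid of intensities; depth: per-pixel first distances;
    z2: the second distance; preset_value: the subject-thickness threshold."""
    h, w = len(image), len(image[0])
    z_max = max(max(row) for row in depth)
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if abs(depth[y][x] - z2) <= preset_value:
                continue  # within the subject's depth band: keep sharp
            # larger first distance -> larger radius (stronger virtualization)
            r = max(1, round(max_radius * depth[y][x] / z_max))
            acc, n = 0, 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += image[yy][xx]
                        n += 1
            out[y][x] = acc / n  # box average of the neighborhood
    return out
```

The thresholding in the inner `if` is exactly sub-steps 1 and 2; the radius rule realizes the "larger first distance, greater degree of blurring" behavior of the more specific embodiment.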
Fig. 1 C and Fig. 1 D show a kind of application for image processing method that the embodiment shown in Figure 1A of the present invention provides
Effect diagram.As shown in Figure 1 C, the background area of the shooting main object (woman in Fig. 1 C) in image is blurred
After processing, can obtain that shooting main object is clear and background it is illusory it is fuzzy, create an aesthetic feeling image with artistic feeling.Likewise,
As shown in figure iD, progress is drawn near in various degree to the background area of the shooting main object (man in Fig. 1 D) in image
Virtualization processing after, can obtain that shooting main object is clear and that background draws near is that fog-level gradually weakens, rich in level
Sense and the image of artistic feeling.
With the image processing method provided by the embodiments of the present invention, the first image and the second image of the target scene can be obtained using the first and second camera devices in the mobile terminal; according to the centre distance and focal length of the two camera devices together with the first and second images, the second distance between the shooting main object in the target scene and the preset reference, and the first distance between the background area of the shooting main object in the target scene and the preset reference, can be determined; finally, according to the relation between the first distance and the second distance, the background area of the shooting main object in the first image is blurred, obtaining a background-blurred first image. Therefore, relative to the prior art, a user can use a mobile terminal to capture images in which the main object is clear and the background is blurred, rich in layers, aesthetics and artistry, improving the user's shooting experience.
As shown in Fig. 2A, in another embodiment of the present invention, after step 103 and before obtaining the processed image, the image processing method provided by an embodiment of the present invention can also include:
Step 105: adjust a preset index of the shooting main object in the first image according to the size of the second distance, to obtain the processed image.
The preset index can include one or more of the following indexes: light intensity, tone, hue, saturation, contrast and brightness.
Specifically, the preset index of the subject in the first image can be adjusted according to the size of the second distance corresponding to each pixel of the shooting main object in the first image.
Taking light intensity as an example, under normal circumstances, the smaller the second distance corresponding to a pixel of the shooting main object, the stronger the light intensity.
Still taking light intensity as an example, some or all of the pixels of the shooting main object can be brightened or dimmed according to their differing second distances, realizing rich visual effects similar to simulated lighting, stage lighting, rim light, monochrome processing, outlining and shadow. Specifically, as shown in Fig. 2B, by brightening the pixels of the shooting main object (the woman in Fig. 2B) according to their differing second distances, and blurring the background area of the shooting main object, an image with a better visual effect can be obtained.
With the image processing method provided by the embodiments of the present invention, not only can the background area of the shooting main object in the first image be blurred according to the relation between the first distance and the second distance, the preset index of the shooting main object in the first image can also be adjusted according to the second distance. Therefore, a mobile terminal can be used to capture images with richer visual effects and more artistic value, further improving the user's shooting experience.
Alternatively, the image processing method provided in the embodiment of the present invention can be used to process a picture, a frame of a dynamic image, a frame of a video, and so on; that is, the first image may include a picture, a frame of a dynamic image, a frame of a video, etc. This fulfils the user's purpose of shooting, with the mobile terminal, images, dynamic images and videos in which the shot main object is clear and the background is blurred, rich in stereoscopic, aesthetic and artistic feeling, and improves the user's shooting experience.
In practical applications, the general process of creating an aesthetic, artistic dynamic image may be as follows: first, the first camera device is opened to capture the target scene and display it on the mobile terminal; after the user taps to determine the shot main object, the second camera device is opened; then a group of first images and a group of second images are obtained by continuous shooting with the first camera device and the second camera device simultaneously; at least one first image and the second image shot at the same moment are processed in real time according to the image processing method provided in the embodiment of the present invention; finally, the processed first images are spliced to obtain a dynamic image in which the shot main object is clear and the background area is blurred.
In practical applications, the general process of creating an aesthetic, artistic video may be as follows: first, the first camera device is opened to capture the target scene and display it on the mobile terminal; after the user selects and determines the shot main object, the second camera device is opened; the target scene is shot with the first camera device and the second camera device to obtain a first video and a second video; then one or more first-image frames in the first video are processed in real time, together with the second-image frames shot at the same moments in the second video, according to the image processing method provided in the embodiment of the present invention; finally a first video is obtained in which the shot main object is clear and the background area is blurred.
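The per-frame video flow above can be sketched as a simple loop over temporally matched frame pairs. All names here are hypothetical stand-ins; `process_pair` is a placeholder for the depth-based blurring this document describes, not a real camera or codec API.

```python
def process_pair(first_frame, second_frame):
    # Placeholder: in the real method this computes per-pixel depth from the
    # frame pair and blurs the background of the first frame accordingly.
    return {"frame": first_frame, "blurred_bg": True}

def make_blurred_video(first_video, second_video):
    """Process frame pairs shot at (approximately) the same moment by the
    two camera devices and reassemble them into the output first video."""
    out = []
    for f1, f2 in zip(first_video, second_video):  # pairs matched by index
        out.append(process_pair(f1, f2))
    return out

clip = make_blurred_video(["frame0", "frame1"], ["frame0'", "frame1'"])
```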
It can be understood that the second image is an image of the target scene collected by the second camera device while the first camera device collects the first image. It can also be understood that "simultaneously" here is not absolute: a certain error may be allowed between the moments at which the first camera device and the second camera device collect the first image and the second image.
Corresponding to the above method embodiment, an embodiment of the present invention further provides an image processing apparatus, which is described below.
As shown in Figure 3, an image processing apparatus provided in an embodiment of the present invention is applied to a mobile terminal, the mobile terminal includes a first camera device and a second camera device, and the apparatus includes:
First acquisition module 301, configured to obtain the first image collected by the first camera device for the target scene.
The target scene may be the actual scene for which the user wants to obtain a first image with a background-blurring effect.
Main body determining module 302, configured to determine the shot main object in the first image.
The shot main object may be a thing in the target scene that the user wants to show emphatically, for example, a person or a tree in the target scene.
In one embodiment, the main body determining module 302 may include a display submodule, a receiving submodule and a determination submodule.
The display submodule is configured to display the first image on the display screen of the mobile terminal.
The receiving submodule is configured to receive the user's operation of selecting the shot main object.
The determination submodule is configured to determine the shot main object in the first image according to the operation.
In another embodiment, the main body determining module 302 may be specifically configured to automatically identify the shot main object in the first image by an existing shot-main-object recognition method. For example, if the thing the user wants to show emphatically in the target scene is a person, the shot main object can be identified in the first image by an existing face or human-body recognition method.
Of course, the method of determining the shot main object in the first image is not limited to the above two; other existing methods of identifying the shot main object in an image are also applicable to the embodiment of the present invention.
Distance determining module 303, configured to determine the distance between a region of the target scene and a preset reference according to the centre distance and focal lengths of the first camera device and the second camera device, the first image, and the second image;
wherein the second image is an image of the target scene collected by the second camera device, and the preset reference is the line connecting the centers of the first camera device and the second camera device. Here, the centers of the first camera device and the second camera device may specifically be the optical centers of the lenses of the first camera device and the second camera device. Correspondingly, the centre distance of the first camera device and the second camera device may be the distance between the optical center of the lens of the first camera device and the optical center of the lens of the second camera device.
In one embodiment, the distance determining module 303 may specifically include a coordinate determination submodule and a distance determination submodule.
The coordinate determination submodule is configured to determine a first coordinate of the region of the target scene in the first image, and a second coordinate of the region of the target scene in the second image;
The distance determination submodule is configured to determine the distance between the region of the target scene and the preset reference according to the centre distance and focal lengths of the first camera device and the second camera device, and the first coordinate and the second coordinate.
In a more specific embodiment, assuming the focal lengths of the first camera device and the second camera device are equal, the distance determination submodule is specifically configured to calculate the distance between the region of the target scene and the preset reference according to the centre distance and focal length of the first camera device and the second camera device, the first coordinate, the second coordinate, and a preset formula.
Wherein, the preset formula is:
z = f·B / (X_R − X_T)
wherein z is the distance between the region of the target scene and the preset reference, f is the focal length of the first camera device and the second camera device, B is the centre distance of the first camera device and the second camera device, X_R is the first coordinate, and X_T is the second coordinate.
It should be noted that, in practical applications, the target scene is represented by a number of pixels in the first image or the second image: different regions of the target scene correspond to different pixels in the first image or the second image, and the distances between different regions of the target scene and the preset reference may also differ. Therefore, determining the distance between a region of the target scene and the preset reference may specifically be determining, for each pixel in the first image or the second image, the distance between the corresponding region of the target scene and the preset reference.
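The preset formula can be expressed directly in code. This is a minimal sketch under the stated assumption of equal focal lengths; the function name and units are illustrative (f in pixels, the baseline B in metres, coordinates in pixels, so z comes out in metres).

```python
def depth_from_disparity(f, baseline, x_r, x_t):
    """Distance z between a scene region and the camera baseline:
    z = f * B / (X_R - X_T), where X_R and X_T are the horizontal image
    coordinates of the same scene point in the first and second image."""
    disparity = x_r - x_t
    if disparity == 0:
        return float("inf")  # zero disparity: point at infinity
    return f * baseline / disparity

# Focal length 1000 px, 2 cm baseline, 40 px disparity -> z = 0.5 m.
z = depth_from_disparity(1000.0, 0.02, 140.0, 100.0)
```

Applied per pixel, as the note above describes, this yields a depth value for every matched pixel pair in the two images.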
Background blurring processing module 304, configured to blur the background area of the shot main object in the first image according to the relation between the first distance and the second distance, to obtain the processed image.
Wherein, the first distance is the distance between the background area of the shot main object in the target scene and the preset reference; the second distance is the distance between the shot main object in the target scene and the preset reference. The background area may be the area of the target scene other than the area where the shot main object is located.
Since the background area of the shot main object is usually composed of multiple pixels in the first image or the second image, corresponding to multiple first coordinates or second coordinates, the first distance may specifically be, for each background pixel of the first image (or the second image), the distance between the corresponding region of the target scene and the preset reference. Here a background pixel is an image point formed in the first image or the second image by the background area of the target scene.
Likewise, since the shot main object is also generally composed of multiple pixels in the first image or the second image, corresponding to multiple first coordinates or second coordinates, the distance between the shot main object and the preset reference calculated by the distance determining module 303 is probably a range. In one embodiment of the present invention, for convenience of calculation, the distance between the center of the shot main object in the target scene and the preset reference may be taken as the second distance. Specifically, in one case, the center of the shot main object may be its geometric center in the depth (z) direction, and the corresponding second distance may be the average of the maximum and minimum distances between the shot main object and the preset reference; in another case, the center of the shot main object may be its mean center in the depth (z) direction, and the corresponding second distance may be the average of the distances between each point of the shot main object and the preset reference.
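The two ways of collapsing the subject's depth range into one second distance can be sketched as follows; the function names are illustrative, and the inputs are the per-pixel distances of the shot main object.

```python
def second_distance_geometric(subject_depths):
    """Geometric center in depth: average of max and min subject distance."""
    return (max(subject_depths) + min(subject_depths)) / 2.0

def second_distance_mean(subject_depths):
    """Mean center in depth: average of all subject-pixel distances."""
    return sum(subject_depths) / len(subject_depths)

depths = [1.8, 2.0, 2.6]           # subject pixels between 1.8 m and 2.6 m
d_geo = second_distance_geometric(depths)   # (2.6 + 1.8) / 2 = 2.2
d_avg = second_distance_mean(depths)
```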
In one embodiment, the background blurring processing module 304 may specifically include a calculating submodule, a target area determination submodule and a blurring processing submodule.
The calculating submodule is configured to calculate the absolute value of the difference between the first distance and the second distance; specifically, for each background pixel of the first image, the absolute value of the difference between the corresponding first distance and the second distance is calculated.
The target area determination submodule is configured to determine, as the target area, the background area of the shot main object in the target scene whose absolute value is greater than a preset value.
That is, the background areas located in front of or behind the shot main object whose distance to the shot main object is greater than the preset value are determined as the target area.
The preset value may be set according to the thickness of the shot main object in the depth (z) direction.
The blurring processing submodule is configured to blur the target area in the first image.
Specifically, the background pixels corresponding to the target area in the first image may be blurred. This so-called blurring (virtualization) processing can be understood as fuzzy processing, so as to obtain a first image in which the shot main object is clear and the background is blurred, with aesthetic and artistic feeling.
In a more specific embodiment, the blurring processing submodule is specifically configured to blur each pixel of the target area in the first image to a degree corresponding to the magnitude of its first distance. That is, background pixels with different first distances are blurred to different degrees: under normal circumstances, the larger the first distance corresponding to a background pixel, the larger the blurring degree, and conversely, the smaller the blurring degree. It can be appreciated that blurring different background pixels to different degrees according to their first distances yields a first image in which the shot main object is clear and the background is blurred with rich gradations, so that the processed first image has more aesthetic and artistic feeling and can better meet the needs of users.
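The two-step scheme above — select the target area by |first distance − second distance| > preset value, then blur farther pixels more strongly — can be sketched as a per-pixel blur plan. The radius-per-metre mapping is an illustrative assumption; a real implementation would feed these radii to an image blur filter.

```python
def blur_plan(first_distances, second_distance, preset, radius_per_meter=2.0):
    """Return a blur radius per background pixel: 0 keeps the pixel sharp
    (main object or the near band around it); a larger radius means a
    stronger virtualization for farther background pixels."""
    plan = []
    for d1 in first_distances:
        if abs(d1 - second_distance) <= preset:
            plan.append(0)                           # inside the sharp band
        else:
            plan.append(int(d1 * radius_per_meter))  # farther -> blurrier
    return plan

# Subject at 2 m, keep +/-0.3 m sharp; background at 5 m and 9 m is blurred,
# with the 9 m pixels blurred more strongly than the 5 m pixels.
radii = blur_plan([2.0, 2.2, 5.0, 9.0], 2.0, 0.3)
```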
According to the image processing apparatus provided in the embodiment of the present invention, the first image and the second image obtained by shooting the target scene with the first camera device and the second camera device of the mobile terminal can be used; the second distance between the shot main object in the target scene and the preset reference, and the first distance between the background area of the shot main object in the target scene and the preset reference, can be determined according to the centre distance and focal lengths of the first camera device and the second camera device, and the first image and the second image; finally, the background area of the shot main object in the first image is blurred according to the relation between the first distance and the second distance, and a background-blurred first image is obtained. Therefore, relative to the prior art, the user can shoot, with the mobile terminal, an image in which the shot main object is clear and the background is blurred, rich in stereoscopic, aesthetic and artistic feeling, improving the user's shooting experience.
As shown in Figure 4, in another embodiment of the present invention, the image processing apparatus provided in the embodiment of the present invention may further include a main body index adjustment module 305.
The main body index adjustment module 305 is configured to, before the processed image is obtained, adjust the preset index of the shot main object in the first image according to the magnitude of the second distance, to obtain the processed image.
The preset index may include one or more of the following: light intensity, tone, hue, saturation, contrast, and brightness.
According to the image processing apparatus provided in the embodiment of the present invention, not only can the background area of the shot main object in the first image be blurred according to the relation between the first distance and the second distance, but the preset index of the shot main object in the first image can also be adjusted according to the second distance. Therefore, images with richer visual effects and more artistic value can be shot with the mobile terminal, further improving the user's shooting experience.
Alternatively, the image processing apparatus provided in the embodiment of the present invention can be used to process a picture, a frame of a dynamic image, a frame of a video, and so on; that is, the first image may include a picture, a frame of a dynamic image, a frame of a video, etc. This realizes the user's wish to shoot, with the mobile terminal, images, dynamic images and videos in which the shot main object is clear and the background is blurred, rich in stereoscopic, aesthetic and artistic feeling, and improves the user's shooting experience.
The image processing apparatus provided in the embodiment of the present invention can implement each process implemented by the image processing method in the method embodiments of Figures 1A and 2A, and can obtain the same technical effect; to avoid repetition, details are not repeated here.
Figure 5 is a schematic diagram of a hardware structure of a mobile terminal implementing each embodiment of the present invention.
The mobile terminal 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, a power supply 511, and other components. Those skilled in the art will understand that the mobile terminal structure shown in Figure 5 does not constitute a limitation of the mobile terminal; the mobile terminal may include more or fewer components than illustrated, combine some components, or arrange components differently. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a laptop, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 510 is configured to: obtain the first image collected by the first camera device for the target scene; determine the shot main object in the first image; determine the distance between a region of the target scene and a preset reference according to the centre distance and focal lengths of the first camera device and the second camera device, the first image, and the second image, wherein the second image is an image of the target scene collected by the second camera device, and the preset reference is the line connecting the centers of the first camera device and the second camera device; and blur the background area of the shot main object in the first image according to the relation between the first distance and the second distance, to obtain the processed image; wherein the first distance is the distance between the background area of the shot main object in the target scene and the preset reference, and the second distance is the distance between the shot main object in the target scene and the preset reference.
Relative to the prior art, this mobile terminal fulfils the user's purpose of shooting images, dynamic images and videos in which the shot main object is clear and the background is blurred, rich in stereoscopic, aesthetic and artistic feeling, improving the user's shooting experience.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 can be used to receive and send signals during information transmission and reception or a call; specifically, downlink data from a base station are received and then handed to the processor 510 for processing, and uplink data are sent to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with networks and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband Internet access through the network module 502, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 503 can convert audio data received by the radio frequency unit 501 or the network module 502, or stored in the memory 509, into an audio signal and output it as sound. Moreover, the audio output unit 503 can also provide audio output related to specific functions performed by the mobile terminal 500 (for example, a call signal reception sound or a message reception sound). The audio output unit 503 includes a loudspeaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive audio or video signals. The input unit 504 may include a graphics processing unit (GPU) 5041 and a microphone 5042. The graphics processing unit 5041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processing unit 5041 may be stored in the memory 509 (or other storage media) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 can receive sound and can process such sound into audio data. In telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 501 for output.
The mobile terminal 500 further includes at least one sensor 505, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 5061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 5061 and/or the backlight when the mobile terminal 500 is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when static, which can be used for recognizing the posture of the mobile terminal (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as pedometer and tapping). The sensor 505 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail here.
The display unit 506 is used to display information input by the user or information provided to the user. The display unit 506 may include a display panel 5061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 507 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also called a touch screen, collects the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 5071 with a finger, a stylus, or any other suitable object or accessory). The touch panel 5071 may include a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 5071, the user input unit 507 may also include other input devices 5072. Specifically, the other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control key and a switch key), a trackball, a mouse, and a joystick, which are not described in detail here.
Further, the touch panel 5071 can be overlaid on the display panel 5061. When the touch panel 5071 detects a touch operation on or near it, the operation is transmitted to the processor 510 to determine the type of the touch event, and the processor 510 then provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in Figure 5 the touch panel 5071 and the display panel 5061 implement the input and output functions of the mobile terminal as two independent components, in some embodiments the touch panel 5071 and the display panel 5061 can be integrated to implement the input and output functions of the mobile terminal, which is not specifically limited here.
The interface unit 508 is an interface through which external devices connect to the mobile terminal 500. For example, the external devices may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 can be used to receive input (for example, data information and electric power) from external devices and transmit the received input to one or more elements within the mobile terminal 500, or to transmit data between the mobile terminal 500 and external devices.
The memory 509 can be used to store software programs and various data. The memory 509 may mainly include a program storage area and a data storage area: the program storage area can store an operating system, application programs required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area can store data created according to the use of the mobile phone (such as audio data and a phone book), and the like. In addition, the memory 509 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
The processor 510 is the control center of the mobile terminal. It connects each part of the entire mobile terminal using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 509 and calling the data stored in the memory 509, thereby monitoring the mobile terminal as a whole. The processor 510 may include one or more processing units; preferably, the processor 510 can integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, user interface, application programs, and so on, and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 510.
The mobile terminal 500 may also include a power supply 511 (such as a battery) supplying power to each component. Preferably, the power supply 511 can be logically connected to the processor 510 through a power management system, so as to implement functions such as charging management, discharging management, and power consumption management through the power management system.
In addition, the mobile terminal 500 includes some functional modules not shown, which are not described in detail here.
Preferably, an embodiment of the present invention also provides a mobile terminal, including a processor 510, a memory 509, and a computer program stored in the memory 509 and executable on the processor 510. When executed by the processor 510, the computer program implements each process of the above image processing method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here.
An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the above image processing method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, herein, the terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or apparatus including a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article or apparatus. In the absence of more restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or apparatus including that element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disk) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in each embodiment of the present invention.
The embodiments of the present invention are described above with reference to the accompanying drawings, but the invention is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Under the enlightenment of the present invention, those of ordinary skill in the art can also make many forms without departing from the concept of the present invention and the scope of protection of the claims, all of which fall within the protection of the present invention.
Claims (15)
1. An image processing method, characterized in that it is applied to a mobile terminal, the mobile terminal includes a first camera device and a second camera device, and the method includes:
obtaining a first image collected by the first camera device for a target scene;
determining a shot main object in the first image;
determining the distance between a region of the target scene and a preset reference according to the centre distance and focal lengths of the first camera device and the second camera device, the first image, and a second image; wherein the second image is an image of the target scene collected by the second camera device, and the preset reference is the line connecting the centers of the first camera device and the second camera device; and
blurring a background area of the shot main object in the first image according to the relation between a first distance and a second distance, to obtain a processed image;
wherein the first distance is the distance between the background area of the shot main object in the target scene and the preset reference, and the second distance is the distance between the shot main object in the target scene and the preset reference.
2. The method according to claim 1, characterized in that determining the distance between the region of the target scene and the preset reference according to the centre distance and focal lengths of the first camera device and the second camera device, the first image, and the second image includes:
determining a first coordinate of the region of the target scene in the first image, and a second coordinate of the region of the target scene in the second image; and
determining the distance between the region of the target scene and the preset reference according to the centre distance and focal lengths of the first camera device and the second camera device, and the first coordinate and the second coordinate.
3. The method according to claim 2, wherein the focal lengths of the first camera device and the second camera device are equal, and determining the distance between the region of the target scene and the preset reference according to the centre distance and focal length of the first camera device and the second camera device, the first coordinate, and the second coordinate comprises:
calculating the distance between the region of the target scene and the preset reference according to the centre distance and focal length of the first camera device and the second camera device, the first coordinate, the second coordinate, and a preset formula;
wherein the preset formula is:
z = f·B / (X_R − X_T)
wherein z is the distance between the region of the target scene and the preset reference, f is the focal length of the first camera device and the second camera device, B is the centre distance between the first camera device and the second camera device, X_R is the first coordinate, and X_T is the second coordinate.
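The preset formula of claim 3 matches standard stereo triangulation: with equal focal lengths and a known baseline, the depth of a region is the focal length times the baseline, divided by the disparity between the region's coordinates in the two images. A minimal sketch under that reading (the function name and units are illustrative, not from the patent):

```python
def depth_from_disparity(f: float, B: float, x_r: float, x_t: float) -> float:
    """Distance z between a region of the target scene and the line joining
    the two camera centres, per z = f * B / (x_r - x_t).

    f   -- common focal length of both camera devices
    B   -- centre distance (baseline) between the two camera devices
    x_r -- first coordinate (the region's position in the first image)
    x_t -- second coordinate (the region's position in the second image)
    All inputs are assumed to be expressed in consistent units.
    """
    disparity = x_r - x_t
    if disparity == 0:
        raise ValueError("zero disparity: region is effectively at infinity")
    return f * B / disparity
```

As the disparity shrinks, the computed depth grows, which is why the zero-disparity case is treated as a region at infinity.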
4. The method according to claim 1, wherein performing blurring processing on the background region of the shooting main object in the first image according to the relation between the first distance and the second distance comprises:
calculating the absolute value of the difference between the first distance and the second distance;
determining, as a target region, the background region of the shooting main object in the target scene for which the absolute value is greater than a preset value;
performing blurring processing on the target region in the first image.
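The selection step of claim 4 can be sketched as a per-pixel boolean mask over a depth map; the names and the NumPy representation are illustrative assumptions, not part of the claim:

```python
import numpy as np

def select_target_region(depth_map: np.ndarray, subject_depth: float,
                         preset_value: float) -> np.ndarray:
    """Boolean mask of the target (to-be-blurred) region: a pixel is selected
    when |first distance - second distance| exceeds the preset value,
    i.e. when it lies sufficiently far from the shooting main object's depth."""
    return np.abs(depth_map - subject_depth) > preset_value
```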
5. The method according to claim 4, wherein performing blurring processing on the target region in the first image comprises:
performing corresponding blurring processing on the pixels of the target region in the first image according to the magnitude of the first distance corresponding to each pixel in the target region.
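One way to read claim 5's distance-dependent blurring is to map each target-region pixel's first distance to a blur radius, so pixels farther from the subject depth are blurred more strongly. The linear mapping and the clamp below are illustrative choices, not specified by the claim:

```python
import numpy as np

def blur_radius_for_pixels(first_distances: np.ndarray, subject_depth: float,
                           scale: float = 1.0, max_radius: int = 8) -> np.ndarray:
    """Per-pixel blur radius derived from how far each pixel's first distance
    is from the subject depth; clamped to [0, max_radius]."""
    strength = np.abs(first_distances - subject_depth) * scale
    return np.clip(strength.astype(int), 0, max_radius)
```

The resulting radii could then drive any blur kernel (e.g. a Gaussian of that radius per pixel); the kernel choice is outside the claim.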
6. The method according to claim 1, wherein before obtaining the processed image, the method further comprises:
adjusting a preset index of the shooting main object in the first image according to the magnitude of the second distance, to obtain the processed image;
wherein the preset index comprises one or more of the following: light intensity, tone, hue, saturation, contrast, and brightness.
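Claim 6 additionally adjusts the subject itself according to its distance. A hypothetical sketch for the brightness index only; the inverse-distance gain, its clamp, and all parameter names are assumptions for illustration:

```python
import numpy as np

def adjust_subject_brightness(subject_pixels: np.ndarray,
                              second_distance: float,
                              ref_distance: float = 1.0,
                              max_gain: float = 1.5) -> np.ndarray:
    """Scale the shooting main object's pixel values by a gain that grows with
    the second distance (a farther subject gets brightened more), clamped to
    the valid 8-bit range."""
    gain = min(max_gain, max(1.0, second_distance / ref_distance))
    return np.clip(subject_pixels.astype(float) * gain, 0, 255).astype(np.uint8)
```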
7. The method according to any one of claims 1 to 6, wherein the first image comprises: a picture, a frame image in a dynamic image, or a frame image in a video.
8. An image processing apparatus, applied to a mobile terminal comprising a first camera device and a second camera device, the apparatus comprising:
a first acquisition module, configured to obtain a first image acquired by the first camera device for a target scene;
a main body determining module, configured to determine a shooting main object in the first image;
a distance determining module, configured to determine a distance between a region of the target scene and a preset reference according to a centre distance and focal lengths of the first camera device and the second camera device, the first image, and a second image, wherein the second image is an image acquired by the second camera device for the target scene, and the preset reference is the line connecting the centres of the first camera device and the second camera device;
a background blurring processing module, configured to perform blurring processing on a background region of the shooting main object in the first image according to a relation between a first distance and a second distance, to obtain a processed image;
wherein the first distance is the distance between the background region of the shooting main object in the target scene and the preset reference, and the second distance is the distance between the shooting main object in the target scene and the preset reference.
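The apparatus of claim 8 packages the method steps as modules. A hypothetical sketch of that structure (class and method names are illustrative; the depth relation follows the triangulation reading of claim 10):

```python
class ImageProcessingApparatus:
    """Illustrative module layout mirroring claim 8."""

    def __init__(self, centre_distance: float, focal_length: float):
        self.B = centre_distance  # baseline between the two camera devices
        self.f = focal_length     # common focal length of both devices

    def distance_determining_module(self, x_r: float, x_t: float) -> float:
        # Depth of a region from the line joining the two camera centres.
        return self.f * self.B / (x_r - x_t)

    def background_blurring_module(self, first_distance: float,
                                   second_distance: float,
                                   preset_value: float) -> bool:
        # The relation between the two distances decides whether a region
        # counts as background to be blurred.
        return abs(first_distance - second_distance) > preset_value
```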
9. The apparatus according to claim 8, wherein the distance determining module comprises:
a coordinate determining submodule, configured to determine a first coordinate of the region of the target scene in the first image, and a second coordinate of the region of the target scene in the second image;
a distance determining submodule, configured to determine the distance between the region of the target scene and the preset reference according to the centre distance and focal lengths of the first camera device and the second camera device, the first coordinate, and the second coordinate.
10. The apparatus according to claim 9, wherein the distance determining submodule is specifically configured to calculate the distance between the region of the target scene and the preset reference according to the centre distance and focal length of the first camera device and the second camera device, the first coordinate, the second coordinate, and a preset formula;
wherein the preset formula is:
z = f·B / (X_R − X_T)
wherein z is the distance between the region of the target scene and the preset reference, f is the focal length of the first camera device and the second camera device, B is the centre distance between the first camera device and the second camera device, X_R is the first coordinate, and X_T is the second coordinate.
11. The apparatus according to claim 8, wherein the background blurring processing module comprises:
a calculating submodule, configured to calculate the absolute value of the difference between the first distance and the second distance;
a target region determining submodule, configured to determine, as a target region, the background region of the shooting main object in the target scene for which the absolute value is greater than a preset value;
a blurring processing submodule, configured to perform blurring processing on the target region in the first image.
12. The apparatus according to claim 11, wherein the blurring processing submodule is specifically configured to perform corresponding blurring processing on the pixels of the target region in the first image according to the magnitude of the first distance corresponding to each pixel in the target region.
13. The apparatus according to claim 8, further comprising:
a main body index adjustment module, configured to adjust, before the processed image is obtained, a preset index of the shooting main object in the first image according to the magnitude of the second distance, to obtain the processed image;
wherein the preset index comprises one or more of the following: light intensity, tone, hue, saturation, contrast, and brightness.
14. The apparatus according to any one of claims 8 to 13, wherein the first image comprises: a picture, a frame image in a dynamic image, or a frame image in a video.
15. A mobile terminal, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711241258.0A CN107948516A (en) | 2017-11-30 | 2017-11-30 | A kind of image processing method, device and mobile terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107948516A true CN107948516A (en) | 2018-04-20 |
Family
ID=61948099
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711241258.0A Pending CN107948516A (en) | 2017-11-30 | 2017-11-30 | A kind of image processing method, device and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107948516A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104333700A (en) * | 2014-11-28 | 2015-02-04 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image blurring method and image blurring device |
CN105141834A (en) * | 2015-07-27 | 2015-12-09 | Nubia Technology Co., Ltd. | Device and method for controlling picture shooting |
CN105245774A (en) * | 2015-09-15 | 2016-01-13 | Nubia Technology Co., Ltd. | Picture processing method and terminal |
CN105979165A (en) * | 2016-06-02 | 2016-09-28 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Blurred photos generation method, blurred photos generation device and mobile terminal |
CN106612393A (en) * | 2015-10-22 | 2017-05-03 | Nubia Technology Co., Ltd. | Image processing method, image processing device and mobile terminal |
2017-11-30 — application CN201711241258.0A filed in China; legal status: Pending
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109849788A (en) * | 2018-12-29 | 2019-06-07 | Beijing 7invensun Technology Co., Ltd. | Information providing method, apparatus and system |
CN110830715A (en) * | 2019-10-31 | 2020-02-21 | Vivo Mobile Communication (Hangzhou) Co., Ltd. | Photographing method and electronic equipment |
CN110830715B (en) * | 2019-10-31 | 2021-06-25 | Vivo Mobile Communication (Hangzhou) Co., Ltd. | Photographing method and electronic equipment |
CN113194242A (en) * | 2020-01-14 | 2021-07-30 | Honor Device Co., Ltd. | Shooting method in long-focus scene and mobile terminal |
CN113194242B (en) * | 2020-01-14 | 2022-09-20 | Honor Device Co., Ltd. | Shooting method in long-focus scene and mobile terminal |
US12096120B2 (en) | 2020-01-14 | 2024-09-17 | Honor Device Co., Ltd. | Photographing method in telephoto scenario and mobile terminal |
CN111726531A (en) * | 2020-06-29 | 2020-09-29 | Beijing Xiaomi Mobile Software Co., Ltd. | Image shooting method, processing method, device, electronic equipment and storage medium |
CN112532882A (en) * | 2020-11-26 | 2021-03-19 | Vivo Mobile Communication Co., Ltd. | Image display method and device |
CN112532882B (en) * | 2020-11-26 | 2022-09-16 | Vivo Mobile Communication Co., Ltd. | Image display method and device |
US11893668B2 (en) | 2021-03-31 | 2024-02-06 | Leica Camera Ag | Imaging system and method for generating a final digital image via applying a profile to image information |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107948516A (en) | A kind of image processing method, device and mobile terminal | |
CN108540724A (en) | A kind of image pickup method and mobile terminal | |
CN107566730B (en) | A kind of panoramic picture image pickup method and mobile terminal | |
CN107566728A (en) | A kind of image pickup method, mobile terminal and computer-readable recording medium | |
CN108322644A (en) | A kind of image processing method, mobile terminal and computer readable storage medium | |
CN107770438A (en) | A kind of photographic method and mobile terminal | |
CN107592466A (en) | A kind of photographic method and mobile terminal | |
CN107948499A (en) | A kind of image capturing method and mobile terminal | |
CN107566739A (en) | A kind of photographic method and mobile terminal | |
CN108111754A (en) | A kind of method and mobile terminal of definite image acquisition modality | |
CN110213485B (en) | Image processing method and terminal | |
CN107580209A (en) | Take pictures imaging method and the device of a kind of mobile terminal | |
CN108989672A (en) | A kind of image pickup method and mobile terminal | |
CN108600647A (en) | Shooting preview method, mobile terminal and storage medium | |
CN108881733A (en) | A kind of panorama shooting method and mobile terminal | |
CN107635110A (en) | A kind of video interception method and terminal | |
CN107592467A (en) | A kind of image pickup method and mobile terminal | |
CN108038825A (en) | A kind of image processing method and mobile terminal | |
CN107820022A (en) | A kind of photographic method and mobile terminal | |
CN108924412A (en) | A kind of image pickup method and terminal device | |
CN107566749A (en) | Image pickup method and mobile terminal | |
CN107948505A (en) | A kind of panorama shooting method and mobile terminal | |
CN108320263A (en) | A kind of method, device and mobile terminal of image procossing | |
CN108449541A (en) | A kind of panoramic picture image pickup method and mobile terminal | |
CN107730433A (en) | One kind shooting processing method, terminal and computer-readable recording medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180420 |