CN108156369A - Image processing method and device - Google Patents
- Publication number
- CN108156369A (application CN201711277634.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- target
- master
- brightness
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/76—Circuitry for compensating brightness variation in the scene by influencing the image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
Abstract
The present application proposes an image processing method and device. The method includes: controlling a main camera to capture a first master image of a target scene according to a standard exposure parameter, while controlling a secondary camera to capture a first secondary image of the target scene according to the standard exposure parameter and controlling the main camera to capture a second master image of the target scene according to an over-exposure parameter; computing, by a first thread, first depth-of-field information of a target image in the target scene from the first master image and the first secondary image, while a second thread obtains a first target image from the first master image and a third thread obtains a first background image from the second master image; and synthesizing the first target image and the first background image according to the first depth-of-field information of the target image to obtain a target scene image. The target and the background are thus each captured under suitable exposure parameters, which improves both the imaging quality and the image processing efficiency.
Description
Technical field
The present application relates to the field of image processing technologies, and in particular to an image processing method and device.
Background technology
At present, the camera functions of terminal devices are used throughout users' daily work and life; taking pictures has become a widespread need, and users record their lives through the camera functions of their terminal devices.
In the related art, to ensure the imaging quality of the whole image, the exposure parameters are determined according to the average brightness of the photographed scene. When the brightness of the current subject differs from the ambient brightness, the exposure parameters determined for the shot are influenced by the ambient light and are unsuitable for the current subject; alternatively, the exposure parameters are dominated by the subject, leaving the background region under-exposed. Either way, the quality of the captured image suffers.
Summary
The present application provides an image processing method and device, to solve the technical problem in the prior art that the brightness of the subject and of the background region interfere with each other, resulting in poor imaging quality.
An embodiment of the present application provides an image processing method, including: controlling a main camera to capture a first master image of a target scene according to a standard exposure parameter, while controlling a secondary camera to capture a first secondary image of the target scene according to the standard exposure parameter, and controlling the main camera to capture a second master image of the target scene according to an over-exposure parameter; computing, by a first thread, first depth-of-field information of a target image in the target scene from the first master image and the first secondary image, while a second thread obtains a first target image from the first master image and a third thread obtains a first background image from the second master image; and synthesizing the first target image and the first background image according to the first depth-of-field information of the target image to obtain a target scene image.
Another embodiment of the present application provides an image processing device, including: a capture module, configured to control a main camera to capture a first master image of a target scene according to a standard exposure parameter, while controlling a secondary camera to capture a first secondary image of the target scene according to the standard exposure parameter and controlling the main camera to capture a second master image of the target scene according to an over-exposure parameter; an acquisition module, configured to compute, by a first thread, first depth-of-field information of a target image in the target scene from the first master image and the first secondary image, while a second thread obtains a first target image from the first master image and a third thread obtains a first background image from the second master image; and a processing module, configured to synthesize the first target image and the first background image according to the first depth-of-field information of the target image to obtain a target scene image.
Yet another embodiment of the present application provides a computer device, including a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the image processing method described in the above embodiments of the present application.
A further embodiment of the present application provides a non-transitory computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the image processing method described in the above embodiments of the present application.
The technical solutions provided by the embodiments of the present application may include the following beneficial effects:
A first master image of the target scene is captured by the main camera under a standard exposure parameter, while the secondary camera captures a first secondary image of the target scene and the main camera captures a second master image of the target scene under an over-exposure parameter. A first thread computes the first depth-of-field information of the target image in the target scene from the first master image and the first secondary image, while a second thread obtains the first target image from the first master image and a third thread obtains the first background image from the second master image. The first target image and the first background image are then synthesized according to the first depth-of-field information of the target image. This not only guarantees the imaging quality of the whole image, especially when the difference between the ambient brightness and the subject brightness is large, but also improves image processing efficiency.
Additional aspects and advantages of the present application will be set forth in part in the following description, and will in part become apparent from the description or be learned by practice of the application.
Description of the drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of the triangulation principle according to an embodiment of the present application;
Fig. 3 is a schematic diagram of obtaining depth-of-field information with dual cameras according to an embodiment of the present application;
Fig. 4 is a scene schematic of an image processing method according to an embodiment of the present application;
Fig. 5 is a diagram of the image processing effect according to the prior art;
Fig. 6 is a diagram of the image processing effect according to an embodiment of the present application;
Fig. 7 is a flowchart of an image processing method according to another embodiment of the present application;
Fig. 8 is a scene schematic of an image processing method according to another embodiment of the present application;
Fig. 9 is a structural diagram of an image processing device according to an embodiment of the present application;
Fig. 10 is a structural diagram of an image processing device according to another embodiment of the present application;
Fig. 11 is a structural diagram of an image processing device according to yet another embodiment of the present application; and
Fig. 12 is a schematic diagram of an image processing circuit according to an embodiment of the present application.
Detailed description of the embodiments
Embodiments of the present application are described in detail below, examples of which are shown in the accompanying drawings, where the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the application and are not to be construed as limiting it.
The image processing method and device of the embodiments of the present application are described below with reference to the accompanying drawings.
The image processing method and device of the present application may be executed by a terminal device, where the terminal device may be a hardware device with dual cameras, such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device. The wearable device may be a smart bracelet, a smart watch, smart glasses, or the like.
As analyzed above, a difference between the subject brightness and the ambient brightness causes the two to interfere with each other, resulting in poor photographs. In particular, when the gap between subject brightness and ambient brightness is large, the exposure parameters for the subject are easily influenced by the ambient brightness, causing improper exposure; likewise, the capture parameters for the background region are easily influenced by the subject brightness, also causing improper exposure. For example, when a spectator in a dimly lit auditorium photographs a performer under stage lighting, the image of the performer is over-exposed while the details of the seating area are under-exposed.
To solve the above technical problem, the image processing method of the embodiments of the present application exposes the background and the subject separately with suitable parameters, and then synthesizes the well-exposed subject with the background region. This not only improves the exposure of the image subject but also ensures that the whole image is rich in detail.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application. As shown in Fig. 1, the method includes:
Step 101: control the main camera to capture a first master image of the target scene according to a standard exposure parameter, and at the same time control the secondary camera to capture a first secondary image of the target scene according to the standard exposure parameter.
Step 102: control the main camera to capture a second master image of the target scene according to an over-exposure parameter.
Specifically, the terminal device takes pictures with a dual-camera system, which computes depth-of-field information from the master image captured by the main camera and the secondary image captured by the secondary camera. The dual-camera system includes a main camera for capturing the master image of the subject and a secondary camera that assists the master image in obtaining depth-of-field information. The main camera and the secondary camera may be arranged horizontally or vertically. To describe more clearly how the dual cameras obtain depth-of-field information, the principle is explained below with reference to the accompanying drawings:
In practice, the human eye resolves depth mainly through binocular vision, and dual cameras resolve depth-of-field information by the same principle, realized mainly through triangulation as shown in Fig. 2. Fig. 2 depicts, in real space, an imaging object, the positions O_R and O_T of the two cameras, and the focal planes of the two cameras. The focal planes are at distance f from the plane where the two cameras lie; the two cameras image at the focal plane, yielding two captured images.
P and P' are the positions of the same object in the two captured images, where the distance from P to the left border of its image is X_R, and the distance from P' to the left border of its image is X_T. O_R and O_T denote the two cameras, which lie in the same plane at distance B from each other.
Based on the triangulation principle, the distance Z between the object in Fig. 2 and the plane where the two cameras lie satisfies the following relationship:
(B - (X_R - X_T)) / B = (Z - f) / Z
From this it can be derived that Z = B * f / d, where d = X_R - X_T is the positional difference of the same object between the two captured images. Since B and f are constants, the distance Z of the object can be determined from d.
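As a minimal sketch of the triangulation relation above (Z = B * f / d), assuming B is given in millimetres and the focal length and pixel positions in pixel units; all numbers below are made up for illustration:

```python
def depth_from_disparity(x_r, x_t, baseline, focal_length):
    """Distance Z of an object from the camera plane, given the
    positions x_r, x_t of the same point in the two images
    (triangulation: Z = B * f / d, with d = x_r - x_t)."""
    d = x_r - x_t
    if d == 0:
        raise ValueError("zero disparity: point is at infinity")
    return baseline * focal_length / d

# Illustrative values: 20 mm baseline, focal length of 1000 px,
# and a point whose disparity between the two images is 20 px.
z = depth_from_disparity(130.0, 110.0, baseline=20.0, focal_length=1000.0)
print(z)  # 1000.0 -> the object is about 1 m away
```

Halving the disparity doubles the computed distance, which is the inverse proportionality that lets a disparity map stand in for a depth map.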
Of course, besides triangulation, the depth-of-field information of the master image can also be computed in other ways. For example, when the main camera and the secondary camera photograph the same scene, the distance between an object in the scene and the cameras is proportional to quantities such as the displacement difference and the attitude difference between the images formed by the two cameras. Therefore, in an embodiment of the present application, the above distance Z can be obtained from this proportional relationship.
For example, as shown in Fig. 3, a map of displacement differences between the master image obtained by the main camera and the secondary image obtained by the secondary camera is computed, represented here by a disparity map. This map represents the displacement difference between identical points in the two images; since in triangulation the displacement difference is inversely proportional to Z, the disparity map is often used directly as the depth-of-field information map.
Specifically, in the embodiments of the present application, the depth-of-field information for the same object is computed from the master image captured by the main camera and the secondary image captured by the secondary camera, and the master image serves as the base image of the final real image. To avoid inaccurate depth computation caused by a large difference between the master and secondary images, or a poor final image caused by an unclear master image, the main camera captures the first master image of the target scene according to a standard exposure parameter while the secondary camera captures the first secondary image of the target scene. The standard exposure parameter is determined with respect to the subject in the target scene: if the subject is bright, the aperture size corresponding to the standard exposure parameter is smaller; if the subject is dark, the corresponding aperture is larger. Based on prior knowledge, the exposure parameter obtained by metering on the subject in the current scene can be used as the standard exposure parameter.
Thus, the first master image and first secondary image of the target scene captured under the standard exposure parameter contain a clearer image of the subject, and the depth-of-field information of the subject can be further determined from the first master image and the first secondary image.
In addition, when the current ambient brightness is dark, the exposure parameters are influenced by the brightly lit subject and cause under-exposure. For example, if the current ambient brightness is 50 and the subject brightness is 100, the finally determined exposure may correspond to a brightness of 60 because of the influence of the subject brightness; clearly, the background region in the captured image would then be unclear due to under-exposure. In the embodiments of the present application, the main camera captures the second master image of the target scene according to an over-exposure parameter, thereby ensuring that the background region of the captured target scene is sufficiently exposed. The over-exposure parameter is defined relative to the above standard exposure parameter and is related to the brightness of the background region in the currently captured target scene: the lower the brightness of the background region relative to the subject, the more the over-exposure parameter exceeds the standard exposure parameter.
Of course, the embodiments of the present application are explained using the common scene of a bright subject captured in a dark environment, which suffers from poor imaging. In practice there is also the opposite scene: a dark subject captured in a bright environment. In that case, the main camera may capture the first master image of the target scene according to the standard exposure parameter while the secondary camera captures the first secondary image of the target scene, and the main camera may capture the second master image of the target scene according to an under-exposure parameter.
To further improve the flexibility of the image processing of the present application, the image processing method of the embodiments may also be applied only when the luminance difference between the shooting environment and the subject is large, i.e., when it is difficult to guarantee a high imaging quality for the whole image.
Specifically, in an embodiment of the present application, the first brightness of the environment around the subject in the target scene and the second brightness of the subject are detected. For example, in the preview screen of the target scene, the region where the subject is located and the environment region around it may each be focused and metered, and the first brightness and second brightness determined from the metering parameter values (aperture size, etc.). As another example, when the scenes a user shoots are relatively fixed — say the user often attends performances and photographs the performers on stage — a correspondence between the captured scene and the first and second brightness, or their difference, can be pre-set and stored, so that the current scene is matched against the pre-stored scenes to obtain the corresponding first and second brightness or their difference. As yet another example, a deep learning model can be built in advance from a large amount of experimental data, whose input is the brightness distribution of the scene — for example, brighter near the border of the viewfinder, or brighter near its center — and whose output is the first and second brightness or their difference; the brightness distribution of the current scene is then fed into the model to obtain the output. The first brightness of the shooting environment corresponds to the brightness of the background region of the target scene, and the second brightness corresponds to the brightness of the current subject. If it is determined that the difference between the second brightness and the first brightness is greater than or equal to a preset threshold, the main camera captures the first master image of the target scene according to the standard exposure parameter while the secondary camera captures the first secondary image of the target scene, and the main camera captures the second master image of the target scene according to the over-exposure parameter.
The preset threshold may be a reference brightness value, calibrated from extensive experimental data, for judging whether the difference between the ambient brightness and the subject brightness affects the imaging quality. The preset threshold may also be related to the imaging hardware of the terminal device: the better the photosensitivity of the imaging hardware, the higher the preset threshold.
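The gating logic described above can be sketched as follows; the threshold value and the brightness scale are hypothetical stand-ins, since the patent ties the actual threshold to experimental calibration and the hardware's photosensitivity:

```python
def needs_dual_exposure(subject_brightness, ambient_brightness, threshold=40):
    """Return True when the subject and its environment differ in
    brightness enough that a single exposure cannot serve both.
    `threshold` is a hypothetical calibration value; the patent relates
    it to the photosensitivity of the imaging hardware."""
    return abs(subject_brightness - ambient_brightness) >= threshold

# The stage-performer example from the text: ambient 50, subject 100.
print(needs_dual_exposure(100, 50))  # True -> capture standard + over-exposed frames
print(needs_dual_exposure(60, 50))   # False -> a single standard exposure suffices
```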
Step 103: compute, by a first thread, the first depth-of-field information of the target image in the target scene from the first master image and the first secondary image; at the same time, obtain the first target image from the first master image by a second thread, and obtain the first background image from the second master image by a third thread.
Step 104: synthesize the first target image and the first background image according to the first depth-of-field information of the target image to obtain the target scene image.
As analyzed above, when dual cameras obtain depth-of-field information, the positions of the same object in the different captured images must be obtained; the clearer the corresponding images, the more accurate the depth-of-field information. Therefore, in this example, the first depth-of-field information of the target image (the subject) is obtained from the first master image and the first secondary image captured under the standard exposure parameter. Since these images have good imaging quality and are clearer, the first depth-of-field information obtained from them is more accurate.
As analyzed above, the first master image is exposed based on the brightness of the subject, so the target image corresponding to the subject in the first master image has good imaging quality; the second master image is exposed based on the brightness of the currently captured target scene, so the background region captured in the second master image has good imaging quality. To improve the imaging quality of the whole image, in an embodiment of the present application, the first target image is extracted from the first master image — the first target image may be determined by techniques such as image recognition or contour recognition — and the first background image is obtained from the second master image. In the image generated by synthesizing the first target image and the first background image, both the subject and the background region are rich in detail, and the imaging quality of the whole image is good.
In practical implementation, to make the synthesis of the first target image and the first background image more natural, they are synthesized according to the first depth-of-field information of the target image. Depending on the application scenario, the specific implementation of this synthesis differs:
As one possible implementation, since a larger depth-of-field value means a smaller object in the image and a smaller value means a larger object, the sizes of the relevant regions in the first target image can be adaptively adjusted based on the first depth-of-field information before synthesis with the first background image, avoiding the distortion that improper sizes would cause during synthesis.
As another possible implementation, since a larger depth-of-field value means a blurrier image and a smaller value a clearer one, pixel filling can be applied to the first target image based on the first depth-of-field information before synthesis with the first background image, making the synthesis more natural.
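To illustrate the first implementation above — adapting the target's size from its depth before pasting it over the background — here is a toy NumPy sketch; the linear scaling rule, the nearest-neighbour resize, and the top-left anchoring are simplifying assumptions, not the patent's procedure:

```python
import numpy as np

def composite(target, target_mask, background, depth_mm, ref_depth_mm=1000.0):
    """Paste the target image over the background, first scaling it by
    ref_depth / depth (objects farther than the reference depth shrink).
    Assumes the scaled target fits inside the background."""
    scale = ref_depth_mm / depth_mm
    h = max(1, int(round(target.shape[0] * scale)))
    w = max(1, int(round(target.shape[1] * scale)))
    # Nearest-neighbour resize, to keep the sketch dependency-free.
    rows = np.arange(h) * target.shape[0] // h
    cols = np.arange(w) * target.shape[1] // w
    resized = target[rows][:, cols]
    mask = target_mask[rows][:, cols]
    out = background.copy()
    out[:h, :w][mask] = resized[mask]   # anchor at top-left for simplicity
    return out

# Toy data: a 2x2 "subject" pasted onto a 4x4 background.
target = np.full((2, 2), 9)
mask = np.ones((2, 2), dtype=bool)
bg = np.zeros((4, 4), dtype=int)
print(composite(target, mask, bg, depth_mm=2000.0).sum())  # 9: target shrank to 1x1
```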
Further, to further optimize the imaging quality, in an embodiment of the present application, blurring may also be applied to the first background image according to the first depth-of-field information of the target image.
Specifically, depending on the application scenario, the first background image can be blurred according to the first depth-of-field information of the target image in various ways:
As one possible implementation, the blur intensity of the first background image is determined according to the first depth-of-field information of the target image, and the corresponding background region is then blurred according to that intensity, so that different degrees of blurring are applied for different depth-of-field values and the blurred image looks more natural and layered.
It should be noted that, depending on the application scenario, different implementations may be used to determine the blur intensity of the first background region from the depth-of-field information of the first target image. As one possible implementation, the more accurate the depth-of-field information of the first target image, the clearer the contour of the first target object; in that case, blurring the first background region is less likely to wrongly blur the first target image, so the blur intensity of the background region can be larger. A correspondence between the computational accuracy of the depth-of-field information of the first target image and the blur intensity of the first background region can therefore be established in advance, and the blur intensity of the background region obtained from that correspondence.
As another possible implementation, the farther a pixel of the first background image is from the first target image, the less correlated it is with the first target image. The blur intensity can therefore be set according to the distance of each pixel in the first background image from the first target image: the farther the pixel, the higher the blur intensity.
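A rough sketch of this distance-based scheme (the function name, the linear scaling, and the brute-force nearest-distance search are all illustrative assumptions) assigns each background pixel a blur intensity proportional to its distance from the nearest pixel of the first target image:

```python
import numpy as np

def blur_intensity_map(subject_mask, max_intensity=1.0):
    """subject_mask: 2-D bool array, True where the first target image lies.
    Returns a per-pixel blur intensity that grows with the distance
    from the nearest subject pixel; subject pixels get no blur.
    Brute-force search: fine for a sketch, too slow for full frames."""
    h, w = subject_mask.shape
    ys, xs = np.nonzero(subject_mask)
    subject_pts = np.stack([ys, xs], axis=1)            # (N, 2) subject coords
    yy, xx = np.mgrid[0:h, 0:w]
    grid = np.stack([yy.ravel(), xx.ravel()], axis=1)   # (h*w, 2) all coords
    # distance from every pixel to its nearest subject pixel
    d = np.min(np.linalg.norm(grid[:, None, :] - subject_pts[None, :, :], axis=2), axis=1)
    d = d.reshape(h, w)
    intensity = max_intensity * d / d.max() if d.max() > 0 else np.zeros_like(d)
    intensity[subject_mask] = 0.0
    return intensity
```

A production implementation would replace the brute-force search with a distance transform.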
Further, in the embodiments of the application, since the depth of field calculation is relatively time-consuming, the first depth of field information of the target image in the target scene is calculated from the first master image and the first sub-image by a first thread, while at the same time the first target image is obtained from the first master image by a second thread and the first background image is obtained from the second master image by a third thread.
Thus, on the one hand, as shown in Fig. 4, the first target image and the first background image are obtained while the first depth of field information is being calculated, so that once the first depth of field information is available, image synthesis can proceed directly on the first depth of field information, the first target image and the first background image. Compared with first obtaining the depth of field information and only then synthesizing the captured master images, this improves image processing efficiency. On the other hand, the first target image and the first background image are obtained from the images shot under the standard exposure parameter and the over-exposure parameter respectively, yielding a subject image and a background image rich in detail: only two shots are needed to obtain an image whose whole frame is well exposed, which, compared with shooting many frames under different exposure parameters and then synthesizing them, relieves the processing pressure on the terminal device. In yet another aspect, with continued reference to Fig. 4, splitting the extraction of the first target image and the first background image across two parallel threads further shortens the gap between the time the depth of field calculation finishes and the time the first target image and first background image become available, improving image processing efficiency.
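The three-way parallel split described above can be sketched with Python's standard thread pool; the worker bodies below are placeholders standing in for the embodiment's depth calculation and image extraction, not real implementations:

```python
from concurrent.futures import ThreadPoolExecutor

def compute_depth(first_master, first_sub):
    # placeholder: disparity-based depth from the master/sub pair
    return "first depth information"

def extract_target(first_master):
    # placeholder: subject segmentation on the standard-exposure frame
    return "first target image"

def extract_background(second_master):
    # placeholder: background taken from the over-exposed frame
    return "first background image"

def process(first_master, first_sub, second_master):
    """Run the three tasks concurrently; synthesis waits on all three."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        f_depth = pool.submit(compute_depth, first_master, first_sub)
        f_target = pool.submit(extract_target, first_master)
        f_bg = pool.submit(extract_background, second_master)
        return f_depth.result(), f_target.result(), f_bg.result()
```

Since each task reads a different input image, no locking is needed between the three workers.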
To illustrate the effect of the image processing of the embodiments of the application more clearly, specific application scenarios are described below:
The first scene:
When photographing a performer on a stage from a poorly lit auditorium, an image taken with a prior-art shooting mode is as shown in Fig. 5: the background area is under-exposed and therefore unclear, while the performer area is over-exposed and therefore also unclear (in the figure, grayscale represents image clarity: the clearer the image, the lower the gray value of the corresponding region).
After the image processing method of the application is applied, the main camera is controlled according to the standard exposure parameter to shoot the first master image of the target scene while the sub camera shoots the first sub-image of the target scene, and the main camera is controlled according to the over-exposure parameter to shoot the second master image of the target scene. A first thread calculates the first depth of field information of the performer area in the target scene from the first master image and the first sub-image, while a second thread obtains the performer image from the first master image and a third thread obtains the background image from the second master image. After the performer image and the first background image are synthesized according to the first depth of field information of the performer image, as shown in Fig. 6, both the performer and the background area receive suitable exposure, so the whole image is clearer.
In conclusion the image processing method of the embodiment of the present application, shoots according to the main camera of standard exposure state modulator
First master image of target scene, while control the first sub-picture of secondary camera photographic subjects scene and exposed according to excessive
Optical parameter controls the second master image of main camera photographic subjects scene, secondary according to the first master image and first by first thread
Image calculates the first depth of view information of target image in target scene, meanwhile, it is obtained from the first master image by the second thread
First object image, meanwhile, the first background image is obtained from the second master image by third thread, and then, according to target figure
First depth of view information of picture carries out synthesis processing to first object image and the first background image.Can not only it ensure as a result, whole
The imaging effect of whole image, the imaging effect of whole image especially when ambient brightness and the larger luminance difference of shooting main body
Fruit, and improve image processing efficiency.
Based on the description of the above embodiments, when the brightness difference between the ambient environment and the shooting subject is small, shooting with a single exposure parameter has little effect on the display of the whole image: that exposure parameter can guarantee clear imaging of both the shooting subject and the background area, thereby optimizing the imaging effect of the captured image.
Specifically, in another embodiment of the application, as shown in Fig. 7, the above step 101 may further include:
Step 201: detect a first brightness of the shooting environment and a second brightness of the target image.
Step 202: if detection shows that the difference between the second brightness and the first brightness is less than a preset threshold, determine the exposure parameter according to the difference.
Specifically, if detection shows that the difference between the second brightness and the first brightness is less than the preset threshold, the exposure parameter is determined according to the difference; this exposure parameter suits clear imaging of both the background area and the shooting subject.
As one possible implementation, a correspondence between the second-to-first brightness difference and the exposure parameter can be established in advance from a large amount of experimental data; after the current difference between the second brightness and the first brightness is obtained, this correspondence is queried to obtain the exposure parameter for the target scene currently being shot.
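In its simplest form, such a pre-established correspondence is just a lookup table from brightness-difference bands to exposure settings; the band boundaries, EV values, and threshold below are illustrative assumptions, not values from the embodiment:

```python
# (upper bound of |second brightness - first brightness|, exposure compensation in EV)
DIFF_TO_EV = [(10, 0.0), (25, 0.3), (40, 0.7)]

def exposure_for_difference(diff, threshold=40):
    """Return an exposure compensation for a brightness difference
    below the preset threshold. At or above the threshold, the
    single-exposure path does not apply and the dual-exposure
    capture of the earlier embodiment is used instead."""
    if diff >= threshold:
        return None  # fall back to standard + over-exposed capture
    for upper, ev in DIFF_TO_EV:
        if diff < upper:
            return ev
    return DIFF_TO_EV[-1][1]
```

In practice such a table would be calibrated per device from the experimental data mentioned above.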
Step 203: control the main camera to shoot multiple groups of master images of the target scene under the exposure parameter, and simultaneously control the sub camera to shoot multiple groups of sub-images of the target scene.
Specifically, to improve the imaging effect, the main camera is controlled to shoot multiple groups of master images of the target scene under the exposure parameter, while the sub camera is controlled to shoot multiple groups of sub-images of the target scene.
Step 204: obtain a reference master image from the multiple groups of master images, and obtain from the multiple groups of sub-images the reference sub-image shot in the same group as the reference master image.
It will be appreciated that, in the embodiments of the application, since the main camera and the sub camera shoot the multiple groups of master images and sub-images simultaneously, a master image and sub-image belonging to the same group are shot at the same point in time and therefore carry closely matching image information; calculating the depth of field information from a same-source master image and sub-image ensures that the obtained depth of field information is more accurate.
Specifically, a reference master image is selected from the multiple groups of master images, and the reference sub-image shot in the same group is selected from the multiple groups of sub-images. It should be emphasized that, during actual shooting, the master images and sub-images are shot at the same frequency, and a master image and a sub-image shot at the same moment belong to the same group. For example, in chronological order, the multiple master images shot by the main camera include master image 11, master image 12, ..., and the multiple sub-images shot by the sub camera include sub-image 21, sub-image 22, ...; then master image 11 and sub-image 21 form one group, master image 12 and sub-image 22 form another group, and so on. To further improve the efficiency and accuracy of depth of field acquisition, a reference master image of higher clarity may be selected from the multiple master images. Of course, when the number of frames in the acquired image groups is large, to improve selection efficiency, a few master frames and their corresponding sub-frames may first be shortlisted by criteria such as image clarity, and the reference master image and corresponding reference sub-image then selected from those clearer shortlisted frames.
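Pairing same-group frames by capture time and then choosing the clearest pair can be sketched as follows; representing frames as (timestamp, sharpness, image) tuples and ranking by a precomputed sharpness score are assumptions made for illustration:

```python
def pick_reference_pair(master_frames, sub_frames):
    """master_frames / sub_frames: lists of (timestamp, sharpness, image)
    captured at the same frequency, so frames with equal timestamps
    belong to the same group. Returns the same-group (master, sub)
    pair whose master frame has the highest sharpness score."""
    subs_by_time = {t: (t, s, img) for (t, s, img) in sub_frames}
    candidates = [(m, subs_by_time[m[0]]) for m in master_frames if m[0] in subs_by_time]
    # choose the group whose master image is the clearest
    return max(candidates, key=lambda pair: pair[0][1])
```

A common (assumed) sharpness score is the variance of the image's Laplacian, computed once per frame before selection.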
Step 205: generate a target master image by applying multi-frame synthesis noise reduction to the multiple groups of master images in a first thread, and obtain a second target image from the target master image; meanwhile, calculate second depth of field information of the target image in the target scene from the reference master image and the reference sub-image in a second thread, and obtain a second background image from the reference master image in a third thread.
Step 206: synthesize the second background image and the second target image according to the second depth of field information of the target image.
Specifically, as shown in Fig. 8, in one embodiment of the application, a first thread applies multi-frame synthesis noise reduction to the multiple groups of master images to generate the target master image and obtains the second target image from it; meanwhile, a second thread calculates the second depth of field information of the target image in the target scene from the reference master image and the reference sub-image (in Fig. 8, the reference master image and reference sub-image are the second frames shot by the main camera and the sub camera), and a third thread obtains the second background image from the reference master image. The second background image and the second target image are then synthesized according to the second depth of field information of the target image. This not only improves image processing efficiency; because the second target image is taken from the noise-reduced target master image, it also further improves the clarity of the image.
To make the multi-frame synthesis noise reduction process easier to understand, it is illustrated below for master images shot in a poorly lit scene.
When ambient light is insufficient, imaging devices such as terminal devices generally shoot by automatically raising the sensitivity, but raising the sensitivity introduces more noise into the image. Multi-frame synthesis noise reduction reduces the noise points in the image and improves the quality of images shot at high sensitivity. Its principle rests on the prior knowledge that noise is randomly distributed: after multiple groups of images are shot in succession, the noise appearing at a given position may be a red noise point in one frame, a green or white noise point in another, or absent altogether, which provides a basis for comparison and screening. The pixels belonging to noise can therefore be filtered out according to the values of the pixels at the same position across the multiple shot images (the value of a pixel here reflects the number of pixels it comprises; the more pixels it comprises, the higher its value and the clearer the corresponding image).
Further, after the noise points are filtered out, they can be replaced with pixel values estimated by a further color-guessing algorithm, achieving the effect of removing the noise. Through this process, noise can be reduced with very little loss of image quality.
For example, in a comparatively simple multi-frame synthesis noise reduction method, after the multiple groups of shot images are obtained, the values of the pixels at the same position across the images are read and a weighted average of them is computed to produce the value of that position's pixel in the composite image. In this way, a clear image can be obtained.
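The weighted-average scheme just described can be sketched with NumPy; equal per-frame weights are used here for simplicity, whereas a real pipeline might weight frames by sharpness or alignment quality (an assumption, not a detail of the embodiment):

```python
import numpy as np

def multiframe_denoise(frames, weights=None):
    """frames: list of aligned images of identical shape (H, W) or (H, W, C).
    Each output pixel is the weighted average of the values at the same
    position across all frames, which suppresses the randomly placed
    noise points while preserving the stable image content."""
    stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
    if weights is None:
        weights = np.ones(len(frames)) / len(frames)
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()
    # tensordot averages over the frame axis with the given weights
    return np.tensordot(weights, stack, axes=(0, 0))
```

Note that the frames must be registered (aligned) first; otherwise camera shake between shots smears the average.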
Of course, in this embodiment, to further improve the image processing effect, the second background image may also be blurred according to the second depth of field information of the target image; the blurring is similar to the blurring of the first background image described in the above embodiment and is not repeated here.
In conclusion the image processing method of the embodiment of the present application, when the difference of ambient brightness and shooting main body is little,
It is shot based on unified exposure parameter, not only alleviates the processing pressure of terminal device, and ensure that the imaging of image
Effect.
To implement the above embodiments, the application further proposes an image processing apparatus. Fig. 9 is a structural diagram of an image processing apparatus according to one embodiment of the application; as shown in Fig. 9, the image processing apparatus includes a shooting module 100, an acquisition module 200 and a processing module 300.
The shooting module 100 is configured to shoot the first master image of the target scene with the main camera under the control of the standard exposure parameter, and simultaneously shoot the first sub-image of the target scene with the sub camera under the control of the standard exposure parameter. The shooting module 100 is further configured to shoot the second master image of the target scene with the main camera under the control of the over-exposure parameter.
The acquisition module 200 is configured to calculate, in a first thread, the first depth of field information of the target image in the target scene from the first master image and the first sub-image, while obtaining the first target image from the first master image in a second thread and the first background image from the second master image in a third thread.
The processing module 300 is configured to synthesize the first target image and the first background image according to the first depth of field information of the target image to obtain the target scene image.
In one embodiment of the application, as shown in Fig. 10, the apparatus further includes a detection module 400 and a determination module 500. The detection module 400 is configured to detect the first brightness of the environment in which the shooting subject is located in the target scene and the second brightness of the shooting subject. The determination module 500 is configured to determine that the difference between the second brightness and the first brightness is greater than or equal to the preset threshold.
Further, in one embodiment of the application, as shown in Fig. 11, the shooting module 100 includes a blurring unit 110. The blurring unit 110 is configured to blur the first background image according to the first depth of field information of the target image, obtaining the target scene image with a blurred background.
It should be noted that the foregoing description of the method embodiments also applies to the apparatus of the embodiments of the application; the implementation principles are similar and are not repeated here.
The division of the above image processing apparatus into modules is only illustrative; in other embodiments, the image processing apparatus may be divided into different modules as required to complete all or part of its functions.
In conclusion the image processing apparatus of the embodiment of the present application, shoots according to the main camera of standard exposure state modulator
First master image of target scene, while according to the first secondary figure of standard exposure state modulator pair camera photographic subjects scene
Picture and the second master image according to the main camera photographic subjects scene of overexposure state modulator, by first thread according to
First master image and the first sub-picture calculate the first depth of view information of target image in target scene, meanwhile, pass through the second thread
First object image is obtained from the first master image, meanwhile, the first Background is obtained from the second master image by third thread
Picture, and then, synthesis processing is carried out to first object image and the first background image according to the first depth of view information of target image.By
This, can not only ensure the imaging effect of whole image, especially when the luminance difference of ambient brightness and shooting main body is larger
The imaging effect of whole image, and improve image processing efficiency.
To implement the above embodiments, the application further proposes a computer device: any device comprising a memory storing a computer program and a processor running the computer program, for example a smartphone or a personal computer. The above computer device further includes an image processing circuit, which may be implemented with hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 12 is a schematic diagram of an image processing circuit in one embodiment; as shown in Fig. 12, for ease of illustration, only the aspects of the image processing technique relevant to the embodiments of the application are shown.
As shown in Fig. 12, the image processing circuit includes an ISP processor 1040 and control logic 1050. Image data captured by the imaging device 1010 is first processed by the ISP processor 1040, which analyzes it to capture image statistics usable for determining one or more control parameters of the imaging device 1010. The imaging device 1010 (camera) may include a camera with one or more lenses 1012 and an image sensor 1014; to implement the background blurring processing method of the application, the imaging device 1010 includes two cameras, and with continued reference to Fig. 12, it can shoot the scene image with the main camera and the sub camera simultaneously. The image sensor 1014 may include a color filter array (such as a Bayer filter); it can obtain the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that the ISP processor 1040 can process. The ISP processor 1040 can, for instance, calculate the depth of field information based on the raw image data obtained by the image sensor 1014 of the main camera, provided via the sensor 1020, and the raw image data obtained by the image sensor 1014 of the sub camera. The sensor 1020 supplies the raw image data to the ISP processor 1040 according to the sensor 1020 interface type; the sensor 1020 interface may use an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
The ISP processor 1040 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits; the ISP processor 1040 can perform one or more image processing operations on the raw image data and collect statistics about the image data, where the image processing operations may be performed at the same or different bit-depth precision.
The ISP processor 1040 can also receive pixel data from the image memory 1030. For example, raw pixel data can be sent from the sensor 1020 interface to the image memory 1030, where it is made available to the ISP processor 1040 for processing. The image memory 1030 can be part of a memory device, a storage device, or an independent dedicated memory within an electronic device, and may include DMA (Direct Memory Access) features.
On receiving raw image data from the sensor 1020 interface or from the image memory 1030, the ISP processor 1040 can perform one or more image processing operations, such as temporal filtering. The processed image data can be sent to the image memory 1030 for further processing before being displayed. The ISP processor 1040 receives the processed data from the image memory 1030 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to the display 1070 for viewing by the user and/or further processed by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 1040 can also be sent to the image memory 1030, and the display 1070 can read image data from the image memory 1030. In one embodiment, the image memory 1030 can be configured to implement one or more frame buffers. The output of the ISP processor 1040 can also be sent to an encoder/decoder 1060 to encode/decode the image data; the encoded image data can be saved and decompressed before being displayed on the display 1070. The encoder/decoder 1060 can be implemented by a CPU, a GPU or a coprocessor.
The statistics determined by the ISP processor 1040 can be sent to the control logic 1050. For example, the statistics may include image sensor 1014 statistics such as automatic exposure, automatic white balance, automatic focus, flicker detection, black level compensation, and lens 1012 shading correction. The control logic 1050 may include a processor and/or microcontroller executing one or more routines (such as firmware) that determine, from the received statistics, the control parameters of the imaging device 1010 and the ISP control parameters. For example, the control parameters may include sensor 1020 control parameters (such as gain and the integration time of exposure control), camera flash control parameters, lens 1012 control parameters (such as the focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), and lens 1012 shading correction parameters.
The steps of implementing the image processing method with the image processing technique of Fig. 12 are as follows:
shooting the first master image of the target scene with the main camera under the control of the standard exposure parameter while controlling the sub camera to shoot the first sub-image of the target scene, and shooting the second master image of the target scene with the main camera under the control of the over-exposure parameter;
calculating, by a first thread, the first depth of field information of the target image in the target scene from the first master image and the first sub-image, while obtaining the first target image from the first master image by a second thread and the first background image from the second master image by a third thread;
synthesizing the first target image and the first background image according to the first depth of field information of the target image.
To implement the above embodiments, the application further proposes a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by a processor, the image processing method of the above embodiments can be performed.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine the different embodiments or examples described in this specification and the features of those embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. A feature defined with "first" or "second" may thus explicitly or implicitly include at least one such feature. In the description of the application, "multiple" means at least two, for example two or three, unless otherwise specifically defined.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment or portion of code comprising one or more executable instructions for implementing the steps of a specific logical function or process, and the scope of the preferred embodiments of the application includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the application belong.
The logic and/or steps represented in a flowchart or otherwise described herein, for example an ordered list of executable instructions regarded as implementing logical functions, may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate or transport the program for use by, or in combination with, such an instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that the parts of the application may be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any of the following techniques known in the art, or a combination of them, may be used: a discrete logic circuit with logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those of ordinary skill in the art will appreciate that all or part of the steps carried by the methods of the above embodiments can be completed by instructing relevant hardware through a program; the program can be stored in a computer-readable storage medium and, when executed, includes one of the steps of the method embodiments or a combination of them.
In addition, the functional units in the embodiments of the application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the application have been shown and described above, it should be understood that the above embodiments are exemplary and shall not be construed as limiting the application; those of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of the application.
Claims (10)
1. An image processing method, characterized by comprising:
shooting a first master image of a target scene with a main camera under the control of a standard exposure parameter, and meanwhile controlling a sub camera under the control of the standard exposure parameter to shoot a first sub-image of the target scene;
shooting a second master image of the target scene with the main camera under the control of an over-exposure parameter;
calculating, by a first thread, first depth of field information of a target image in the target scene from the first master image and the first sub-image, while obtaining a first target image from the first master image by a second thread and a first background image from the second master image by a third thread;
synthesizing the first target image and the first background image according to the first depth of field information of the target image to obtain a target scene image.
2. The method of claim 1, characterized in that, before the main camera is controlled with standard exposure parameters to capture the first master image of the target scene while the secondary camera captures the first sub-image, and before the main camera is controlled with overexposure parameters to capture the second master image of the target scene, the method further comprises:
detecting a first brightness of the environment around the photographed subject in the target scene and a second brightness of the photographed subject; and
determining that the difference between the second brightness and the first brightness is greater than or equal to a preset threshold.
3. The method of claim 1, further comprising:
blurring the first background image according to the first depth-of-field information of the target object to obtain the target scene image with a blurred background.
4. The method of claim 2, characterized in that, after the first brightness of the shooting environment and the second brightness of the photographed subject are detected, the method further comprises:
if the difference between the second brightness and the first brightness is found to be less than the preset threshold, determining an exposure parameter from the difference;
controlling the main camera to capture multiple groups of master images of the target scene under the determined exposure parameter while controlling the secondary camera to capture multiple groups of sub-images of the target scene;
selecting a reference master image from the multiple groups of master images, and selecting, from the multiple groups of sub-images, the reference sub-image captured in the same group as the reference master image;
performing, in the first thread, synthesis noise reduction on the multiple groups of master images to generate a target master image and extracting a second target image from the target master image, while the second thread computes second depth-of-field information of the target object in the target scene from the reference master image and the reference sub-image, and the third thread extracts a second background image from the reference master image; and
synthesizing the second background image and the second target image according to the second depth-of-field information of the target object.
5. The method of claim 4, further comprising:
blurring the second background image according to the second depth-of-field information of the target object to obtain the target scene image with a blurred background.
6. An image processing apparatus, characterized by comprising:
a capture module configured to control a main camera with standard exposure parameters to capture a first master image of a target scene while controlling a secondary camera with the same standard exposure parameters to capture a first sub-image of the target scene, the capture module being further configured to control the main camera with overexposure parameters to capture a second master image of the target scene;
an acquisition module configured to compute, in a first thread, first depth-of-field information of a target object in the target scene from the first master image and the first sub-image, while a second thread extracts a first target image from the first master image and a third thread extracts a first background image from the second master image; and
a processing module configured to synthesize the first target image and the first background image according to the first depth-of-field information of the target object to obtain a target scene image.
7. The apparatus of claim 6, further comprising:
a detection module configured to detect a first brightness of the environment around the photographed subject in the target scene and a second brightness of the photographed subject; and
a determination module configured to determine that the difference between the second brightness and the first brightness is greater than or equal to a preset threshold.
8. The apparatus of claim 6, characterized in that the capture module comprises:
a blurring unit configured to blur the first background image according to the first depth-of-field information of the target object to obtain the target scene image with a blurred background.
9. A computer device, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the image processing method of any one of claims 1-5.
10. A computer-readable storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the image processing method of any one of claims 1-5.
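The pipeline described in claims 1-4 — pick a capture path from the subject/ambient brightness difference, compute depth of field in one thread while sibling threads extract the subject and the background, then composite — can be sketched as follows. This is an illustrative sketch only, not code from the patent: every camera, depth, extraction, and synthesis helper below is a hypothetical stand-in operating on tiny fake grayscale images.

```python
from concurrent.futures import ThreadPoolExecutor

def choose_path(subject_brightness, env_brightness, threshold=50):
    # Claims 2 and 4: a large subject/ambient difference selects the
    # overexposure path; otherwise multi-frame noise reduction is used.
    return "overexposure" if subject_brightness - env_brightness >= threshold else "multi-frame"

def capture(exposure):
    # Stand-in for a camera shot: a 4x4 "image" whose brightness scales
    # with the exposure parameter.
    return [[min(255, int(40 * exposure))] * 4 for _ in range(4)]

def depth_map(master, sub):
    # Stand-in for disparity-based depth from the master/sub image pair.
    return [[abs(m - s) for m, s in zip(mr, sr)] for mr, sr in zip(master, sub)]

def extract_target(master):
    return [row[:2] for row in master]       # left half plays the "subject"

def extract_background(over_master):
    return [row[2:] for row in over_master]  # right half plays the background

def synthesize(target, background, depth):
    # Paste the halves back together; the real method would weight the
    # composite by the depth-of-field information.
    return [t + b for t, b in zip(target, background)]

path = choose_path(subject_brightness=200, env_brightness=80)  # backlit scene
master1, sub1 = capture(1.0), capture(1.0)   # standard-exposure pair
master2 = capture(2.0)                       # overexposed master image

with ThreadPoolExecutor(max_workers=3) as pool:
    f_depth = pool.submit(depth_map, master1, sub1)    # first thread
    f_target = pool.submit(extract_target, master1)    # second thread
    f_bg = pool.submit(extract_background, master2)    # third thread
    scene = synthesize(f_target.result(), f_bg.result(), f_depth.result())

print(path, len(scene), len(scene[0]))  # overexposure 4 4
```

The three `submit` calls mirror the claim's first, second, and third threads running concurrently; the final synthesis waits on all three futures, as the claimed method must.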
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711277634.1A CN108156369B (en) | 2017-12-06 | 2017-12-06 | Image processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108156369A true CN108156369A (en) | 2018-06-12 |
CN108156369B CN108156369B (en) | 2020-03-13 |
Family
ID=62466064
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711277634.1A Active CN108156369B (en) | 2017-12-06 | 2017-12-06 | Image processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108156369B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108881730A (en) * | 2018-08-06 | 2018-11-23 | Chengdu Sioeye Technology Co., Ltd. | Image fusion method and device, electronic device, and computer-readable storage medium |
CN109523456A (en) * | 2018-10-31 | 2019-03-26 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method and device, electronic device, and computer-readable storage medium |
CN109803087A (en) * | 2018-12-17 | 2019-05-24 | Vivo Mobile Communication Co., Ltd. | Image generation method and terminal device |
CN110191291A (en) * | 2019-06-13 | 2019-08-30 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method and device based on multi-frame images |
CN112606402A (en) * | 2020-11-03 | 2021-04-06 | Taizhou Xinyuan Semiconductor Technology Co., Ltd. | Product manufacturing platform applying multi-parameter analysis |
CN114520880A (en) * | 2020-11-18 | 2022-05-20 | Huawei Technologies Co., Ltd. | Exposure parameter adjustment method and device |
US11431915B2 (en) | 2019-02-18 | 2022-08-30 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image acquisition method, electronic device, and non-transitory computer readable storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101610421A (en) * | 2008-06-17 | 2009-12-23 | Shenzhen Huawei Communication Technologies Co., Ltd. | Video communication method, apparatus, and system |
CN106993112A (en) * | 2017-03-09 | 2017-07-28 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Depth-of-field-based background blurring method and device, and electronic device |
JP2017143354A (en) * | 2016-02-08 | 2017-08-17 | Canon Inc. | Image processing apparatus and image processing method |
CN107169939A (en) * | 2017-05-31 | 2017-09-15 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method and related product |
CN107241559A (en) * | 2017-06-16 | 2017-10-10 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Portrait photographing method, device, and imaging device |
Also Published As
Publication number | Publication date |
---|---|
CN108156369B (en) | 2020-03-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108055452A (en) | Image processing method, device and equipment | |
CN107959778B (en) | Imaging method and device based on dual camera | |
CN108024054A (en) | Image processing method, device and equipment | |
CN108111749A (en) | Image processing method and device | |
CN107948519A (en) | Image processing method, device and equipment | |
CN108156369A (en) | Image processing method and device | |
JP7015374B2 (en) | Methods for image processing using dual cameras and mobile terminals | |
CN108712608A (en) | Terminal device image pickup method and device | |
Phillips et al. | Camera image quality benchmarking | |
CN108419028B (en) | Image processing method, image processing device, computer-readable storage medium and electronic equipment | |
CN107835372A (en) | Imaging method, device, mobile terminal and storage medium based on dual camera | |
WO2020207261A1 (en) | Image processing method and apparatus based on multiple frames of images, and electronic device | |
CN109005364A (en) | Image formation control method, device, electronic equipment and computer readable storage medium | |
CN108024056B (en) | Imaging method and device based on dual camera | |
CN109005342A (en) | Panorama shooting method, device and imaging device | |
CN108537155A (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
CN107493432A (en) | Image processing method, device, mobile terminal and computer-readable recording medium | |
CN107948520A (en) | Image processing method and device | |
CN108024057A (en) | Background blurring processing method, device and equipment | |
CN108537749A (en) | Image processing method, device, mobile terminal and computer readable storage medium | |
CN107509031A (en) | Image processing method, device, mobile terminal and computer-readable recording medium | |
CN108053438A (en) | Depth of field acquisition methods, device and equipment | |
CN107396079B (en) | White balance adjustment method and device | |
CN107872631B (en) | Image shooting method and device based on double cameras and mobile terminal | |
CN110166707A (en) | Image processing method, device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860; Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd.; Address before: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860; Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd. |
| GR01 | Patent grant | |