CN107734264A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN107734264A
CN107734264A
Authority
CN
China
Prior art keywords
image
depth
scene
virtual
brightness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710812062.6A
Other languages
Chinese (zh)
Other versions
CN107734264B (en)
Inventor
张学勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710812062.6A priority Critical patent/CN107734264B/en
Publication of CN107734264A publication Critical patent/CN107734264A/en
Priority to EP18852861.6A priority patent/EP3680853A4/en
Priority to PCT/CN2018/105121 priority patent/WO2019047985A1/en
Priority to US16/815,177 priority patent/US11516412B2/en
Priority to US16/815,179 priority patent/US11503228B2/en
Application granted granted Critical
Publication of CN107734264B publication Critical patent/CN107734264B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • G06T3/04
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

The present invention relates to an image processing method and device. The method includes: detecting the current scene brightness; if it is detected that the brightness of a preset virtual background image is lower than the scene brightness, adding a virtual light source to the virtual background image according to the luminance difference between the two, so that the brightness of the virtual background image matches the scene brightness; acquiring a scene image of the current user; acquiring a depth image of the current user; processing the scene image and the depth image to extract the person region of the current user in the scene image and obtain a person region image; and fusing the person region image with the virtual background image to obtain a merged image. In this way, fill light is applied to the virtual background image according to the difference between the virtual background brightness and the scene brightness, avoiding an excessive gap between the scene brightness and the brightness of the virtual background image, so that the person region image fuses with the virtual background image more naturally, improving the visual effect of the image processing.

Description

Image processing method and device
Technical field
The present invention relates to the technical field of image processing, and more particularly to an image processing method and device.
Background art
With the development of Internet technology, more and more communication functions have been developed and applied. Among them, video communication has been widely adopted because it enables visual communication between users in different locations.
However, in the related art, when a user conducts a video chat, the scene information around the user is masked with a virtual background in order to protect the user's privacy. In practical applications, however, the brightness of the real scene often differs greatly from that of the virtual background, which makes the fusion of the person region image with the virtual background image appear abrupt, resulting in a poor visual effect.
Summary of the invention
The present invention provides an image processing method and device to solve the technical problem in the prior art that an excessive gap between the scene brightness and the brightness of the virtual background image makes the fusion of the person region image with the virtual background image appear abrupt.
An embodiment of the present invention provides an image processing method for an electronic device. The image processing method includes: detecting the current scene brightness; if it is detected that the brightness of a preset virtual background image is lower than the scene brightness, adding a virtual light source to the virtual background image according to the luminance difference between the two, so that the brightness of the virtual background image matches the scene brightness; acquiring a scene image of the current user; acquiring a depth image of the current user; processing the scene image and the depth image to extract the person region of the current user in the scene image and obtain a person region image; and fusing the person region image with the virtual background image to obtain a merged image.
Another embodiment of the present invention provides an image processing device for an electronic device, including: a visible-light camera, configured to detect the current scene brightness and, if it is detected that the brightness of a preset virtual background image is lower than the scene brightness, to add a virtual light source to the virtual background image according to the luminance difference between the two so that the brightness of the virtual background image matches the scene brightness, and further configured to acquire a scene image of the current user; a depth image acquisition component, configured to acquire a depth image of the current user; and a processor, configured to process the scene image and the depth image to extract the person region of the current user in the scene image and obtain a person region image, and to fuse the person region image with the virtual background image to obtain a merged image.
A further embodiment of the present invention provides an electronic device, including: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the programs include instructions for performing the image processing method described in the above embodiments.
Yet another embodiment of the present invention provides a computer-readable storage medium, including a computer program for use in combination with an electronic device capable of imaging, the computer program being executable by a processor to carry out the image processing method described in the above embodiments.
The technical solutions provided by the embodiments of the present invention may include the following beneficial effects:
The current scene brightness is detected; if it is detected that the brightness of the preset virtual background image is lower than the scene brightness, a virtual light source is added to the virtual background image according to the luminance difference between the two, so that the brightness of the virtual background image matches the scene brightness; a scene image of the current user is acquired; a depth image of the current user is acquired; the scene image and the depth image are processed to extract the person region of the current user in the scene image and obtain a person region image; and the person region image is fused with the virtual background image to obtain a merged image. In this way, fill light is applied to the virtual background image according to the difference between the virtual background brightness and the scene brightness, avoiding an excessive gap between the scene brightness and the brightness of the virtual background image, so that the person region image fuses with the virtual background image more naturally, improving the visual effect of the image processing.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and in part will become apparent from the following description or be learned through practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of the image processing method of some embodiments of the present invention;
Fig. 2 is a schematic flowchart of the image processing method of some embodiments of the present invention;
Fig. 3 is a schematic flowchart of the image processing method of some embodiments of the present invention;
Fig. 4 is a module diagram of the image processing device of some embodiments of the present invention;
Fig. 5 is a schematic structural diagram of the electronic device of some embodiments of the present invention;
Fig. 6 is a schematic flowchart of the image processing method of some embodiments of the present invention;
Fig. 7 is a schematic flowchart of the image processing method of some embodiments of the present invention;
Fig. 8(a) to Fig. 8(e) are schematic diagrams of a scenario of structured light measurement according to an embodiment of the present invention;
Fig. 9(a) and Fig. 9(b) are schematic diagrams of a scenario of structured light measurement according to an embodiment of the present invention;
Fig. 10 is a schematic flowchart of the image processing method of some embodiments of the present invention;
Fig. 11 is a schematic flowchart of the image processing method of some embodiments of the present invention;
Fig. 12 is a module diagram of the electronic device of some embodiments of the present invention; and
Fig. 13 is a module diagram of the electronic device of some embodiments of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the accompanying drawings are exemplary, are intended to explain the present invention, and are not to be construed as limiting the present invention.
The image processing method and device of the embodiments of the present invention are described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of the image processing method of some embodiments of the present invention. As shown in Fig. 1, the method includes:
Step 101: detect the current scene brightness; if it is detected that the brightness of the preset virtual background image is lower than the scene brightness, add a virtual light source to the virtual background image according to the luminance difference between the two, so that the brightness of the virtual background image matches the scene brightness.
The virtual background image may be a two-dimensional virtual background image or a three-dimensional background image, among others; the three-dimensional background image may be obtained by modeling the real scene information around the user, which is not limited here. In order to further improve the user's video experience, the virtual background image may be determined in a preset manner, either randomly or according to the preference profile of the current user; for example, an animation virtual background image may be set for a user who prefers animation, and a landscape-painting virtual background image may be set for a user who prefers landscapes. Again, the virtual background image may be a two-dimensional virtual background image or a three-dimensional background image, which is not limited here.
In addition, the type of the above virtual light source includes a combination of one or more of an area light, a spotlight, a ball light, and sunlight; the working parameters of the virtual light source include a combination of one or more of pitch angle, height, brightness, color, and intensity.
It should be noted that, depending on the application scenario, the current scene brightness may be detected in different ways; for example, the current scene brightness may be detected by a luminance sensor. Likewise, depending on the application scenario, the brightness of the preset virtual background image may be obtained in different ways; for example, the image brightness parameters of the virtual background may be extracted by image processing techniques and the brightness of the virtual background image calculated from these parameters; as another example, the preset virtual background image may be input into a relevant measurement model, and the brightness of the virtual background image determined from the output of the model.
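As a minimal sketch of the first option above (extracting an image brightness parameter by image processing), the brightness of the virtual background image could be estimated as its mean luma. The Rec. 601 weights and the 0-255 value range are assumptions for illustration; the patent does not specify a particular brightness measure.

```python
import numpy as np

def estimate_brightness(rgb_image: np.ndarray) -> float:
    """Estimate the average luminance of an RGB image (values in 0..255).

    Uses Rec. 601 luma weights; the exact brightness measure is left
    unspecified in the description, so this is one plausible choice.
    """
    weights = np.array([0.299, 0.587, 0.114])
    luma = rgb_image[..., :3].astype(np.float64) @ weights
    return float(luma.mean())

# A synthetic dim virtual background compared against a brighter scene
# reading (e.g. from a luminance sensor).
background = np.full((4, 4, 3), 60, dtype=np.uint8)
scene_brightness = 150.0

bg_brightness = estimate_brightness(background)
needs_fill_light = bg_brightness < scene_brightness
```

If `needs_fill_light` is true, the method proceeds to add a virtual light source sized to the luminance difference.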
It can be understood that, in practical applications, when the brightness of the preset virtual background image differs greatly from the scene brightness, luminance characteristics such as the facial region of the current user differ greatly from the preset virtual background image. In this case, in order to subsequently make the user's facial region and the like fuse more naturally with the preset virtual background image, a virtual light source may be added to the virtual background image to raise its brightness.
Specifically, as one possible implementation, the current scene brightness is detected; if it is detected that the brightness of the preset virtual background image is lower than the scene brightness, a virtual light source is added to the virtual background image according to the luminance difference between the two, so that the brightness of the virtual background image matches the scene brightness.
It should be noted that, depending on the specific application scenario, adding a virtual light source to the virtual background image according to the luminance difference between the two can be realized in different ways, as illustrated below:
As one possible implementation, referring to Fig. 2, the above step 101 includes:
Step 201: query the fill-light information corresponding to the preset virtual light sources, and obtain the light source compensation intensity and projection direction matching the luminance difference.
Step 202: add the corresponding virtual light source to the virtual background image according to the light source compensation intensity and projection direction.
It can be understood that, in this example, fill-light information including the light source compensation intensity and projection direction matching each luminance difference is set in advance for each virtual light source. For example, when the virtual light sources are an area light and a spotlight, the correspondence in their fill-light information is shown in Table 1 below:
Table 1
Then, after the above luminance difference is obtained, the fill-light information corresponding to the preset virtual light source is queried, the light source compensation intensity and projection direction matching the luminance difference are obtained, and the corresponding virtual light source is added to the virtual background image according to the light source compensation intensity and projection direction, so that the brightness of the virtual background image matches the brightness in the actual scene information.
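The query in steps 201 and 202 amounts to a table lookup keyed by light-source type and luminance difference. The contents of Table 1 do not survive in the text, so the entries below are invented placeholders; only the lookup structure is taken from the description.

```python
# Hypothetical fill-light table: keys are upper bounds on the luminance
# difference; the intensity/direction values are illustrative inventions,
# not the real Table 1 entries.
FILL_LIGHT_TABLE = {
    "area_light": [
        (20, {"intensity": 0.2, "direction": "frontal"}),
        (50, {"intensity": 0.5, "direction": "frontal"}),
        (float("inf"), {"intensity": 0.8, "direction": "frontal"}),
    ],
    "spotlight": [
        (20, {"intensity": 0.3, "direction": "top_down"}),
        (50, {"intensity": 0.6, "direction": "top_down"}),
        (float("inf"), {"intensity": 1.0, "direction": "top_down"}),
    ],
}

def lookup_fill_light(source_type: str, luminance_diff: float) -> dict:
    """Step 201: return the compensation intensity and projection
    direction matching the given luminance difference."""
    for upper_bound, params in FILL_LIGHT_TABLE[source_type]:
        if luminance_diff <= upper_bound:
            return params
    raise ValueError("no fill-light entry matches the luminance difference")
```

Step 202 would then instantiate a virtual light of the returned intensity and direction in the virtual background scene.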
As another possible implementation, referring to Fig. 3, the above step 101 includes:
Step 301: set virtual light sources of one or more types in the virtual background image.
Step 302: query preset fill-light adjustment information according to the positions of the virtual light sources of each type, and obtain the target working-condition data corresponding to the luminance difference.
Step 303: adjust the working parameters of the virtual light source at the corresponding position according to the target working-condition data.
It can be understood that, in this example, virtual light sources of one or more types are set in the virtual background image in advance. At this point, preset fill-light adjustment information can be queried according to the positions of the virtual light sources of each type to obtain the target working-condition data corresponding to the luminance difference, where the target working-condition data characterizes the total brightness effect when the various virtual light sources operate.
Then, in order to achieve the brightness effect corresponding to the luminance difference, the working parameters of the virtual light source at the corresponding position are adjusted according to the target working-condition data, for example, the pitch angle, height, brightness, color, and intensity of the virtual light source, so that the brightness of the virtual background image matches the brightness in the actual scene information.
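Step 303 (adjusting the working parameters of a positioned virtual light source to match the target working-condition data) might look like the following. The parameter names come from the text; the types, value ranges, and the `apply_target_condition` helper are hypothetical.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class VirtualLight:
    """Working parameters named in the description (pitch angle, height,
    brightness, color, intensity); the numeric ranges are illustrative."""
    kind: str            # "area_light", "spotlight", "ball_light", "sunlight"
    pitch_deg: float
    height: float
    brightness: float
    color: str
    intensity: float

def apply_target_condition(light: VirtualLight, target: dict) -> VirtualLight:
    """Adjust the light's working parameters from target working-condition
    data; keys that are not known parameters are ignored for simplicity."""
    known = {k: v for k, v in target.items()
             if k in light.__dataclass_fields__}
    return replace(light, **known)

spot = VirtualLight("spotlight", pitch_deg=45.0, height=2.0,
                    brightness=0.4, color="warm_white", intensity=0.5)
adjusted = apply_target_condition(spot, {"brightness": 0.8, "intensity": 0.9})
```

Parameters not named in the target data (here, pitch angle, height, and color) are left unchanged.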
Step 102: acquire a scene image of the current user.
Step 103: acquire a depth image of the current user.
Step 104: process the scene image and the depth image to extract the person region of the current user in the scene image and obtain a person region image.
Step 105: fuse the person region image with the virtual background image to obtain a merged image.
Referring to Figs. 4 and 5, the image processing method of the embodiments of the present invention may be implemented by the image processing device 100 of the embodiments of the present invention. The image processing device 100 of the embodiments of the present invention is used in an electronic device 1000. As shown in Fig. 5, the image processing device 100 includes a visible-light camera 11, a depth image acquisition component 12, and a processor 20. Steps 101 and 102 may be implemented by the visible-light camera 11, step 103 may be implemented by the depth image acquisition component 12, and steps 104 and 105 may be implemented by the processor 20.
In other words, the visible-light camera 11 may be used to detect the current scene brightness and, if it is detected that the brightness of the preset virtual background image is lower than the scene brightness, to add a virtual light source to the virtual background image according to the luminance difference between the two so that the brightness of the virtual background image matches the scene brightness, and then to acquire a scene image of the current user. The depth image acquisition component 12 may be used to acquire a depth image of the current user. The processor 20 may be used to process the scene image and the depth image to extract the person region of the current user in the scene image and obtain a person region image, and to fuse the person region image with the preset virtual background image to obtain a merged image.
The scene image may be a grayscale image or a color image, and the depth image characterizes the depth information of each person or object in the scene containing the current user. The scene range of the scene image is consistent with that of the depth image, and for each pixel in the scene image, the depth information corresponding to that pixel can be found in the depth image.
The image processing device 100 of the embodiments of the present invention can be applied to the electronic device 1000 of the embodiments of the present invention. In other words, the electronic device 1000 of the embodiments of the present invention includes the image processing device 100 of the embodiments of the present invention.
In some embodiments, the electronic device 1000 includes a mobile phone, a tablet computer, a notebook computer, a smart bracelet, a smart watch, a smart helmet, smart glasses, and the like.
Existing methods of segmenting a person from the background mainly perform the segmentation according to the similarity and discontinuity of adjacent pixels in terms of pixel values, but such segmentation methods are easily affected by environmental factors such as ambient lighting. The image processing device 100 and the electronic device 1000 of the embodiments of the present invention extract the person region from the scene image by acquiring the depth image of the current user. Since the acquisition of the depth image is not easily affected by factors such as illumination and the color distribution in the scene, the person region extracted from the depth image is more accurate; in particular, the boundary of the person region can be calibrated accurately. Furthermore, the merged image obtained after the more accurate person region image is fused with the preset virtual background is better.
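Steps 104 and 105 can be sketched as follows. The description covers person-region extraction only at a high level, so a plain depth threshold stands in for it here; `max_person_depth` and the tiny synthetic images are illustrative assumptions.

```python
import numpy as np

def merge_person_with_background(scene: np.ndarray,
                                 depth: np.ndarray,
                                 background: np.ndarray,
                                 max_person_depth: float) -> np.ndarray:
    """Extract the person region by depth (step 104, simplified to a
    threshold) and composite it onto the virtual background (step 105)."""
    person_mask = depth < max_person_depth          # True where the person is
    merged = background.copy()
    merged[person_mask] = scene[person_mask]        # paste person pixels
    return merged

# Tiny synthetic example: a 2x2 scene with the person in the left column.
scene = np.array([[[200, 200, 200], [10, 10, 10]],
                  [[200, 200, 200], [10, 10, 10]]], dtype=np.uint8)
depth = np.array([[1.0, 5.0],
                  [1.0, 5.0]])                      # metres; person is near
background = np.zeros((2, 2, 3), dtype=np.uint8)

merged = merge_person_with_background(scene, depth, background, 2.0)
```

Because the mask comes from depth rather than pixel values, it is insensitive to illumination changes in the scene image, which is the advantage the paragraph above describes.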
Referring to Fig. 6, as one possible implementation, the step of acquiring the depth image of the current user in the above step 103 includes:
Step 401: project structured light onto the current user.
Step 402: capture the structured light image modulated by the current user.
Step 403: demodulate the phase information corresponding to each pixel of the structured light image to obtain the depth image.
In this example, with continued reference to Fig. 5, the depth image acquisition component 12 includes a structured light projector 121 and a structured light camera 122. Step 401 may be implemented by the structured light projector 121, and steps 402 and 403 may be implemented by the structured light camera 122.
In other words, the structured light projector 121 may be used to project structured light onto the current user; the structured light camera 122 may be used to capture the structured light image modulated by the current user and to demodulate the phase information corresponding to each pixel of the structured light image to obtain the depth image.
Specifically, after the structured light projector 121 projects structured light of a certain pattern onto the face and body of the current user, a structured light image modulated by the current user is formed on the surface of the face and body of the current user. The structured light camera 122 captures the modulated structured light image and then demodulates the structured light image to obtain the depth image. The pattern of the structured light may be laser stripes, Gray codes, sinusoidal fringes, non-uniform speckle, or the like.
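The computer-generated sinusoidal fringes used later in the fringe projection example can be sketched as follows. The resolution, fringe period, and the pi/2 step of the four-step phase-shifting method are the usual choices in fringe projection, not values taken from this description.

```python
import numpy as np

def make_fringe_patterns(width: int, height: int, period_px: float):
    """Generate four sinusoidal fringe patterns with pi/2 phase steps,
    as used in four-step phase-shifting profilometry. Intensities are
    normalized to [0, 1]."""
    x = np.arange(width)
    patterns = []
    for k in range(4):
        phase_shift = k * np.pi / 2
        row = 0.5 + 0.5 * np.cos(2 * np.pi * x / period_px + phase_shift)
        patterns.append(np.tile(row, (height, 1)))
    return patterns

patterns = make_fringe_patterns(width=64, height=8, period_px=16.0)
```

In a real system, these four patterns are projected onto the user one after another (time-shared), and the camera captures the four modulated images used for phase recovery.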
Referring to Fig. 7, in some embodiments, the step of demodulating the phase information corresponding to each pixel of the structured light image to obtain the depth image in step 403 includes:
Step 501: demodulate the phase information corresponding to each pixel in the structured light image.
Step 502: convert the phase information into depth information.
Step 503: generate the depth image according to the depth information.
Continuing to refer to Fig. 4, in some embodiments, steps 501, 502, and 503 may be implemented by the structured light camera 122.
In other words, the structured light camera 122 may be further used to demodulate the phase information corresponding to each pixel in the structured light image, convert the phase information into depth information, and generate the depth image according to the depth information.
Specifically, compared with unmodulated structured light, the phase information of the modulated structured light has changed, and the structured light presented in the structured light image is distorted structured light, where the changed phase information can characterize the depth information of the object. Therefore, the structured light camera 122 first demodulates the phase information corresponding to each pixel in the structured light image and then calculates the depth information from the phase information, thereby obtaining the final depth image.
In order to make the process of acquiring the depth image of the face and body of the current user from the structured light clearer to those skilled in the art, its specific principle is illustrated below by taking a widely used grating projection technique (fringe projection technique) as an example. Here, the grating projection technique belongs to surface structured light in the broad sense.
As shown in Fig. 8(a), when surface structured light is used for projection, sinusoidal fringes are first generated by computer programming, and the sinusoidal fringes are projected onto the measured object by the structured light projector 121; the structured light camera 122 then captures the degree of bending of the fringes after modulation by the object, and the bent fringes are demodulated to obtain the phase, which is then converted into depth information to obtain the depth image. To avoid problems of error or error coupling, the depth image acquisition component 12 needs to be calibrated before depth information is collected using structured light; the calibration includes calibration of geometric parameters (for example, the relative position parameters between the structured light camera 122 and the structured light projector 121), calibration of the internal parameters of the structured light camera 122, calibration of the internal parameters of the structured light projector 121, and so on.
Specifically, in the first step, sinusoidal fringes are generated by computer programming. Since the phase subsequently needs to be obtained from the distorted fringes, for example using a four-step phase-shifting method, four fringe patterns whose phases differ by π/2 are generated here; the structured light projector 121 then projects the four fringe patterns onto the measured object (the mask shown in Fig. 8(a)) in a time-shared manner, and the structured light camera 122 captures an image such as that on the left of Fig. 8(b), while also reading the fringes of the reference plane shown on the right of Fig. 8(b).
In the second step, phase recovery is performed. The structured light camera 122 calculates the phase-modulated phase map from the four captured fringe patterns (i.e., structured light images); the phase map obtained at this point is a wrapped phase map. Since the result of the four-step phase-shifting algorithm is calculated by an arctangent function, the phase after structured light modulation is limited to [-π, π]; that is, whenever the modulated phase exceeds [-π, π], it wraps around again. The resulting principal phase value is shown in Fig. 8(c).
In the process of phase recovery, de-jump (phase unwrapping) processing is required, that is, the wrapped phase is recovered into a continuous phase. As shown in Fig. 8(d), the left side is the modulated continuous phase map, and the right side is the reference continuous phase map.
In the third step, the phase difference (i.e., the phase information) is obtained by subtracting the reference continuous phase from the modulated continuous phase; this phase difference characterizes the depth information of the measured object relative to the reference plane. The phase difference is then substituted into the phase-to-depth conversion formula (the parameters involved in the formula have been calibrated), and the three-dimensional model of the object under test shown in Fig. 8(e) can be obtained.
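The second step above (recovering the wrapped phase from the four fringe images via an arctangent) can be sketched with the standard four-step phase-shifting formula; the synthetic phase map and amplitudes below are illustrative.

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Four-step phase-shifting demodulation: recover the wrapped phase
    in [-pi, pi] from four fringe images whose phases step by pi/2.
    For I_k = A + B*cos(phi + (k-1)*pi/2):
        I4 - I2 = 2B*sin(phi),  I1 - I3 = 2B*cos(phi)."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic check: build the four images from a known phase map and
# verify the demodulator recovers it. Unwrapping (np.unwrap on real
# data) is unnecessary here because |phi| stays well inside [-pi, pi].
phi = np.linspace(-1.0, 1.0, 5)          # known phase values
A, B = 0.5, 0.4                           # background level and modulation
images = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]

recovered = wrapped_phase(*images)
```

On real data, the recovered wrapped phase would then be unwrapped, the reference-plane phase subtracted, and the calibrated phase-to-depth formula applied, as the three steps above describe.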
It should be understood that, in practical applications, depending on the specific application scenario, the structured light employed in the embodiments of the present invention may be any other pattern besides the above grating.
As one possible implementation, the present invention may also use speckle structured light to collect the depth information of the current user.
Specifically, the method of obtaining depth information with speckle structured light uses a diffraction element that is essentially a flat plate. The diffraction element has a relief diffraction structure with a particular phase distribution, and its cross section has a stepped relief structure with two or more levels. The thickness of the substrate in the diffraction element is approximately 1 micron, and the heights of the steps are non-uniform, ranging from about 0.7 micron to 0.9 micron. The structure shown in Fig. 9(a) is a partial diffraction structure of the collimating beam-splitting element of this embodiment. Fig. 9(b) is a cross-sectional side view along section A-A; the units of the abscissa and the ordinate are microns. The speckle pattern generated by the speckle structured light is highly random, and the pattern changes with distance. Therefore, before using speckle structured light to obtain depth information, the speckle patterns in space must first be calibrated. For example, within a range of 0 to 4 meters from the structured light camera 122, a reference plane is taken every 1 centimeter, so that 400 speckle images are saved after calibration; the smaller the calibration spacing, the higher the precision of the obtained depth information. Then, the structured light projector 121 projects the speckle structured light onto the measured object (i.e., the current user), and the height differences on the surface of the measured object modulate the speckle pattern projected onto it. After the structured light camera 122 captures the speckle pattern projected on the measured object (i.e., the structure light image), the speckle pattern is cross-correlated one by one with the 400 speckle images saved during calibration, yielding 400 correlation images. The position of the measured object in space shows a peak in the correlation images; superimposing these peaks and performing an interpolation operation yields the depth information of the measured object.
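The correlation-based lookup described above can be sketched as follows. This is an illustrative toy in Python, not the patent's implementation: real systems cross-correlate full 2D speckle images against the 400 calibrated reference planes, while here each "speckle image" is a short 1D pattern and the 1 cm plane spacing is carried over from the example in the text.

```python
# Toy speckle-based depth lookup: match the captured pattern against the
# patterns calibrated at known reference planes and report the distance
# of the best-matching plane. Patterns and spacing are demo assumptions.

def correlate(a, b):
    """Zero-mean cross-correlation score of two equal-length patterns."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b))

def depth_from_speckle(captured, references, spacing_cm=1.0):
    """Return the calibrated distance (cm) of the reference plane whose
    stored speckle pattern correlates most strongly with the capture."""
    scores = [correlate(captured, ref) for ref in references]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best * spacing_cm  # plane index -> calibrated distance

# Calibration: one stored pattern per reference plane (every 1 cm).
references = [[0, 1, 0, 0], [1, 0, 0, 1], [0, 0, 1, 1]]
print(depth_from_speckle([1, 0, 0, 1], references))  # -> 1.0
```

In the real system the correlation peak is additionally interpolated between planes, which is why a finer calibration spacing yields more precise depth.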
When a common diffraction element diffracts a light beam, multiple diffracted beams are obtained, but their intensities differ greatly, and the risk of injury to human eyes is also high. Even if the diffracted light is diffracted again, the uniformity of the resulting beams is low. Therefore, the effect of projecting beams diffracted by a common diffraction element onto the measured object is poor. In this embodiment, a collimating beam-splitting element is used. This element not only collimates non-collimated light but also splits it: the non-collimated light reflected by the mirror is emitted, after passing through the collimating beam-splitting element, as multiple collimated beams in different directions, and these emitted collimated beams have approximately equal cross-sectional areas and approximately equal energy fluxes, so the speckle light obtained after these beams are diffracted projects better onto the measured object. Meanwhile, because the laser output is dispersed into multiple beams, the risk of injury to human eyes is further reduced; and compared with other uniformly arranged structured light, speckle structured light consumes less power while achieving the same collection effect.
Referring to Fig. 10, as one possible implementation, step 104 of processing the scene image and the depth image to extract the person region of the current user in the scene image and obtain the person region image includes:
Step 601, identifying the face region in the scene image.
Step 602, obtaining depth information corresponding to the face region from the depth image.
Step 603, determining the depth range of the person region according to the depth information of the face region.
Step 604, determining, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range, to obtain the person region image.
Referring again to Fig. 4, in some embodiments, step 601, step 602, step 603 and step 604 may be implemented by the processor 20.
In other words, the processor 20 may further be configured to identify the face region in the scene image, obtain depth information corresponding to the face region from the depth image, determine the depth range of the person region according to the depth information of the face region, and determine, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range, so as to obtain the person region image.
Specifically, a trained deep learning model may first be used to identify the face region in the scene image, and the depth information of the face region may then be determined according to the correspondence between the scene image and the depth image. Because the face region includes features such as the nose, eyes, ears, and lips, the depth data corresponding to each feature in the depth image differ; for example, when the face faces the depth image acquisition component 12, in the depth image captured by the depth image acquisition component 12 the depth data corresponding to the nose may be smaller, while the depth data corresponding to the ears may be larger. Therefore, the depth information of the face region may be a single value or a range of values. When the depth information of the face region is a single value, that value may be obtained by averaging the depth data of the face region, or by taking the median of the depth data of the face region.
Because the person region includes the face region, in other words, the person region and the face region lie together within a certain depth range, after the processor 20 determines the depth information of the face region, the depth range of the person region can be set according to the depth information of the face region, and the person region that falls within the depth range and is connected with the face region can then be extracted according to the depth range of the person region, to obtain the person region image.
In this way, the person region image can be extracted from the scene image according to the depth information. Because the acquisition of the depth information is not affected by factors such as illumination and color temperature in the environment, the extracted person region image is more accurate.
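Steps 601-604 above can be sketched as a depth-gated flood fill: grow outward from the face region, keeping only connected pixels whose depth falls in a range derived from the face depth. This is a hedged toy in Python; the depth grid, the seed location, and the ±0.4 m margin are assumptions for the demo, not values from the patent.

```python
# Depth-gated flood fill: extract the connected person region around a
# face-region seed, limited to a depth range around the face depth.

def extract_person_region(depth, face_seed, margin=0.4):
    """Flood-fill from `face_seed` over 4-connected pixels whose depth
    lies within `margin` of the face depth; returns the person pixels."""
    rows, cols = len(depth), len(depth[0])
    face_depth = depth[face_seed[0]][face_seed[1]]
    lo, hi = face_depth - margin, face_depth + margin   # step 603
    region, stack = set(), [face_seed]
    while stack:                                        # step 604
        r, c = stack.pop()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if not (lo <= depth[r][c] <= hi):
            continue
        region.add((r, c))
        stack += [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return region

depth = [[3.0, 1.2, 3.0],
         [3.0, 1.3, 1.4],
         [3.0, 3.0, 3.0]]          # meters; 1.2-1.4 = person, 3.0 = wall
print(sorted(extract_person_region(depth, (0, 1))))
# -> [(0, 1), (1, 1), (1, 2)]
```

The connectivity requirement is what separates the person from a wall at a similar depth that does not touch the face region.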
Referring to Fig. 11, in some embodiments, the image processing method further includes the following steps:
Step 701, processing the scene image to obtain a full-field edge image of the scene image.
Step 702, correcting the person region image according to the full-field edge image.
Referring again to Fig. 4, in some embodiments, step 701 and step 702 may be implemented by the processor 20.
In other words, the processor 20 may also be configured to process the scene image to obtain the full-field edge image of the scene image, and to correct the person region image according to the full-field edge image.
The processor 20 first performs edge extraction on the scene image to obtain the full-field edge image, where the edge lines in the full-field edge image include the edge lines of the current user and of background objects in the scene where the current user is located. Specifically, edge extraction may be performed on the scene image with the Canny operator. The core of the Canny edge extraction algorithm mainly includes the following steps: first, the scene image is convolved with a 2D Gaussian filter template to eliminate noise; then, the gradient magnitude of the gray value of each pixel is obtained with a differential operator, and the gradient direction of the gray value of each pixel is calculated from the gradient magnitude; through the gradient direction, the adjacent pixels of each pixel along the gradient direction can be found; then each pixel is traversed, and if the gray value of a pixel is not the maximum compared with the gray values of the two adjacent pixels before and after it along its gradient direction, the pixel is considered not to be an edge point. In this way, the pixels at edge positions in the scene image can be determined, so as to obtain the full-field edge image after edge extraction.
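The gradient and non-maximum suppression steps of the Canny procedure described above can be sketched as follows, restricted for brevity to horizontal gradients on a single grayscale row; a full implementation would also apply the 2D Gaussian smoothing and handle all gradient directions.

```python
# Simplified Canny core: central-difference gradient followed by
# non-maximum suppression along the (here: horizontal) gradient
# direction. The sample row is an illustrative assumption.

def gradient(row):
    """Central-difference gradient magnitude at each interior pixel."""
    return [abs(row[i + 1] - row[i - 1]) / 2 for i in range(1, len(row) - 1)]

def non_max_suppress(grad):
    """Keep a pixel as an edge candidate only if its gradient magnitude
    is a local maximum compared with its two neighbours."""
    edges = []
    for i in range(1, len(grad) - 1):
        if grad[i] >= grad[i - 1] and grad[i] >= grad[i + 1]:
            edges.append(i)
    return edges

row = [10, 10, 10, 80, 200, 200, 200]   # dark-to-bright step = one edge
g = gradient(row)                        # magnitudes for interior pixels
print(non_max_suppress(g))               # -> [2]: one thin edge survives
```

Non-maximum suppression is what thins the broad intensity ramp down to a single-pixel edge line, which is exactly what the full-field edge image needs for correcting the person region boundary.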
After the processor 20 obtains the full-field edge image, the person region image is corrected according to the full-field edge image. It can be understood that the person region image is obtained by merging all pixels in the scene image that are connected with the face region and fall within the set depth range; in some scenes, there may be objects other than the person that are connected with the face region and fall within the depth range. Therefore, to make the extracted person region image more accurate, the full-field edge image may be used to correct the person region image.
Further, the processor 20 may also perform a second correction on the corrected person region image; for example, dilation may be performed on the corrected person region image, expanding the person region image so as to retain the edge details of the person region image.
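The dilation-based second correction can be sketched as a 3x3 binary dilation that grows the person mask by one pixel in every direction; the toy mask below is an assumption for the demo.

```python
# 3x3 binary dilation: a pixel becomes 1 if any of its 8 neighbours
# (or itself) is 1, expanding the mask to preserve edge detail.

def dilate(mask):
    rows, cols = len(mask), len(mask[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            out[r][c] = int(any(
                mask[rr][cc]
                for rr in range(max(0, r - 1), min(rows, r + 2))
                for cc in range(max(0, c - 1), min(cols, c + 2))))
    return out

person = [[0, 0, 0],
          [0, 1, 0],
          [0, 0, 0]]
print(dilate(person))   # single pixel grows to fill the 3x3 block
```

Dilating slightly past the detected boundary trades a few background pixels for keeping hair and clothing edges inside the extracted region.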
After the processor 20 obtains the person region image, the person region image can be fused with the predetermined virtual background to obtain the merged image. In some embodiments, the preset virtual background may be selected at random by the processor 20, or selected by the current user. The merged image after fusion may be displayed on the display screen of the electronic device 1000, or printed by a printer connected to the electronic device 1000.
In one embodiment of the present invention, the current user wishes to hide the current background during a video call with another person. In this case, the image processing method of the embodiments of the present invention can be used to fuse the person region image corresponding to the current user with the preset virtual background; and in order that the extracted person region image fuses well with the preset virtual background, a virtual light source is added to the virtual background image so that the brightness of the virtual background image matches the scene brightness, and the merged image after fusion is then displayed to the target user. Because the current user is in a video call with the other party, the visible light camera 11 needs to capture the scene image of the current user in real time, the depth image acquisition component 12 also needs to collect the depth image corresponding to the current user in real time, and the processor 20 needs to process the scene image and the depth image collected in real time in a timely manner, so that the other party can see a smooth video picture composed of multiple frames of merged images.
In summary, according to the image processing method of the embodiments of the present invention, the current scene brightness is detected; if it is detected that the brightness of the preset virtual background image is lower than the scene brightness, a virtual light source is added to the virtual background image according to the luminance difference between the two, so that the brightness of the virtual background image matches the scene brightness; the scene image of the current user is obtained; the depth image of the current user is obtained; the scene image and the depth image are processed to extract the person region of the current user in the scene image and obtain the person region image; and the person region image is fused with the virtual background image to obtain the merged image. In this way, fill light is applied to the virtual background image according to the brightness difference between the virtual background and the scene, avoiding an excessive gap between the scene brightness and the virtual background image brightness, so that the person region image fuses with the virtual background image more naturally, improving the visual effect of the image processing.
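The fill-light idea summarized above can be sketched with a deliberately simple model: if the virtual background's mean brightness falls below the scene brightness, scale its pixels up by a uniform gain. The linear-gain model and the 8-bit clamp are assumptions for illustration only; the patent's actual compensation (light source type, compensation intensity, projecting direction) is richer.

```python
# Toy brightness matching: add a "virtual light source" as a uniform
# gain that lifts the background's mean brightness to the scene level.

def mean_brightness(img):
    """Mean gray value of a 2D image given as nested lists."""
    return sum(sum(row) for row in img) / (len(img) * len(img[0]))

def add_virtual_light(background, scene_brightness):
    """Raise the background's mean brightness to the scene brightness
    with a uniform gain; leave it untouched if already bright enough."""
    bg = mean_brightness(background)
    if bg >= scene_brightness:
        return background
    gain = scene_brightness / bg
    return [[min(255, round(p * gain)) for p in row] for row in background]

bg = [[40, 60], [60, 80]]            # mean 60: darker than the scene
lit = add_virtual_light(bg, 120.0)   # measured scene brightness 120
print(mean_brightness(lit))          # -> 120.0
```

Matching mean brightness is the crudest form of the condition the method enforces; directional light sources would additionally shade the background to agree with the person's lighting.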
Referring to Fig. 5 and Fig. 12 together, embodiments of the present invention also propose an electronic device 1000. The electronic device 1000 includes the image processing apparatus 100. The image processing apparatus 100 may be implemented with hardware and/or software. The image processing apparatus 100 includes the imaging device 10 and the processor 20.
The imaging device 10 includes a visible light camera 11 and a depth image acquisition component 12.
Specifically, the visible light camera 11 includes an image sensor 111 and a lens 112. The visible light camera 11 may be used to detect the current scene brightness; if it is detected that the brightness of the preset virtual background image is lower than the scene brightness, a virtual light source is added to the virtual background image according to the luminance difference between the two, so that the brightness of the virtual background image matches the scene brightness; the color information of the current user is then captured to obtain the scene image. The image sensor 111 includes a color filter array (such as a Bayer filter array), and the number of lenses 112 may be one or more. When the visible light camera 11 acquires the scene image, each imaging pixel in the image sensor 111 senses the light intensity and wavelength information in the photographed scene and generates a set of raw image data; the image sensor 111 sends the raw image data to the processor 20, and the processor 20 obtains the color scene image after performing operations such as denoising and interpolation on the raw image data. The processor 20 may process each image pixel in the raw image data one by one in multiple formats; for example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits, and the processor 20 may process each image pixel at the same or different bit depths.
The depth image acquisition component 12 includes a structured light projector 121 and a structured light camera 122. The depth image acquisition component 12 may be used to capture the depth information of the current user to obtain the depth image. The structured light projector 121 is used to project structured light onto the current user, where the structured light pattern may be laser stripes, Gray code, sinusoidal stripes, a randomly arranged speckle pattern, or the like. The structured light camera 122 includes an image sensor 1221 and a lens 1222; the number of lenses 1222 may be one or more. The image sensor 1221 is used to capture the structure light image projected by the structured light projector 121 onto the current user. The structure light image may be sent by the depth image acquisition component 12 to the processor 20 for processing such as demodulation, phase recovery, and phase information calculation, to obtain the depth information of the current user.
In some embodiments, the functions of the visible light camera 11 and the structured light camera 122 may be realized by a single camera; in other words, the imaging device 10 includes only one camera and one structured light projector 121, and that camera can capture not only the scene image but also the structure light image.
Besides using structured light to obtain the depth image, the depth image of the current user may also be obtained by depth acquisition methods such as binocular vision or time of flight (TOF).
The processor 20 is further used to fuse the person region image extracted from the scene image and the depth image with the preset virtual background image, and to display the merged image to the target user who is in video communication with the current user. When extracting the person region image, the processor 20 may extract the two-dimensional person region image from the scene image in combination with the depth information in the depth image, or may build a three-dimensional representation of the person region according to the depth information in the depth image and color-fill the three-dimensional person region in combination with the color information in the scene image to obtain a three-dimensional colored person region image. Accordingly, when fusing the person region image with the preset virtual background image, either the two-dimensional person region image may be fused with the preset virtual background image to obtain the merged image, or the three-dimensional colored person region image may be fused with the preset virtual background image to obtain the merged image.
In addition, the image processing apparatus 100 also includes an image memory 30. The image memory 30 may be embedded in the electronic device 1000, or may be a memory independent of the electronic device 1000, and may include a direct memory access (DMA) feature. The raw image data collected by the visible light camera 11, or the structure-light-image related data collected by the depth image acquisition component 12, may be transmitted to the image memory 30 for storage or caching. The processor 20 may read the raw image data from the image memory 30 for processing to obtain the scene image, and may also read the structure-light-image related data from the image memory 30 for processing to obtain the depth image. In addition, the scene image and the depth image may also be stored in the image memory 30 for the processor 20 to call for processing at any time; for example, the processor 20 calls the scene image and the depth image to perform person region extraction, and the extracted person region image is fused with the preset virtual background image to obtain the merged image. The preset virtual background image and the merged image may also be stored in the image memory 30.
The image processing apparatus 100 may also include a display 50. The display 50 may obtain the merged image directly from the processor 20, or from the image memory 30. The display 50 displays the merged image for the target user to watch, or for further processing by a graphics engine or a graphics processing unit (GPU). The image processing apparatus 100 also includes an encoder/decoder 60, which may encode and decode the image data of the scene image, the depth image, the merged image, and the like; the encoded image data may be stored in the image memory 30 and may be decompressed by the decoder for display before the image is shown on the display 50. The encoder/decoder 60 may be realized by a central processing unit (CPU), a GPU, or a coprocessor. In other words, the encoder/decoder 60 may be any one or more of a CPU, a GPU, and a coprocessor.
The image processing apparatus 100 also includes a control logic device 40. When the imaging device 10 is imaging, the processor 20 may analyze the data obtained by the imaging device to determine image statistics of one or more control parameters (for example, exposure time) of the imaging device 10. The processor 20 sends the image statistics to the control logic device 40, and the control logic device 40 controls the imaging device 10 to image with the determined control parameters. The control logic device 40 may include a processor and/or a microcontroller that executes one or more routines (such as firmware); the one or more routines may determine the control parameters of the imaging device 10 according to the received image statistics.
Referring to Fig. 13, the electronic device 1000 of the embodiments of the present invention includes one or more processors 200, a memory 300, and one or more programs 310. The one or more programs 310 are stored in the memory 300 and are configured to be executed by the one or more processors 200. The programs 310 include instructions for performing the image processing method of any one of the above embodiments.
For example, the programs 310 include instructions for performing the image processing method described in the following steps:
Step 01: detecting the current scene brightness; if it is detected that the brightness of the preset virtual background image is lower than the scene brightness, adding a virtual light source to the virtual background image according to the luminance difference between the two, so that the brightness of the virtual background image matches the scene brightness.
Step 02: obtaining the scene image of the current user.
Step 03: obtaining the depth image of the current user.
Step 04: processing the scene image and the depth image to extract the person region of the current user in the scene image and obtain the person region image.
Step 05: fusing the person region image with the virtual background image to obtain the merged image.
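Steps 01-05 above can be strung together into one toy pipeline. Everything here is a stand-in assumption (mean-based brightness, a uniform-gain fill light, a fixed 0.5 m depth margin for the person region); it only shows how the five steps compose, not how the hardware-assisted implementation works.

```python
# Toy composition of steps 01-05 on tiny grayscale images (nested lists).

def process_frame(scene, depth, background, face_seed):
    def mean(img):
        return sum(sum(row) for row in img) / (len(img) * len(img[0]))
    # Step 01: add fill light if the background is darker than the scene.
    if mean(background) < mean(scene):
        gain = mean(scene) / mean(background)
        background = [[min(255, round(p * gain)) for p in row]
                      for row in background]
    # Steps 02-04: person = pixels near the face depth (toy 0.5 m rule).
    face_depth = depth[face_seed[0]][face_seed[1]]
    person = [[abs(d - face_depth) <= 0.5 for d in row] for row in depth]
    # Step 05: composite person pixels over the re-lit background.
    return [[scene[r][c] if person[r][c] else background[r][c]
             for c in range(len(scene[0]))]
            for r in range(len(scene))]

scene = [[200, 50], [60, 210]]     # grayscale scene image, mean 130
depth = [[1.0, 3.0], [3.0, 1.1]]   # meters: person ~1 m, wall at 3 m
bg    = [[10, 20], [30, 40]]       # dark virtual background, mean 25
print(process_frame(scene, depth, bg, (0, 0)))
```

Running per frame, as the video-call embodiment requires, this structure is what lets the other party see a smooth sequence of merged images.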
As another example, the programs 310 also include instructions for performing the image processing method described in the following steps:
Step 0331: demodulating the phase information corresponding to each pixel in the structure light image;
Step 0332: converting the phase information into depth information; and
Step 0333: generating the depth image according to the depth information.
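Steps 0331-0333 can be sketched with the standard four-step phase-shifting formula, which is one common way to demodulate per-pixel phase from fringe-type structured light; the four-capture setup and the linear phase-to-depth scale below are assumptions, since the patent does not specify its demodulation formula.

```python
import math

def demodulate_phase(i0, i1, i2, i3):
    """Step 0331: wrapped phase from four captures shifted by pi/2,
    using the four-step phase-shifting identity atan2(i3-i1, i0-i2)."""
    return math.atan2(i3 - i1, i0 - i2)

def phase_to_depth(phase, scale=10.0):
    """Step 0332: toy linear phase-to-depth mapping (assumed mm/rad)."""
    return phase * scale

def generate_depth_image(captures):
    """Step 0333: per-pixel depth image from four phase-shifted frames."""
    f0, f1, f2, f3 = captures
    return [[phase_to_depth(demodulate_phase(a, b, c, d))
             for a, b, c, d in zip(r0, r1, r2, r3)]
            for r0, r1, r2, r3 in zip(f0, f1, f2, f3)]

# One pixel with intensities 2 + cos(phase + k*pi/2), true phase = pi/4:
p = math.pi / 4
frames = [[[2 + math.cos(p + k * math.pi / 2)]] for k in range(4)]
print(generate_depth_image(frames))  # ~ [[7.85]], i.e. (pi/4) * 10
```

A production pipeline would additionally unwrap the phase (it is only recovered modulo 2*pi) and calibrate the phase-to-depth mapping per pixel.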
The computer-readable storage medium of the embodiments of the present invention includes a computer program used in combination with the electronic device 1000 capable of imaging. The computer program can be executed by the processor 200 to complete the image processing method of any one of the above embodiments.
For example, the computer program can be executed by the processor 200 to complete the image processing method described in the following steps:
Step 01: detecting the current scene brightness; if it is detected that the brightness of the preset virtual background image is lower than the scene brightness, adding a virtual light source to the virtual background image according to the luminance difference between the two, so that the brightness of the virtual background image matches the scene brightness.
Step 02: obtaining the scene image of the current user.
Step 03: obtaining the depth image of the current user.
Step 04: processing the scene image and the depth image to extract the person region of the current user in the scene image and obtain the person region image.
Step 05: fusing the person region image with the virtual background image to obtain the merged image.
As another example, the computer program can also be executed by the processor 200 to complete the image processing method described in the following steps:
Step 0331: demodulating the phase information corresponding to each pixel in the structure light image;
Step 0332: converting the phase information into depth information; and
Step 0333: generating the depth image according to the depth information.
In the description of this specification, the description of the reference terms "one embodiment", "some embodiments", "example", "specific example", "some examples", and the like means that a specific feature, structure, material, or characteristic described in combination with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Moreover, the described specific features, structures, materials, or characteristics may be combined in a suitable manner in any one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may join and combine the different embodiments or examples, and the features of the different embodiments or examples, described in this specification.
In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance, or as implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "multiple" means at least two, such as two or three, unless specifically defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, fragment, or portion of code that includes one or more executable instructions for implementing the steps of a custom logic function or process; and the scope of the preferred embodiments of the present invention includes other implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order, according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device), or for use in combination with such instruction execution systems, apparatuses, or devices. For the purposes of this specification, a "computer-readable medium" may be any apparatus that can contain, store, communicate, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include the following: an electrical connection (electronic apparatus) with one or more wirings, a portable computer diskette (magnetic apparatus), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic apparatus, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that each part of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one of the following techniques well known in the art, or a combination thereof, may be used: a discrete logic circuit with a logic gate circuit for implementing a logic function on a data signal, an application-specific integrated circuit with a suitable combinational logic gate circuit, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those of ordinary skill in the art can understand that all or part of the steps carried in the method of the above embodiments may be completed by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and the program, when executed, includes one of the steps of the method embodiment or a combination thereof.
In addition, each functional unit in each embodiment of the present invention may be integrated in one processing module, or each unit may exist physically alone, or two or more units may be integrated in one module. The above integrated module may be implemented in the form of hardware, or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like. Although the embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and cannot be understood as limiting the present invention; those of ordinary skill in the art may change, modify, replace, and vary the above embodiments within the scope of the present invention.

Claims (15)

1. An image processing method, for an electronic device, characterized in that the image processing method includes:
detecting the current scene brightness; if it is detected that the brightness of a preset virtual background image is lower than the scene brightness, adding a virtual light source to the virtual background image according to the luminance difference between the two, so that the brightness of the virtual background image matches the scene brightness;
obtaining a scene image of a current user;
obtaining a depth image of the current user;
processing the scene image and the depth image to extract the person region of the current user in the scene image and obtain a person region image; and
fusing the person region image with the virtual background image to obtain a merged image.
2. The method according to claim 1, characterized in that adding the virtual light source to the virtual background image according to the luminance difference between the two includes:
querying fill-light information corresponding to the preset virtual light source, and obtaining a light source compensation intensity and a projecting direction that match the luminance difference; and
adding the corresponding virtual light source to the virtual background image according to the light source compensation intensity and the projecting direction.
3. The method according to claim 1, characterized in that adding the virtual light source to the virtual background image according to the luminance difference between the two includes:
setting virtual light sources of one or more types in the virtual background image;
querying preset fill-light adjustment information according to the positions of the virtual light sources of each type, and obtaining target working state data corresponding to the luminance difference; and
adjusting the working parameters of the virtual light source at the corresponding position according to the target working state data.
4. The method according to claim 3, characterized in that:
the type of the virtual light source includes one or more combinations of an area light, a spotlight, a ball light, and sunlight; and
the working parameters of the virtual light source include one or more combinations of pitch angle, height, brightness, color, and intensity.
5. The method according to claim 1, characterized by further including:
determining the virtual background image at random according to a preset mode, or according to the preference profile of the current user.
6. The method according to claim 5, characterized in that the image type of the virtual background image includes:
a two-dimensional virtual background image, or a three-dimensional virtual background image.
7. The method according to claim 1, characterized in that obtaining the depth image of the current user includes:
projecting structured light onto the current user;
capturing a structure light image modulated by the current user; and
demodulating phase information corresponding to each pixel of the structure light image to obtain the depth image.
8. The method according to claim 7, characterized in that demodulating the phase information corresponding to each pixel of the structure light image to obtain the depth image includes:
demodulating the phase information corresponding to each pixel in the structure light image;
converting the phase information into depth information; and
generating the depth image according to the depth information.
9. The method according to claim 1, characterized in that processing the scene image and the depth image to extract the person region of the current user in the scene image and obtain the person region image includes:
identifying the face region in the scene image;
obtaining depth information corresponding to the face region from the depth image;
determining the depth range of the person region according to the depth information of the face region; and
determining, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range, to obtain the person region image.
10. The method according to claim 9, characterized by further including:
processing the scene image to obtain a full-field edge image of the scene image; and
correcting the person region image according to the full-field edge image.
11. An image processing apparatus for an electronic device, comprising:
a visible-light camera, configured to detect a current scene brightness, wherein, if the brightness of a preset virtual background image is detected to be lower than the scene brightness, a virtual light source is added to the virtual background image according to the difference between the two brightness values, so that the brightness of the virtual background image matches the scene brightness; the visible-light camera being further configured to acquire a scene image of the current user;
a depth image acquisition component, configured to acquire a depth image of the current user; and
a processor, configured to process the scene image and the depth image to extract a person region of the current user in the scene image and obtain a person region image, and to fuse the person region image with the virtual background image to obtain a merged image.
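Claim 11's brightness matching and fusion are underspecified. A minimal grayscale sketch under assumed conventions (pixel values in [0, 1], a Gaussian radial falloff standing in for the "virtual light source", gain chosen so mean brightnesses match before clipping; all names and parameters are illustrative):

```python
import numpy as np

def add_virtual_light(background, scene_brightness, center=(0.5, 0.5), spread=0.5):
    """If the virtual background is darker than the live scene, brighten
    it with a radial-falloff light until the mean brightnesses match."""
    bg_brightness = background.mean()
    if bg_brightness >= scene_brightness:
        return background  # already at least as bright as the scene
    h, w = background.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = center[0] * (h - 1), center[1] * (w - 1)
    dist2 = ((yy - cy) / h) ** 2 + ((xx - cx) / w) ** 2
    falloff = np.exp(-dist2 / (2 * spread ** 2))
    # Scale the light so the mean added brightness equals the deficit.
    gain = (scene_brightness - bg_brightness) / max(falloff.mean(), 1e-6)
    return np.clip(background + gain * falloff, 0.0, 1.0)

def merge(person, mask, background):
    """Composite the person region over the (relit) virtual background."""
    return np.where(mask, person, background)
```

A real implementation would work in a luminance channel of a color image and likely feather the mask edge; the point here is only the control flow the claim describes: compare brightness, relight the background, then fuse.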
12. The apparatus according to claim 11, wherein the depth image acquisition component comprises a structured light projector and a structured light camera, the structured light projector being configured to project structured light onto the current user;
the structured light camera being configured to:
capture a structured light image modulated by the current user; and
demodulate phase information corresponding to each pixel of the structured light image to obtain the depth image.
13. The apparatus according to claim 12, wherein the structured light camera is further configured to:
demodulate the phase information corresponding to each pixel in the structured light image;
convert the phase information into depth information; and
generate the depth image according to the depth information.
14. An electronic device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the image processing method of any one of claims 1-10.
15. A computer-readable storage medium comprising a computer program for use in combination with an electronic device capable of capturing images, the computer program being executable by a processor to perform the image processing method of any one of claims 1-10.
CN201710812062.6A 2017-09-11 2017-09-11 Image processing method and device Active CN107734264B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201710812062.6A CN107734264B (en) 2017-09-11 2017-09-11 Image processing method and device
EP18852861.6A EP3680853A4 (en) 2017-09-11 2018-09-11 Image processing method and device, electronic device, and computer-readable storage medium
PCT/CN2018/105121 WO2019047985A1 (en) 2017-09-11 2018-09-11 Image processing method and device, electronic device, and computer-readable storage medium
US16/815,177 US11516412B2 (en) 2017-09-11 2020-03-11 Image processing method, image processing apparatus and electronic device
US16/815,179 US11503228B2 (en) 2017-09-11 2020-03-11 Image processing method, image processing apparatus and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710812062.6A CN107734264B (en) 2017-09-11 2017-09-11 Image processing method and device

Publications (2)

Publication Number Publication Date
CN107734264A true CN107734264A (en) 2018-02-23
CN107734264B CN107734264B (en) 2020-12-22

Family

ID=61206020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710812062.6A Active CN107734264B (en) 2017-09-11 2017-09-11 Image processing method and device

Country Status (1)

Country Link
CN (1) CN107734264B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1411277A (en) * 2001-09-26 2003-04-16 Lg电子株式会社 Video-frequency communication system
US20110299772A1 (en) * 2009-02-13 2011-12-08 Janssen Johannes H M Image processing system for processing a digital image and image processing method of processing a digital image
CN102663810A (en) * 2012-03-09 2012-09-12 北京航空航天大学 Full-automatic modeling approach of three dimensional faces based on phase deviation scanning
CN103606182A (en) * 2013-11-19 2014-02-26 华为技术有限公司 Method and device for image rendering
CN105430317A (en) * 2015-10-23 2016-03-23 东莞酷派软件技术有限公司 Video background setting method and terminal equipment
CN106909911A (en) * 2017-03-09 2017-06-30 广东欧珀移动通信有限公司 Image processing method, image processing apparatus and electronic installation
CN106954034A (en) * 2017-03-28 2017-07-14 宇龙计算机通信科技(深圳)有限公司 A kind of image processing method and device
CN107025635A (en) * 2017-03-09 2017-08-08 广东欧珀移动通信有限公司 Processing method, processing unit and the electronic installation of image saturation based on the depth of field

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019047985A1 (en) * 2017-09-11 2019-03-14 Oppo广东移动通信有限公司 Image processing method and device, electronic device, and computer-readable storage medium
US11503228B2 (en) 2017-09-11 2022-11-15 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, image processing apparatus and computer readable storage medium
US11516412B2 (en) 2017-09-11 2022-11-29 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, image processing apparatus and electronic device
CN109447927A (en) * 2018-10-15 2019-03-08 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN112883759A (en) * 2019-11-29 2021-06-01 杭州海康威视数字技术股份有限公司 Method for detecting image noise of biological characteristic part
CN112883759B (en) * 2019-11-29 2023-09-26 杭州海康威视数字技术股份有限公司 Method for detecting image noise of biological feature part
CN111223192A (en) * 2020-01-09 2020-06-02 北京华捷艾米科技有限公司 Image processing method and application method, device and equipment thereof
CN111223192B (en) * 2020-01-09 2023-10-03 北京华捷艾米科技有限公司 Image processing method, application method, device and equipment thereof
CN116206558A (en) * 2023-05-06 2023-06-02 惠科股份有限公司 Display panel control method and display device
CN116206558B (en) * 2023-05-06 2023-08-04 惠科股份有限公司 Display panel control method and display device

Also Published As

Publication number Publication date
CN107734264B (en) 2020-12-22

Similar Documents

Publication Publication Date Title
CN107734267A Image processing method and device
CN107610077A Image processing method and device, electronic device and computer-readable storage medium
CN107707839A Image processing method and device
CN107742296A Dynamic image generation method and electronic device
CN107734264A Image processing method and device
CN107529096A Image processing method and device
CN107707838A Image processing method and device
CN107610078A Image processing method and device
CN107509045A Image processing method and device, electronic device and computer-readable storage medium
CN107707831A Image processing method and device, electronic device and computer-readable storage medium
CN107707835A Image processing method and device, electronic device and computer-readable storage medium
CN107807806A Display parameter adjustment method and device, and electronic device
CN107610080A Image processing method and device, electronic device and computer-readable storage medium
CN107644440A Image processing method and device, electronic device and computer-readable storage medium
CN107509043A Image processing method and device
CN107705277A Image processing method and device
CN107705278A Method and terminal device for adding a dynamic effect
CN107610076A Image processing method and device, electronic device and computer-readable storage medium
CN107592491A Video communication background display method and device
CN107613223A Image processing method and device, electronic device and computer-readable storage medium
CN107705243A Image processing method and device, electronic device and computer-readable storage medium
CN107613239A Video communication background display method and device
CN107527335A Image processing method and device, electronic device and computer-readable storage medium
CN107454336A Image processing method and device, electronic device and computer-readable storage medium
CN107613228A Method and terminal device for adding virtual clothing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.

Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant before: Guangdong OPPO Mobile Communications Co., Ltd.

GR01 Patent grant