CN107527335A - Image processing method and apparatus, electronic device, and computer-readable storage medium - Google Patents
Image processing method and apparatus, electronic device, and computer-readable storage medium
- Publication number
- CN107527335A (application CN201710813592.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- dimensional background
- depth
- predetermined
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/61—Scene description
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses an image processing method for processing a merged image. The merged image is formed by merging a predetermined three-dimensional background image with a person-region image of the current user extracted from a scene image captured in a real scene. The image processing method includes: performing three-dimensional reconstruction of objects in the scene image to form three-dimensional background material and storing the material; and merging the selected three-dimensional background material with the predetermined three-dimensional background image. The invention also discloses an image processing apparatus, an electronic device, and a computer-readable storage medium. The image processing method, image processing apparatus, electronic device, and computer-readable storage medium of the embodiments of the present invention build three-dimensional models of the content of the scene image to form background material, which can be added to the background image as selected, so that the merged image is richer, the user experience is improved, and interest is enhanced.
Description
Technical field
The present invention relates to the field of image processing technology, and more particularly to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background art
When the person image of an existing real scene is merged with a virtual background image, the background image is typically fixed, and the user experience is poor.
Summary of the invention
Embodiments of the present invention provide an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
The image processing method of the embodiments of the present invention is used for processing a merged image, the merged image being formed by merging a predetermined three-dimensional background image with a person-region image of the current user in a scene image captured in a real scene. The image processing method includes:
performing three-dimensional reconstruction of objects in the scene image to form three-dimensional background material and storing the material; and
merging the selected three-dimensional background material with the predetermined three-dimensional background image.
The image processing apparatus of the embodiments of the present invention is used for processing a merged image, the merged image being formed by merging a predetermined three-dimensional background image with a person-region image of the current user in a scene image captured in a real scene. The image processing apparatus includes:
a processor, the processor being configured to:
perform three-dimensional reconstruction of objects in the scene image to form three-dimensional background material and store the material; and
merge the selected three-dimensional background material with the predetermined three-dimensional background image.
The electronic device of the embodiments of the present invention includes one or more processors, a memory, and one or more programs. The one or more programs are stored in the memory and configured to be executed by the one or more processors, and the programs include instructions for performing the image processing method described above.
The computer-readable storage medium of the embodiments of the present invention includes a computer program used in combination with an electronic device capable of capturing images, the computer program being executable by a processor to carry out the image processing method described above.
With the image processing method, image processing apparatus, electronic device, and computer-readable storage medium of the embodiments of the present invention, when a merged image of a real person and a virtual background is processed, the content of the scene image is modeled in three dimensions to form background material, which can be added to the background image as selected, so that the merged image is richer, the user experience is improved, and interest is enhanced.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and will in part become apparent from the description or be learned by practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow chart of an image processing method according to some embodiments of the present invention.
Fig. 2 is a schematic structural view of an electronic device according to some embodiments of the present invention.
Fig. 3 is a flow chart of an image processing method according to some embodiments of the present invention.
Fig. 4 is a block diagram of an image processing apparatus according to some embodiments of the present invention.
Fig. 5 is a flow chart of an image processing method according to some embodiments of the present invention.
Fig. 6 is a flow chart of an image processing method according to some embodiments of the present invention.
Fig. 7(a) to Fig. 7(e) are schematic views of a structured-light measurement scene according to an embodiment of the present invention.
Fig. 8(a) and Fig. 8(b) are schematic views of a structured-light measurement scene according to an embodiment of the present invention.
Fig. 9 is a flow chart of an image processing method according to some embodiments of the present invention.
Fig. 10 is a flow chart of an image processing method according to some embodiments of the present invention.
Fig. 11 is a flow chart of an image processing method according to some embodiments of the present invention.
Fig. 12 is a flow chart of an image processing method according to some embodiments of the present invention.
Fig. 13 is a block diagram of an electronic device according to some embodiments of the present invention.
Fig. 14 is a block diagram of an electronic device according to some embodiments of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals denote, throughout, the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the present invention and are not to be construed as limiting the present invention.
Referring to Fig. 1 and Fig. 2, the image processing method of the embodiments of the present invention is used for processing a merged image and includes the steps of:
S10: performing three-dimensional reconstruction of objects in a scene image to form three-dimensional background material and storing the material; and
S20: merging the selected three-dimensional background material with a predetermined three-dimensional background image.
Referring to Fig. 2, the image processing apparatus 100 of the embodiments of the present invention is used for processing a merged image, the merged image being formed by merging a predetermined three-dimensional background image with a person-region image of the current user in a scene image captured in a real scene. The image processing method of the embodiments of the present invention can be implemented by the image processing apparatus 100 of the embodiments of the present invention and applied to an electronic device 1000. The image processing apparatus 100 includes a processor 20, and steps S10 and S20 can be implemented by the processor 20.
In other words, the processor 20 is configured to perform three-dimensional reconstruction of objects in the scene image to form and store three-dimensional background material, and to merge the selected three-dimensional background material with the predetermined three-dimensional background image.
In some application scenarios, such as video conferences or video calls, the participants may, for reasons of safety, privacy, or added interest, replace the real scene with a predetermined three-dimensional image as background, fuse it with the person-region image of the current user in the scene image captured in the real scene, and output the merged image to the other party. The predetermined three-dimensional image is usually single and fixed, lacking variation; using the same predetermined three-dimensional background image for a long time becomes dull, and the user experience is poor.
The image processing method of the embodiments of the present invention performs three-dimensional reconstruction of objects present in the scene image under the current real scene to form three-dimensional background material. It should be noted that the collection of this three-dimensional background material can be completed, with the cooperation of related components, at any time when the user turns on the front or rear camera, and the material can be stored after collection. Then, in an application scenario in which the three-dimensional background image is merged with a portrait, the user can add the collected three-dimensional background material to the background image in addition to the default background image, thereby enriching the content of the three-dimensional background image, enhancing interest, and improving the user experience.
The image processing apparatus 100 of the embodiments of the present invention can be applied to the electronic device 1000 of the embodiments of the present invention. In other words, the electronic device 1000 of the embodiments of the present invention includes the image processing apparatus 100 of the embodiments of the present invention.
In some embodiments, the electronic device 1000 includes a mobile phone, a tablet computer, a notebook computer, a smart bracelet, a smart watch, a smart helmet, smart glasses, and the like.
Referring to Fig. 3 and Fig. 4, in some embodiments, the image processing method includes:
S01: acquiring a scene image of the current user;
S02: acquiring a depth image of the current user;
S03: processing the scene image and the depth image to extract the person region of the current user in the scene image and obtain a person-region image; and
S04: merging the person-region image with the predetermined three-dimensional background image to obtain the merged image.
In some embodiments, the image processing apparatus 100 further includes a visible-light camera 11 and a depth image acquisition assembly 12. Step S01 can be implemented by the visible-light camera 11, step S02 can be implemented by the depth image acquisition assembly 12, and steps S03 and S04 can be implemented by the processor 20.
In other words, the visible-light camera 11 can be used to acquire the scene image of the current user; the depth image acquisition assembly 12 can be used to acquire the depth image of the current user; and the processor 20 can be used to process the scene image and the depth image to extract the person region of the current user in the scene image, obtain the person-region image, and merge the person-region image with the predetermined three-dimensional background image to obtain the merged image.
The scene image may be a grayscale image or a color image, and the depth image represents the depth information of each person or object in the scene containing the current user. The scene range of the scene image is consistent with that of the depth image, and for each pixel in the scene image the depth information of the corresponding pixel can be found in the depth image.
Existing methods of segmenting a person from the background mainly perform the segmentation according to the similarity and discontinuity of adjacent pixels in terms of pixel value, but such segmentation methods are easily affected by environmental factors such as ambient light. The image processing method, image processing apparatus 100, and electronic device 1000 of the embodiments of the present invention extract the person region from the scene image by acquiring the depth image of the current user. Because the acquisition of the depth image is not easily affected by factors such as illumination or the color distribution in the scene, the person region extracted through the depth image is more accurate; in particular, the boundary of the person region can be calibrated accurately. Furthermore, the merged image obtained by merging the more accurate person-region image with the predetermined three-dimensional background is better.
In some embodiments, the predetermined three-dimensional background image may be a predetermined three-dimensional background image obtained by modeling an actual scene, or a predetermined three-dimensional background image obtained by animation. The predetermined three-dimensional background image may be selected at random by the processor 20, or selected by the current user.
Referring to Fig. 5, in some embodiments, step S02 includes the steps of:
S021: projecting structured light onto the current user;
S022: capturing the structured-light image modulated by the current user; and
S023: demodulating the phase information corresponding to each pixel of the structured-light image to obtain the depth image.
Referring again to Fig. 2, in some embodiments, the depth image acquisition assembly 12 includes a structured-light projector 121 and a structured-light camera 122. Step S021 can be implemented by the structured-light projector 121, and steps S022 and S023 can be implemented by the structured-light camera 122.
In other words, the structured-light projector 121 can be used to project structured light onto the current user, and the structured-light camera 122 can be used to capture the structured-light image modulated by the current user and to demodulate the phase information corresponding to each pixel of the structured-light image to obtain the depth image.
Specifically, after the structured-light projector 121 projects structured light of a certain pattern onto the face and body of the current user, a structured-light image modulated by the current user is formed on the surface of the face and body of the current user. The structured-light camera 122 captures the modulated structured-light image, and the structured-light image is then demodulated to obtain the depth image. The pattern of the structured light may be laser stripes, Gray codes, sinusoidal fringes, a non-uniform speckle pattern, or the like.
Referring to Fig. 6, in some embodiments, step S023 of demodulating the phase information corresponding to each pixel of the structured-light image to obtain the depth image includes:
S0231: demodulating the phase information corresponding to each pixel in the structured-light image;
S0232: converting the phase information into depth information; and
S0233: generating the depth image according to the depth information.
In some embodiments, steps S0231, S0232, and S0233 can be implemented by the structured-light camera 122.
In other words, the structured-light camera 122 can further be used to demodulate the phase information corresponding to each pixel in the structured-light image, convert the phase information into depth information, and generate the depth image according to the depth information.
Specifically, compared with the unmodulated structured light, the phase information of the modulated structured light is changed, so the structured light shown in the structured-light image is distorted, and the changed phase information can characterize the depth information of the object. Therefore, the structured-light camera 122 first demodulates the phase information corresponding to each pixel in the structured-light image and then calculates the depth information from the phase information, thereby obtaining the final depth image.
To help those skilled in the art understand more clearly the process of acquiring the depth image of the face and body of the current user with structured light, the concrete principle is illustrated below using the widely applied grating-projection technique (fringe-projection technique) as an example. The grating-projection technique belongs, in the broad sense, to surface structured light.
As shown in Fig. 7(a), when surface structured light is used for projection, sinusoidal fringes are first generated by computer programming and projected onto the measured object by the structured-light projector 121. The structured-light camera 122 then captures the degree of bending of the fringes after modulation by the object, and the bent fringes are demodulated to obtain the phase, which is then converted into depth information to obtain the depth image. To avoid errors or error coupling, the depth image acquisition assembly 12 must be calibrated before depth information is collected with structured light. The calibration includes the calibration of geometric parameters (for example, the relative position parameters between the structured-light camera 122 and the structured-light projector 121), the internal parameters of the structured-light camera 122, the internal parameters of the structured-light projector 121, and so on.
Specifically, in the first step, sinusoidal fringes are generated by computer programming. Because the distorted fringes will subsequently be used to obtain the phase, for example with the four-step phase-shifting method, four fringe patterns whose phases differ by π/2 are generated here. The structured-light projector 121 then projects the four fringe patterns, time-multiplexed, onto the measured object (the mask shown in Fig. 7(a)); the structured-light camera 122 collects the image on the left of Fig. 7(b) while reading the fringes of the reference plane shown on the right of Fig. 7(b).
In the second step, phase recovery is performed. The structured-light camera 122 calculates the modulated phase map from the four collected modulated fringe patterns (i.e., structured-light images); the result obtained at this point is a wrapped phase map. Because the result of the four-step phase-shifting algorithm is calculated by an arctangent function, the phase of the modulated structured light is limited to [-π, π]; that is, whenever the modulated phase exceeds [-π, π], it wraps around again. The resulting principal phase values are shown in Fig. 7(c).
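The wrapped-phase computation of this second step can be sketched as follows, assuming four fringe images whose phase offsets differ by π/2 as described above; the synthetic phase ramp at the end is only a self-check and not part of the patent:

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Four-step phase shifting: recover the wrapped phase in [-pi, pi]
    from four fringe images with pi/2 phase steps,
        I_k = A + B*cos(phi + k*pi/2),
    so that I3 - I1 = 2B*sin(phi) and I0 - I2 = 2B*cos(phi)."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic check on a known phase ramp (noise-free, B > 0).
phi_true = np.linspace(-3.0, 3.0, 7)      # stays inside (-pi, pi)
A, B = 100.0, 50.0
imgs = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = wrapped_phase(*imgs)
```

The arctangent is exactly why the recovered phase is confined to [-π, π] and must be unwrapped afterwards.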
During the phase recovery, a de-jump process is needed to restore the wrapped phase to a continuous phase. As shown in Fig. 7(d), the left side is the unwrapped modulated phase map, and the right side is the reference continuous phase map.
In the third step, the phase difference (i.e., the phase information) is obtained by subtracting the reference continuous phase from the modulated continuous phase. This phase difference characterizes the depth information of the measured object relative to the reference plane. The phase difference is then substituted into the phase-to-depth conversion formula (the parameters involved in the formula are obtained by calibration), and the three-dimensional model of the measured object as shown in Fig. 7(e) is obtained.
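The de-jump of the second step and the depth conversion of the third step can be sketched as below. The linear phase-to-depth model is an illustrative simplification, since the patent only states that the parameters of the conversion formula come from calibration:

```python
import numpy as np

def unwrap_1d(wrapped):
    """Remove 2*pi jumps along a 1-D wrapped-phase profile so that
    neighbouring samples never differ by more than pi (numpy's
    np.unwrap implements the same idea)."""
    out = np.array(wrapped, dtype=float)
    for i in range(1, len(out)):
        d = out[i] - out[i - 1]
        out[i] -= 2 * np.pi * np.round(d / (2 * np.pi))
    return out

def depth_from_phase(phase_obj, phase_ref, k_calib):
    """Depth relative to the reference plane from the phase difference.
    k_calib stands in for the calibrated parameters of the real
    conversion formula; the linear form is an assumption."""
    return k_calib * (phase_obj - phase_ref)

# Demo: wrap a known continuous ramp, then recover it.
true_phase = np.linspace(0.0, 10.0, 50)
wrapped = (true_phase + np.pi) % (2 * np.pi) - np.pi
recovered = unwrap_1d(wrapped)
```

Unwrapping only works when neighbouring samples of the true phase differ by less than π, which is why fringe frequency and sampling are chosen accordingly in practice.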
It should be appreciated that, in practical applications, depending on the specific application scenario, the structured light employed in the embodiments of the present invention may be of any other pattern besides the grating described above.
As a possible implementation, the present invention may also use speckle structured light to collect the depth information of the current user.
Specifically, the method of obtaining depth information with speckle structured light uses a diffractive element that is essentially a flat plate. The diffractive element has a relief diffraction structure with a particular phase distribution, and its cross section has a stepped relief structure of two or more levels. The thickness of the substrate in the diffractive element is approximately 1 micron, the height of each step is non-uniform, and the height may range from 0.7 micron to 0.9 micron. The structure shown in Fig. 8(a) is a local diffraction structure of the collimating beam-splitting element of this embodiment. Fig. 8(b) is a cross-sectional side view along section A-A; the units of the abscissa and the ordinate are microns. The speckle pattern generated by the speckle structured light is highly random, and the pattern changes with distance. Therefore, before the depth information is obtained with speckle structured light, the speckle patterns in space must first be calibrated. For example, within a range of 0 to 4 meters from the structured-light camera 122, a reference plane is taken every 1 centimeter, so that 400 speckle images are saved after the calibration; the smaller the calibration spacing, the higher the precision of the obtained depth information. Subsequently, the structured-light projector 121 projects the speckle structured light onto the measured object (i.e., the current user), and the height differences on the surface of the measured object change the speckle pattern of the speckle structured light projected onto it. After the structured-light camera 122 captures the speckle pattern (i.e., the structured-light image) projected onto the measured object, the speckle pattern is cross-correlated one by one with the 400 speckle images saved in the earlier calibration, yielding 400 correlation images. The position of the measured object in space shows a peak in the corresponding correlation image; superimposing these peaks and performing an interpolation operation yields the depth information of the measured object.
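The cross-correlation match against the calibrated reference planes can be sketched as follows. The peak superposition and interpolation refinement mentioned above are omitted, and the patch sizes, plane count, and depths are illustrative rather than the 400-plane setup of the text:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def depth_by_speckle_match(captured, references, plane_depths):
    """Score the captured speckle patch against every calibrated
    reference plane and return the depth of the best-matching plane.
    (Interpolating around the correlation peak, as the text describes,
    would refine this coarse estimate.)"""
    scores = [ncc(captured, ref) for ref in references]
    return plane_depths[int(np.argmax(scores))]

# Demo: five calibrated planes; the captured patch is a noisy copy of
# the plane at 0.13 m, so that plane should win the correlation.
rng = np.random.default_rng(0)
references = [rng.random((8, 8)) for _ in range(5)]
plane_depths = [0.10, 0.11, 0.12, 0.13, 0.14]
captured = references[3] + 0.01 * rng.random((8, 8))
best = depth_by_speckle_match(captured, references, plane_depths)
```

In the real system this matching is done per neighbourhood of the image, not once globally, which is what produces a dense depth map.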
Because a common diffractive element produces multiple diffracted beams whose intensities differ greatly, the risk of injury to the human eye is large, and even if the diffracted light is diffracted again, the uniformity of the resulting beams is low. Therefore, the effect of projecting onto the measured object with beams diffracted by a common diffractive element is poor. In this embodiment, a collimating beam-splitting element is used. This element not only collimates the uncollimated beam but also splits the light: the non-collimated light reflected by the mirror emerges, after passing through the collimating beam-splitting element, as multiple collimated beams at different angles, and the emitted collimated beams have approximately equal cross-sectional areas and approximately equal energy fluxes, so the projection using the beams after diffraction is better. At the same time, the laser light is dispersed into each beam, which further reduces the risk of injuring the human eye; and compared with other uniformly arranged structured light, the speckle structured light consumes less power for the same collection effect.
Referring to Fig. 9, in some embodiments, step S03 further includes:
S031: identifying the face region in the scene image;
S032: acquiring the depth information corresponding to the face region from the depth image;
S033: determining the depth range of the person region according to the depth information of the face region; and
S034: determining, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range, to obtain the person-region image.
In some embodiments, steps S031, S032, S033, and S034 can be implemented by the processor 20.
In other words, the processor 20 can further be used to identify the face region in the scene image, acquire the depth information corresponding to the face region from the depth image, determine the depth range of the person region according to the depth information of the face region, and determine, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range, to obtain the person-region image.
Specifically, a trained deep-learning model can first be used to identify the face region in the scene image, and the depth information of the face region can then be determined according to the correspondence between the scene image and the depth image. Because the face region contains features such as the nose, eyes, ears, and lips, the depth data corresponding to each feature of the face region differ in the depth image; for example, when the face faces the depth image acquisition assembly 12, in the depth image captured by the depth image acquisition assembly 12 the depth data corresponding to the nose may be smaller while the depth data corresponding to the ears may be larger. Therefore, the depth information of the face region mentioned above may be a single value or a range. When the depth information of the face region is a single value, this value can be obtained by averaging the depth data of the face region, or by taking the median of the depth data of the face region.
Because the person region contains the face region, that is, the person region and the face region lie together within a certain depth range, after the processor 20 determines the depth information of the face region it can set the depth range of the person region according to the depth information of the face region, and then extract, according to the depth range of the person region, the person region that falls within the depth range and is connected with the face region, to obtain the person-region image.
In this way, the person region image can be extracted from the scene image according to depth information. Since the acquisition of depth information is not affected by environmental factors such as illumination and color temperature in the image, the extracted person region image is more accurate.
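The depth-based extraction described above can be sketched as follows. This is an illustrative outline only, not the patented implementation: the face box coordinates and the depth margin are assumptions, and only the depth-range test is shown (the required connectivity check against the face region is omitted for brevity).

```python
import numpy as np

def extract_person_region(depth_map, face_box, margin=0.5):
    """Estimate the person's depth range from the face region, then
    mask every pixel whose depth falls inside that range.
    face_box = (row0, row1, col0, col1); margin is an assumed half-width."""
    r0, r1, c0, c1 = face_box
    face_depth = float(np.mean(depth_map[r0:r1, c0:c1]))  # average face depth
    lo, hi = face_depth - margin, face_depth + margin      # person depth range
    return (depth_map >= lo) & (depth_map <= hi)           # boolean person mask

# Toy 4x4 depth map: person at about 1.0 m, background at 3.0 m
depth = np.array([[3.0, 1.0, 1.0, 3.0],
                  [3.0, 1.1, 0.9, 3.0],
                  [3.0, 1.0, 1.0, 3.0],
                  [3.0, 3.0, 3.0, 3.0]])
mask = extract_person_region(depth, (0, 3, 1, 3))
print(int(mask.sum()))  # number of person pixels: 6
```

In practice the mask would additionally be restricted to the connected component containing the face region, as the method requires.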
In some embodiments, the image processing method further includes the following steps:
processing the scene image to obtain a full-field edge image of the scene image; and
correcting the person region image according to the full-field edge image.
In some embodiments, the step of processing the scene image to obtain the full-field edge image of the scene image and the step of correcting the person region image according to the full-field edge image can be implemented by the processor 20.
In other words, the processor 20 can also be used to process the scene image to obtain the full-field edge image of the scene image, and to correct the person region image according to the full-field edge image.
The processor 20 first performs edge extraction on the scene image to obtain the full-field edge image, in which the edge lines include the edge lines of the current user and of the background objects in the scene where the current user is located. Specifically, edge extraction may be performed on the scene image with the Canny operator. The core of the Canny edge-extraction algorithm mainly includes the following steps: first, the scene image is convolved with a 2-D Gaussian filter template to eliminate noise; then, a differential operator is used to obtain the gradient magnitude of the gray value of each pixel, and the gradient direction of the gray value of each pixel is calculated from the gradient magnitudes, so that the adjacent pixels of each pixel along its gradient direction can be found; then, each pixel is traversed, and if the gray value of a pixel is not the maximum compared with the gray values of the two adjacent pixels before and after it along its gradient direction, the pixel is considered not to be an edge point. In this way, the pixels at edge positions in the scene image can be determined, and the full-field edge image after edge extraction is obtained.
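The gradient step of the Canny procedure described above can be sketched as follows; this is a minimal illustration, not the patented implementation. It shows only central-difference gradients, the gradient magnitude and a threshold, while a full Canny operator (for example OpenCV's `cv2.Canny`) adds Gaussian smoothing, non-maximum suppression along the gradient direction, and hysteresis thresholding.

```python
import numpy as np

def edge_map(img, thresh=1.0):
    """Gradient-magnitude edge sketch: central-difference gradients
    along rows and columns, then a single threshold."""
    gy, gx = np.gradient(img.astype(float))  # gradients per axis
    mag = np.hypot(gx, gy)                   # gradient magnitude
    return mag >= thresh                     # boolean edge mask

# Vertical step edge between two flat regions
img = np.zeros((5, 6))
img[:, 3:] = 10.0
edges = edge_map(img)
print(edges[2].tolist())  # edges straddle the step at columns 2-3
```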
After the processor 20 obtains the full-field edge image, the person region image is corrected according to the full-field edge image. It will be appreciated that the person region image is obtained by merging all pixels in the scene image that are connected with the face region and fall within the set depth range; in some scenarios, there may also be objects that are connected with the face region and fall within the depth range. Therefore, to make the extracted person region image more accurate, the full-field edge image can be used to correct the person region image.
Further, the processor 20 can also perform a second correction on the corrected person region image; for example, a dilation process can be applied to the corrected person region image, expanding the person region image so as to retain the edge details of the person region image.
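The dilation second-pass correction can be sketched as a one-pixel 4-neighbour expansion of the person mask. This is a minimal stand-in for a proper morphological dilation with a chosen structuring element:

```python
import numpy as np

def dilate(mask):
    """Grow a boolean mask by one pixel in the four axis directions,
    so edge details near the silhouette are retained."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]   # shift down
    out[:-1, :] |= mask[1:, :]   # shift up
    out[:, 1:] |= mask[:, :-1]   # shift right
    out[:, :-1] |= mask[:, 1:]   # shift left
    return out

m = np.zeros((5, 5), dtype=bool)
m[2, 2] = True
d = dilate(m)
print(int(d.sum()))  # 5: the centre plus its 4-neighbours
```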
Referring to Fig. 10, in some embodiments, step S10 includes the steps:
S11: obtaining a depth image of the scene image;
S12: identifying an object region in the scene image;
S13: obtaining depth information corresponding to the object region from the depth image of the scene image; and
S14: performing three-dimensional construction on the object according to the object region and the depth information to form a three-dimensional background material.
In some embodiments, step S11 can be implemented by the depth image acquisition component 12, and steps S12, S13 and S14 can be implemented by the processor 20. In other words, the depth image acquisition component 12 is used to obtain the depth image of the scene image, and the processor 20 is used to identify the object region in the scene image, obtain the depth information corresponding to the object region from the depth image of the scene image, and perform three-dimensional construction on the object according to the object region and the depth information to form the three-dimensional background material.
Specifically, the depth image records the depth information of the entire scene content corresponding to the scene image. Through an edge-identification algorithm, the contour edge of each object in the scene can be determined, for example from color changes and from the continuity of the pixels and their depth information; once the edge is determined, the object region is determined. Further, the depth information corresponding to the determined object region, that is, the depth information of all points in that region, is obtained, so that three-dimensional construction can be performed on the current object according to the edge and the depth information to form a virtual three-dimensional background material. To reduce the amount of calculation, the user may select only the objects of interest in the current scene for construction. As the scene changes, the constructed materials increase accordingly and the material library is expanded, so that the content the user can select to add also gradually increases.
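The three-dimensional construction of step S14 can be illustrated as back-projecting the pixels of the object region into 3-D points through a pinhole camera model. This is a sketch under stated assumptions: the intrinsics fx, fy, cx, cy are illustrative values, not parameters from the patent, and a real material would also carry color and mesh structure.

```python
import numpy as np

def backproject(depth_map, mask, fx=500.0, fy=500.0, cx=2.0, cy=2.0):
    """Back-project each masked pixel of the object region through an
    assumed pinhole model, giving an N x 3 point cloud (x, y, z)."""
    rows, cols = np.nonzero(mask)
    z = depth_map[rows, cols]
    x = (cols - cx) * z / fx
    y = (rows - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Flat object region at 2.0 units depth
depth = np.full((4, 4), 2.0)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
cloud = backproject(depth, mask)
print(cloud.shape)  # (4, 3): four object pixels, three coordinates each
```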
Referring to Fig. 11, in some embodiments, step S20 includes the steps:
S21: obtaining a predetermined integration region in the predetermined three-dimensional background image;
S22: determining a to-be-replaced pixel region of the predetermined integration region according to the three-dimensional background material; and
S23: replacing the to-be-replaced pixel region of the predetermined integration region with the three-dimensional background material so as to fuse the three-dimensional background material with the predetermined three-dimensional background image.
In some embodiments, steps S21, S22 and S23 can be implemented by the processor 20. In other words, the processor 20 is used to obtain the predetermined integration region in the predetermined three-dimensional background image, determine the to-be-replaced pixel region of the predetermined integration region according to the three-dimensional background material, and replace the to-be-replaced pixel region of the predetermined integration region with the three-dimensional background material so as to fuse the three-dimensional background material with the predetermined three-dimensional background image.
It will be appreciated that when the predetermined three-dimensional background image is obtained by modeling an actual scene, the depth data corresponding to each pixel of the predetermined three-dimensional background image can be obtained directly during the modeling process; when the predetermined three-dimensional background image is obtained by animation, the depth data corresponding to each pixel can be set by the producer. In addition, each object present in the predetermined three-dimensional background image is also known. Therefore, before image fusion processing is performed using the predetermined three-dimensional background image, the fusion position of the three-dimensional background material, that is, the predetermined integration region, can first be calibrated according to the depth data and the objects present in the predetermined three-dimensional background image. Since the size of the three-dimensional background material needs to match the size of the predetermined three-dimensional background, the processor 20 needs to determine the to-be-replaced pixel region in the predetermined integration region according to the size proportions of the objects in the predetermined three-dimensional background image. Then, the to-be-replaced pixel region in the predetermined integration region is replaced with the three-dimensional background material to obtain the merged image after fusion. In this way, the fusion of the three-dimensional background material with the predetermined three-dimensional background image is realized.
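Step S23, replacing the to-be-replaced pixel region with the material, reduces to a pixel copy once the region is known. The sketch below uses flat 2-D integer arrays purely for illustration; in the method itself the material and background are three-dimensional and the material is scaled to match the scene:

```python
import numpy as np

def fuse_material(background, material, top_left):
    """Replace the to-be-replaced pixel region of the integration
    region (located at top_left) with the material's pixels."""
    fused = background.copy()
    r, c = top_left
    h, w = material.shape
    fused[r:r + h, c:c + w] = material  # pixel replacement
    return fused

bg = np.zeros((4, 4), dtype=int)
mat = np.full((2, 2), 7, dtype=int)
out = fuse_material(bg, mat, (1, 1))
print(int(out.sum()))  # 28: four material pixels of value 7
```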
Referring to Fig. 12, in some embodiments, step S20 includes the steps:
S24: processing the predetermined three-dimensional background image to obtain a full-field edge image of the predetermined three-dimensional background image;
S25: obtaining depth data of the predetermined three-dimensional background image;
S26: determining a calculated integration region of the predetermined three-dimensional background image according to the full-field edge image and the depth data of the predetermined three-dimensional background image;
S27: determining a to-be-replaced pixel region of the calculated integration region according to the three-dimensional background material; and
S28: replacing the to-be-replaced pixel region of the calculated integration region with the three-dimensional background material so as to fuse the three-dimensional background material with the predetermined three-dimensional background image.
In some embodiments, steps S24 to S28 can be implemented by the processor 20. In other words, the processor 20 is used to process the predetermined three-dimensional background image to obtain its full-field edge image, obtain the depth data of the predetermined three-dimensional background image, determine the calculated integration region of the predetermined three-dimensional background image according to the full-field edge image and the depth data, determine the to-be-replaced pixel region of the calculated integration region according to the three-dimensional background material, and replace the to-be-replaced pixel region of the calculated integration region with the three-dimensional background material so as to fuse the three-dimensional background material with the predetermined three-dimensional background image.
It will be appreciated that if, when the three-dimensional background material is to be fused with the predetermined three-dimensional background image, the fusion position of the background material has not been calibrated in advance, the processor 20 first needs to determine the fusion position of the three-dimensional background material in the predetermined three-dimensional background image. Specifically, the processor 20 first performs edge extraction on the predetermined three-dimensional background image to obtain the full-field edge image, and obtains the depth data of the predetermined three-dimensional background image, where the depth data are obtained during the modeling or animation of the predetermined three-dimensional background image. Then, the processor 20 determines the calculated integration region in the predetermined three-dimensional background image according to the full-field edge image and the depth data. Since the size of the three-dimensional background material needs to be consistent with the objects in the predetermined three-dimensional background, the size of the three-dimensional background material that can be added needs to be calculated, and the to-be-replaced pixel region in the calculated integration region is determined according to the size of the three-dimensional background material. Finally, the to-be-replaced pixel region in the calculated integration region is replaced with the three-dimensional background material, thereby obtaining the merged image. In this way, the fusion of the three-dimensional background material with the predetermined three-dimensional background image is realized.
The merged image after fusion can be displayed on a display screen of the electronic device 1000, or printed by a printer connected to the electronic device 1000.
In some embodiments, there may be one or more predetermined integration regions, or one or more calculated integration regions, in the three-dimensional background image. When there is one predetermined integration region, the fusion position of the three-dimensional background material in the predetermined three-dimensional background image is that unique predetermined integration region; when there is one calculated integration region, the fusion position of the three-dimensional background material in the predetermined three-dimensional background image is that unique calculated integration region. When there are multiple predetermined integration regions, the fusion position of the three-dimensional background material in the predetermined three-dimensional background image can be any one of the multiple predetermined integration regions; further, since the three-dimensional background material carries depth information, the predetermined integration region whose depth information matches that of the three-dimensional background material can be found among the multiple predetermined integration regions and used as the fusion position, so as to obtain a better fusion effect. Likewise, when there are multiple calculated integration regions, the fusion position of the three-dimensional background material in the three-dimensional background image can be any one of the multiple calculated integration regions; further, since the three-dimensional background material carries depth information, the calculated integration region whose depth information matches that of the three-dimensional background material can be found among the multiple calculated integration regions and used as the fusion position, so as to obtain a better fusion effect.
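The depth-matching selection among multiple candidate integration regions can be sketched as a nearest-depth search. The region names and representative depths below are hypothetical, chosen only to make the example concrete:

```python
def pick_region(regions, material_depth):
    """Among candidate integration regions, pick the one whose
    representative depth is closest to the material's depth.
    `regions` maps region name -> representative depth."""
    return min(regions, key=lambda name: abs(regions[name] - material_depth))

# Hypothetical candidate regions in a predetermined 3-D background
regions = {"wall": 5.0, "table": 1.2, "floor": 2.5}
print(pick_region(regions, 1.0))  # table
```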
In some application scenarios, for example, the current user may wish to hide the current background during a video call with another person. In that case, the image processing method of the embodiments of the present invention can be used to fuse the person region image corresponding to the current user with the predetermined three-dimensional background, and to add materials to the background of the fused image, so that the merged image after fusion is displayed to the other party. Since the current user is in a video call with the other party, the visible light camera 11 needs to capture the scene image of the current user in real time, the depth image acquisition component 12 also needs to collect the depth image corresponding to the current user in real time, and the processor 20 needs to process the collected scene images and depth images promptly, so that the other party can see a smooth video picture composed of multiple frames of merged images.
Referring to Fig. 13, an embodiment of the present invention also provides an electronic device 1000. The electronic device 1000 includes an image processing apparatus 100. The image processing apparatus 100 can be implemented in hardware and/or software. The image processing apparatus 100 includes an imaging device 10 and a processor 20.
The imaging device 10 includes a visible light camera 11 and a depth image acquisition component 12.
Specifically, the visible light camera 11 includes an image sensor 111 and a lens 112, and can be used to capture color information of the current user to obtain the scene image, where the image sensor 111 includes a color filter array (such as a Bayer filter array) and there may be one or more lenses 112. In acquiring the scene image, each imaging pixel in the image sensor 111 senses the light intensity and wavelength information in the photographed scene and generates a set of raw image data; the image sensor 111 sends this raw image data to the processor 20, and the processor 20 obtains the color scene image after performing operations such as denoising and interpolation on the raw image data. The processor 20 can process each image pixel in the raw image data one by one in various formats; for example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits, and the processor 20 can process each image pixel at the same or a different bit depth.
The depth image acquisition component 12 includes a structured light projector 121 and a structured light camera 122, and can be used to capture depth information of the current user to obtain the depth image. The structured light projector 121 is used to project structured light onto the current user, where the structured light pattern may be laser stripes, a Gray code, sinusoidal stripes, a randomly arranged speckle pattern, or the like. The structured light camera 122 includes an image sensor 1221 and a lens 1222, and there may be one or more lenses 1222. The image sensor 1221 is used to capture the structured light image projected by the structured light projector 121 onto the current user. The structured light image can be sent by the depth image acquisition component 12 to the processor 20 for processing such as demodulation, phase recovery and phase information calculation, to obtain the depth information of the current user.
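The demodulation and phase-to-depth chain can be illustrated with a four-step phase-shifting scheme, one common structured-light method; the patent does not fix a particular demodulation scheme, and the linear phase-to-depth scale below is a stand-in for the calibrated triangulation geometry, not a real calibration.

```python
import math

def demodulate_phase(i1, i2, i3, i4):
    """Four-step phase-shift demodulation: recover the wrapped phase
    at a pixel from four captures shifted by 90 degrees each."""
    return math.atan2(i4 - i2, i1 - i3)

def phase_to_depth(phase, scale=0.01):
    """Convert phase to depth; `scale` is an assumed linear factor."""
    return scale * phase

# One pixel: intensities I_k = A + B*cos(phi + k*pi/2) with phi = 0.5
A, B, phi = 100.0, 50.0, 0.5
caps = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
rec = demodulate_phase(*caps)
print(round(rec, 3))  # recovers the injected phase, 0.5
```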
In some embodiments, the functions of the visible light camera 11 and the structured light camera 122 can be realized by a single camera; in other words, the imaging device 10 includes only one camera and one structured light projector 121, and that camera can capture not only the scene image but also the structured light image.
Besides using structured light to obtain the depth image, the depth image of the current user can also be obtained by depth acquisition methods such as binocular vision or time of flight (TOF).
The processor 20 is further used to fuse the person region image extracted from the scene image and the depth image with a predetermined two-dimensional background image. When extracting the person region image, the processor 20 can extract a two-dimensional person region image from the scene image in combination with the depth information in the depth image, or it can build a three-dimensional map of the person region according to the depth information in the depth image and fill in color for the three-dimensional person region using the color information in the scene image, to obtain a three-dimensional colored person region image. Therefore, the fusion processing may either fuse the two-dimensional person region image with the predetermined two-dimensional background image to obtain the merged image, or fuse the three-dimensional colored person region image with the predetermined two-dimensional background image to obtain the merged image.
In addition, the image processing apparatus 100 also includes an image memory 30. The image memory 30 can be embedded in the electronic device 1000 or be a memory independent of the electronic device 1000, and may include a direct memory access (DMA) feature. The raw image data collected by the visible light camera 11 or the structured-light-image-related data collected by the depth image acquisition component 12 can be transmitted to the image memory 30 for storage or caching. The processor 20 can read the raw image data from the image memory 30 for processing to obtain the scene image, and can also read the structured-light-image-related data from the image memory 30 for processing to obtain the depth image. In addition, the scene image and the depth image can also be stored in the image memory 30 for the processor 20 to call for processing at any time; for example, the processor 20 calls the scene image and the depth image to perform person region extraction, and the extracted person region image is fused with the predetermined two-dimensional background image to obtain the merged image. The predetermined two-dimensional background image and the merged image may likewise be stored in the image memory 30.
The image processing apparatus 100 may also include a display 50. The display 50 can obtain the merged image directly from the processor 20, or obtain the merged image from the image memory 30. The display 50 displays the merged image for the user to watch, or for further processing by a graphics engine or a graphics processing unit (GPU). The image processing apparatus 100 also includes an encoder/decoder 60, which can encode and decode image data such as the scene image, the depth image and the merged image; the encoded image data can be stored in the image memory 30 and decompressed by the decoder for display before the image is shown on the display 50. The encoder/decoder 60 can be implemented by a central processing unit (CPU), a GPU or a coprocessor. In other words, the encoder/decoder 60 can be any one or more of a central processing unit (CPU), a GPU and a coprocessor.
The image processing apparatus 100 also includes a control logic device 40. While the imaging device 10 is imaging, the processor 20 can analyze the data obtained by the imaging device to determine image statistics for one or more control parameters (for example, exposure time) of the imaging device 10. The processor 20 sends the image statistics to the control logic device 40, and the control logic device 40 controls the imaging device 10 to image with the determined control parameters. The control logic device 40 may include a processor and/or a microcontroller that executes one or more routines (such as firmware), and the one or more routines can determine the control parameters of the imaging device 10 according to the received image statistics.
Referring to Fig. 14, the electronic device 1000 of the embodiments of the present invention includes one or more processors 200, a memory 300 and one or more programs 310. The one or more programs 310 are stored in the memory 300 and configured to be executed by the one or more processors 200. The programs 310 include instructions for performing the image processing method of any of the embodiments described above.
For example, the program 310 includes instructions for performing an image processing method with the following steps:
performing three-dimensional construction on an object in the scene image to form a three-dimensional background material and storing it; and
fusing the selected three-dimensional background material with the predetermined three-dimensional background image.
As another example, the program 310 includes instructions for performing an image processing method with the following steps:
obtaining a scene image of the current user;
obtaining a depth image of the current user;
processing the scene image and the depth image to extract the person region of the current user in the scene image and obtain a person region image; and
fusing the person region image with the predetermined three-dimensional background image to obtain the merged image.
As yet another example, the program 310 also includes instructions for performing an image processing method with the following steps:
demodulating the phase information corresponding to each pixel in the structured light image;
converting the phase information into depth information; and
generating the depth image according to the depth information.
The computer-readable storage medium of the embodiments of the present invention includes a computer program used in combination with the electronic device 1000 capable of imaging. The computer program can be executed by the processor 200 to perform the image processing method of any of the embodiments described above.
For example, the computer program can be executed by the processor 200 to perform an image processing method with the following steps:
performing three-dimensional construction on an object in the scene image to form a three-dimensional background material and storing it; and
fusing the selected three-dimensional background material with the predetermined three-dimensional background image.
As another example, the computer program can be executed by the processor 200 to perform an image processing method with the following steps:
obtaining a scene image of the current user;
obtaining a depth image of the current user;
processing the scene image and the depth image to extract the person region of the current user in the scene image and obtain a person region image; and
fusing the person region image with the predetermined three-dimensional background image to obtain the merged image.
As yet another example, the computer program can also be executed by the processor 200 to perform an image processing method with the following steps:
demodulating the phase information corresponding to each pixel in the structured light image;
converting the phase information into depth information; and
generating the depth image according to the depth information.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in an appropriate manner in any one or more embodiments or examples. In addition, where there is no contradiction, those skilled in the art can combine the different embodiments or examples described in this specification and the features of the different embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "multiple" means at least two, such as two or three, unless otherwise specifically defined.
Any process or method description in a flowchart, or otherwise described herein, can be understood as representing a module, fragment or portion of code that includes one or more executable instructions for implementing the steps of a specific logical function or process, and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, which should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered list of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, apparatus or device and execute them). For the purposes of this specification, a "computer-readable medium" can be any means that can contain, store, communicate, propagate or transmit a program for use by, or in combination with, an instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of computer-readable media include the following: an electrical connection portion (electronic device) with one or more wires, a portable computer disk cartridge (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium could even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it in a suitable way if necessary, and then stored in a computer memory.
It should be understood that the various parts of the present invention can be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods can be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they can be implemented with any one or a combination of the following techniques known in the art: a discrete logic circuit with logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those of ordinary skill in the art can understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing the relevant hardware through a program, and the program can be stored in a computer-readable storage medium; when executed, the program includes one of the steps of the method embodiment or a combination thereof.
In addition, the functional units in the embodiments of the present invention can be integrated in one processing module, or each unit can exist physically on its own, or two or more units can be integrated in one module. The integrated module can be implemented in the form of hardware, or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
The above-mentioned storage medium can be a read-only memory, a magnetic disk, an optical disc or the like. Although the embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and cannot be construed as limiting the present invention; those of ordinary skill in the art can make changes, modifications, substitutions and variations to the above embodiments within the scope of the present invention.
Claims (20)
1. An image processing method for processing a merged image, characterized in that the merged image is formed by fusing a predetermined three-dimensional background image with a person region image of a current user in a scene image of a real scene, the image processing method comprising:
performing three-dimensional construction on an object in the scene image to form a three-dimensional background material and storing it; and
fusing the selected three-dimensional background material with the predetermined three-dimensional background image.
2. The image processing method according to claim 1, characterized in that the image processing method further comprises:
obtaining the scene image of the current user;
obtaining a depth image of the current user;
processing the scene image and the depth image to extract a person region of the current user in the scene image and obtain the person region image; and
fusing the person region image with the predetermined three-dimensional background image to obtain the merged image.
3. The image processing method according to claim 2, characterized in that the step of obtaining the depth image of the current user comprises:
projecting structured light onto the current user;
capturing a structured light image modulated by the current user; and
demodulating the phase information corresponding to each pixel of the structured light image to obtain the depth image.
4. The image processing method according to claim 3, wherein demodulating the phase information corresponding to each pixel of the structured-light image to obtain the depth image comprises:
demodulating the phase information corresponding to each pixel of the structured-light image;
converting the phase information into depth information; and
generating the depth image from the depth information.
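The demodulate-convert-generate pipeline of claim 4 can be illustrated with a short sketch. The four-step phase-shifting formula below is a standard textbook demodulation; the linear phase-to-depth mapping and its `scale`/`offset` constants are purely illustrative assumptions, not the calibration the patent would actually use:

```python
import numpy as np

def demodulate_phase(i0, i1, i2, i3):
    """Four-step phase-shifting demodulation for frames captured at
    fringe phase shifts of 0, pi/2, pi, and 3*pi/2:
    I_k = A + B*cos(phi + k*pi/2)  =>  phi = atan2(I3 - I1, I0 - I2)."""
    return np.arctan2(i3 - i1, i0 - i2)

def phase_to_depth(phase, scale=100.0, offset=500.0):
    """Hypothetical linear phase-to-depth map; a real device would replace
    this with a calibrated projector/camera geometry model."""
    return offset + scale * phase

# Synthetic check: recover a known phase map from four shifted frames.
phi_true = np.linspace(-1.0, 1.0, 5)   # radians, within (-pi, pi)
a, b = 0.5, 0.4                        # ambient level and fringe amplitude
frames = [a + b * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_est = demodulate_phase(*frames)
depth = phase_to_depth(phi_est)
```

The `arctan2` form cancels both the ambient term `A` and the modulation amplitude `B`, which is why four shifted frames suffice per pixel.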
5. The image processing method according to claim 2, wherein processing the scene image and the depth image to extract the person region of the current user from the scene image and obtain the person region image comprises:
identifying a face region in the scene image;
obtaining depth information corresponding to the face region from the depth image;
determining a depth range of the person region from the depth information of the face region; and
determining, according to the depth range of the person region, the person region that is connected to the face region and falls within the depth range, so as to obtain the person region image.
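Claim 5's "connected to the face region and falls within the depth range" test can be sketched as a flood fill seeded at the face box and constrained to a depth band. The `spread` tolerance and the median face-depth estimate are illustrative assumptions; the patent does not fix how the depth range is derived:

```python
import numpy as np
from collections import deque

def extract_person_mask(depth, face_box, spread=250.0):
    """Grow a person mask from the face region: keep pixels whose depth lies
    in a band around the face depth AND that are 4-connected to the face box.
    `spread` (in depth units) is an illustrative tolerance."""
    x0, y0, x1, y1 = face_box
    face_depth = float(np.median(depth[y0:y1, x0:x1]))
    in_range = (depth >= face_depth - spread) & (depth <= face_depth + spread)

    mask = np.zeros_like(in_range)
    queue = deque((y, x) for y in range(y0, y1) for x in range(x0, x1)
                  if in_range[y, x])
    for y, x in queue:                     # seed with in-range face pixels
        mask[y, x] = True
    while queue:                           # BFS restricted to the depth band
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < depth.shape[0] and 0 <= nx < depth.shape[1]
                    and in_range[ny, nx] and not mask[ny, nx]):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```

The connectivity requirement is what excludes background objects that happen to sit at the same depth as the user but do not touch the face region.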
6. The image processing method according to claim 5, further comprising:
processing the scene image to obtain a full-field edge image of the scene image; and
correcting the person region image according to the full-field edge image.
7. The image processing method according to claim 1, wherein performing three-dimensional reconstruction on the object in the scene image to form and store the three-dimensional background material comprises:
acquiring a depth image of the scene image;
identifying an object region in the scene image;
obtaining depth information corresponding to the object region from the depth image of the scene image; and
performing three-dimensional reconstruction on the object according to the object region and the depth information to form the three-dimensional background material.
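A common first step for the reconstruction in claim 7 is to lift the object region's depth pixels into a camera-space point cloud via a pinhole model, which can then be meshed into a material. This is a sketch under the assumption of a calibrated pinhole camera (`fx`, `fy`, `cx`, `cy` are intrinsics); the patent does not specify a particular camera model:

```python
import numpy as np

def backproject(depth, mask, fx, fy, cx, cy):
    """Back-project masked depth pixels into 3-D camera coordinates:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth[v, u]."""
    ys, xs = np.nonzero(mask)           # pixel coordinates of the object region
    z = depth[ys, xs]
    x = (xs - cx) * z / fx
    y = (ys - cy) * z / fy
    return np.column_stack([x, y, z])   # (N, 3) point cloud for later meshing
```

Surface reconstruction (e.g. triangulating the point cloud) would follow, but back-projection is the part the claimed object-region-plus-depth-information inputs directly determine.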
8. The image processing method according to claim 1, wherein merging the selected three-dimensional background material with the predetermined three-dimensional background image comprises:
obtaining a predetermined integration region in the predetermined three-dimensional background image;
determining a to-be-replaced pixel region of the predetermined integration region according to the three-dimensional background material; and
replacing the to-be-replaced pixel region of the predetermined integration region with the three-dimensional background material, so as to merge the three-dimensional background material with the predetermined three-dimensional background image.
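At the pixel level, the replacement step of claim 8 reduces to masked assignment. This minimal sketch assumes the material has already been rendered into the background's pixel grid and that the to-be-replaced region is given as a boolean mask:

```python
import numpy as np

def merge_material(background, material, region_mask):
    """Replace the to-be-replaced pixel region of the integration region with
    the rendered material, leaving the original background untouched."""
    merged = background.copy()
    merged[region_mask] = material[region_mask]   # boolean-mask assignment
    return merged
```

Determining `region_mask` from the material (its footprint inside the predetermined integration region) is the claim's other sub-step; here it is taken as an input.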
9. The image processing method according to claim 1, wherein merging the selected three-dimensional background material with the predetermined three-dimensional background image comprises:
processing the predetermined three-dimensional background image to obtain a full-field edge image of the predetermined three-dimensional background image;
obtaining depth data of the predetermined three-dimensional background image;
determining a computed integration region of the predetermined three-dimensional background image according to the full-field edge image and the depth data;
determining a to-be-replaced pixel region of the computed integration region according to the three-dimensional background material; and
replacing the to-be-replaced pixel region of the computed integration region with the three-dimensional background material, so as to merge the three-dimensional background material with the predetermined three-dimensional background image.
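One plausible reading of claim 9's edge-plus-depth computation is: the integration region is made of smooth (low-edge) pixels whose depth falls in a band where the material could sit. The gradient-magnitude edge map and both thresholds below are illustrative assumptions, not the patent's actual detector:

```python
import numpy as np

def full_field_edges(gray):
    """Gradient-magnitude edge map, standing in for whatever edge detector
    produces the 'full-field edge image'."""
    gy, gx = np.gradient(gray.astype(float))   # per-axis central differences
    return np.hypot(gx, gy)

def compute_integration_region(gray, depth, depth_lo, depth_hi, edge_thresh=10.0):
    """Candidate integration region: pixels away from strong edges whose
    depth lies inside [depth_lo, depth_hi]. Thresholds are illustrative."""
    edges = full_field_edges(gray)
    return (edges < edge_thresh) & (depth >= depth_lo) & (depth <= depth_hi)
```

Excluding high-gradient pixels keeps the replacement from cutting across object boundaries in the predetermined background, which is presumably why the claim combines edges with depth rather than using depth alone.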
10. An image processing apparatus for processing a merged image, wherein the merged image is formed by merging a predetermined three-dimensional background image with a person region image of a current user taken from a scene image of a real scene, the image processing apparatus comprising:
a processor configured to:
perform three-dimensional reconstruction on an object in the scene image to form a three-dimensional background material, and store the material; and
merge a selected three-dimensional background material with the predetermined three-dimensional background image.
11. The image processing apparatus according to claim 10, further comprising:
a visible-light camera for acquiring a scene image of the current user; and
a depth image acquisition component for acquiring a depth image of the current user;
the processor being further configured to:
process the scene image and the depth image to extract the person region of the current user from the scene image and obtain a person region image; and
merge the person region image with the predetermined three-dimensional background image to obtain the merged image.
12. The image processing apparatus according to claim 11, wherein the depth image acquisition component comprises a structured-light projector and a structured-light camera, the structured-light projector being configured to project structured light onto the current user;
the structured-light camera being configured to:
capture a structured-light image modulated by the current user; and
demodulate the phase information corresponding to each pixel of the structured-light image to obtain the depth image.
13. The image processing apparatus according to claim 12, wherein the processor is further configured to:
demodulate the phase information corresponding to each pixel of the structured-light image;
convert the phase information into depth information; and
generate the depth image from the depth information.
14. The image processing apparatus according to claim 11, wherein the processor is further configured to:
identify a face region in the scene image;
obtain depth information corresponding to the face region from the depth image;
determine a depth range of the person region from the depth information of the face region; and
determine, according to the depth range of the person region, the person region that is connected to the face region and falls within the depth range, so as to obtain the person region image.
15. The image processing apparatus according to claim 14, wherein the processor is further configured to:
process the scene image to obtain a full-field edge image of the scene image; and
correct the person region image according to the full-field edge image of the scene image.
16. The image processing apparatus according to claim 10, wherein the image processing apparatus comprises a depth image acquisition component for acquiring a depth image of the scene image;
the processor being further configured to:
identify an object region in the scene image;
obtain depth information corresponding to the object region from the depth image of the scene image; and
perform three-dimensional reconstruction on the object according to the object region and the depth information to form the three-dimensional background material.
17. The image processing apparatus according to claim 10, wherein the processor is further configured to:
obtain a predetermined pixel region corresponding to a predetermined integration region in the predetermined three-dimensional background image;
determine a to-be-replaced pixel region of the predetermined integration region according to the three-dimensional background material; and
replace the to-be-replaced pixel region of the predetermined integration region with the three-dimensional background material, so as to merge the three-dimensional background material with the predetermined three-dimensional background image.
18. The image processing apparatus according to claim 10, wherein the processor is further configured to:
process the predetermined three-dimensional background image to obtain a full-field edge image of the predetermined three-dimensional background image;
obtain depth data of the predetermined three-dimensional background image;
determine a computed integration region of the predetermined three-dimensional background image according to the full-field edge image and the depth data;
determine a to-be-replaced pixel region of the computed integration region according to the three-dimensional background material; and
replace the to-be-replaced pixel region of the computed integration region with the three-dimensional background material, so as to merge the three-dimensional background material with the predetermined three-dimensional background image.
19. An electronic device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including instructions for performing the image processing method according to any one of claims 1 to 9.
20. A computer-readable storage medium, comprising a computer program for use in combination with an electronic device capable of capturing images, the computer program being executable by a processor to perform the image processing method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710813592.2A CN107527335A (en) | 2017-09-11 | 2017-09-11 | Image processing method and device, electronic installation and computer-readable recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107527335A true CN107527335A (en) | 2017-12-29 |
Family
ID=60736659
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710813592.2A Pending CN107527335A (en) | 2017-09-11 | 2017-09-11 | Image processing method and device, electronic installation and computer-readable recording medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107527335A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102625129A (en) * | 2012-03-31 | 2012-08-01 | 福州一点通广告装饰有限公司 | Method for realizing remote reality three-dimensional virtual imitated scene interaction |
CN102800129A (en) * | 2012-06-20 | 2012-11-28 | 浙江大学 | Hair modeling and portrait editing method based on single image |
CN103106604A (en) * | 2013-01-23 | 2013-05-15 | 东华大学 | Three dimensional (3D) virtual fitting method based on somatosensory technology |
CN103389849A (en) * | 2012-05-07 | 2013-11-13 | 腾讯科技(北京)有限公司 | Image presentation method and system based on mobile terminal and mobile terminal |
CN103686140A (en) * | 2013-12-30 | 2014-03-26 | 张瀚宇 | Projection manufacturing method for three-dimensional object based on scheduled site |
CN106097435A (en) * | 2016-06-07 | 2016-11-09 | 北京圣威特科技有限公司 | A kind of augmented reality camera system and method |
CN106909911A (en) * | 2017-03-09 | 2017-06-30 | 广东欧珀移动通信有限公司 | Image processing method, image processing apparatus and electronic installation |
Family application event: 2017-09-11, CN application CN201710813592.2A, published as CN107527335A, status: Pending.
Non-Patent Citations (2)
Title |
---|
He Yu: "Computer Graphic Design Series: Advertising" (《电脑平面设计系列 广告篇》), 30 September 2001, Pudong Electronics Press *
Li Xiaobin et al.: "Fundamentals of Computer Animation" (《电脑动画基础》), 30 June 2016, Liaoning Fine Arts Publishing House *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108765272A (en) * | 2018-05-31 | 2018-11-06 | Oppo广东移动通信有限公司 | Image processing method, device, electronic equipment and readable storage medium storing program for executing |
CN108765272B (en) * | 2018-05-31 | 2022-07-08 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and readable storage medium |
CN108830891A (en) * | 2018-06-05 | 2018-11-16 | 成都精工华耀科技有限公司 | A kind of rail splice fastener loosening detection method |
CN108830891B (en) * | 2018-06-05 | 2022-01-18 | 成都精工华耀科技有限公司 | Method for detecting looseness of steel rail fishplate fastener |
CN111093301A (en) * | 2019-12-14 | 2020-05-01 | 安琦道尔(上海)环境规划建筑设计咨询有限公司 | Light control method and system |
CN112261347A (en) * | 2020-10-14 | 2021-01-22 | 浙江大华技术股份有限公司 | Method and device for adjusting participation right, storage medium and electronic device |
CN112907451A (en) * | 2021-03-26 | 2021-06-04 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, computer equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107610077A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107509045A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107527335A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107797664A (en) | Content display method, device and electronic installation | |
CN107707831A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107707839A (en) | Image processing method and device | |
CN107707835A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107734264A (en) | Image processing method and device | |
CN107707838A (en) | Image processing method and device | |
CN107610080A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107610078A (en) | Image processing method and device | |
CN107644440A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107590793A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107509043A (en) | Image processing method and device | |
CN107705278A (en) | The adding method and terminal device of dynamic effect | |
CN107680034A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107610076A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107613223A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107705243A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107454336A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107592491A (en) | Video communication background display methods and device | |
CN107613228A (en) | The adding method and terminal device of virtual dress ornament | |
CN107705277A (en) | Image processing method and device | |
CN107682656A (en) | Background image processing method, electronic equipment and computer-readable recording medium | |
CN107707863A (en) | Image processing method and device, electronic installation and computer-readable recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | CB02 | Change of applicant information | Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong; Applicant after: OPPO Guangdong Mobile Communications Co., Ltd. Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong; Applicant before: Guangdong OPPO Mobile Communications Co., Ltd. |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 2017-12-29 |