CN107590793A - Image processing method and apparatus, electronic device, and computer-readable storage medium


Info

Publication number: CN107590793A
Application number: CN201710811472.9A
Authority: CN (China)
Prior art keywords: image, predetermined, dimensional, frame, depth
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 张学勇
Current assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by: Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority application: CN201710811472.9A
Publication: CN107590793A (pending)


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an image processing method for an electronic device. The image processing method includes: capturing multiple frames of scene images and depth images of a current user at a preset frequency; processing each frame of the scene image and each frame of the depth image to extract action information of the current user; rendering a predetermined three-dimensional foreground image according to the action information so that each frame of the predetermined three-dimensional foreground image follows the current user's actions; and fusing each rendered frame of the predetermined three-dimensional foreground image with a predetermined three-dimensional background image to obtain multiple merged frames for output as a video image. The invention additionally discloses an image processing apparatus, an electronic device, and a computer-readable storage medium. With the image processing method and apparatus, electronic device, and computer-readable storage medium of the embodiments of the present invention, a predetermined three-dimensional foreground image that follows the current user's action information is fused with a predetermined three-dimensional background image, so that multiple frames of three-dimensional merged images can be obtained and formed into a video image, which increases the entertainment value of image fusion.

Description

Image processing method and apparatus, electronic device, and computer-readable storage medium
Technical field
The present invention relates to the field of image processing technology, and more particularly to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background technology
Existing image fusion typically merges a user's portrait with a background image, but the entertainment value of such a fusion approach is relatively low.
Summary of the invention
Embodiments of the present invention provide an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
The image processing method of the embodiments of the present invention is used in an electronic device, and the image processing method includes:
capturing multiple frames of scene images of a current user at a preset frequency;
capturing multiple frames of depth images of the current user at the preset frequency;
processing each frame of the scene image and each frame of the depth image to extract action information of the current user;
rendering a predetermined three-dimensional foreground image according to the action information so that each frame of the predetermined three-dimensional foreground image follows the current user's actions; and
fusing each rendered frame of the predetermined three-dimensional foreground image with a predetermined three-dimensional background image to obtain multiple merged frames for output as a video image.
The image processing apparatus of the embodiments of the present invention is used in an electronic device and includes a visible-light camera, a depth image acquisition component, and a processor. The visible-light camera captures multiple frames of scene images of a current user at a preset frequency; the depth image acquisition component captures multiple frames of depth images of the current user at the preset frequency; and the processor processes each frame of the scene image and each frame of the depth image to extract action information of the current user, renders a predetermined three-dimensional foreground image according to the action information so that each frame of the predetermined three-dimensional foreground image follows the current user's actions, and fuses each rendered frame of the predetermined three-dimensional foreground image with a predetermined three-dimensional background image to obtain multiple merged frames for output as a video image.
The electronic device of the embodiments of the present invention includes one or more processors, a memory, and one or more programs. The one or more programs are stored in the memory and configured to be executed by the one or more processors, and include instructions for performing the image processing method described above.
The computer-readable storage medium of the embodiments of the present invention includes a computer program used in combination with an electronic device capable of imaging; the computer program can be executed by a processor to perform the image processing method described above.
With the image processing method, image processing apparatus, electronic device, and computer-readable storage medium of the embodiments of the present invention, a predetermined three-dimensional foreground image that follows and imitates the current user's action information is fused with a predetermined three-dimensional background image to obtain multiple frames of three-dimensional merged images, which can further be output as a video image. This increases the entertainment value of image fusion and improves the user experience.
Additional aspects and advantages of the present invention will be set forth in part in the following description, will become apparent in part from that description, or may be learned through practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow chart of an image processing method according to some embodiments of the present invention.
Fig. 2 is a schematic diagram of an electronic device according to some embodiments of the present invention.
Fig. 3 is a structural schematic diagram of an image processing apparatus according to some embodiments of the present invention.
Fig. 4 is a flow chart of an image processing method according to some embodiments of the present invention.
Fig. 5 is a flow chart of an image processing method according to some embodiments of the present invention.
Fig. 6(a) to Fig. 6(e) are schematic scene diagrams of structured light measurement according to an embodiment of the present invention.
Fig. 7(a) and Fig. 7(b) are schematic scene diagrams of structured light measurement according to an embodiment of the present invention.
Fig. 8 is a flow chart of an image processing method according to some embodiments of the present invention.
Fig. 9 is a flow chart of an image processing method according to some embodiments of the present invention.
Fig. 10 is a flow chart of an image processing method according to some embodiments of the present invention.
Fig. 11 is a flow chart of an image processing method according to some embodiments of the present invention.
Fig. 12 is a flow chart of an image processing method according to some embodiments of the present invention.
Fig. 13 is a schematic diagram of an image processing apparatus according to some embodiments of the present invention.
Fig. 14 is a schematic diagram of an electronic device according to some embodiments of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, with examples shown in the accompanying drawings, where identical or similar reference numerals denote identical or similar elements or elements with identical or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, are intended to explain the present invention, and are not to be construed as limiting the present invention.
Referring to Figs. 1 and 2, the image processing method of the embodiments of the present invention is used in an electronic device 1000. The image processing method includes:
01: capturing multiple frames of scene images of a current user at a preset frequency;
03: capturing multiple frames of depth images of the current user at the preset frequency;
05: processing each frame of the scene image and each frame of the depth image to extract action information of the current user;
07: rendering a predetermined three-dimensional foreground image according to the action information so that each frame of the predetermined three-dimensional foreground image follows the current user's actions; and
08: fusing each rendered frame of the predetermined three-dimensional foreground image with a predetermined three-dimensional background image to obtain multiple merged frames for output as a video image.
Referring to Figs. 2 and 3, the image processing method of the embodiments of the present invention may be implemented by the image processing apparatus 100 of the embodiments of the present invention, which is used in the electronic device 1000. The image processing apparatus 100 includes a visible-light camera 11, a depth image acquisition component 12, and a processor 20. Step 01 may be implemented by the visible-light camera 11, step 03 by the depth image acquisition component 12, and steps 05, 07, and 08 by the processor 20.
In other words, the visible-light camera 11 may capture multiple frames of scene images of the current user at the preset frequency; the depth image acquisition component 12 may capture multiple frames of depth images of the current user at the preset frequency; and the processor 20 may process each frame of the scene image and each frame of the depth image to extract the current user's action information, render the predetermined three-dimensional foreground image according to the action information so that each frame of it follows the current user's actions, and fuse each rendered frame of the predetermined three-dimensional foreground image with the predetermined three-dimensional background image to obtain multiple merged frames for output as a video image.
Here, the preset frequency is the frame rate at which the visible-light camera 11 and the depth image acquisition component 12 capture images per second; the frame rate may be, for example, 30, 60, or 120 frames per second, and the higher the frame rate, the smoother the video image. The scene images captured by the visible-light camera 11 are grayscale or color images, and the depth images captured by the depth image acquisition component 12 characterize the depth information of each person or object in the scene containing the current user. In specific embodiments of the present invention, the visible-light camera 11 and the depth image acquisition component 12 should capture images at the same preset frequency so that the frames of scene images correspond one-to-one with the frames of depth images; the action information the processor 20 obtains from processing each frame of the scene image together with its corresponding depth image can then be used to render one corresponding frame of the predetermined three-dimensional foreground image, which facilitates the per-frame fusion of the predetermined three-dimensional foreground image with the predetermined three-dimensional background image in step 08. In addition, the scene range of a scene image is consistent with that of its depth image, and each pixel in the scene image has corresponding depth information in the depth image.
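Purely as an illustration of the frame-synchronized capture just described, the following sketch pairs each scene frame with the depth frame grabbed in the same tick of a shared preset frequency. Here `grab_scene_frame` and `grab_depth_frame` are hypothetical stand-ins for the visible-light camera 11 and the depth image acquisition component 12; the patent does not prescribe this loop.

```python
import time

PRESET_FPS = 30  # preset frequency; could be 60 or 120 as described above
FRAME_PERIOD = 1.0 / PRESET_FPS

def capture_synchronized(num_frames, grab_scene_frame, grab_depth_frame):
    """Capture scene/depth frame pairs at the preset frequency.

    Both grabbers are triggered in the same loop iteration so that
    scene_frames[i] and depth_frames[i] describe the same instant,
    which is what the per-frame action extraction in step 05 relies on.
    """
    scene_frames, depth_frames = [], []
    next_tick = time.monotonic()
    for _ in range(num_frames):
        scene_frames.append(grab_scene_frame())   # gray/color image
        depth_frames.append(grab_depth_frame())   # per-pixel depth map
        next_tick += FRAME_PERIOD
        time.sleep(max(0.0, next_tick - time.monotonic()))
    return scene_frames, depth_frames
```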
In some embodiments, the predetermined three-dimensional foreground image includes at least one of a three-dimensional virtual character, a three-dimensional real character, and a three-dimensional animal or plant, where the three-dimensional real character excludes the current user. The three-dimensional virtual character may be a three-dimensional animated character, such as Mario, Conan, Big Head Son, or RNB; the three-dimensional real character may be a three-dimensional figure of a real person, such as Audrey Hepburn, Mr. Bean, or Harry Potter; and the three-dimensional animal or plant may be a three-dimensional animated animal or plant, such as Mickey Mouse, Donald Duck, or a pea shooter.
The image processing apparatus 100 of the embodiments of the present invention may be applied to the electronic device 1000 of the embodiments of the present invention; in other words, the electronic device 1000 includes the image processing apparatus 100.
In some embodiments, the electronic device 1000 includes a mobile phone, a tablet computer, a notebook computer, a smart bracelet, a smart watch, a smart helmet, smart glasses, and the like.
In some embodiments, the predetermined three-dimensional background image may be obtained by modeling an actual scene or produced by animation. The predetermined three-dimensional foreground image may be selected randomly by the processor 20 or chosen by the current user.
With the image processing method, image processing apparatus 100, and electronic device 1000 of the embodiments of the present invention, a predetermined three-dimensional foreground image that follows and imitates the current user's action information is fused with a predetermined three-dimensional background image to obtain multiple frames of three-dimensional merged images, which can further be output as a video image. This increases the entertainment value of image fusion and improves the user experience. Moreover, since the merged images do not contain the user's actual portrait, the user's privacy is protected to a certain extent.
Referring to Fig. 4, in some embodiments, step 03 of capturing multiple frames of depth images of the current user at the preset frequency includes:
031: projecting structured light onto the current user;
032: capturing, at the preset frequency, multiple frames of structured light images modulated by the current user; and
033: demodulating the phase information corresponding to each pixel of each frame of the structured light image to obtain multiple frames of depth images.
Referring again to Fig. 3, in some embodiments the depth image acquisition component 12 includes a structured light projector 121 and a structured light camera 122. Step 031 may be implemented by the structured light projector 121, and steps 032 and 033 by the structured light camera 122.
In other words, the structured light projector 121 may project structured light onto the current user, and the structured light camera 122 may capture, at the preset frequency, multiple frames of structured light images modulated by the current user and demodulate the phase information corresponding to each pixel of each frame to obtain multiple frames of depth images.
Specifically, after the structured light projector 121 projects structured light of a certain pattern onto the current user's face and body, a structured light image modulated by the current user is formed on the surfaces of the current user's face and body. The structured light camera 122 captures multiple frames of the modulated structured light image at the preset frame rate and demodulates each frame to obtain the corresponding depth image, so that demodulating the multiple frames of structured light images yields multiple frames of depth images. The structured light pattern may be laser stripes, Gray codes, sinusoidal fringes, non-uniform speckles, or the like.
Referring to Fig. 5, in some embodiments, step 033 of demodulating the phase information corresponding to each pixel of each frame of the structured light image to obtain multiple frames of depth images includes:
0331: demodulating the phase information corresponding to each pixel in each frame of the structured light image;
0332: converting the phase information into depth information; and
0333: generating the depth image according to the depth information.
Referring again to Fig. 3, in some embodiments, steps 0331, 0332, and 0333 may be implemented by the structured light camera 122.
In other words, the structured light camera 122 may further demodulate the phase information corresponding to each pixel in each frame of the structured light image, convert the phase information into depth information, and generate the depth image according to the depth information.
Specifically, compared with unmodulated structured light, the phase of the modulated structured light changes, so the structured light shown in the structured light image is distorted, and the phase change characterizes the depth information of the object. The structured light camera 122 therefore first demodulates the phase information corresponding to each pixel in each frame of the structured light image and then calculates the depth information from the phase information, thereby obtaining the depth image corresponding to that frame of the structured light image.
To help those skilled in the art understand more clearly the process of acquiring depth images of the current user's face and body according to structured light, the following illustrates the concrete principle with a widely used grating projection (fringe projection) technique as an example. Grating projection belongs to surface structured light in the broad sense.
As shown in Fig. 6(a), when surface structured light is used for projection, sinusoidal fringes are first generated by computer programming and projected onto the measured object by the structured light projector 121; the structured light camera 122 then photographs the degree of bending of the fringes after modulation by the object, and the bent fringes are demodulated to obtain the phase, which is converted into depth information to obtain the depth image. To avoid errors or error coupling, the depth image acquisition component 12 must be calibrated before depth information is collected with structured light; calibration includes the calibration of geometric parameters (for example, the relative position between the structured light camera 122 and the structured light projector 121), of the internal parameters of the structured light camera 122, of the internal parameters of the structured light projector 121, and so on.
Specifically, in the first step, sinusoidal fringes are generated by computer programming. Since the phase must later be obtained from the distorted fringes, for example with the four-step phase-shifting method, four fringe patterns with phase differences of π/2 are generated here; the structured light projector 121 projects the four patterns onto the measured object (the mask shown in Fig. 6(a)) in a time-multiplexed manner, and the structured light camera 122 captures the image on the left of Fig. 6(b) while reading the fringes on the reference plane shown on the right of Fig. 6(b).
In the second step, phase recovery is performed. The structured light camera 122 calculates the modulated phase map from the four captured fringe patterns (i.e., the structured light images); the result at this point is a wrapped phase map. Because the four-step phase-shifting algorithm computes its result with an arctangent function, the phase of the modulated structured light is limited to [-π, π]; that is, whenever the modulated phase exceeds [-π, π], it wraps around again. The resulting principal phase values are shown in Fig. 6(c).
During phase recovery, de-jump processing is required to recover the wrapped phase into a continuous phase. As shown in Fig. 6(d), the left side is the modulated continuous phase map and the right side is the reference continuous phase map.
In the third step, the modulated continuous phase and the reference continuous phase are subtracted to obtain the phase difference (i.e., the phase information); this phase difference characterizes the depth information of the measured object relative to the reference plane. Substituting the phase difference into the phase-to-depth conversion formula (whose parameters are obtained by calibration) yields the three-dimensional model of the measured object shown in Fig. 6(e).
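The three steps above can be illustrated with a short NumPy sketch. It is a minimal rendering under stated assumptions (four fringe images with π/2 phase steps, a reference continuous phase map from calibration, and a single calibrated phase-to-depth scale factor standing in for the full conversion formula), not the patent's exact formulation.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by pi/2 each.

    With I_k = A + B*cos(phi + (k-1)*pi/2), the standard identity gives
    phi = arctan((I4 - I2) / (I1 - I3)), wrapped into [-pi, pi].
    """
    return np.arctan2(i4 - i2, i1 - i3)

def unwrap_and_to_depth(wrapped, reference_phase, depth_per_radian):
    # Remove the 2*pi jumps (the "de-jump" processing in the text),
    # then take the difference against the reference plane's phase and
    # scale it with the calibrated phase-to-depth conversion factor.
    continuous = np.unwrap(np.unwrap(wrapped, axis=0), axis=1)
    phase_diff = continuous - reference_phase
    return phase_diff * depth_per_radian  # depth relative to reference plane
```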
It should be appreciated that in practical applications, depending on the concrete application scenario, the structured light employed in the embodiments of the present invention may be any other pattern besides the above grating.
As another possible implementation, the present invention may also use speckle structured light to collect the current user's depth information.
Specifically, the method by which speckle structured light obtains depth information uses a diffraction element that is essentially a flat plate and has a relief diffraction structure with a particular phase distribution; its cross section has a stepped relief structure of two or more levels. The thickness of the substrate in the diffraction element is approximately 1 micron, the heights of the steps are non-uniform, and the heights may range from 0.7 micron to 0.9 micron. The structure shown in Fig. 7(a) is a local diffraction structure of the collimating beam-splitting element of this embodiment, and Fig. 7(b) is a cross-sectional side view along section A-A, with both coordinate axes in microns. The speckle pattern generated by speckle structured light is highly random, and the pattern changes with distance. Therefore, before depth information is obtained with speckle structured light, the speckle patterns in space must first be calibrated: for example, within a range of 0 to 4 meters from the structured light camera 122, one reference plane is taken every 1 centimeter, so 400 speckle images are saved after calibration; the smaller the calibration spacing, the higher the accuracy of the obtained depth information. Then, the structured light projector 121 projects the speckle structured light onto the measured object (i.e., the current user), and the height differences on the surface of the measured object alter the speckle pattern of the projected light. After the structured light camera 122 photographs the speckle pattern projected onto the measured object (i.e., the structured light image), the speckle pattern is cross-correlated one by one with the 400 speckle images saved during calibration, yielding 400 correlation images. The position of the measured object in space shows a peak in the correlation images, and superimposing these peaks and performing interpolation yields the measured object's depth information.
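A minimal sketch of the correlation search just described, assuming a pre-calibrated stack of reference speckle images (for example 400 planes at 1 cm spacing) and using zero-lag normalized correlation per plane; a full implementation would correlate over spatial shifts per pixel and interpolate between planes, as the text notes.

```python
import numpy as np

def depth_from_speckle(captured, reference_stack, plane_depths_cm):
    """Score the captured speckle image against each calibrated reference
    plane and return the depth of the best-matching plane.

    reference_stack: (N, H, W) calibrated speckle images
    plane_depths_cm: (N,) depth of each reference plane
    """
    cap = (captured - captured.mean()) / (captured.std() + 1e-9)
    scores = []
    for ref in reference_stack:
        r = (ref - ref.mean()) / (ref.std() + 1e-9)
        # zero-mean normalized correlation at zero lag; the text's
        # procedure additionally superimposes per-plane correlation
        # peaks and interpolates between them
        scores.append(float((cap * r).mean()))
    best = int(np.argmax(scores))
    return plane_depths_cm[best]
```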
An ordinary diffraction element diffracts a beam into multiple diffracted beams of widely varying intensity, so the risk of injury to human eyes is high, and even re-diffracting the diffracted light yields beams of low uniformity; the projection effect of beams diffracted by an ordinary diffraction element onto the measured object is therefore poor. This embodiment uses a collimating beam-splitting element, which not only collimates non-collimated light but also splits it: the non-collimated light reflected by the mirror emerges, after passing through the collimating beam-splitting element, as multiple collimated beams at different angles, and these emergent collimated beams have approximately equal cross-sectional areas and approximately equal energy flux, so projecting with the scattered light after beam diffraction works better. Meanwhile, since the laser output is dispersed into each beam, the risk of injuring human eyes is further reduced; and compared with other uniformly arranged structured light, speckle structured light consumes less power for the same collection effect.
Referring to Fig. 8, in some embodiments, step 05 of processing each frame of the scene image and each frame of the depth image to extract the current user's action information includes:
051: identifying the face region in each frame of the scene image;
052: obtaining the depth information corresponding to the face region from the depth image corresponding to the scene image;
053: determining the depth range of the person region according to the depth information of the face region;
054: determining, according to the depth range of the person region, the person region that is connected to the face region and falls within the depth range to obtain a person region image; and
057: processing the person region image to obtain the current user's action information.
Referring again to Fig. 3, in some embodiments, steps 051, 052, 053, 054, and 057 may be implemented by the processor 20.
In other words, the processor 20 may further identify the face region in each frame of the scene image, obtain the depth information corresponding to the face region from the corresponding depth image, determine the depth range of the person region from that depth information, determine the person region that is connected to the face region and falls within that depth range to obtain a person region image, and process the person region image to obtain the current user's action information.
Here, the action information includes at least one of the current user's expression and limb actions: it may be the current user's expression alone, the limb actions alone, or both the expression and the limb actions.
Specifically, a trained deep learning model may first be used to identify the face region in each frame of the scene image, and since each frame of the scene image corresponds one-to-one with a frame of the depth image, the depth information of the face region in each frame can then be determined. Because the face region includes features such as the nose, eyes, ears, and lips, each feature has different corresponding depth data in the depth image; for example, when the face faces the depth image acquisition component 12, the depth data corresponding to the nose may be small in the captured depth image while that corresponding to the ears may be large. Therefore, the depth information of the face region may be a single value or a range of values; when it is a single value, the value may be obtained by averaging the depth data of the face region or by taking its median.
Because the person region contains the face region, that is, the person region lies together with the face region within a certain depth range, the processor 20 may, after determining the depth information of the face region, set the depth range of the person region according to it, and then extract the person region that falls within this depth range and is connected to the face region, thereby obtaining the person region image.
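A minimal sketch of this depth-range segmentation, assuming a detected face bounding box and a depth map aligned with the scene image; the face depth is taken as the median of the face pixels and the mask keeps pixels within an assumed band around it. A full implementation would additionally enforce connectivity to the face region, for example with a flood fill.

```python
import numpy as np

def person_mask(depth, face_box, band=0.5):
    """depth: (H, W) depth map in meters; face_box: (x0, y0, x1, y1).

    Returns a boolean mask of pixels whose depth lies within `band`
    meters of the face depth, i.e. the person region's depth range.
    """
    x0, y0, x1, y1 = face_box
    face_depth = np.median(depth[y0:y1, x0:x1])  # single-value face depth
    lo, hi = face_depth - band, face_depth + band
    return (depth >= lo) & (depth <= hi)
```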
After obtaining the person region image, the processor 20 can process it. Specifically, the processor 20 may first identify the face region in the person region image and perform expression recognition on it, or directly process the face region obtained in step 051 to recognize the current user's expression. The processor 20 then processes the person region image in each frame of the scene image to obtain the current user's limb-action information, which may be obtained by template matching: the processor 20 matches the person region in the person region image against multiple person templates. The head of the person region is matched first; once the head match is complete, the remaining person templates that match the head are matched against the next body part, the upper-body trunk; once the trunk match is complete, the remaining templates matching both head and trunk are matched against the next body parts, the upper and lower limbs. The current user's limb-action information is thereby determined by template matching, as sketched below. The processor 20 then renders the recognized expression and limb actions of the current user onto the predetermined three-dimensional foreground image, so that the character or animal/plant in the predetermined three-dimensional foreground image follows and imitates the current user's expression and limb actions. Finally, the processor 20 fuses the rendered predetermined three-dimensional foreground image with the predetermined three-dimensional background image to obtain a merged image.
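The cascade just described can be sketched as follows; `match_part` is a hypothetical part matcher (it could be built on a routine such as cv2.matchTemplate), and the template structure and acceptance threshold are assumptions for illustration.

```python
def match_action(person_image, templates, match_part):
    """Cascade matching: head first, then upper-body trunk, then limbs.

    templates: list of dicts with 'head', 'trunk', 'limbs' patches and
    an 'action' label. match_part(image, patch) -> score in [0, 1].
    """
    threshold = 0.8  # assumed acceptance score
    candidates = templates
    for part in ("head", "trunk", "limbs"):
        # only templates that survived the previous part are considered
        candidates = [t for t in candidates
                      if match_part(person_image, t[part]) >= threshold]
        if not candidates:
            return None  # no template survived this stage
    # the best surviving template determines the limb-action information
    return max(candidates,
               key=lambda t: match_part(person_image, t["limbs"]))["action"]
```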
In this way, a predetermined three-dimensional foreground image that follows and imitates the current user's expression and limb actions can be obtained. Because the person region image is segmented from each frame of the scene image according to depth information, and the acquisition of depth information is not affected by environmental factors such as illumination or color temperature, the extracted person region image is more accurate; the expression and limb actions that the processor 20 obtains from it are likewise more accurate, so the processor 20 can use the more accurate action information to render the predetermined three-dimensional foreground image for a better following-and-imitating effect.
Referring to Fig. 9, in some embodiments, step 05 of processing each frame of the scene image and each frame of the depth image to extract the current user's action information further includes:
055: processing each frame of the scene image to obtain a full-field edge image of that frame; and
056: correcting the person region image corresponding to each full-field edge image according to that full-field edge image;
and step 057 of processing the person region image to obtain the current user's action information includes:
0571: processing the corrected person region image to obtain the current user's action information.
Referring again to Fig. 3, in some embodiments, steps 055, 056, and 0571 may be implemented by the processor 20.
In other words, the processor 20 may also process each frame of the scene image to obtain its full-field edge image, correct the corresponding person region image according to that full-field edge image, and process the corrected person region image to obtain the current user's action information.
The processor 20 first performs edge extraction on each frame of the scene image to obtain multiple frames of full-field edge images, where the edge lines in a scene image's full-field edge image include the edge lines of the current user and of background objects in the scene where the current user is located. Specifically, edge extraction may be performed on each frame of the scene image with the Canny operator. The core of the Canny edge extraction algorithm mainly includes the following steps: first, the scene image is convolved with a 2D Gaussian filter template to eliminate noise; then, the gradient magnitude of each pixel's gray value is obtained with a differential operator, the gradient direction of each pixel's gray value is calculated from the gradient, and the adjacent pixels along the gradient direction can be found from that direction; next, each pixel is traversed, and if the gray value of a pixel is not the maximum compared with the gray values of the two adjacent pixels before and after it along its gradient direction, the pixel is considered not to be an edge point. In this way, the pixels at edge positions in the scene image can be determined, thereby obtaining the full-field edge image of the scene image after edge extraction.
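In practice the Canny extraction described above is available directly in OpenCV; the following minimal sketch applies the Gaussian smoothing step explicitly and leaves gradient computation, non-maximum suppression, and hysteresis thresholding to cv2.Canny. The threshold values are illustrative.

```python
import cv2

def full_field_edges(scene_image_bgr):
    """Full-field edge image of one scene frame via the Canny operator."""
    gray = cv2.cvtColor(scene_image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 1.4)  # the 2D Gaussian step
    return cv2.Canny(gray, 50, 150)             # illustrative thresholds
```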
Each frame of the scene image corresponds to one frame of the full-field edge image and, likewise, to one frame of the person region image, so the full-field edge images and the person region images correspond one-to-one. After obtaining the full-field edge image of a scene image, the processor 20 corrects the corresponding person region image according to it. It will be appreciated that the person region is obtained by merging all pixels that are connected to the face region and fall within the set depth range, and in some scenes there may be objects other than the person that are connected to the face region and fall within that depth range. Therefore, the full-field edge image of the scene image can be used to correct the person region image and obtain a more accurate person region.
Further, the processor 20 may also perform a second-order correction on the corrected person region, for example dilating it to enlarge the person region and retain its edge details. In this way, the person region image processed by the processor 20 is more accurate. A sketch of both corrections follows.
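A sketch of the edge-based correction and the dilation pass, under the assumption that the person mask and the full-field edge image are same-sized binary arrays; zeroing the mask on edge pixels is one simple way to cut leakage across scene edges, and the structuring-element size is illustrative.

```python
import cv2
import numpy as np

def refine_person_mask(person_mask, edge_image):
    """Trim the person mask with the scene's edge structure, then dilate
    slightly to preserve edge detail, as described above."""
    mask = person_mask.astype(np.uint8)
    # zero the mask on strong scene edges so that regions leaking across
    # an object boundary become disconnected from the person
    mask[edge_image > 0] = 0
    # second-order correction: slight dilation to recover edge details
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.dilate(mask, kernel, iterations=1)
```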
Referring to Fig. 10, in some embodiments, step 08 of fusing each rendered frame of the predetermined three-dimensional foreground image with the predetermined three-dimensional background image to obtain multiple merged frames for output as a video image includes:
0811: obtaining the predetermined fusion region in each frame of the predetermined three-dimensional background image;
0812: determining the pixel region to be replaced of the predetermined fusion region according to the predetermined three-dimensional foreground image corresponding to that frame of the predetermined three-dimensional background image;
0813: replacing the pixel region to be replaced of the predetermined fusion region with the three-dimensional foreground image corresponding to that frame of the predetermined three-dimensional background image to obtain the merged image; and
0814: processing the multiple merged frames to output a video image.
Referring again to Fig. 3, in some embodiments, steps 0811, 0812, 0813, and 0814 may be implemented by the processor 20.
In other words, the processor 20 may obtain the predetermined fusion region in each frame of the predetermined three-dimensional background image, determine the pixel region to be replaced of the predetermined fusion region according to the corresponding predetermined three-dimensional foreground image, replace that pixel region with the corresponding three-dimensional foreground image to obtain the merged image, and process the multiple merged frames to output a video image.
It will be appreciated that when the predetermined three-dimensional background image is obtained by modeling an actual scene, the depth data corresponding to each of its pixels can be obtained directly during modeling; when it is produced by animation, the depth data of each pixel can be set by the producer; in addition, every object present in the predetermined three-dimensional background image is also known. Therefore, before image fusion is performed with the predetermined three-dimensional background image, the fusion position of the predetermined three-dimensional foreground image, i.e., the predetermined fusion region, can be calibrated in advance according to the depth data and the objects present in the background. Since the characters or animals and plants in different predetermined three-dimensional foreground images differ in size, the processor 20 needs to determine the pixel region to be replaced within the predetermined fusion region according to the size of the character or animal/plant in each frame of the predetermined three-dimensional foreground image. Replacing that pixel region with the predetermined three-dimensional foreground image then yields the merged image after fusion, thereby fusing the predetermined three-dimensional foreground image with the predetermined three-dimensional background image.
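A minimal compositing sketch of the replacement step: the foreground's opaque pixels overwrite the pixel region to be replaced inside the calibrated fusion region. The RGBA/alpha convention is an assumption; the patent only specifies that the region's pixels are replaced.

```python
import numpy as np

def fuse_frame(background, foreground_rgba, fusion_origin):
    """Paste one rendered 3D foreground frame into the predetermined
    fusion region of one background frame.

    background: (H, W, 3) uint8; foreground_rgba: (h, w, 4) uint8 with
    alpha marking the character's pixels; fusion_origin: (x, y) of the
    calibrated fusion region's top-left corner.
    """
    merged = background.copy()
    x, y = fusion_origin
    h, w = foreground_rgba.shape[:2]
    roi = merged[y:y + h, x:x + w]
    alpha = foreground_rgba[..., 3:4] > 0          # pixels to be replaced
    roi[:] = np.where(alpha, foreground_rgba[..., :3], roi)
    return merged
```

Calling such a routine on every rendered frame yields the multi-frame merged sequence that step 0814 processes into a video image.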
Referring to Fig. 11, in some embodiments, step 08 of fusing each rendered frame of the predetermined three-dimensional foreground image with the predetermined three-dimensional background image to obtain multiple merged frames for output as a video image includes:
0821: processing each frame of the predetermined three-dimensional background image to obtain a full-field edge image of that frame;
0822: obtaining the depth data of the predetermined three-dimensional background image;
0823: determining the computed fusion region of each frame of the predetermined three-dimensional background image according to its full-field edge image and the depth data;
0824: determining the pixel region to be replaced of the computed fusion region according to the predetermined three-dimensional foreground image corresponding to that frame of the predetermined three-dimensional background image;
0825: replacing the pixel region to be replaced with the predetermined three-dimensional foreground image corresponding to that frame of the predetermined three-dimensional background image to obtain the merged image; and
0826: processing the multiple merged frames to output a video image.
Referring again to Fig. 3, in some embodiments, steps 0821, 0822, 0823, 0824, and 0825 may be implemented by the processor 20.
In other words, the processor 20 may process each frame of the predetermined three-dimensional background image to obtain its full-field edge image, obtain the depth data of the predetermined three-dimensional background image, determine the computed fusion region of each frame according to its full-field edge image and the depth data, determine the pixel region to be replaced of the computed fusion region according to the corresponding predetermined three-dimensional foreground image, replace that pixel region with the corresponding predetermined three-dimensional foreground image to obtain the merged image, and process the multiple merged frames to output a video image.
It will be appreciated that if, when the predetermined three-dimensional background image is fused with the predetermined three-dimensional foreground image, the foreground's fusion position has not been calibrated in advance, the processor 20 must first determine that position within the predetermined three-dimensional background image. Specifically, the processor 20 first performs edge extraction on the predetermined three-dimensional background image to obtain a full-field edge image and obtains the background image's depth data, which is obtained during modeling or animation production. The processor 20 then determines the computed fusion region in the predetermined three-dimensional background image according to the full-field edge image and the depth data. Since the characters or animals and plants in different predetermined three-dimensional foreground images differ in size, their size must be calculated, and the pixel region to be replaced in the computed fusion region is determined according to that size. Finally, replacing the pixel region to be replaced of each frame's computed fusion region with the predetermined three-dimensional foreground image yields the multiple merged frames, thereby fusing the predetermined three-dimensional foreground image with the predetermined three-dimensional background image.
After the processor 20 obtains the multiple merged frames, they are arranged in sequence and stored; the processor 20 may store them in a video format to form a video image, and when the video image is displayed at a certain frame rate on the display 50 of the electronic device 1000 (shown in Fig. 13), the user can watch smooth video footage.
In some embodiments, there may be one or more predetermined fusion regions or computed fusion regions in the predetermined three-dimensional background image. When there is only one such region, the fusion position of the predetermined three-dimensional foreground image in the predetermined three-dimensional background image is that unique region. When there are several, the fusion position may be any one of them; further, since the predetermined three-dimensional foreground image carries depth information, the region whose depth information matches that of the predetermined three-dimensional foreground image can be selected among the candidate regions as the fusion position, so as to obtain a better fusion effect, as sketched below.
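Choosing among several candidate fusion regions by depth, as described above, can be sketched as picking the region whose calibrated depth is closest to the foreground's depth; the region record structure is assumed for illustration.

```python
def pick_fusion_region(regions, foreground_depth):
    """regions: list of dicts like {'origin': (x, y), 'depth': meters}.

    Returns the candidate whose depth best matches the foreground's,
    which tends to give the more plausible fusion result."""
    return min(regions, key=lambda r: abs(r["depth"] - foreground_depth))
```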
Referring to Fig. 12, in some embodiments, the image processing method of the embodiments of the present invention further includes:
091: collecting the current user's sound information; and
092: merging the video image with the sound information to output a sound video.
Referring again to Fig. 3, in some embodiments, the image processing apparatus 100 further includes an acoustoelectric element 70. Step 091 may be implemented by the acoustoelectric element 70, and step 092 by the processor 20. In other words, the acoustoelectric element 70 may collect the current user's sound information, and the processor 20 may merge the sound information with the video image to output a sound video.
Specifically, when the visible-light camera 11 and the depth image acquisition component 12 are turned on to capture scene images and depth images, the acoustoelectric element 70 is also turned on to capture the current user's sound information at the same time, so the sound information captured by the acoustoelectric element 70 stays synchronized with the video image formed by the multiple merged frames. The processor 20 then merges the sound information with the video image and outputs the result as a sound video; when the sound video is played on the display 50 of the electronic device 1000 (shown in Fig. 13), its picture and sound play back in sync. In the embodiments of the invention, the acoustoelectric element 70 may be a microphone.
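Once the merged frames are encoded into a video file, merging in the recorded sound information is a standard muxing step; the sketch below invokes the ffmpeg command-line tool (assumed to be installed) from Python, with illustrative file names.

```python
import subprocess

def mux_sound_video(video_path, audio_path, out_path, fps=30):
    """Combine a silent merged-frame video with the captured audio track.

    Because audio capture started together with image capture, simply
    muxing the two streams keeps picture and sound in sync on playback.
    """
    subprocess.run(
        ["ffmpeg", "-y",
         "-r", str(fps), "-i", video_path,   # merged-image video stream
         "-i", audio_path,                   # acoustoelectric element's audio
         "-c:v", "copy", "-c:a", "aac",
         "-shortest", out_path],
        check=True,
    )
```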
Referring to Figs. 2 and 13, an embodiment of the present invention also provides an electronic device 1000. The electronic device 1000 includes the image processing apparatus 100, which may be implemented with hardware and/or software and includes an imaging device 10 and the processor 20.
The imaging device 10 includes the visible-light camera 11 and the depth image acquisition component 12.
Specifically, the visible-light camera 11 includes an image sensor 111 and a lens 112 (there may be one or more lenses) and may be used to capture the current user's color information to obtain multiple frames of scene images, where the image sensor 111 includes a color filter array (such as a Bayer filter array). While acquiring each frame of the scene image, each imaging pixel in the image sensor 111 senses the light intensity and wavelength information in the photographed scene and generates a set of raw image data; the image sensor 111 sends this raw image data to the processor 20, which performs denoising, interpolation, and other operations on it to obtain a color scene image. The processor 20 may process each image pixel in the raw image data one by one in various formats; for example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the processor 20 may process each image pixel at the same or a different bit depth.
The depth image acquisition component 12 includes the structured light projector 121 and the structured light camera 122 and may be used to capture the current user's depth information to obtain depth images. The structured light projector 121 projects structured light onto the current user, where the structured light pattern may be laser stripes, Gray codes, sinusoidal fringes, or a randomly arranged speckle pattern. The structured light camera 122 includes an image sensor 1221 and a lens 1222 (there may be one or more lenses). The image sensor 1221 captures the multiple frames of structured light images that the structured light projector 121 projects onto the current user; each frame of the structured light image may be sent by the depth acquisition component 12 to the processor 20 for demodulation, phase recovery, phase information calculation, and other processing to obtain the current user's depth information.
In some embodiments, the functions of the visible-light camera 11 and the structured light camera 122 may be realized by a single camera; in other words, the imaging device 10 includes only one camera and one structured light projector 121, and that camera can capture both scene images and structured light images.
Besides using structured light to obtain depth images, the current user's depth images may also be obtained by other depth acquisition methods such as binocular vision or time-of-flight (TOF).
In addition, the image processing apparatus 100 also includes a memory 30. The memory 30 may be embedded in the electronic device 1000 or may be a memory independent of the electronic device 1000, and may include a direct memory access (DMA) feature. The raw image data captured by the visible-light camera 11 or the structured-light image data captured by the depth image acquisition component 12 may be transferred to the memory 30 for storage or caching. The processor 20 may read the raw image data from the memory 30 and process it to obtain scene images, and may read the structured-light image data and process it to obtain depth images. The scene images and depth images may also be stored in the memory 30 for the processor 20 to call at any time: for example, the processor 20 calls the multiple frames of scene images and depth images to extract the current user's action information, and fuses the predetermined three-dimensional foreground images rendered according to the action information with the corresponding predetermined three-dimensional background images to obtain multiple merged frames, which are arranged in sequence or stored to form a video image. The predetermined three-dimensional foreground images, predetermined three-dimensional background images, merged images, and video images may also be stored in the memory 30.
The image processing apparatus 100 may also include a display 50. The display 50 may obtain the video image directly from the processor 20 or from the memory 30, and displays the video image for the user to watch or for further processing by a graphics engine or a graphics processing unit (GPU). The image processing apparatus 100 also includes an encoder/decoder 60, which may encode and decode the image data of the scene images, depth images, predetermined three-dimensional foreground images, predetermined three-dimensional background images, merged images, video images, and so on; the encoded image data may be stored in the memory 30 and decompressed by the decoder for display before an image is shown on the display 50. The encoder/decoder 60 may be realized by a central processing unit (CPU), a GPU, or a coprocessor; in other words, the encoder/decoder 60 may be any one or more of a CPU, a GPU, and a coprocessor.
The image processing apparatus 100 also includes a control logic 40. While the imaging device 10 is imaging, the processor 20 analyzes the data obtained by the imaging device to determine image statistics for one or more control parameters of the imaging device 10 (for example, the exposure time). The processor 20 sends the image statistics to the control logic 40, which controls the imaging device 10 to image with the determined control parameters. The control logic 40 may include a processor and/or a microcontroller that executes one or more routines (such as firmware), and these routines may determine the control parameters of the imaging device 10 from the received image statistics.
The image processing apparatus 100 also includes the acoustoelectric element 70, which converts sound into a current output based on the principle of electromagnetic induction. When the current user speaks, the air inside the acoustoelectric element 70 is driven to vibrate so that a slight displacement occurs between the internal coil and the magnetic core, cutting magnetic induction lines and generating a current. The acoustoelectric element 70 sends the current to the processor 20, which processes the current to generate the sound information; the sound information may be sent to the memory 30 for storage. When the processor 20 merges the video image with the sound information to obtain a sound video, the processor 20 may send the sound video to the display 50 and an electroacoustic component (not shown); the display 50 shows the video pictures of the sound video while the electroacoustic component plays back the sound information in sync.
Referring to Fig. 14, the electronic device 1000 of the embodiments of the present invention includes one or more processors 20, a memory 30, and one or more programs 31. The one or more programs 31 are stored in the memory 30 and configured to be executed by the one or more processors 20. The programs 31 include instructions for performing the image processing method of any of the above embodiments.
For example, the programs 31 include instructions for performing the image processing method described in the following steps:
01: capturing multiple frames of scene images of a current user at a preset frequency;
03: capturing multiple frames of depth images of the current user at the preset frequency;
05: processing each frame of the scene image and each frame of the depth image to extract action information of the current user;
07: rendering a predetermined three-dimensional foreground image according to the action information so that each frame of the predetermined three-dimensional foreground image follows the current user's actions; and
08: fusing each rendered frame of the predetermined three-dimensional foreground image with a predetermined three-dimensional background image to obtain multiple merged frames for output as a video image.
For another example program 31 also includes being used for the instruction for performing the image processing method described in following steps:
051: Identifying the face region in each frame of the scene images;
052: Obtaining the depth information corresponding to the face region from the depth image corresponding to the scene image;
053: Determining the depth range of the person region according to the depth information of the face region;
054: Determining, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range, to obtain a person region image; and
057: Processing the person region image to obtain the action information of the current user.
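A minimal sketch of steps 051 to 057 follows, assuming the face box is already detected and using a depth-threshold flood fill for the "connected with the face region and falls within the depth range" test; the patent does not fix these internals.

from collections import deque
import numpy as np

def person_region(depth, face_box, margin=0.3):
    """Boolean mask of pixels connected to the face and inside its depth range."""
    x0, y0, x1, y1 = face_box                           # 051: face region (given)
    face_depth = float(np.median(depth[y0:y1, x0:x1]))  # 052: face depth info
    lo, hi = face_depth - margin, face_depth + margin   # 053: depth range
    mask = np.zeros(depth.shape, bool)
    queue = deque([((y0 + y1) // 2, (x0 + x1) // 2)])   # seed inside the face
    while queue:                                        # 054: grow the region
        y, x = queue.popleft()
        if mask[y, x] or not (lo <= depth[y, x] <= hi):
            continue
        mask[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < depth.shape[0] and 0 <= nx < depth.shape[1]:
                queue.append((ny, nx))
    return mask                      # 057: the person region image to process

depth = np.full((120, 160), 3.0)
depth[30:100, 50:110] = 1.2                             # a person at 1.2 m
print(person_region(depth, (60, 35, 100, 70)).sum())    # person pixel count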
The computer-readable recording medium of the embodiment of the present invention includes a computer program used in combination with the electronic installation 1000 capable of imaging. The computer program can be executed by the processor 20 to complete the image processing method of any one of the above embodiments. For example, the computer program can be executed by the processor 20 to complete the image processing method described in the following steps:
01: Collecting multiple frames of scene images of the current user at a predetermined frequency;
03: Collecting multiple frames of depth images of the current user at the predetermined frequency;
05: Processing each frame of the scene images and each frame of the depth images to extract the action information of the current user;
07: Rendering the predetermined three-dimensional foreground image according to the action information so that each frame of the predetermined three-dimensional foreground image follows the action of the current user; and
08: Merging each rendered frame of the predetermined three-dimensional foreground image with the predetermined three-dimensional background image to obtain multiple frames of merged images and output a video image.
For another example computer program can be also performed by processor 20 to complete the image processing method described in following steps:
051: Identifying the face region in each frame of the scene images;
052: Obtaining the depth information corresponding to the face region from the depth image corresponding to the scene image;
053: Determining the depth range of the person region according to the depth information of the face region;
054: Determining, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range, to obtain a person region image; and
057: Processing the person region image to obtain the action information of the current user.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic references to the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in an appropriate manner in any one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of those different embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "multiple" means at least two, such as two, three, and so on, unless otherwise specifically defined.
Any process or method description in a flowchart, or otherwise described herein, can be understood as representing a module, fragment, or portion of code that includes one or more executable instructions for realizing the steps of a specific logical function or process. The scope of the preferred embodiments of the present invention includes other realizations, in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flowcharts, or otherwise described herein, can be considered, for example, an ordered list of executable instructions for realizing logical functions, and may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection portion (electronic device) with one or more wirings, a portable computer disk cartridge (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or, if necessary, processing it in another suitable way, and then stored in a computer memory.
It should be understood that each part of the present invention can be realized by hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods can be realized by software or firmware that is stored in a memory and executed by a suitable instruction execution system. For example, if realized by hardware, as in another embodiment, they can be realized by any one of the following technologies known in the art, or a combination thereof: a discrete logic circuit with logic gate circuits for realizing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those skilled in the art can understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing the relevant hardware through a program. The program can be stored in a computer-readable storage medium, and the program, when executed, includes one of, or a combination of, the steps of the method embodiments.
In addition, the functional units in the embodiments of the present invention can be integrated in one processing module, or each unit can exist physically alone, or two or more units can be integrated in one module. The above integrated module can be realized in the form of hardware, or in the form of a software functional module. If the integrated module is realized in the form of a software functional module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
The storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and cannot be understood as limitations on the present invention; those of ordinary skill in the art can change, modify, replace, and vary the above embodiments within the scope of the present invention.

Claims (18)

1. An image processing method for an electronic installation, characterized in that the image processing method includes:
Collecting multiple frames of scene images of a current user at a predetermined frequency;
Collecting multiple frames of depth images of the current user at the predetermined frequency;
Processing each frame of the scene images and each frame of the depth images to extract action information of the current user;
Rendering a predetermined three-dimensional foreground image according to the action information so that each frame of the predetermined three-dimensional foreground image follows the action of the current user; and
Merging each rendered frame of the predetermined three-dimensional foreground image with a predetermined three-dimensional background image to obtain multiple frames of merged images and output a video image.
2. The image processing method according to claim 1, characterized in that the image processing method also includes:
Collecting sound information of the current user; and
Merging the video image with the sound information to output a sound video.
3. The image processing method according to claim 1, characterized in that the step of collecting multiple frames of the depth images of the current user at the predetermined frequency includes:
Projecting structured light to the current user;
Shooting, at the predetermined frequency, multiple frames of structured light images modulated by the current user; and
Demodulating the phase information corresponding to each pixel of each frame of the structured light images to obtain the multiple frames of depth images.
4. The image processing method according to claim 3, characterized in that the step of demodulating the phase information corresponding to each pixel of each frame of the structured light images to obtain the multiple frames of depth images includes:
Demodulating the phase information corresponding to each pixel in each frame of the structured light images;
Converting the phase information into depth information; and
Generating the depth image according to the depth information.
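As an illustration of claims 3 and 4, the following minimal sketch assumes a four-step phase-shifting scheme (four fringe images offset by 90 degrees) and a linear phase-to-depth conversion; the claims fix neither the demodulation scheme nor the conversion.

import numpy as np

def demodulate_phase(i1, i2, i3, i4):
    """Per-pixel wrapped phase of structured light modulated by the user."""
    return np.arctan2(i4 - i2, i1 - i3)

def phase_to_depth(phase, scale=0.05, offset=1.0):
    """Map phase to depth; a real system calibrates this relation."""
    return offset + scale * phase

fringe_images = [np.random.rand(480, 640) for _ in range(4)]   # captured frames
depth_image = phase_to_depth(demodulate_phase(*fringe_images))
print(depth_image.shape)                                       # one depth frame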
5. The image processing method according to claim 1, characterized in that the step of merging each rendered frame of the predetermined three-dimensional foreground image with the predetermined three-dimensional background image to obtain multiple frames of merged images and output a video image includes:
Obtaining a predetermined fusion region in each frame of the predetermined three-dimensional background image;
Determining the pixel region to be replaced in the predetermined fusion region according to the predetermined three-dimensional foreground image corresponding to that frame of the predetermined three-dimensional background image;
Replacing the pixel region to be replaced of the predetermined fusion region with the predetermined three-dimensional foreground image corresponding to that frame of the predetermined three-dimensional background image to obtain the merged image; and
Processing the multiple frames of merged images to output the video image.
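As an illustration of claim 5, the following minimal sketch places the rendered foreground at a predetermined fusion region and replaces only the pixels the foreground actually covers; the region layout and the alpha test are assumptions.

import numpy as np

def merge_frame(background, foreground_rgba, region):
    """Replace the to-be-replaced pixels of the fusion region with foreground."""
    y, x = region                             # top-left of the fusion region
    h, w = foreground_rgba.shape[:2]
    merged = background.copy()
    patch = merged[y:y + h, x:x + w]
    covered = foreground_rgba[..., 3] > 0     # the pixel region to be replaced
    patch[covered] = foreground_rgba[..., :3][covered]
    return merged

background = np.zeros((480, 640, 3), np.uint8)
foreground = np.zeros((100, 80, 4), np.uint8)
foreground[20:80, 20:60] = (255, 200, 180, 255)        # an opaque figure
merged = merge_frame(background, foreground, region=(200, 300))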
6. The image processing method according to claim 1, characterized in that the step of merging each rendered frame of the predetermined three-dimensional foreground image with the predetermined three-dimensional background image to obtain multiple frames of merged images and output a video image includes:
Processing each frame of the predetermined three-dimensional background image to obtain a whole-field edge image of each frame of the predetermined three-dimensional background image;
Obtaining depth data of the predetermined three-dimensional background image;
Determining a calculated fusion region of each frame of the predetermined three-dimensional background image according to the whole-field edge image and the depth data of each frame of the predetermined three-dimensional background image;
Determining the pixel region to be replaced in the calculated fusion region according to the predetermined three-dimensional foreground image corresponding to that frame of the predetermined three-dimensional background image;
Replacing the pixel region to be replaced with the predetermined three-dimensional foreground image corresponding to that frame of the predetermined three-dimensional background image to obtain the merged image; and
Processing the multiple frames of merged images to output the video image.
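As an illustration of claim 6, the following minimal sketch combines a whole-field edge image with the background's depth data to compute a fusion region; Canny edges and a fixed depth threshold are assumptions, since the claim fixes neither operator.

import cv2
import numpy as np

def calculated_fusion_region(background_gray, background_depth, max_depth=2.0):
    """Pick near-camera pixels away from strong background edges."""
    edges = cv2.Canny(background_gray, 50, 150)         # whole-field edge image
    near = (background_depth < max_depth).astype(np.uint8) * 255
    region = cv2.bitwise_and(near, cv2.bitwise_not(edges))
    return region > 0

gray = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
depth_data = np.random.rand(480, 640) * 4.0
mask = calculated_fusion_region(gray, depth_data)
print(int(mask.sum()))                                  # candidate pixel count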
7. The image processing method according to claim 1, characterized in that the predetermined three-dimensional foreground image includes at least one of a three-dimensional virtual person, a three-dimensional real person, and three-dimensional animals and plants, where the three-dimensional real person excludes the current user himself; the predetermined three-dimensional background image includes a predetermined three-dimensional background image obtained by modeling an actual scene, and/or a predetermined three-dimensional background image obtained by cartoon making, and the predetermined three-dimensional background image can be randomly selected or selected by the current user.
8. The image processing method according to claim 1, characterized in that the action information includes at least one of the expression and the limb action of the current user.
9. An image processing apparatus for an electronic installation, characterized in that the image processing apparatus includes:
A visible light camera, where the visible light camera is used to collect multiple frames of scene images of a current user at a predetermined frequency;
A depth image collection component, where the depth image collection component is used to collect multiple frames of depth images of the current user at the predetermined frequency; and
A processor, where the processor is used to:
Process each frame of the scene images and each frame of the depth images to extract action information of the current user;
Render a predetermined three-dimensional foreground image according to the action information so that each frame of the predetermined three-dimensional foreground image follows the action of the current user; and
Merge each rendered frame of the predetermined three-dimensional foreground image with a predetermined three-dimensional background image to obtain multiple frames of merged images and output a video image.
10. The image processing apparatus according to claim 9, characterized in that the image processing apparatus also includes an acoustoelectric element, where the acoustoelectric element is used to collect sound information of the current user; and
The processor is also used to merge the video image with the sound information to output a sound video.
11. The image processing apparatus according to claim 9, characterized in that the depth image collection component includes a structured light projector and a structured light camera, where the structured light projector is used to project structured light to the current user; and
The structured light camera is used to:
Shoot, at the predetermined frequency, multiple frames of structured light images modulated by the current user; and
Demodulate the phase information corresponding to each pixel of each frame of the structured light images to obtain the multiple frames of depth images.
12. The image processing apparatus according to claim 11, characterized in that the structured light camera is also used to:
Demodulate the phase information corresponding to each pixel in each frame of the structured light images;
Convert the phase information into depth information; and
Generate the depth image according to the depth information.
13. The image processing apparatus according to claim 9, characterized in that the processor is also used to:
Obtain a predetermined fusion region in each frame of the predetermined three-dimensional background image;
Determine the pixel region to be replaced in the predetermined fusion region according to the predetermined three-dimensional foreground image corresponding to that frame of the predetermined three-dimensional background image;
Replace the pixel region to be replaced of the predetermined fusion region with the predetermined three-dimensional foreground image corresponding to that frame of the predetermined three-dimensional background image to obtain the merged image; and
Process the multiple frames of merged images to output the video image.
14. The image processing apparatus according to claim 9, characterized in that the processor is also used to:
Process each frame of the predetermined three-dimensional background image to obtain a whole-field edge image of each frame of the predetermined three-dimensional background image;
Obtain depth data of the predetermined three-dimensional background image;
Determine a calculated fusion region of each frame of the predetermined three-dimensional background image according to the whole-field edge image and the depth data of each frame of the predetermined three-dimensional background image;
Determine the pixel region to be replaced in the calculated fusion region according to the predetermined three-dimensional foreground image corresponding to that frame of the predetermined three-dimensional background image;
Replace the pixel region to be replaced with the predetermined three-dimensional foreground image corresponding to that frame of the predetermined three-dimensional background image to obtain the merged image; and
Process the multiple frames of merged images to output the video image.
15. The image processing apparatus according to claim 9, characterized in that the predetermined three-dimensional foreground image includes at least one of a three-dimensional virtual person, a three-dimensional real person, and three-dimensional animals and plants, where the three-dimensional real person excludes the current user himself; the predetermined three-dimensional background image includes a predetermined three-dimensional background image obtained by modeling an actual scene, and/or a predetermined three-dimensional background image obtained by cartoon making, and the predetermined three-dimensional background image can be randomly selected or selected by the current user.
16. The image processing apparatus according to claim 9, characterized in that the action information includes at least one of the expression and the limb action of the current user.
17. An electronic installation, characterized in that the electronic installation includes:
One or more processors;
A memory; and
One or more programs, where the one or more programs are stored in the memory and are configured to be executed by the one or more processors, and the programs include instructions for performing the image processing method according to any one of claims 1 to 8.
18. A computer-readable recording medium, characterized in that it includes a computer program used in combination with an electronic installation capable of imaging, where the computer program can be executed by a processor to complete the image processing method according to any one of claims 1 to 8.
CN201710811472.9A 2017-09-11 2017-09-11 Image processing method and device, electronic installation and computer-readable recording medium Pending CN107590793A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710811472.9A CN107590793A (en) 2017-09-11 2017-09-11 Image processing method and device, electronic installation and computer-readable recording medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710811472.9A CN107590793A (en) 2017-09-11 2017-09-11 Image processing method and device, electronic installation and computer-readable recording medium

Publications (1)

Publication Number Publication Date
CN107590793A true CN107590793A (en) 2018-01-16

Family

ID=61050668

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710811472.9A Pending CN107590793A (en) 2017-09-11 2017-09-11 Image processing method and device, electronic installation and computer-readable recording medium

Country Status (1)

Country Link
CN (1) CN107590793A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101685549A (en) * 2008-09-23 2010-03-31 何云 Real human computer three-dimensional Chinese painting cartoon
CN103440677A (en) * 2013-07-30 2013-12-11 四川大学 Multi-view free stereoscopic interactive system based on Kinect somatosensory device
CN105049747A (en) * 2015-08-06 2015-11-11 广州市博源数码科技有限公司 System for identifying static image and converting static image into dynamic display
CN106131536A (en) * 2016-08-15 2016-11-16 万象三维视觉科技(北京)有限公司 A kind of bore hole 3D augmented reality interactive exhibition system and methods of exhibiting thereof
CN106502388A (en) * 2016-09-26 2017-03-15 惠州Tcl移动通信有限公司 A kind of interactive movement technique and head-wearing type intelligent equipment
CN106909911A (en) * 2017-03-09 2017-06-30 广东欧珀移动通信有限公司 Image processing method, image processing apparatus and electronic installation
CN106937039A (en) * 2017-04-26 2017-07-07 努比亚技术有限公司 A kind of imaging method based on dual camera, mobile terminal and storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107622495A (en) * 2017-09-11 2018-01-23 广东欧珀移动通信有限公司 Image processing method and device, electronic installation and computer-readable recording medium
CN108525305A (en) * 2018-03-26 2018-09-14 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108525305B (en) * 2018-03-26 2020-08-14 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN108986062A (en) * 2018-07-23 2018-12-11 Oppo(重庆)智能科技有限公司 Image processing method and device, electronic device, storage medium and computer equipment
CN109005348A (en) * 2018-08-22 2018-12-14 Oppo广东移动通信有限公司 The control method of electronic device and electronic device
CN111182348A (en) * 2018-11-09 2020-05-19 阿里巴巴集团控股有限公司 Live broadcast picture display method and device
CN111182348B (en) * 2018-11-09 2022-06-14 阿里巴巴集团控股有限公司 Live broadcast picture display method and device, storage device and terminal

Similar Documents

Publication Publication Date Title
CN107590793A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107509045A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107610077A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107707835A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107707831A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107707838A (en) Image processing method and device
CN107452034A (en) Image processing method and its device
CN107610080A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107734264A (en) Image processing method and device
CN107610078A (en) Image processing method and device
CN107705278A (en) The adding method and terminal device of dynamic effect
CN107644440A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107454336A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107705243A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107613223A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107610076A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107527335A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107613228A (en) The adding method and terminal device of virtual dress ornament
CN107682656A (en) Background image processing method, electronic equipment and computer-readable recording medium
CN107734265A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107730509A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107592491A (en) Video communication background display methods and device
CN107622495A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107613239A (en) Video communication background display methods and device
CN107590795A (en) Image processing method and device, electronic installation and computer-readable recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.

Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant before: Guangdong OPPO Mobile Communications Co., Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20180116
