CN107529020A - Image processing method and device, electronic device and computer-readable storage medium - Google Patents

Image processing method and device, electronic device and computer-readable storage medium

Info

Publication number
CN107529020A
Authority
CN
China
Prior art keywords
image
person
size
depth
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710812665.6A
Other languages
Chinese (zh)
Other versions
CN107529020B (en)
Inventor
张学勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710812665.6A priority Critical patent/CN107529020B/en
Publication of CN107529020A publication Critical patent/CN107529020A/en
Application granted granted Critical
Publication of CN107529020B publication Critical patent/CN107529020B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The invention discloses an image processing method. The image processing method includes: acquiring a scene image and a depth image of a current user; processing the scene image and the depth image to extract a person region image of the current user; obtaining the original size of the person region image and a first size ratio of the current user in the scene image; adjusting the size of the person region image according to the first size ratio, a second size ratio of a predetermined three-dimensional background image, a predetermined depth, and the original size; and fusing the size-adjusted person region image with the predetermined three-dimensional background image to obtain a merged image. The invention also discloses an image processing apparatus, an electronic device and a computer-readable storage medium. The image processing method and apparatus, the electronic device and the computer-readable storage medium of the present invention adjust the size of the person region image according to the first size ratio, the second size ratio and the predetermined depth, so that the size-adjusted person region image can be fused with the predetermined three-dimensional background image more harmoniously.

Description

Image processing method and device, electronic device and computer-readable storage medium
Technical field
The present invention relates to the field of image processing technology, and more particularly to an image processing method, an image processing apparatus, an electronic device and a computer-readable storage medium.
Background
When a real scene image is fused with a virtual scene image, the size ratios of the people or objects in the real scene image and the virtual scene image are inconsistent, so the fused image looks uncoordinated and its visual effect is poor.
Summary of the invention
Embodiments of the present invention provide an image processing method, an image processing apparatus, an electronic device and a computer-readable storage medium.
The image processing method of the embodiments of the present invention is used for an electronic device. The image processing method includes:
acquiring a scene image of a current user;
acquiring a depth image of the current user;
processing the scene image and the depth image to extract the person region of the current user in the scene image and obtain a person region image;
obtaining the original size of the person region image;
obtaining a first size ratio of the current user in the scene image;
adjusting the size of the person region image according to the first size ratio, a second size ratio of a predetermined three-dimensional background image, a predetermined depth of the person region image in the predetermined three-dimensional background image, and the original size; and
fusing the size-adjusted person region image with the predetermined three-dimensional background image to obtain a merged image.
The image processing apparatus of the embodiments of the present invention is used for an electronic device. The image processing apparatus includes a visible light camera, a depth image acquisition component and a processor. The visible light camera is used to acquire a scene image of a current user. The depth image acquisition component is used to acquire a depth image of the current user. The processor is used to:
process the scene image and the depth image to extract the person region of the current user in the scene image and obtain a person region image;
obtain the original size of the person region image;
obtain a first size ratio of the current user in the scene image;
adjust the size of the person region image according to the first size ratio, a second size ratio of a predetermined three-dimensional background image, a predetermined depth of the person region image in the predetermined three-dimensional background image, and the original size; and
fuse the size-adjusted person region image with the predetermined three-dimensional background image to obtain a merged image.
The electronic device of the embodiments of the present invention includes one or more processors, a memory, and one or more programs. The one or more programs are stored in the memory and configured to be executed by the one or more processors, and the programs include instructions for executing the above image processing method.
The computer-readable storage medium of the embodiments of the present invention includes a computer program used in combination with an electronic device capable of imaging, and the computer program can be executed by a processor to implement the above image processing method.
The image processing method, image processing apparatus, electronic device and computer-readable storage medium of the embodiments of the present invention adjust the size of the person region image according to the first size ratio of the person region image in the scene image, the second size ratio of the predetermined three-dimensional background image, the predetermined depth of the person region image in the predetermined three-dimensional background image, and the original size of the person region image, so that the size-adjusted person region image can be fused with the predetermined three-dimensional background image more harmoniously. In addition, the person region is extracted from the scene image by acquiring the depth image of the current user. Since the acquisition of the depth image is not easily affected by factors such as illumination and the color distribution in the scene, the person region extracted through the depth image is more accurate, and in particular the boundary of the person region can be calibrated accurately. Furthermore, the merged image obtained by fusing a more accurate person region image with the predetermined three-dimensional background is better.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and in part will become apparent from the following description, or be learned by practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of an image processing method according to some embodiments of the present invention.
Fig. 2 is a schematic structural diagram of an electronic device according to some embodiments of the present invention.
Fig. 3 is a schematic diagram of an image processing apparatus according to some embodiments of the present invention.
Fig. 4 is a schematic flowchart of an image processing method according to some embodiments of the present invention.
Fig. 5 is a schematic flowchart of an image processing method according to some embodiments of the present invention.
Fig. 6(a) to Fig. 6(e) are scene schematic diagrams of structured light measurement according to an embodiment of the present invention.
Fig. 7(a) and Fig. 7(b) are scene schematic diagrams of structured light measurement according to an embodiment of the present invention.
Fig. 8 is a schematic flowchart of an image processing method according to some embodiments of the present invention.
Fig. 9 is a schematic flowchart of an image processing method according to some embodiments of the present invention.
Fig. 10 is a schematic flowchart of an image processing method according to some embodiments of the present invention.
Fig. 11 is a schematic flowchart of an image processing method according to some embodiments of the present invention.
Fig. 12 is a schematic flowchart of an image processing method according to some embodiments of the present invention.
Fig. 13 is a schematic flowchart of an image processing method according to some embodiments of the present invention.
Fig. 14 is a schematic flowchart of an image processing method according to some embodiments of the present invention.
Fig. 15 is a schematic flowchart of an image processing method according to some embodiments of the present invention.
Fig. 16 is a schematic diagram of an electronic device according to some embodiments of the present invention.
Fig. 17 is a schematic diagram of an electronic device according to some embodiments of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the present invention and should not be construed as limiting the present invention.
Referring to Fig. 1 and Fig. 2 together, the image processing method of the embodiments of the present invention is used for an electronic device 1000. The image processing method includes:
01: acquiring a scene image of a current user;
02: acquiring a depth image of the current user;
03: processing the scene image and the depth image to extract the person region of the current user in the scene image and obtain a person region image;
04: obtaining the original size of the person region image;
05: obtaining a first size ratio of the current user in the scene image;
06: adjusting the size of the person region image according to the first size ratio, a second size ratio of a predetermined three-dimensional background image, a predetermined depth of the person region image in the predetermined three-dimensional background image, and the original size; and
07: fusing the size-adjusted person region image with the predetermined three-dimensional background image to obtain a merged image.
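For illustration only, the chain of steps 01-07 can be sketched as a thin Python driver in which each step is supplied as a callable; every name here is a hypothetical stand-in, not part of the disclosure, and concrete sketches of the individual steps appear in the corresponding sections below.

    def merge_frame(capture_scene, capture_depth, extract_person, original_size,
                    first_size_ratio, scale_person, fuse,
                    background, second_size_ratio, predetermined_depth):
        scene = capture_scene()                          # step 01: visible light camera
        depth = capture_depth()                          # step 02: depth acquisition component
        person = extract_person(scene, depth)            # step 03: person region image
        size0 = original_size(person, scene)             # step 04
        ratio1 = first_size_ratio(person, scene, depth)  # step 05
        scaled = scale_person(person, ratio1, second_size_ratio,
                              predetermined_depth, size0)  # step 06
        return fuse(scaled, background)                  # step 07: merged image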
Referring to Fig. 3, the image processing method of the embodiments of the present invention can be realized by the image processing apparatus 100 of the embodiments of the present invention. The image processing apparatus 100 of the embodiments of the present invention is used for an electronic device 1000. The image processing apparatus 100 includes a visible light camera 11, a depth image acquisition component 12 and a processor 20. Step 01 can be realized by the visible light camera 11, step 02 can be realized by the depth image acquisition component 12, and steps 03, 04, 05, 06 and 07 can be realized by the processor 20.
In other words, the visible light camera 11 can be used to acquire the scene image of the current user. The depth image acquisition component 12 can be used to acquire the depth image of the current user. The processor 20 can be used to: process the scene image and the depth image to extract the person region of the current user in the scene image and obtain a person region image; obtain the original size of the person region image; obtain a first size ratio of the current user in the scene image; adjust the size of the person region image according to the first size ratio, the second size ratio of the predetermined three-dimensional background image, the predetermined depth of the person region image in the predetermined three-dimensional background image, and the original size; and fuse the size-adjusted person region image with the predetermined three-dimensional background image to obtain a merged image.
The scene image may be a grayscale image or a color image, and the depth image characterizes the depth information of each person or object in the scene containing the current user. The scene range of the scene image is consistent with that of the depth image, and each pixel in the scene image can find the depth information corresponding to that pixel in the depth image.
The image processing apparatus 100 of the embodiments of the present invention can be applied to the electronic device 1000 of the embodiments of the present invention. In other words, the electronic device 1000 of the embodiments of the present invention includes the image processing apparatus 100 of the embodiments of the present invention.
In some embodiments, the electronic device 1000 includes a mobile phone, a tablet computer, a notebook computer, a smart bracelet, a smart watch, a smart helmet, smart glasses, and the like.
When an existing scene image is fused with a predetermined three-dimensional background image, the scene image is directly placed into the predetermined three-dimensional background image, and the merged image formed by this fusion approach looks very uncoordinated because the size ratios of the scene image and the predetermined three-dimensional background image are inconsistent. The image processing method, the image processing apparatus 100 and the electronic device 1000 of the embodiments of the present invention adjust the size of the person region image according to the first size ratio of the person region image in the scene image, the second size ratio of the predetermined three-dimensional background image, the predetermined depth of the person region image in the predetermined three-dimensional background image, and the original size of the person region image, so that the size-adjusted person region image can be fused with the predetermined three-dimensional background image more harmoniously. In addition, the person region is extracted from the scene image by acquiring the depth image of the current user. Since the acquisition of the depth image is not easily affected by factors such as illumination and the color distribution in the scene, the person region extracted through the depth image is more accurate, and in particular the boundary of the person region can be calibrated accurately. Furthermore, the merged image obtained by fusing a more accurate person region image with the predetermined three-dimensional background is better.
In some embodiments, the predetermined three-dimensional background image may be a predetermined three-dimensional background image obtained by modeling an actual scene, or a predetermined three-dimensional background image obtained by animation. The predetermined three-dimensional background image may be selected randomly by the processor 20 or selected by the current user. It should be noted that, in the embodiments of the present invention, a predetermined three-dimensional background image obtained by animation may be made with reference to a real scene; that is, both the predetermined three-dimensional background obtained by animation and the predetermined three-dimensional background image obtained by modeling an actual scene have a second size ratio.
In the embodiments of the present invention, the size may refer to width, height, area, etc., which is not specifically limited here.
Referring to Fig. 4, in some embodiments, the step 02 of acquiring the depth image of the current user includes:
021: projecting structured light onto the current user;
022: capturing a structured light image modulated by the current user; and
023: demodulating phase information corresponding to each pixel of the structured light image to obtain the depth image.
Referring again to Fig. 3, in some embodiments, the depth image acquisition component 12 includes a structured light projector 121 and a structured light camera 122. Step 021 can be realized by the structured light projector 121, and steps 022 and 023 can be realized by the structured light camera 122.
In other words, the structured light projector 121 can be used to project structured light onto the current user, and the structured light camera 122 can be used to capture the structured light image modulated by the current user and demodulate the phase information corresponding to each pixel of the structured light image to obtain the depth image.
Specifically, after the structured light projector 121 projects structured light of a certain pattern onto the face and body of the current user, a structured light image modulated by the current user is formed on the surface of the face and body of the current user. The structured light camera 122 captures the modulated structured light image, and the structured light image is then demodulated to obtain the depth image. The pattern of the structured light may be laser stripes, Gray code, sinusoidal fringes, non-uniform speckle, or the like.
Referring to Fig. 5, in some embodiments, the step 023 of demodulating the phase information corresponding to each pixel of the structured light image to obtain the depth image includes:
0231: demodulating the phase information corresponding to each pixel in the structured light image;
0232: converting the phase information into depth information; and
0233: generating the depth image according to the depth information.
Referring again to Fig. 2, in some embodiments, steps 0231, 0232 and 0233 can be realized by the structured light camera 122.
In other words, the structured light camera 122 can be further used to demodulate the phase information corresponding to each pixel in the structured light image, convert the phase information into depth information, and generate the depth image according to the depth information.
Specifically, compared with the unmodulated structured light, the phase information of the modulated structured light is changed, and the structured light presented in the structured light image is distorted; the changed phase information can characterize the depth information of the object. Therefore, the structured light camera 122 first demodulates the phase information corresponding to each pixel in the structured light image, and then calculates the depth information according to the phase information, so as to obtain the final depth image.
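As a minimal sketch of steps 0232 and 0233, assuming the demodulation of step 0231 has already produced a per-pixel phase-difference map, and assuming a linear calibrated phase-to-depth model (the patent leaves the actual conversion formula to system calibration), the conversion could look like:

    import numpy as np

    def depth_from_phase(phase_diff, gain=1.0, offset=0.0):
        # Assumed linear calibrated model: depth = gain * phase + offset per pixel.
        # The resulting array is the depth image of step 0233.
        return (gain * phase_diff + offset).astype(np.float32)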
In order that those skilled in the art may understand more clearly the process of acquiring the depth image of the face and body of the current user from structured light, a widely used grating projection technique (fringe projection technique) is taken as an example below to illustrate its specific principle. The grating projection technique belongs to surface structured light in a broad sense.
As shown in Fig. 6(a), when surface structured light is used for projection, sinusoidal fringes are first generated by computer programming, and the sinusoidal fringes are projected onto the measured object by the structured light projector 121. The structured light camera 122 is then used to capture the degree of bending of the fringes after modulation by the object, and the bent fringes are demodulated to obtain the phase, which is then converted into depth information to obtain the depth image. To avoid errors or error coupling, the depth image acquisition component 12 needs to be calibrated before depth information is collected with structured light. The calibration includes calibration of geometric parameters (for example, the relative position parameters between the structured light camera 122 and the structured light projector 121), calibration of the internal parameters of the structured light camera 122, calibration of the internal parameters of the structured light projector 121, and so on.
Specifically, in the first step, sinusoidal fringes are generated by computer programming. Since the phase subsequently needs to be obtained from the distorted fringes, for example by the four-step phase-shifting method, four fringe patterns with phase differences of π/2 are generated here. The structured light projector 121 projects the four fringe patterns time-sequentially onto the measured object (the mask shown in Fig. 6(a)), and the structured light camera 122 captures the image on the left of Fig. 6(b) while reading the fringes of the reference plane shown on the right of Fig. 6(b).
In the second step, phase recovery is carried out. The structured light camera 122 calculates the modulated phase map from the four captured modulated fringe patterns (i.e., the structured light images); the result obtained at this point is a wrapped phase map. Since the result of the four-step phase-shifting algorithm is calculated by an arctangent function, the phase of the modulated structured light is limited to [-π, π]; that is, whenever the modulated phase exceeds [-π, π], it wraps around again. The resulting principal phase value is shown in Fig. 6(c).
In the phase recovery process, jump elimination is required, that is, the wrapped phase must be restored to a continuous phase. As shown in Fig. 6(d), the left side is the modulated continuous phase map and the right side is the reference continuous phase map.
In the third step, the modulated continuous phase and the reference continuous phase are subtracted to obtain the phase difference (i.e., the phase information). The phase difference characterizes the depth information of the measured object relative to the reference plane. The phase difference is then substituted into the phase-to-depth conversion formula (in which the parameters involved are calibrated) to obtain the three-dimensional model of the measured object as shown in Fig. 6(e).
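The phase computation of the second and third steps can be sketched as follows; this is the standard four-step phase-shifting formula, with numpy.unwrap standing in for the jump-elimination step (real systems use more robust two-dimensional unwrapping):

    import numpy as np

    def wrapped_phase(i1, i2, i3, i4):
        # Four fringe images with pi/2 phase steps; arctan wraps the phase to [-pi, pi].
        return np.arctan2(i4 - i2, i1 - i3)

    def continuous_phase(wrapped):
        # Jump elimination: unwrap the 2*pi discontinuities along each row.
        return np.unwrap(wrapped, axis=1)

    def phase_difference(modulated, reference):
        # Third step: modulated continuous phase minus reference continuous phase;
        # the result characterizes depth relative to the reference plane.
        return continuous_phase(modulated) - continuous_phase(reference)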
It should be understood that, in practical applications, the structured light employed in the embodiments of the present invention may be any other pattern besides the above grating, depending on the specific application scenario.
As a possible implementation, the present invention may also use speckle structured light to collect the depth information of the current user.
Specifically, the method of obtaining depth information with speckle structured light uses a diffraction element that is essentially a flat plate and has a relief diffraction structure with a particular phase distribution; its cross section consists of a step relief structure with two or more levels. The thickness of the substrate in the diffraction element is approximately 1 micron, the height of each step is non-uniform, and the height may range from 0.7 micron to 0.9 micron. The structure shown in Fig. 7(a) is a local diffraction structure of the collimation beam-splitting element of this embodiment, and Fig. 7(b) is a cross-sectional side view along section A-A, with both the abscissa and the ordinate in microns. The speckle pattern generated by the speckle structured light is highly random, and the pattern changes with distance. Therefore, before depth information is obtained using speckle structured light, the speckle patterns in space must first be calibrated. For example, within a range of 0 to 4 meters from the structured light camera 122, one reference plane is taken every 1 centimeter, so 400 speckle images are saved after calibration; the smaller the calibration spacing, the higher the precision of the obtained depth information. Then, the structured light projector 121 projects the speckle structured light onto the measured object (i.e., the current user), and the height differences of the surface of the measured object change the speckle pattern of the speckle structured light projected onto it. After the structured light camera 122 captures the speckle pattern (i.e., the structured light image) projected on the measured object, the speckle pattern is cross-correlated one by one with the 400 speckle images saved after the earlier calibration, yielding 400 correlation images. The position of the measured object in space shows a peak on the corresponding correlation image, and the depth information of the measured object is obtained by superimposing these peaks and performing an interpolation operation.
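A drastically condensed sketch of the speckle matching described above, assuming the 400 calibrated reference speckle images are already loaded; a full implementation correlates per pixel region and interpolates between planes, whereas this toy version returns a single dominant depth from the strongest correlation peak:

    import numpy as np

    def speckle_depth_cm(captured, references, plane_spacing_cm=1.0):
        # references: the 400 reference speckle images, one per centimetre.
        peaks = []
        for ref in references:
            # Circular cross-correlation via FFT; take the peak for this plane.
            corr = np.fft.ifft2(np.fft.fft2(captured) * np.conj(np.fft.fft2(ref)))
            peaks.append(np.abs(corr).max())
        return int(np.argmax(peaks)) * plane_spacing_cm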
Since a common diffraction element obtains multiple diffracted beams after diffracting a beam, the light intensity of each diffracted beam differs greatly, and the risk of injury to human eyes is also large. Even if the diffracted light is re-diffracted, the uniformity of the obtained beams is relatively low, so the effect of projecting onto the measured object with beams diffracted by a common diffraction element is poor. In this embodiment, a collimation beam-splitting element is used. This element not only collimates non-collimated light, but also splits light; that is, the non-collimated light reflected by the mirror is emitted as multiple collimated beams at different angles after passing through the collimation beam-splitting element, and the emitted collimated beams have approximately equal cross-sectional areas and approximately equal energy fluxes, so the projection effect of the scattered light after beam diffraction is better. Meanwhile, the laser output light is dispersed into each beam, which further reduces the risk of injuring human eyes; and compared with other uniformly arranged structured light, speckle structured light consumes less power when achieving the same collection effect.
Referring to Fig. 8, in some embodiments, the step 03 of processing the scene image and the depth image to extract the person region of the current user in the scene image and obtain the person region image includes:
031: identifying the face region in the scene image;
032: obtaining depth information corresponding to the face region from the depth image;
033: determining the depth range of the person region according to the depth information of the face region; and
034: determining, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range, to obtain the person region image.
Referring again to Fig. 2, in some embodiments, steps 031, 032, 033 and 034 can be realized by the processor 20.
In other words, the processor 20 can be further used to identify the face region in the scene image, obtain depth information corresponding to the face region from the depth image, determine the depth range of the person region according to the depth information of the face region, and determine, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range, to obtain the person region image.
Specifically, a trained deep learning model can first be used to identify the face region in the scene image, and the depth information of the face region can then be determined according to the correspondence between the scene image and the depth image. Since the face region includes features such as the nose, eyes, ears and lips, the depth data corresponding to each feature of the face region in the depth image are different; for example, when the face faces the depth image acquisition component 12, in the depth image captured by the depth image acquisition component 12, the depth data corresponding to the nose may be smaller, while the depth data corresponding to the ears may be larger. Therefore, the depth information of the face region may be a single value or a range. When the depth information of the face region is a single value, that value may be obtained by averaging the depth data of the face region, or by taking the median of the depth data of the face region.
Since the person region includes the face region — in other words, the person region and the face region lie together within some depth range — after the processor 20 determines the depth information of the face region, the depth range of the person region can be set according to the depth information of the face region, and the person region that falls within this depth range and is connected with the face region can then be extracted according to the depth range of the person region to obtain the person region image.
In this way, the person region image can be extracted from the scene image according to the depth information. Since the acquisition of the depth information is not affected by factors such as illumination and color temperature in the environment, the extracted person region image is more accurate.
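A minimal sketch of steps 031-034, using OpenCV's stock Haar face detector as a stand-in for the trained deep learning model (the patent does not name a specific detector), and a fixed tolerance around the median face depth as the assumed depth range:

    import cv2
    import numpy as np

    def person_region_mask(scene_gray, depth, tolerance=0.5):
        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = detector.detectMultiScale(scene_gray)     # step 031
        if len(faces) == 0:
            return None
        x, y, w, h = faces[0]
        face_depth = np.median(depth[y:y + h, x:x + w])   # step 032
        in_range = (np.abs(depth - face_depth) < tolerance).astype(np.uint8)  # step 033
        # Step 034: keep only the connected component containing the face region.
        _, labels = cv2.connectedComponents(in_range)
        return (labels == labels[y + h // 2, x + w // 2]).astype(np.uint8)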
Referring to Fig. 9, in some embodiments, the image processing method further includes the following steps:
081: processing the scene image to obtain a full-field edge image of the scene image; and
082: correcting the person region image according to the full-field edge image of the scene image.
Referring again to Fig. 2, in some embodiments, steps 081 and 082 can be realized by the processor 20.
In other words, the processor 20 can also be used to process the scene image to obtain the full-field edge image of the scene image, and to correct the person region image according to the full-field edge image of the scene image.
The processor 20 first performs edge extraction on the scene image to obtain the full-field edge image of the scene image, where the edge lines in the full-field edge image of the scene image include the edge lines of the current user and of the background objects in the scene where the current user is located. Specifically, edge extraction can be performed on the scene image by the Canny operator. The core of the Canny edge extraction algorithm mainly includes the following steps: first, the scene image is convolved with a 2D Gaussian filter template to eliminate noise; then, the gradient value of the gray level of each pixel is obtained with a differential operator, the gradient direction of the gray level of each pixel is calculated according to the gradient values, and the adjacent pixels of each pixel along its gradient direction can be found through the gradient direction; then, each pixel is traversed, and if the gray value of a pixel is not the maximum compared with the gray values of the two adjacent pixels before and after it along its gradient direction, the pixel is considered not to be an edge point. In this way, the pixels at edge positions in the scene image can be determined, so as to obtain the full-field edge image of the scene image after edge extraction.
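In practice the pipeline described above is available as a single library call; a minimal sketch with assumed threshold values:

    import cv2

    def full_field_edge_image(scene_gray):
        # 2D Gaussian smoothing, then gradient computation, non-maximum
        # suppression and thresholding, all performed inside cv2.Canny.
        blurred = cv2.GaussianBlur(scene_gray, (5, 5), 0)
        return cv2.Canny(blurred, 50, 150)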
After obtaining the full-field edge image of the scene image, the processor 20 corrects the person region image according to the full-field edge image of the scene image. It is understood that the person region image is obtained by merging all the pixels in the scene image that are connected with the face region and fall within the set depth range; in some scenarios, there may be objects that are connected with the face region and also fall within the depth range. Therefore, in order to make the extracted person region image more accurate, the full-field edge image of the scene image can be used to correct the person region image.
Further, the processor 20 can also perform a second correction on the corrected person region image; for example, the corrected person region image can be dilated to enlarge the person region image so as to retain the edge details of the person region image.
Referring to Fig. 10, in some embodiments, the step 04 of obtaining the original size of the person region image includes:
041: counting the person pixel number of the person region image; and
042: obtaining the original size according to the person pixel number and the resolution of the scene image.
Referring again to Fig. 2, in some embodiments, steps 041 and 042 can be realized by the processor 20.
In other words, the processor 20 can be further used to count the person pixel number of the person region image, and obtain the original size according to the person pixel number and the resolution of the scene image.
Specifically, the original size of the person region image may refer to the image size corresponding to the person region image when the scene image is displayed. Since the scene image may have been scaled to different degrees, the original size of the person region image may have changed; therefore, the person pixel number of the person region image can be counted first. The person pixel number refers to the number of pixels contained in the person region image; it may be the total number of pixels of the person region image, the number of pixels between the leftmost pixel and the rightmost pixel of the person region image, or the number of pixels between the topmost pixel and the bottommost pixel of the person region image. It is understood that scaling of the scene image does not affect the person pixel number, but scaling of the scene image generally changes the resolution of the scene image; for example, if the scene image is enlarged to twice its original size, the resolution of the scene image becomes half of the original. Therefore, the original size of the person region image can be calculated from the person pixel number and the corresponding resolution of the scene image.
In some embodiments, the original size is the ratio of the person pixel number to the resolution of the scene image. In one embodiment, the person pixel number is the number of pixels between the topmost pixel and the bottommost pixel of the person region image; the person pixel number is 400, the resolution of the scene image is 800 pixels/inch, and the original size is calculated to be 400/800 = 0.5 inch, i.e., the height of the person region image is 0.5 inch.
Referring to Fig. 11, in some embodiments, the step 05 of obtaining the first size ratio of the current user in the scene image includes:
051: obtaining the actual size of the current user; and
052: calculating the first size ratio according to the actual size and the original size.
Referring again to Fig. 2, in some embodiments, steps 051 and 052 can be realized by the processor 20.
In other words, the processor 20 can be further used to obtain the actual size of the current user, and calculate the first size ratio according to the actual size and the original size.
Specifically, the first size ratio of the current user in the scene image may refer to the proportional relationship between the original size of the person region image and the actual size of the current user; therefore, the first size ratio can be obtained from the actual size of the current user and the original size of the person region image. In some embodiments, the first size ratio is the ratio of the original size to the actual size.
Referring to Fig. 12, in some embodiments, the step 051 of obtaining the actual size of the current user includes:
0511: obtaining the size of the image sensor corresponding to the scene image and the scene pixel number of the scene image;
0512: calculating the size of the real image of the current user according to the size of the image sensor corresponding to the scene image, the scene pixel number and the person pixel number; and
0513: calculating the actual size according to the depth of the current user, the focal length of the visible light camera 11 corresponding to the scene image, and the size of the real image.
Referring again to Fig. 2, in some embodiments, steps 0511, 0512 and 0513 can be realized by the processor 20.
In other words, the processor 20 can be further used to obtain the size of the image sensor of the visible light camera 11 and the scene pixel number of the scene image, calculate the size of the real image of the current user according to the size of the image sensor of the visible light camera 11, the scene pixel number and the person pixel number, and calculate the actual size according to the depth of the current user, the focal length of the visible light camera 11 and the size of the real image.
Specifically, the real image of the current user is the image formed on the image sensor. The size of the real image of the current user can be calculated from the size of the image sensor, the scene pixel number and the person pixel number, where the scene pixel number refers to the number of pixels contained in the scene image; it may be the total number of pixels of the scene image, the number of pixels between the leftmost pixel and the rightmost pixel of the scene image, or the number of pixels between the topmost pixel and the bottommost pixel of the scene image. In some embodiments, the ratio of the size of the real image to the size of the image sensor is equal to the ratio of the person pixel number to the scene pixel number. In one embodiment, the height of the image sensor is 2 inches, the number of pixels between the topmost pixel and the bottommost pixel of the scene image is 1000, and the number of pixels between the topmost pixel and the bottommost pixel of the person region image is 500; then the size of the real image is 1 inch, i.e., the height of the real image is 1 inch.
On the other hand, the actual size of the current user can be calculated from the depth of the current user, the focal length of the visible light camera 11 corresponding to the scene image, and the size of the real image. Specifically, according to the lens imaging formula 1/u + 1/v = 1/f, where u is the object distance, v is the image distance and f is the focal length, v = u*f/(u - f) is obtained. In addition, from H/h = u/v, where H is the actual size of the current user and h is the size of the real image, H = u*h/v is obtained. Substituting v = u*f/(u - f) into H = u*h/v gives H = u*h/f - h, where the object distance u can be understood as the distance between the current user and the visible light camera 11, i.e., the depth of the current user. Therefore, the actual size can be calculated from the depth of the current user, the focal length of the visible light camera 11 corresponding to the scene image, and the size of the real image.
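A worked sketch of this derivation (units assumed consistent), combining step 0512's real-image size with the thin-lens relation H = u*h/f - h:

    def actual_size(depth_u, focal_f, sensor_size, person_px, scene_px):
        # Step 0512: real-image size h scales the sensor size by the pixel ratio.
        h = sensor_size * person_px / scene_px
        # Step 0513: 1/u + 1/v = 1/f gives v = u*f/(u - f); with H/h = u/v,
        # H = u*h/v = u*h/f - h.
        return depth_u * h / focal_f - h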
Referring to Fig. 13, in some embodiments, the step 06 of adjusting the size of the person region image according to the first size ratio, the second size ratio of the predetermined three-dimensional background image, the predetermined depth of the person region image in the predetermined three-dimensional background image, and the original size includes:
061: obtaining the scaling ratio of the person region image according to the first size ratio, the second size ratio and the predetermined depth; and
062: adjusting the size of the person region image according to the scaling ratio and the original size to obtain the size-adjusted person region image.
Referring again to Fig. 2, in some embodiments, steps 061 and 062 can be realized by the processor 20.
In other words, the processor 20 can be further used to obtain the scaling ratio of the person region image according to the first size ratio, the second size ratio and the predetermined depth, and adjust the size of the person region image according to the scaling ratio and the original size to obtain the size-adjusted person region image.
In some embodiments, the second size ratio of the predetermined three-dimensional background image is known in advance; for example, it can be obtained in the construction process of the predetermined three-dimensional background image, and the manner of obtaining it may be the same as that of the first size ratio, which will not be repeated here. If the predetermined three-dimensional background image is formed by reducing a real scene by a factor of 10, the second size ratio can be determined to be 1:10. The scaling ratio of the person region image is obtained according to the first size ratio, the second size ratio and the predetermined depth. Specifically, a preliminary scaling ratio of the person region image is first determined according to the first size ratio and the second size ratio; for example, if the first size ratio (e.g., the ratio of the original size to the actual size) is 1:100 and the second size ratio (the ratio of the simulated size to the actual size) is 1:10, the preliminary scaling ratio of the person region image is determined to be a 10x enlargement. The scaling ratio of the person region image is then obtained according to the preliminary scaling ratio and the predetermined depth. For example, when the predetermined depth is 1 meter, the scaling ratio of the person region image may be the preliminary scaling ratio; when the predetermined depth is 2 meters, the scaling ratio of the person region image may be a first predetermined multiple of the preliminary scaling ratio, e.g., 0.9 times; and when the predetermined depth is 0.5 meter, the scaling ratio of the person region image may be a second predetermined multiple of the preliminary scaling ratio, e.g., 1.1 times, where the predetermined depth is inversely related to the scaling ratio. In this way, the scaling ratio of the person region image can be obtained, and the size of the person region image can be adjusted according to the scaling ratio and the original size to obtain the size-adjusted person region image. In one embodiment, the scaling ratio is 10x and the original size is 0.1 inch, so the size of the size-adjusted person region image is 1 inch.
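A sketch of steps 061 and 062 under the example numbers above; dividing the preliminary ratio by the predetermined depth is only one assumed realization of the stated inverse relation (the embodiment itself uses tabulated multipliers such as 0.9x at 2 meters and 1.1x at 0.5 meter):

    import cv2

    def scaling_ratio(first_ratio, second_ratio, predetermined_depth_m):
        # Preliminary ratio, e.g. (1/10) / (1/100) = 10x enlargement.
        preliminary = second_ratio / first_ratio
        return preliminary / predetermined_depth_m   # assumed inverse-with-depth law

    def resize_person(person, factor):
        h, w = person.shape[:2]
        return cv2.resize(person, (max(1, int(w * factor)), max(1, int(h * factor))))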
In some embodiments, the predetermined depth of the person region image in the predetermined three-dimensional background image can be set according to user demand. It is understood that, in other embodiments, the predetermined depth of the person region image in the predetermined three-dimensional background image can also be obtained by calculating the depth corresponding to a suitable position for placing the person region image in the predetermined three-dimensional background image, which is not specifically limited here.
Referring to Fig. 14, in some embodiments, the step 07 of fusing the size-adjusted person region image with the predetermined three-dimensional background image to obtain the merged image includes:
0711: obtaining the predetermined pixel region corresponding to the predetermined fusion region in the predetermined three-dimensional background image;
0712: determining the pixel region to be replaced of the predetermined fusion region according to the adjusted person region image; and
0713: replacing the pixel region to be replaced of the predetermined fusion region with the person region image to obtain the merged image.
Referring again to Fig. 2, in some embodiments, steps 0711, 0712 and 0713 can be realized by the processor 20.
In other words, the processor 20 can be further used to obtain the predetermined pixel region corresponding to the predetermined fusion region in the predetermined three-dimensional background image, determine the pixel region to be replaced of the predetermined fusion region according to the adjusted person region image, and replace the pixel region to be replaced of the predetermined fusion region with the person region image to obtain the merged image.
It is understood that when the predetermined three-dimensional background image is obtained by modeling an actual scene, the depth data corresponding to each pixel in the predetermined three-dimensional background image can be obtained directly in the modeling process; when the predetermined three-dimensional background image is obtained by animation, the depth data corresponding to each pixel in the predetermined three-dimensional background image can be set by the producer. In addition, each object present in the predetermined three-dimensional background image is also known. Therefore, before image fusion is performed with the predetermined three-dimensional background image, the fusion position of the person region image, i.e., the predetermined fusion region, can first be calibrated according to the depth data and the objects present in the predetermined three-dimensional background image. The processor 20 needs to determine the pixel region to be replaced in the predetermined fusion region according to the size of the adjusted person region image, and then replace the pixel region to be replaced in the predetermined fusion region with the person region image to obtain the fused merged image. In this way, the fusion of the person region image with the predetermined three-dimensional background image is realized.
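A minimal sketch of steps 0711-0713, assuming the predetermined fusion region is given as a top-left anchor inside a rendering of the background, and that the person region image carries an alpha channel marking person pixels:

    import numpy as np

    def fuse_at(background, person_rgba, anchor_xy):
        x, y = anchor_xy
        h, w = person_rgba.shape[:2]
        merged = background.copy()
        region = merged[y:y + h, x:x + w]              # pixel region to be replaced
        alpha = person_rgba[:, :, 3:4] / 255.0         # person mask
        merged[y:y + h, x:x + w] = (alpha * person_rgba[:, :, :3]
                                    + (1 - alpha) * region).astype(background.dtype)
        return merged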
Referring to Fig. 15, in some embodiments, the step 07 of fusing the size-adjusted person region image with the predetermined three-dimensional background image to obtain the merged image includes:
0721: processing the predetermined three-dimensional background image to obtain a full-field edge image of the predetermined three-dimensional background image;
0722: obtaining the depth data of the predetermined three-dimensional background image;
0723: determining the calculated fusion region of the predetermined three-dimensional background image according to the full-field edge image and the depth data of the predetermined three-dimensional background image;
0724: determining the pixel region to be replaced of the calculated fusion region according to the adjusted person region image; and
0725: replacing the pixel region to be replaced of the calculated fusion region with the person region image to obtain the merged image.
Referring again to Fig. 2, in some embodiments, steps 0721, 0722, 0723, 0724 and 0725 can be realized by the processor 20.
In other words, the processor 20 can be further used to process the predetermined three-dimensional background image to obtain the full-field edge image of the predetermined three-dimensional background image, obtain the depth data of the predetermined three-dimensional background image, determine the calculated fusion region of the predetermined three-dimensional background image according to the full-field edge image and the depth data of the predetermined three-dimensional background image, determine the pixel region to be replaced of the calculated fusion region according to the adjusted person region image, and replace the pixel region to be replaced of the calculated fusion region with the person region image to obtain the merged image.
It is understood that if the fusion position of the person region image is not calibrated in advance when the predetermined three-dimensional background image is fused with the person region image, the processor 20 first needs to determine the fusion position of the person region image in the predetermined three-dimensional background image. Specifically, the processor 20 first performs edge extraction on the predetermined three-dimensional background image to obtain the full-field edge image, and obtains the depth data of the predetermined three-dimensional background image, where the depth data is obtained in the modeling or animation process of the predetermined three-dimensional background image. Then, the processor 20 determines the calculated fusion region in the predetermined three-dimensional background image according to the full-field edge image and the depth data of the predetermined three-dimensional background image. Since the size of the person region image is affected by the collection distance of the visible light camera 11, the processor 20 needs to determine the pixel region to be replaced in the calculated fusion region according to the size of the adjusted person region image. Finally, the pixel region to be replaced in the calculated fusion region is replaced with the person region image, so as to obtain the merged image. In this way, the fusion of the person region image with the predetermined three-dimensional background image is realized.
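Where no fusion region is calibrated in advance, the selection of steps 0721-0723 might be sketched as below; scoring candidate windows by depth agreement while avoiding object edges is an assumed heuristic, not taken from the disclosure:

    import cv2
    import numpy as np

    def pick_fusion_region(background_gray, background_depth, target_depth, size_hw):
        edges = cv2.Canny(background_gray, 50, 150)    # full-field edge image
        h, w = size_hw
        best_diff, best_xy = None, None
        for y in range(0, background_depth.shape[0] - h, max(1, h // 2)):
            for x in range(0, background_depth.shape[1] - w, max(1, w // 2)):
                if edges[y:y + h, x:x + w].any():
                    continue                           # avoid crossing object boundaries
                diff = abs(np.median(background_depth[y:y + h, x:x + w]) - target_depth)
                if best_diff is None or diff < best_diff:
                    best_diff, best_xy = diff, (x, y)
        return best_xy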
The fused merged image can be displayed on the display screen of the electronic device 1000, or printed by a printer connected to the electronic device 1000.
In some embodiments, the person region image may be a two-dimensional person region image or a three-dimensional person region image. The processor 20 can extract a two-dimensional person region image from the scene image in combination with the depth information in the depth image; the processor 20 can also establish a three-dimensional image of the person region according to the depth information in the depth image and perform color filling on the three-dimensional person region in combination with the color information in the scene image to obtain a three-dimensional colored person region image.
In some embodiments, there may be one or more predetermined fusion regions or calculated fusion regions in the three-dimensional background image. When there is one predetermined fusion region, the fusion position of the two-dimensional or three-dimensional person region image in the predetermined three-dimensional background image is the above unique predetermined fusion region; when there is one calculated fusion region, the fusion position of the two-dimensional or three-dimensional person region image in the predetermined three-dimensional background image is the above unique calculated fusion region. When there are multiple predetermined fusion regions, the fusion position of the two-dimensional or three-dimensional person region image in the predetermined three-dimensional background image may be any one of the multiple predetermined fusion regions; furthermore, since the three-dimensional person region image has depth information, the predetermined fusion region whose depth information matches that of the three-dimensional person region image can be found among the multiple predetermined fusion regions and used as the fusion position, to obtain a better fusion effect. When there are multiple calculated fusion regions, the fusion position of the two-dimensional or three-dimensional person region image in the predetermined three-dimensional background image may be any one of the multiple calculated fusion regions; furthermore, since the three-dimensional person region image has depth information, the calculated fusion region whose depth information matches that of the three-dimensional person region image can be found among the multiple calculated fusion regions and used as the fusion position, to obtain a better fusion effect.
In some application scenarios, for example, the current user wishes to hide the current background during a video call with another person. In this case, the image processing method of the embodiments of the present invention can be used to fuse the person region image corresponding to the current user with the predetermined three-dimensional background, and then display the fused merged image to the other party. Since the current user is in a video call with the other party, the visible light camera 11 needs to capture the scene image of the current user in real time, the depth image acquisition component 12 also needs to collect the depth image corresponding to the current user in real time, and the processor 20 must process the scene image and the depth image collected in real time in a timely manner, so that the other party can see a smooth video picture composed of multiple frames of merged images.
Also referring to Fig. 2 and Figure 16, embodiment of the present invention also proposes a kind of electronic installation 1000.Electronic installation 1000 Including image processing apparatus 100.Image processing apparatus 100 can utilize hardware and/or software to realize.Image processing apparatus 100 Including imaging device 10 and processor 20.
Imaging device 10 includes visible image capturing first 11 and depth image acquisition component 12.
Specifically, it is seen that light video camera head 11 includes imaging sensor 111 and lens 112, it is seen that light video camera head 11 can be used for The colour information of active user is caught to obtain scene image, wherein, imaging sensor 111 includes color filter lens array (such as Bayer filter arrays), the number of lens 112 can be one or more.Visible image capturing first 11 is obtaining scene image process In, each imaging pixel in imaging sensor 111 senses luminous intensity and wavelength information in photographed scene, generation one Group raw image data;Imaging sensor 111 sends this group of raw image data into processor 20, and processor 20 is to original View data obtains colored scene image after carrying out the computings such as denoising, interpolation.Processor 20 can be in various formats to original Each image pixel in view data is handled one by one, for example, each image pixel can have the locating depth of 8,10,12 or 14 bits Degree, processor 20 can be handled each image pixel by identical or different bit depth.
Depth image acquisition component 12 includes structured light projector 121 and structure light video camera head 122, depth image collection group The depth information that part 12 can be used for catching active user is to obtain depth image.Structured light projector 121 is used to throw structure light Active user is incident upon, wherein, structured light patterns can be the speckle of laser stripe, Gray code, sine streak or random alignment Pattern etc..Structure light video camera head 122 includes imaging sensor 1221 and lens 1222, and the number of lens 1222 can be one or more It is individual.Imaging sensor 1221 is used for the structure light image that capturing structure light projector 121 is projected on active user.Structure light figure As can be sent by depth acquisition component 12 to processor 20 be demodulated, the processing such as phase recovery, phase information calculate to be to obtain The depth information of active user.
In some embodiments, the functions of the visible-light camera 11 and the structured light camera 122 can be realised by a single camera; in other words, the imaging device 10 includes only one camera and one structured light projector 121, and this camera can capture both the scene image and the structured light image.
Besides using structured light, the depth image of the current user can also be obtained by depth acquisition methods such as binocular (stereo) vision or Time of Flight (TOF).
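For the binocular (stereo) alternative, depth follows from disparity as depth = f·B/d. A minimal sketch, assuming a rectified grayscale stereo pair and illustrative calibration values:

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)   # assumed rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # StereoBM is fixed-point

f_px, baseline_m = 700.0, 0.06  # focal length in pixels, baseline in metres (assumed)
with np.errstate(divide="ignore", invalid="ignore"):
    depth_m = f_px * baseline_m / disparity  # invalid where disparity <= 0
```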
The processor 20 is further used to fuse the person region image, extracted from the scene image and the depth image, with the predetermined three-dimensional background image. When fusing the person region image with the predetermined three-dimensional background image, either the two-dimensional person region image or the three-dimensional colour person region image may be merged with the predetermined three-dimensional background image to obtain the merged image.
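The fusion itself can be as simple as alpha-compositing the (size-adjusted) person region image over the background at the chosen fusion position. A minimal sketch, assuming the person region image carries an alpha channel that marks person pixels:

```python
import numpy as np

def merge_images(person_rgba, background, x, y):
    """Alpha-composite the person region image onto the background at (x, y)."""
    h, w = person_rgba.shape[:2]
    alpha = person_rgba[:, :, 3:4].astype(np.float32) / 255.0
    roi = background[y:y + h, x:x + w].astype(np.float32)
    blended = alpha * person_rgba[:, :, :3] + (1.0 - alpha) * roi
    background[y:y + h, x:x + w] = blended.astype(np.uint8)
    return background
```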
In addition, the image processing apparatus 100 includes a memory 30. The memory 30 may be embedded in the electronic apparatus 1000 or may be a memory independent of the electronic apparatus 1000, and may include a direct memory access (DMA) feature. The raw image data collected by the visible-light camera 11 or the structured-light-image data collected by the depth image collection component 12 can be transferred to the memory 30 for storage or caching. The processor 20 can read the raw image data from the memory 30 and process it to obtain the scene image, and can also read the structured-light-image data from the memory 30 and process it to obtain the depth image. The scene image and the depth image may also be stored in the memory 30 for the processor 20 to call at any time; for example, the processor 20 calls the scene image and the depth image to perform person region extraction, and fuses the extracted person region image with the predetermined three-dimensional background image to obtain the merged image. The predetermined three-dimensional background image and the merged image may likewise be stored in the memory 30.
The image processing apparatus 100 may also include a display 50. The display 50 can obtain the merged image directly from the processor 20 or from the memory 30, and displays it for the user to watch or for further processing by a graphics engine or graphics processing unit (GPU). The image processing apparatus 100 also includes an encoder/decoder 60, which can encode and decode image data such as the scene image, the depth image, and the merged image; the encoded image data can be saved in the memory 30 and decompressed by the decoder before the image is shown on the display 50. The encoder/decoder 60 may be realised by a central processing unit (CPU), a GPU, or a coprocessor; in other words, the encoder/decoder 60 may be any one or more of a CPU, a GPU, and a coprocessor.
The image processing apparatus 100 also includes a control logic device 40. While the imaging device 10 is imaging, the processor 20 analyses the data obtained by the imaging device to determine image statistics for one or more control parameters (for example, exposure time) of the imaging device 10. The processor 20 sends the image statistics to the control logic device 40, and the control logic device 40 controls the imaging device 10 so that it images with the determined control parameters. The control logic device 40 may include a processor and/or microcontroller that executes one or more routines (such as firmware), and these routines can determine the control parameters of the imaging device 10 according to the received image statistics.
Referring to Fig. 17, the electronic apparatus 1000 of the embodiments of the present invention includes one or more processors 20, a memory 30, and one or more programs 31. The one or more programs 31 are stored in the memory 30 and configured to be executed by the one or more processors 20. The programs 31 include instructions for performing the image processing method of any of the above embodiments.
For example, the programs 31 include instructions for performing the image processing method described in the following steps (an illustrative sketch follows the list):
01: Obtain a scene image of the current user;
02: Obtain a depth image of the current user;
03: Process the scene image and the depth image to extract the person region of the current user in the scene image and obtain a person region image;
04: Obtain the original size of the person region image;
05: Obtain a first size ratio of the current user in the scene image;
06: Adjust the size of the person region image according to the first size ratio, a second size ratio of the predetermined three-dimensional background image, a predetermined depth of the person region image in the predetermined three-dimensional background image, and the original size; and
07: Fuse the size-adjusted person region image with the predetermined three-dimensional background image to obtain a merged image.
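As referenced above, a minimal end-to-end sketch of steps 04 to 07 follows. The patent does not spell out the scaling formula, so the sketch assumes the scaling ratio combines the two size ratios with a depth correction (a person placed deeper in the background should appear smaller); the formula and all names are illustrative only.

```python
import cv2

def adjust_and_merge(person_img, background, first_ratio, second_ratio,
                     predetermined_depth, reference_depth, x, y):
    # Step 06 (assumed formula): normalise the background's size ratio by the
    # person's own size ratio, then shrink with increasing predetermined depth.
    scale = (second_ratio / first_ratio) * (reference_depth / predetermined_depth)
    h, w = person_img.shape[:2]
    resized = cv2.resize(person_img, (max(1, int(w * scale)), max(1, int(h * scale))))
    # Step 07: paste the size-adjusted person region at the fusion position.
    rh, rw = resized.shape[:2]
    background[y:y + rh, x:x + rw] = resized
    return background
```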
For another example, the programs 31 also include instructions for performing the image processing method described in the following steps:
0231: Demodulate the phase information corresponding to each pixel in the structured light image;
0232: Convert the phase information into depth information; and
0233: Generate the depth image according to the depth information.
The computer-readable storage medium of the embodiments of the present invention includes a computer program used in combination with the electronic apparatus 1000 capable of imaging. The computer program can be executed by the processor 20 to complete the image processing method of any of the above embodiments.
For example, the computer program can be executed by the processor 20 to complete the image processing method described in the following steps:
01: Obtain a scene image of the current user;
02: Obtain a depth image of the current user;
03: Process the scene image and the depth image to extract the person region of the current user in the scene image and obtain a person region image;
04: Obtain the original size of the person region image;
05: Obtain a first size ratio of the current user in the scene image;
06: Adjust the size of the person region image according to the first size ratio, a second size ratio of the predetermined three-dimensional background image, a predetermined depth of the person region image in the predetermined three-dimensional background image, and the original size; and
07: Fuse the size-adjusted person region image with the predetermined three-dimensional background image to obtain a merged image.
For another example, the computer program can also be executed by the processor 20 to complete the image processing method described in the following steps:
0231: Demodulate the phase information corresponding to each pixel in the structured light image;
0232: Convert the phase information into depth information; and
0233: Generate the depth image according to the depth information.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic references to the above terms do not necessarily refer to the same embodiment or example. Moreover, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine the different embodiments or examples described in this specification and the features of those different embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of technical features concerned. Thus, a feature qualified by "first" or "second" may expressly or implicitly include at least one such feature. In the description of the present invention, "multiple" means at least two, such as two, three, and so on, unless specifically defined otherwise.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a specific logical function or process, and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially simultaneously or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention pertain.
The logic and/or steps represented in a flowchart or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fibre device, and a portable compact disc read-only memory (CDROM). The computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be appreciated that each part of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented with any one or a combination of the following techniques known in the art: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those skilled in the art will appreciate that all or part of the steps carried by the above method embodiments can be completed by instructing the relevant hardware through a program; the program may be stored in a computer-readable storage medium and, when executed, includes one of, or a combination of, the steps of the method embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing module, or each unit may exist physically on its own, or two or more units may be integrated in one module. The integrated module may be realised in the form of hardware or in the form of a software functional module. If the integrated module is realised in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.

Claims (16)

1. An image processing method for an electronic apparatus, characterised in that the image processing method comprises:
obtaining a scene image of a current user;
obtaining a depth image of the current user;
processing the scene image and the depth image to extract a person region of the current user in the scene image and obtain a person region image;
obtaining an original size of the person region image;
obtaining a first size ratio of the current user in the scene image;
adjusting a size of the person region image according to the first size ratio, a second size ratio of a predetermined three-dimensional background image, a predetermined depth of the person region image in the predetermined three-dimensional background image, and the original size; and
fusing the size-adjusted person region image with the predetermined three-dimensional background image to obtain a merged image.
2. The image processing method according to claim 1, characterised in that the step of obtaining the depth image of the current user comprises:
projecting structured light onto the current user;
capturing a structured light image modulated by the current user; and
demodulating phase information corresponding to each pixel of the structured light image to obtain the depth image.
3. The image processing method according to claim 2, characterised in that the step of demodulating the phase information corresponding to each pixel of the structured light image to obtain the depth image comprises:
demodulating the phase information corresponding to each pixel in the structured light image;
converting the phase information into depth information; and
generating the depth image according to the depth information.
4. The image processing method according to claim 1, characterised in that the step of obtaining the original size of the person region image comprises:
calculating a person pixel count of the person region image; and
obtaining the original size according to the person pixel count and a resolution of the scene image.
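As an illustration of claim 4 only: with a binary person mask, the person pixel count is the number of set pixels, and one plausible (but not claimed) normalisation of the "original size" is the fraction of the scene's resolution the person occupies.

```python
import numpy as np

def original_size(person_mask: np.ndarray) -> float:
    """One reading of claim 4: person pixel count over the scene resolution."""
    person_pixels = np.count_nonzero(person_mask)
    rows, cols = person_mask.shape  # the scene image's resolution
    return person_pixels / (rows * cols)
```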
5. The image processing method according to claim 1, characterised in that the step of obtaining the first size ratio of the current user in the scene image comprises:
obtaining an actual size of the current user; and
calculating the first size ratio according to the actual size and the original size.
6. The image processing method according to claim 5, characterised in that the step of obtaining the actual size of the current user comprises:
obtaining a size of an image sensor corresponding to the scene image and a scene pixel count of the scene image;
calculating a size of a real image of the current user according to the size of the image sensor corresponding to the scene image, the scene pixel count, and the person pixel count; and
calculating the actual size according to a depth of the current user, a focal length of a visible-light camera corresponding to the scene image, and the size of the real image.
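The computation in claim 6 corresponds to a pinhole-camera model: the pixel counts scale the sensor size to give the size of the real image on the sensor, and similar triangles with the depth and focal length give the actual size. A minimal sketch, with all names and units illustrative:

```python
def actual_size_mm(sensor_height_mm, scene_rows, person_rows, depth_mm, focal_mm):
    """Pinhole-model reading of claim 6 (names and units are assumptions)."""
    # Size of the person's real image on the sensor, from the pixel counts.
    image_height_mm = sensor_height_mm * (person_rows / scene_rows)
    # Similar triangles: actual_size / depth = image_size / focal_length.
    return depth_mm * image_height_mm / focal_mm

# e.g. a 4.2 mm sensor, a person spanning 2400 of 3000 rows, 2 m away, 4 mm lens:
# actual_size_mm(4.2, 3000, 2400, 2000.0, 4.0) -> 1680.0 (about 1.68 m tall)
```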
7. The image processing method according to claim 1, characterised in that the step of adjusting the size of the person region image according to the first size ratio, the second size ratio of the predetermined three-dimensional background image, the predetermined depth of the person region image in the predetermined three-dimensional background image, and the original size comprises:
obtaining a scaling ratio of the person region image according to the first size ratio, the second size ratio, and the predetermined depth; and
adjusting the size of the person region image according to the scaling ratio and the original size to obtain the size-adjusted person region image.
8. An image processing apparatus for an electronic apparatus, characterised in that the image processing apparatus comprises:
a visible-light camera, used to obtain a scene image of a current user;
a depth image collection component, used to obtain a depth image of the current user; and
a processor, used to:
process the scene image and the depth image to extract a person region of the current user in the scene image and obtain a person region image;
obtain an original size of the person region image;
obtain a first size ratio of the current user in the scene image;
adjust a size of the person region image according to the first size ratio, a second size ratio of a predetermined three-dimensional background image, a predetermined depth of the person region image in the predetermined three-dimensional background image, and the original size; and
fuse the size-adjusted person region image with the predetermined three-dimensional background image to obtain a merged image.
9. The image processing apparatus according to claim 8, characterised in that the depth image collection component comprises a structured light projector and a structured light camera, the structured light projector being used to project structured light onto the current user;
the structured light camera is used to:
capture a structured light image modulated by the current user; and
demodulate phase information corresponding to each pixel of the structured light image to obtain the depth image.
10. The image processing apparatus according to claim 9, characterised in that the structured light camera is further used to:
demodulate the phase information corresponding to each pixel in the structured light image;
convert the phase information into depth information; and
generate the depth image according to the depth information.
11. The image processing apparatus according to claim 8, characterised in that the processor is further used to:
calculate a person pixel count of the person region image; and
obtain the original size according to the person pixel count and a resolution of the scene image.
12. The image processing apparatus according to claim 8, characterised in that the processor is further used to:
obtain an actual size of the current user; and
calculate the first size ratio according to the actual size and the original size.
13. The image processing apparatus according to claim 12, characterised in that the processor is further used to:
obtain a size of an image sensor of the visible-light camera and a scene pixel count of the scene image;
calculate a size of a real image of the current user according to the size of the image sensor of the visible-light camera, the scene pixel count, and the person pixel count; and
calculate the actual size according to a depth of the current user, a focal length of the visible-light camera, and the size of the real image.
14. The image processing apparatus according to claim 8, characterised in that the processor is further used to:
obtain a scaling ratio of the person region image according to the first size ratio, the second size ratio, and the predetermined depth; and
adjust the size of the person region image according to the scaling ratio and the original size to obtain the size-adjusted person region image.
15. An electronic apparatus, characterised in that the electronic apparatus comprises:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including instructions for performing the image processing method of any one of claims 1 to 7.
16. A computer-readable storage medium, characterised by comprising a computer program used in combination with an electronic apparatus capable of imaging, the computer program being executable by a processor to complete the image processing method of any one of claims 1 to 7.
CN201710812665.6A 2017-09-11 2017-09-11 Image processing method and apparatus, electronic apparatus, and computer-readable storage medium Active CN107529020B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710812665.6A CN107529020B (en) 2017-09-11 2017-09-11 Image processing method and apparatus, electronic apparatus, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN107529020A true CN107529020A (en) 2017-12-29
CN107529020B CN107529020B (en) 2020-10-13

Family

ID=60736488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710812665.6A Active CN107529020B (en) 2017-09-11 2017-09-11 Image processing method and apparatus, electronic apparatus, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN107529020B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101144703A (en) * 2007-10-15 2008-03-19 陕西科技大学 Article geometrical size measuring device and method based on multi-source image fusion
CN101340664A (en) * 2008-08-15 2009-01-07 深圳华为通信技术有限公司 Method for measurement by mobile terminal and mobile terminal
US20160117864A1 (en) * 2011-11-11 2016-04-28 Microsoft Technology Licensing, Llc Recalibration of a flexible mixed reality device
CN103136746A (en) * 2011-12-02 2013-06-05 索尼公司 Image processing device and image processing method
WO2014190221A1 (en) * 2013-05-24 2014-11-27 Microsoft Corporation Object display with visual verisimilitude
CN103337079A (en) * 2013-07-09 2013-10-02 广州新节奏智能科技有限公司 Virtual augmented reality teaching method and device
CN105791793A (en) * 2014-12-17 2016-07-20 光宝电子(广州)有限公司 Image processing method and electronic device
CN104902189A (en) * 2015-06-24 2015-09-09 小米科技有限责任公司 Picture processing method and picture processing device
CN106774937A (en) * 2017-01-13 2017-05-31 宇龙计算机通信科技(深圳)有限公司 Image interactive method and its device in a kind of augmented reality
CN106909911A (en) * 2017-03-09 2017-06-30 广东欧珀移动通信有限公司 Image processing method, image processing apparatus and electronic installation
CN107025635A (en) * 2017-03-09 2017-08-08 广东欧珀移动通信有限公司 Processing method, processing unit and the electronic installation of image saturation based on the depth of field

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111046748A (en) * 2019-11-22 2020-04-21 四川新网银行股份有限公司 Method and device for enhancing and identifying large-head photo scene
CN111046748B (en) * 2019-11-22 2023-06-09 四川新网银行股份有限公司 Method and device for enhancing and identifying big head scene
CN112598571A (en) * 2019-11-27 2021-04-02 中兴通讯股份有限公司 Image scaling method, device, terminal and storage medium
CN115334239A (en) * 2022-08-10 2022-11-11 青岛海信移动通信技术股份有限公司 Method for fusing photographing of front camera and photographing of rear camera, terminal equipment and storage medium
CN115334239B (en) * 2022-08-10 2023-12-15 青岛海信移动通信技术有限公司 Front camera and rear camera photographing fusion method, terminal equipment and storage medium

Also Published As

Publication number Publication date
CN107529020B (en) 2020-10-13

Similar Documents

Publication Publication Date Title
CN107610077A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107734267A (en) Image processing method and device
CN107797664A (en) Content display method, device and electronic installation
CN107707835A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107707831A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107509045A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107807806A (en) Display parameters method of adjustment, device and electronic installation
CN107707838A (en) Image processing method and device
CN107610078A (en) Image processing method and device
CN107734264A (en) Image processing method and device
CN107644440A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107705278A (en) The adding method and terminal device of dynamic effect
CN107509043A (en) Image processing method and device
CN107610076A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107705277A (en) Image processing method and device
CN107613223A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107705243A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107527335A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107454336A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107592491A (en) Video communication background display methods and device
CN107613228A (en) The adding method and terminal device of virtual dress ornament
CN107613239A (en) Video communication background display methods and device
CN107529020A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107707833A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107734266A (en) Image processing method and device, electronic installation and computer-readable recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan 523860, Guangdong

Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.

Address before: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan 523860, Guangdong

Applicant before: Guangdong OPPO Mobile Communications Co., Ltd.

GR01 Patent grant