CN107705243A - Image processing method and device, electronic device and computer-readable storage medium - Google Patents


Info

Publication number
CN107705243A
CN107705243A
Authority
CN
China
Prior art keywords
image
dimensional
people
predetermined
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710812757.4A
Other languages
Chinese (zh)
Inventor
张学勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710812757.4A
Publication of CN107705243A
Legal status: Pending

Classifications

    • G06T3/04
    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T7/00: Image analysis
            • G06T7/10: Segmentation; Edge detection
              • G06T7/11: Region-based segmentation
              • G06T7/194: Segmentation; Edge detection involving foreground-background segmentation
            • G06T7/50: Depth or shape recovery
              • G06T7/521: Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
          • G06T2207/00: Indexing scheme for image analysis or image enhancement
            • G06T2207/20: Special algorithmic details
              • G06T2207/20212: Image combination
                • G06T2207/20221: Image fusion; Image merging
            • G06T2207/30: Subject of image; Context of image processing
              • G06T2207/30196: Human being; Person
                • G06T2207/30201: Face
        • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V10/00: Arrangements for image or video recognition or understanding
            • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
              • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
                • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
                  • G06V10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
          • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
            • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
              • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
                • G06V40/161: Detection; Localisation; Normalisation
            • G06V40/20: Movements or behaviour, e.g. gesture recognition

Abstract

The invention discloses an image processing method for an electronic device. The image processing method includes: obtaining a three-dimensional scene image and a depth image of the current user; processing the scene image and the depth image to segment the person region in the scene image from the background region outside the person region, so as to obtain a background region image; and merging a predetermined three-dimensional image with the background region image to obtain a merged image. The invention also discloses an image processing apparatus, an electronic device and a computer-readable storage medium. The image processing method, image processing apparatus, electronic device and computer-readable storage medium of embodiments of the present invention segment the figure from the ground using depth information, so that the segmented three-dimensional person region and three-dimensional background region are more accurate. The segmented three-dimensional background region image is merged with a predetermined three-dimensional image to obtain a merged image in which the current user's image in the scene is replaced by the predetermined three-dimensional image, improving the user experience.

Description

Image processing method and device, electronic device and computer-readable storage medium
Technical field
The present invention relates to the technical field of image processing, and more particularly to an image processing method and device, an electronic device and a computer-readable storage medium.
Background technology
Existing image fusion typically merges a portrait of the user with a background image, but the entertainment value of this fusion approach is relatively low.
Summary of the invention
Embodiments of the present invention provide an image processing method, an image processing apparatus, an electronic device and a computer-readable storage medium.
The image processing method of embodiments of the present invention is used in an electronic device, and includes:
Obtaining a three-dimensional scene image and a depth image of the current user;
Processing the scene image and the depth image to segment the person region in the scene image from the background region outside the person region, so as to obtain a background region image; and
Merging a predetermined three-dimensional image with the background region image to obtain a merged image.
The image processing apparatus of embodiments of the present invention is used in an electronic device, and includes an imaging device and a processor. The imaging device is used to obtain a three-dimensional scene image and a depth image of the current user. The processor is used to process the scene image and the depth image to segment the person region in the scene image from the background region outside the person region so as to obtain a background region image, and to merge a predetermined three-dimensional image with the background region image to obtain a merged image.
The electronic device of embodiments of the present invention includes one or more processors, a memory and one or more programs. The one or more programs are stored in the memory and configured to be executed by the one or more processors, and include instructions for performing the image processing method described above.
The computer-readable storage medium of embodiments of the present invention includes a computer program used in combination with an electronic device capable of imaging, the computer program being executable by a processor to carry out the image processing method described above.
After obtaining the three-dimensional scene image and the depth image, the image processing method, image processing apparatus, electronic device and computer-readable storage medium of embodiments of the present invention segment the figure from the ground using depth information, so that the segmented three-dimensional person region and three-dimensional background region are more accurate. The segmented three-dimensional background region image is merged with a predetermined three-dimensional image to obtain a merged image in which the current user's position in the scene image (i.e., the person region) is replaced by the predetermined three-dimensional image, which is more entertaining and improves the user experience.
Additional aspects and advantages of the present invention will be set forth in part in the following description, will in part become apparent from the description, or will be learned through practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flow chart of the image processing method of some embodiments of the present invention.
Fig. 2 is a schematic structural diagram of the electronic device of some embodiments of the present invention.
Fig. 3 is a schematic diagram of the image processing apparatus of some embodiments of the present invention.
Fig. 4 is a schematic flow chart of the image processing method of some embodiments of the present invention.
Fig. 5 is a schematic flow chart of the image processing method of some embodiments of the present invention.
Fig. 6(a) to Fig. 6(e) are schematic diagrams of structured light measurement according to an embodiment of the present invention.
Fig. 7(a) and Fig. 7(b) are schematic diagrams of structured light measurement according to an embodiment of the present invention.
Fig. 8 is a schematic flow chart of the image processing method of some embodiments of the present invention.
Fig. 9 is a schematic flow chart of the image processing method of some embodiments of the present invention.
Fig. 10 is a schematic flow chart of the image processing method of some embodiments of the present invention.
Fig. 11 is a schematic flow chart of the image processing method of some embodiments of the present invention.
Fig. 12 is a schematic diagram of the image processing apparatus of some embodiments of the present invention.
Fig. 13 is a schematic diagram of the electronic device of some embodiments of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, with examples shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the present invention and should not be construed as limiting it.
Referring to Figs. 1 and 2, the image processing method of embodiments of the present invention is used in an electronic device 1000. The image processing method includes:
03: Obtaining a three-dimensional scene image and a depth image of the current user;
05: Processing the scene image and the depth image to segment the person region in the scene image from the background region outside the person region, so as to obtain a background region image; and
07: Merging a predetermined three-dimensional image with the background region image to obtain a merged image.
Referring to Figs. 2 and 3, the image processing method of embodiments of the present invention can be implemented by the image processing apparatus 100 of embodiments of the present invention. The image processing apparatus 100 is used in the electronic device 1000 and includes an imaging device 10 and a processor 20. Step 03 can be implemented by the imaging device 10, and steps 05 and 07 can be implemented by the processor 20.
In other words, the imaging device 10 is used to obtain a three-dimensional scene image and a depth image of the current user; the processor 20 is used to process the scene image and the depth image to segment the person region in the scene image from the background region outside the person region so as to obtain a background region image, and to merge a predetermined three-dimensional image with the background region image to obtain a merged image.
The background region image is obtained from the three-dimensional scene image after the person region and the background region are segmented; the background region image is therefore also a three-dimensional image.
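As a minimal illustration of the flow of steps 03/05/07, the capture-segment-merge pipeline can be sketched on plain 2D arrays. This is a toy stand-in: a real embodiment works on three-dimensional data, the person mask comes from the depth-based segmentation described later, and all names here are hypothetical.

```python
import numpy as np

def merge_with_predetermined(scene_rgb, person_mask, predetermined_rgb):
    """Toy sketch of steps 05 and 07: the person region is cut out of the
    scene (leaving the background region image) and the predetermined image
    supplies the pixels where the person used to be."""
    merged = scene_rgb.copy()
    merged[person_mask] = predetermined_rgb[person_mask]
    return merged

scene = np.zeros((4, 4, 3), dtype=np.uint8)       # background: black
scene[1:3, 1:3] = 200                             # the user occupies the center
mask = scene[..., 0] == 200                       # person region (from depth, later)
cartoon = np.full((4, 4, 3), 50, dtype=np.uint8)  # predetermined image
out = merge_with_predetermined(scene, mask, cartoon)
print(out[1, 1, 0], out[0, 0, 0])  # 50 0
```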
In some embodiments, the predetermined three-dimensional image includes at least one of a three-dimensional virtual character, a three-dimensional real person, and a three-dimensional animal or plant. The three-dimensional real person excludes the current user himself or herself. The three-dimensional virtual character can be a three-dimensional animated character such as Mario, Conan, Big Head Son, RNB, etc.; the three-dimensional real person can be a three-dimensional image of a celebrity such as Audrey Hepburn, Mr. Bean, Harry Potter, etc.; the three-dimensional animal or plant can be a three-dimensional animated animal or plant such as Mickey Mouse, Donald Duck, Peashooter, etc.
The image processing apparatus 100 of embodiments of the present invention can be applied to the electronic device 1000 of embodiments of the present invention. In other words, the electronic device 1000 of embodiments of the present invention includes the image processing apparatus 100 of embodiments of the present invention.
In some embodiments, the electronic device 1000 includes a mobile phone, a tablet computer, a notebook computer, a smart bracelet, a smart watch, a smart helmet, smart glasses, etc.
After the three-dimensional scene image and the depth image are obtained, the image processing method, image processing apparatus 100 and electronic device 1000 of embodiments of the present invention segment the figure from the ground using depth information, so that the segmented three-dimensional person region and three-dimensional background region are more accurate. The segmented three-dimensional background region image is merged with a three-dimensional virtual character, a three-dimensional animal or plant, or a three-dimensional real person to obtain a merged image; in other words, the person region where the current user is located is replaced with an image of someone other than the current user, enhancing the entertainment experience of the user. Moreover, since the merged image does not contain the user's actual portrait, the user's privacy can be protected to a certain extent.
Referring to Fig. 4, in some embodiments, step 03 of obtaining a three-dimensional scene image and a depth image of the current user includes:
031: Capturing a two-dimensional image of the current user;
032: Projecting structured light onto the current user;
033: Capturing a structured light image modulated by the current user;
034: Demodulating the phase information corresponding to each pixel of the structured light image to obtain a depth image; and
035: Processing the two-dimensional image and the depth image to obtain the three-dimensional scene image.
Referring again to Fig. 3, in some embodiments, the imaging device 10 of the image processing apparatus 100 includes a visible light camera 11 and a depth image acquisition component 12. The depth image acquisition component 12 includes a structured light projector 121 and a structured light camera 122. Step 031 can be implemented by the visible light camera 11, step 032 by the structured light projector 121, and steps 033, 034 and 035 by the structured light camera 122.
In other words, the visible light camera 11 can be used to capture a two-dimensional image of the current user; the structured light projector 121 can be used to project structured light onto the current user; and the structured light camera 122 can be used to capture the structured light image modulated by the current user, demodulate the phase information corresponding to each pixel of the structured light image to obtain a depth image, and process the two-dimensional image and the depth image to obtain the three-dimensional scene image.
Specifically, the visible light camera 11 captures a two-dimensional image of the current user, which may be a grayscale or color image. After the structured light projector 121 projects structured light of a certain pattern onto the face and body of the current user, a structured light image modulated by the current user is formed on the surface of the current user's face and body. The structured light camera 122 captures the modulated structured light image and demodulates it to obtain the depth image. The pattern of the structured light can be laser stripes, Gray codes, sinusoidal fringes, non-uniform speckle, etc. The depth image characterizes the depth information of each person or object in the scene containing the current user. The scene range of the two-dimensional image is consistent with that of the depth image, and each pixel in the two-dimensional image can be matched to the depth information of the corresponding pixel in the depth image. In this way, the processor 20 can build a three-dimensional model of the scene captured by the structured light camera 122 according to the depth information in the depth image, and then color the model using the color information of the two-dimensional image to obtain the three-dimensional color scene image.
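The three-dimensional modeling described here amounts to back-projecting each pixel through a camera model and attaching the two-dimensional image's color to it. A minimal sketch, assuming a pinhole camera with made-up intrinsics (`fx`, `fy`, `cx`, `cy` are hypothetical, not from the patent), might look like:

```python
import numpy as np

def rgbd_to_point_cloud(color, depth, fx, fy, cx, cy):
    """Back-project a 2D color image plus per-pixel depth into a colored
    3D point cloud using a pinhole camera model (assumed intrinsics)."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth.astype(np.float64)
    x = (us - cx) * z / fx                  # X = (u - cx) * Z / fx
    y = (vs - cy) * z / fy                  # Y = (v - cy) * Z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = color.reshape(-1, color.shape[-1])       # per-point color
    return points, colors

# Toy example: a 2x2 depth map, everything 1 m away, with a gray color image.
depth = np.ones((2, 2))
color = np.full((2, 2, 3), 128, dtype=np.uint8)
pts, cols = rgbd_to_point_cloud(color, depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
print(pts.shape, cols.shape)  # (4, 3) (4, 3)
```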
Referring to Fig. 5, in some embodiments, step 034 of demodulating the phase information corresponding to each pixel of the structured light image to obtain a depth image includes:
0341: Demodulating the phase information corresponding to each pixel in the structured light image;
0342: Converting the phase information into depth information; and
0343: Generating the depth image according to the depth information.
Referring again to Fig. 2, in some embodiments, steps 0341, 0342 and 0343 can be implemented by the structured light camera 122.
In other words, the structured light camera 122 can further be used to demodulate the phase information corresponding to each pixel in the structured light image, convert the phase information into depth information, and generate the depth image according to the depth information.
Specifically, compared with the unmodulated structured light, the phase of the modulated structured light is changed, so the structured light shown in the structured light image is distorted, and the change in phase characterizes the depth of objects. Therefore, the structured light camera 122 first demodulates the phase information corresponding to each pixel in the structured light image, and then calculates the depth information from the phase information, so as to obtain the final depth image.
In order to make the process of acquiring the depth image of the current user's face and body with structured light clearer to those skilled in the art, its principle is illustrated below taking the widely used grating projection (fringe projection) technique as an example. Grating projection belongs to area structured light in the broad sense.
As shown in Fig. 6(a), when area structured light is used for projection, sinusoidal fringes are first generated by computer programming and projected onto the measured object by the structured light projector 121; the structured light camera 122 then captures the degree of bending of the fringes after modulation by the object, and the bent fringes are demodulated to obtain the phase, which is then converted into depth information to obtain the depth image. To avoid errors or error coupling, the depth image acquisition component 12 must be calibrated before depth information is collected with structured light. Calibration includes calibration of geometric parameters (for example, the relative position of the structured light camera 122 and the structured light projector 121), the internal parameters of the structured light camera 122, the internal parameters of the structured light projector 121, etc.
Specifically, in the first step, sinusoidal fringes are generated by computer programming. Since the phase must later be obtained from the distorted fringes, for example using the four-step phase-shifting method, four fringe patterns with phase differences of π/2 are generated here. The structured light projector 121 then projects the four fringe patterns onto the measured object (the mask shown in Fig. 6(a)) in a time-shared manner, and the structured light camera 122 captures the image on the left of Fig. 6(b) while reading the fringes of the reference plane shown on the right of Fig. 6(b).
In the second step, phase recovery is carried out. The structured light camera 122 calculates the modulated phase map from the four captured fringe patterns (i.e., the structured light images); the result at this point is a wrapped phase map. Since the result of the four-step phase-shifting algorithm is calculated by an arctangent function, the phase of the modulated structured light is limited to [-π, π]; that is, whenever the modulated phase exceeds [-π, π], it wraps around again. The resulting principal phase value is shown in Fig. 6(c).
During phase recovery, de-jump processing is required to recover the wrapped phase to a continuous phase. As shown in Fig. 6(d), the left side is the modulated continuous phase map, and the right side is the reference continuous phase map.
In the third step, the modulated continuous phase and the reference continuous phase are subtracted to obtain the phase difference (i.e., the phase information), which characterizes the depth of the measured object relative to the reference plane; the phase difference is then substituted into the phase-to-depth conversion formula (whose parameters are obtained by calibration) to obtain the three-dimensional model of the measured object shown in Fig. 6(e).
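Under the common four-step model I_k = A + B cos(φ + kπ/2), the wrapped-phase arctangent, the 'de-jump' unwrapping described above, and a linear phase-to-depth conversion can be sketched as follows. The constant `k` in `phase_to_depth` is a placeholder for the calibrated conversion formula, which the patent does not spell out:

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Four-step phase shifting: with I_k = A + B*cos(phi + k*pi/2), the
    wrapped phase is atan2(I3 - I1, I0 - I2), limited to [-pi, pi]."""
    return np.arctan2(i3 - i1, i0 - i2)

def phase_to_depth(obj_phase, ref_phase, k=1.0):
    """Continuous-phase difference to depth via a stand-in linear model
    depth = k * (obj - ref); the real formula and k come from calibration."""
    return k * (obj_phase - ref_phase)

# Synthesize one fringe row with a known phase ramp and recover it.
true_phase = np.linspace(0.0, 4 * np.pi, 64)      # exceeds [-pi, pi], so it wraps
shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
i0, i1, i2, i3 = [100 + 50 * np.cos(true_phase + s) for s in shifts]
phi = np.unwrap(wrapped_phase(i0, i1, i2, i3))    # de-jump to a continuous phase
print(np.allclose(phi, true_phase, atol=1e-6))    # True
```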
It should be appreciated that, in practical applications, depending on the specific application scene, the structured light used in embodiments of the present invention can be any other pattern besides the grating described above.
As a possible implementation, the present invention can also use speckle structured light to collect the depth information of the current user.
Specifically, speckle structured light obtains depth information using a diffractive element that is essentially a flat plate with a relief diffraction structure of a particular phase distribution; its cross-section is a stepped relief structure of two or more levels. The thickness of the substrate in the diffractive element is roughly 1 micron, and the heights of the steps are non-uniform, ranging from about 0.7 micron to 0.9 micron. The structure shown in Fig. 7(a) is a local diffraction structure of the collimating beam-splitting element of this embodiment; Fig. 7(b) is a cross-sectional side view along section A-A, with both coordinates in microns. The speckle pattern generated by speckle structured light is highly random, and the pattern changes with distance. Therefore, before depth information can be obtained using speckle structured light, the speckle patterns in space must first be calibrated. For example, within a range of 0 to 4 meters from the structured light camera 122, a reference plane is taken every 1 centimeter, so 400 speckle images are saved after calibration; the smaller the calibration spacing, the higher the precision of the obtained depth information. Then, the structured light projector 121 projects the speckle structured light onto the measured object (i.e., the current user), and the height differences on the surface of the measured object change the speckle pattern of the projected speckle structured light. After the structured light camera 122 captures the speckle pattern (i.e., the structured light image) projected onto the measured object, the speckle pattern is cross-correlated one by one with the 400 speckle images saved during calibration, yielding 400 correlation images. The position of the measured object in space shows a peak in the correlation images; superimposing these peaks and performing interpolation yields the depth information of the measured object.
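The plane-matching step can be sketched as follows. The references here are random arrays standing in for the 400 calibrated speckle images, and one whole-patch normalized correlation per plane replaces the per-pixel windowed correlation and peak interpolation a real system would use:

```python
import numpy as np

rng = np.random.default_rng(0)

def best_matching_plane(captured, references, distances_cm):
    """Compare the captured speckle patch against each calibrated reference
    pattern (one per known distance) and return the distance whose reference
    correlates best -- a toy stand-in for the cross-correlation lookup."""
    scores = []
    for ref in references:
        a = captured - captured.mean()
        b = ref - ref.mean()
        scores.append((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))
    return distances_cm[int(np.argmax(scores))]

# 400 random reference speckle patterns, one per centimeter from 1 cm to 400 cm.
distances = list(range(1, 401))
references = [rng.random((16, 16)) for _ in distances]
# The object sits at 250 cm: its speckle patch is a noisy copy of that reference.
captured = references[249] + 0.05 * rng.random((16, 16))
print(best_matching_plane(captured, references, distances))  # 250
```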
An ordinary diffractive element diffracts a beam into multiple diffracted beams whose intensities differ greatly, so the risk of injury to human eyes is also large, and even if the diffracted light is diffracted again, the uniformity of the resulting beams is low; the effect of projecting beams diffracted by an ordinary diffractive element onto the measured object is therefore poor. In this embodiment, a collimating beam-splitting element is used. This element not only collimates non-collimated light but also splits it: the non-collimated light reflected by the mirror emerges from the collimating beam-splitting element as multiple collimated beams at different angles, and these collimated beams have approximately equal cross-sectional areas and approximately equal energy fluxes, so projection with the scattered light diffracted from these beams works better. At the same time, the laser output is dispersed over the beams, which further reduces the risk of injuring human eyes; and compared with other uniformly arranged structured light, speckle structured light consumes less power for the same collection effect.
Referring to Fig. 8, in some embodiments, step 05 of processing the scene image and the depth image to segment the person region in the scene image from the background region outside the person region to obtain a background region image includes:
051: Identifying the face region in the scene image;
052: Obtaining the depth information corresponding to the face region from the scene image or the depth image;
053: Determining the depth range of the person region according to the depth information of the face region;
054: Determining, according to the depth range of the person region, the person region that is connected to the face region and falls within the depth range; and
055: Determining the background region image according to the person region and the scene image.
Referring again to Fig. 3, in some embodiments, steps 051, 052, 053, 054 and 055 can be implemented by the processor 20.
In other words, the processor 20 can further be used to identify the face region in the scene image, obtain the depth information corresponding to the face region from the scene image or the depth image, determine the depth range of the person region according to the depth information of the face region, determine the person region that is connected to the face region and falls within the depth range according to the depth range of the person region, and determine the background region image according to the person region and the scene image.
Specifically, a trained deep learning model can first be used to identify the face region in the scene image. Then, since the three-dimensional scene image contains depth information in addition to color information, the depth information of the face region can be obtained directly from the three-dimensional scene image, or determined from the correspondence between the two-dimensional image and the depth image. Because the face region includes features such as the nose, eyes, ears and lips, each of these features has different depth data in the three-dimensional scene image or the depth image; for example, when the face directly faces the depth image acquisition component 12, in the depth image captured by the depth image acquisition component 12 the depth data corresponding to the nose may be small while the depth data corresponding to the ears may be large. Therefore, the depth information of the face region may be a single value or a range. When the depth information of the face region is a single value, that value can be obtained by averaging the depth data of the face region, or by taking the median of the depth data of the face region.
Because the person region contains the face region, that is, the person region and the face region lie within a certain depth range together, after the processor 20 determines the depth information of the face region it can set the depth range of the person region according to that depth information, and then extract, according to this depth range, the person region that falls within the depth range and is connected to the face region. After the person region is determined, the part of the scene image other than the person region is the background; the processor 20 extracts the part of the scene image outside the person region to obtain the background region image.
In this way, the person region and the background region can be segmented from the scene image according to the depth information to obtain the background region image. Since the acquisition of depth information is not affected by factors such as illumination and color temperature in the environment, the extracted background region image is more accurate.
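A toy version of this depth-driven segmentation, with a flood fill standing in for the connectivity test and a made-up tolerance `tol` in place of the depth range derived from the face region's statistics:

```python
import numpy as np
from collections import deque

def segment_background(depth, face_seed, tol=0.4):
    """Grow the person region from a face pixel: keep pixels connected to
    the seed whose depth falls within [face_depth - tol, face_depth + tol],
    then return the complementary background mask."""
    h, w = depth.shape
    lo, hi = depth[face_seed] - tol, depth[face_seed] + tol
    person = np.zeros((h, w), dtype=bool)
    queue = deque([face_seed])
    person[face_seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not person[nr, nc] \
                    and lo <= depth[nr, nc] <= hi:
                person[nr, nc] = True
                queue.append((nr, nc))
    return ~person  # True where the background is

# Toy scene: background at 3 m, a person-shaped blob at 1 m in the middle.
depth = np.full((6, 6), 3.0)
depth[1:5, 2:4] = 1.0
background = segment_background(depth, face_seed=(1, 2))
print(background.sum())  # 28 background pixels out of 36
```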
Referring to Fig. 9, in some embodiments, the image processing method further includes:
061: Processing the scene image to obtain a full-field edge image of the scene image; and
062: Correcting the background region image according to the full-field edge image.
Referring again to Fig. 2, in some embodiments, steps 061 and 062 can be implemented by the processor 20.
In other words, the processor 20 can also be used to process the scene image to obtain a full-field edge image of the scene image, and to correct the background region image according to the full-field edge image.
The processor 20 first performs edge extraction on the scene image to obtain the full-field edge image, in which the edge lines include the edge lines of the current user and of the background objects in the scene where the current user is located. Specifically, edge extraction can be performed on the scene image with the Canny operator. The core of the Canny edge extraction algorithm mainly includes the following steps: first, the scene image is convolved with a 2D Gaussian filter template to eliminate noise; then, the gradient magnitude of the grayscale of each pixel is obtained with a differential operator, and the gradient direction of the grayscale of each pixel is calculated from the gradient values, so that the adjacent pixels of each pixel along its gradient direction can be found; then, each pixel is traversed, and if the gray value of a pixel is not the largest compared with the gray values of the two adjacent pixels before and after it along its gradient direction, the pixel is not considered an edge point. In this way, the pixels at edge positions in the scene image can be determined, and the full-field edge image after edge extraction is obtained.
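The gradient and non-maximum-suppression core of these steps can be sketched as follows; this is a simplified sketch, not the full Canny operator (the Gaussian smoothing and hysteresis thresholding are omitted for brevity):

```python
import numpy as np

def edge_points(img):
    """Per-pixel gray gradient magnitude and direction, followed by
    non-maximum suppression along the gradient direction: a pixel survives
    only if its magnitude is a local maximum versus its two neighbors
    along the gradient."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180       # gradient direction
    h, w = img.shape
    edges = np.zeros((h, w), dtype=bool)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            a = ang[r, c]
            if a < 22.5 or a >= 157.5:
                n1, n2 = mag[r, c - 1], mag[r, c + 1]
            elif a < 67.5:
                n1, n2 = mag[r - 1, c + 1], mag[r + 1, c - 1]
            elif a < 112.5:
                n1, n2 = mag[r - 1, c], mag[r + 1, c]
            else:
                n1, n2 = mag[r - 1, c - 1], mag[r + 1, c + 1]
            edges[r, c] = mag[r, c] > 0 and mag[r, c] >= n1 and mag[r, c] >= n2
    return edges

# A vertical step edge between a dark left half and a bright right half.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
e = edge_points(img)
print(e[4].nonzero()[0])  # [3 4]
```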
After the processor 20 obtains the full-scene edge image, the background region image is corrected according to it. Specifically, the person region is first corrected using the full-scene edge image, and the background region image is then determined from the corrected person region. It can be understood that the person region is obtained by merging all pixels in the scene image that are connected with the face region and fall within the set depth range; in some scenes, there may be objects that are also connected with the face region and fall within that depth range. Therefore, the full-scene edge image can be used to correct the person region to obtain a more accurate person region, and the background region is then determined from the more accurate person region. In this way, the background region image finally obtained is also more accurate.
Further, the processor 20 can also perform a secondary correction on the corrected person region. For example, dilation processing can be applied to the corrected person region to expand it and retain the edge details of the person region. The processor 20 then determines the background region from the more accurate person region, so the precision of the background region image finally obtained is higher.
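The dilation ("expansion processing") mentioned above can be sketched directly on a binary person-region mask; the 3x3 structuring element and single iteration are illustrative assumptions:

```python
import numpy as np

def dilate_mask(mask, iterations=1):
    """Expand a binary person-region mask with a 3x3 structuring
    element, as in the secondary correction described above; the
    element shape and iteration count are illustrative assumptions."""
    m = mask.astype(bool)
    for _ in range(iterations):
        grown = m.copy()
        # OR each pixel with its 8-neighbourhood via shifted views.
        grown[1:, :] |= m[:-1, :]    # shift down
        grown[:-1, :] |= m[1:, :]    # shift up
        grown[:, 1:] |= m[:, :-1]    # shift right
        grown[:, :-1] |= m[:, 1:]    # shift left
        grown[1:, 1:] |= m[:-1, :-1]
        grown[:-1, :-1] |= m[1:, 1:]
        grown[1:, :-1] |= m[:-1, 1:]
        grown[:-1, 1:] |= m[1:, :-1]
        m = grown
    return m
```

Each iteration grows the mask by one pixel in every direction, so fine edge details clipped by the depth segmentation are more likely to stay inside the person region.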
Referring to Fig. 10, in some embodiments, the image processing method of embodiments of the present invention further includes:
063: Process the scene image and the depth image to extract action information of the current user; and
064: Render the predetermined three-dimensional image according to the action information, so that the predetermined three-dimensional image follows the actions of the current user.
In step 07, merging the predetermined three-dimensional image with the background region image to obtain the merged image includes:
071: Merge the rendered predetermined three-dimensional image with the background region image to obtain the merged image.
Referring again to Fig. 2, in some embodiments, step 063, step 064 and step 071 can be implemented by the processor 20. In other words, the processor 20 can be used to process the scene image and the depth image to extract the action information of the current user, to render the predetermined three-dimensional image according to the action information so that the predetermined three-dimensional image follows the actions of the current user, and to merge the rendered predetermined three-dimensional image with the background region image to obtain the merged image.
The action information includes at least one of an expression and a limb action of the current user. In other words, the action information may be the expression of the current user, the limb action of the current user, or both the expression and the limb action of the current user.
Specifically, in step 05, the processor 20 has identified the face region and has segmented the person region and the background region. Therefore, when performing step 063, the processor 20 identifies the expression of the current user by processing the face region, and processes the person region to obtain information on the current user's limb actions. The information on the current user's limb actions can be obtained by template matching: the processor 20 matches the person region against multiple person templates. The head of the person region is matched first; after the head matching is completed, the next limb matching, namely the matching of the upper-body trunk, is performed on the remaining templates whose heads matched; after the upper-body trunk matching is completed, the matching of the next limbs, namely the upper and lower limbs, is performed on the remaining templates whose heads and upper-body trunks matched. In this way, the information on the current user's limb actions is determined by template matching. The processor 20 then renders the predetermined three-dimensional image according to the identified expression and limb actions of the current user, so that the person, animal or plant in the predetermined three-dimensional image can follow and imitate the expression and limb actions of the current user. Finally, the processor 20 merges the rendered predetermined three-dimensional image with the background region image to obtain the merged image. In this way, the current user can be replaced by a three-dimensional person, animal or plant, which enhances the fun of image fusion.
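The head-then-torso-then-limbs cascade can be sketched as successive filtering of candidate templates. The part names, the binary-mask representation and the coverage score below are illustrative assumptions rather than the patent's exact matching procedure:

```python
import numpy as np

def match_pose(person_mask, templates):
    """Coarse-to-fine matching of the person region against person
    templates, in the spirit of the head-then-torso-then-limbs
    cascade described above. Each template is a dict of full-size
    binary part masks (hypothetical representation)."""
    def coverage(part):
        # Fraction of the template part that lies inside the person region.
        return np.logical_and(person_mask, part).sum() / max(part.sum(), 1)

    candidates = list(templates)
    for part in ("head", "torso", "limbs"):  # match one part at a time
        scored = [(coverage(t[part]), t) for t in candidates]
        best = max(s for s, _ in scored)
        # Keep only templates whose current part fits about as well as
        # the best one, then refine the survivors on the next part.
        candidates = [t for s, t in scored if s >= 0.9 * best]
    return candidates[0]
```

At each stage the pool of candidate templates shrinks, mirroring the description: templates whose head does not fit are never considered for trunk or limb matching.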
Referring to Figure 11, in some embodiments, merging the predetermined three-dimensional image with the background region image in step 07 to obtain the merged image includes:
072: Compare the size of the predetermined three-dimensional image with the size of the person region;
073: When the size of the predetermined three-dimensional image is larger than the size of the person region, reduce the predetermined three-dimensional image, fill it into the person region in the scene image, and fuse to obtain the merged image; and
074: When the size of the predetermined three-dimensional image is smaller than the size of the person region, enlarge the predetermined three-dimensional image, fill it into the person region in the scene image, and fuse to obtain the merged image; or fill the predetermined three-dimensional image into the person region in the scene image and fill the gap between the predetermined three-dimensional image and the person region with pixels adjacent to the person region.
Referring again to Fig. 2, in some embodiments, step 072, step 073 and step 074 can be implemented by the processor 20. In other words, the processor 20 can also be used to compare the size of the predetermined three-dimensional image with the size of the person region; when the size of the predetermined three-dimensional image is larger than the size of the person region, to reduce the predetermined three-dimensional image, fill it into the person region in the scene image, and fuse to obtain the merged image; and when the size of the predetermined three-dimensional image is smaller than the size of the person region, to enlarge the predetermined three-dimensional image, fill it into the person region in the scene image, and fuse to obtain the merged image, or to fill the predetermined three-dimensional image into the person region in the scene image and fill the gap between the predetermined three-dimensional image and the person region with pixels adjacent to the person region.
Specifically, because the capture distance between the current user and the visible light camera 11 is not fixed, the size of the person region in the scene image is not fixed either. Thus, before the background region image is merged with the predetermined three-dimensional image, the sizes of the person region and of the predetermined three-dimensional image must first be compared, where size includes the height and width of the person region and of the predetermined three-dimensional image. When both the width and the height of the predetermined three-dimensional image are larger than those of the person region, suitable reduction values for the height and width can be determined according to the size of the person region, and the predetermined three-dimensional image is reduced according to these values so that it can be filled into the part of the scene image where the person region is located. When both the height and the width of the predetermined three-dimensional image are smaller than those of the person region, suitable enlargement values for the height and width can be determined according to the size of the person region, and the predetermined three-dimensional image is enlarged accordingly and filled into the part of the scene image where the person region is located; alternatively, the predetermined three-dimensional image is filled at its original size into the part of the scene image where the person region is located, and the gap between the predetermined three-dimensional image and the person region is filled with the pixels around the person region. When the width of the predetermined three-dimensional image is larger than the width of the person region while its height is smaller than the height of the person region, the width of the predetermined three-dimensional image can be suitably reduced according to the width of the person region and its height suitably enlarged according to the height of the person region, and the resized predetermined three-dimensional image is filled into the part of the scene image where the person region is located. When the height of the predetermined three-dimensional image is larger than the height of the person region while its width is smaller than the width of the person region, the height of the predetermined three-dimensional image can be suitably reduced according to the height of the person region and its width suitably enlarged according to the width of the person region, and the resized predetermined three-dimensional image is filled into the part of the scene image where the person region is located.
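The four size cases above collapse to scaling each axis independently to the person region's width and height. A minimal sketch, with nearest-neighbour resizing standing in for a real scaler and 2-D arrays standing in for rendered images (both assumptions):

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize; a stand-in for a real scaler."""
    ys = np.arange(new_h) * img.shape[0] // new_h
    xs = np.arange(new_w) * img.shape[1] // new_w
    return img[ys][:, xs]

def fill_person_region(scene, region_box, tex):
    """Scale `tex` (the predetermined three-dimensional image, here a
    2-D array standing in for its rendering) to the person region's
    width and height and write it into the scene, per the case
    analysis above: each axis is shrunk or enlarged independently."""
    y0, y1, x0, x1 = region_box
    scene = scene.copy()
    scene[y0:y1, x0:x1] = resize_nearest(tex, y1 - y0, x1 - x0)
    return scene
```

The alternative branch of step 074 — keeping the original size and filling the leftover gap from pixels adjacent to the person region — would replace the resize with a direct paste plus a border-fill pass.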
In some embodiments, the predetermined three-dimensional image may be selected at random by the processor 20, or selected by the current user.
After the processor 20 obtains the merged image, the merged image can be shown on the display 50 of the electronic installation 1000 (shown in Figure 12), or printed by a printer connected to the electronic installation 1000.
Referring to both Fig. 2 and Figure 12, embodiments of the present invention also provide an electronic installation 1000. The electronic installation 1000 includes the image processing apparatus 100. The image processing apparatus 100 can be implemented using hardware and/or software, and includes the imaging device 10 and the processor 20.
The imaging device 10 includes a visible light camera 11 and a depth image acquisition component 12.
Specifically, the visible light camera 11 includes an image sensor 111 and a lens 112, and can be used to capture the color information of the current user to obtain the scene image, where the image sensor 111 includes a color filter array (such as a Bayer filter array) and the number of lenses 112 can be one or more. While the visible light camera 11 acquires the scene image, each imaging pixel in the image sensor 111 senses the light intensity and wavelength information in the photographed scene to generate a set of raw image data. The image sensor 111 sends this set of raw image data to the processor 20, and the processor 20 obtains a color two-dimensional image after performing operations such as denoising and interpolation on the raw image data. The processor 20 can process each image pixel in the raw image data one by one in various formats; for example, each image pixel can have a bit depth of 8, 10, 12 or 14 bits, and the processor 20 can process each image pixel with the same or a different bit depth.
The depth image acquisition component 12 includes a structured light projector 121 and a structured light camera 122, and can be used to capture the depth information of the current user to obtain the depth image. The structured light projector 121 is used to project structured light onto the current user, where the structured light pattern can be laser stripes, a Gray code, sinusoidal fringes, a randomly arranged speckle pattern, or the like. The structured light camera 122 includes an image sensor 1221 and a lens 1222, and the number of lenses 1222 can be one or more. The image sensor 1221 is used to capture the structured light image projected onto the current user by the structured light projector 121. The structured light image can be sent by the depth image acquisition component 12 to the processor 20 for processing such as demodulation, phase recovery and phase-information calculation to obtain the depth information of the current user.
In some embodiments, the functions of the visible light camera 11 and the structured light camera 122 can be realized by a single camera; in other words, the imaging device 10 includes only one camera and one structured light projector 121, and this camera can capture not only two-dimensional images but also structured light images.
Besides structured light, depth acquisition methods such as binocular vision or time of flight (TOF) can also be used to obtain the depth image of the current user.
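For the structured light path, the demodulation described above (and in claims 2 and 3) recovers a per-pixel phase and converts it to depth. One common scheme is four-step phase shifting, sketched below; the linear phase-to-depth mapping (`scale`) is a simplifying assumption, as a real system uses a calibrated triangulation model and phase unwrapping:

```python
import numpy as np

def phase_to_depth(i1, i2, i3, i4, scale=1.0):
    """Four-step phase-shifting demodulation sketch: recover the
    wrapped phase from four fringe images shifted by 90 degrees,
    then map phase to depth with a linear calibration constant
    (an illustrative assumption)."""
    # For i_n = A + B*cos(phi + n*pi/2): i4-i2 = 2B*sin(phi),
    # i1-i3 = 2B*cos(phi), so atan2 recovers the wrapped phase.
    phase = np.arctan2(i4 - i2, i1 - i3)  # wrapped phase in (-pi, pi]
    return scale * phase
```

Feeding in four synthetic fringe images generated from a known phase map recovers that phase map exactly, which is the sense in which the structured light image "encodes" the depth information.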
In addition, the image processing apparatus 100 also includes a memory 30. The memory 30 can be embedded in the electronic installation 1000 or be a memory independent of the electronic installation 1000, and may include a direct memory access (DMA) feature. The raw image data collected by the visible light camera 11 or the structured-light-image data collected by the depth image acquisition component 12 can be transmitted to the memory 30 for storage or caching. The processor 20 can read the raw image data from the memory 30 for processing to obtain a two-dimensional image, read the structured-light-image data from the memory 30 for processing to obtain a depth image, and also process the raw image data and the structured-light-image data together to obtain the three-dimensional color scene image. In addition, the two-dimensional image, the three-dimensional scene image and the depth image can all be stored in the memory 30 for the processor 20 to call for processing at any time; for example, the processor 20 calls the scene image and the depth image to extract the background region, and fuses the extracted background region image with the predetermined three-dimensional image to obtain the merged image. The predetermined three-dimensional image and the merged image may also be stored in the memory 30.
The image processing apparatus 100 may also include a display 50. The display 50 can obtain the merged image directly from the processor 20 or from the memory 30, and displays the merged image for the user to watch or for further processing by a graphics engine or a graphics processing unit (GPU). The image processing apparatus 100 also includes an encoder/decoder 60, which can encode and decode the image data of the two-dimensional image, the three-dimensional scene image, the depth image, the predetermined three-dimensional image, the merged image and the like. The encoded image data can be saved in the memory 30 and decompressed by the decoder before the image is shown on the display 50. The encoder/decoder 60 can be realized by a central processing unit (CPU), a GPU or a coprocessor; in other words, the encoder/decoder 60 can be any one or more of a CPU, a GPU and a coprocessor.
The image processing apparatus 100 also includes a control logic device 40. While the imaging device 10 is imaging, the processor 20 can analyze the data obtained by the imaging device to determine image statistics for one or more control parameters (for example, exposure time) of the imaging device 10. The processor 20 sends the image statistics to the control logic device 40, and the control logic device 40 controls the imaging device 10 to image with the determined control parameters. The control logic device 40 may include a processor and/or a microcontroller that executes one or more routines (such as firmware), and the one or more routines can determine the control parameters of the imaging device 10 according to the received image statistics.
Referring to Figure 13, the electronic installation 1000 of embodiments of the present invention includes one or more processors 20, a memory 30 and one or more programs 31. The one or more programs 31 are stored in the memory 30 and configured to be executed by the one or more processors 20. The programs 31 include instructions for performing the image processing method of any one of the above embodiments.
For example, the programs 31 include instructions for performing the image processing method described in the following steps:
03: Obtain a three-dimensional scene image and a depth image of the current user;
05: Process the scene image and the depth image to segment the person region in the scene image and the background region other than the person region, so as to obtain a background region image; and
07: Merge a predetermined three-dimensional image with the background region image to obtain a merged image.
For another example, the programs 31 also include instructions for performing the image processing method described in the following steps:
051: Identify the face region in the scene image;
052: Obtain depth information corresponding to the face region from the scene image or the depth image;
053: Determine the depth range of the person region according to the depth information of the face region;
054: Determine, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range; and
055: Determine the background region image according to the person region and the scene image.
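Steps 051 to 055 amount to depth-seeded region growing: take the face depth as reference, then collect connected pixels whose depth stays within a range of it. A minimal sketch, where the tolerance value is an illustrative assumption:

```python
import numpy as np

def segment_person(depth, face_seed, tol=0.3):
    """Region-grow the person region from a face pixel: keep pixels
    connected to the face whose depth falls within `tol` of the face
    depth, per steps 051-055 above. The tolerance is an assumption."""
    target = depth[face_seed]
    h, w = depth.shape
    person = np.zeros((h, w), bool)
    stack = [face_seed]
    while stack:
        y, x = stack.pop()
        if not (0 <= y < h and 0 <= x < w) or person[y, x]:
            continue
        if abs(depth[y, x] - target) > tol:
            continue
        person[y, x] = True
        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return person

def background_image(scene, person):
    """Background region image: scene pixels outside the person mask."""
    bg = scene.copy()
    bg[person] = 0
    return bg
```

Because segmentation depends only on depth, it is unaffected by illumination and color temperature, which is the accuracy advantage the description claims over color-based matting.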
The computer-readable recording medium of embodiments of the present invention includes a computer program used in combination with the electronic installation 1000 capable of imaging. The computer program can be executed by the processor 20 to complete the image processing method of any one of the above embodiments.
For example, the computer program can be executed by the processor 20 to complete the image processing method described in the following steps:
03: Obtain a three-dimensional scene image and a depth image of the current user;
05: Process the scene image and the depth image to segment the person region in the scene image and the background region other than the person region, so as to obtain a background region image; and
07: Merge a predetermined three-dimensional image with the background region image to obtain a merged image.
For another example, the computer program can also be executed by the processor 20 to complete the image processing method described in the following steps:
051: Identify the face region in the scene image;
052: Obtain depth information corresponding to the face region from the scene image or the depth image;
053: Determine the depth range of the person region according to the depth information of the face region;
054: Determine, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range; and
055: Determine the background region image according to the person region and the scene image.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples" and the like means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic references to the above terms do not necessarily refer to the same embodiment or example. Moreover, the described specific features, structures, materials or characteristics can be combined in an appropriate manner in any one or more embodiments or examples. In addition, without contradiction, those skilled in the art can combine the different embodiments or examples described in this specification and the features of those different embodiments or examples.
Any process or method description in a flow chart or otherwise described herein can be understood as representing a module, fragment or portion of code that includes one or more executable instructions for implementing the steps of a specific logical function or process, and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in a flow chart or otherwise described herein can, for example, be considered an ordered list of executable instructions for implementing logical functions, and can be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus or device). For the purposes of this specification, a "computer-readable medium" can be any means that can contain, store, communicate, propagate or transmit a program for use by, or in combination with, an instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection portion (an electronic device) with one or more wirings, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium can even be paper or another suitable medium on which the program is printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or, if necessary, processing it in another suitable way, and then stored in a computer memory.
It should be understood that each part of the present invention can be realized by hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods can be realized by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if realized by hardware, as in another embodiment, they can be realized by any one of the following technologies known in the art or a combination thereof: a discrete logic circuit with logic gate circuits for realizing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those skilled in the art can understand that all or part of the steps carried by the above embodiment methods can be completed by instructing relevant hardware through a program, and the program can be stored in a computer-readable storage medium; when executed, the program includes one of the steps of the method embodiment or a combination thereof.
In addition, each functional unit in each embodiment of the present invention can be integrated in one processing module, or each unit can exist alone physically, or two or more units can be integrated in one module. The above integrated module can be realized in the form of hardware or in the form of a software functional module. If the integrated module is realized in the form of a software functional module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
The storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc or the like. Although the embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and cannot be construed as limitations of the present invention; those of ordinary skill in the art can make changes, modifications, substitutions and variations to the above embodiments within the scope of the present invention.

Claims (16)

1. An image processing method for an electronic installation, characterised in that the image processing method includes:
Obtaining a three-dimensional scene image and a depth image of an active user;
Processing the scene image and the depth image to segment a person region in the scene image and a background region other than the person region, so as to obtain a background area image; and
Merging a predetermined three-dimensional image with the background area image to obtain a merged image.
2. The image processing method according to claim 1, characterised in that the step of obtaining a three-dimensional scene image and a depth image of an active user includes:
Shooting a two-dimensional image of the active user;
Projecting structured light onto the active user;
Shooting a structured light image modulated by the active user;
Demodulating phase information corresponding to each pixel of the structured light image to obtain the depth image; and
Processing the two-dimensional image and the depth image to obtain the three-dimensional scene image.
3. The image processing method according to claim 2, characterised in that the step of demodulating phase information corresponding to each pixel of the structured light image to obtain the depth image includes:
Demodulating the phase information corresponding to each pixel in the structured light image;
Converting the phase information into depth information; and
Generating the depth image according to the depth information.
4. The image processing method according to claim 1, characterised in that the predetermined three-dimensional image includes at least one of a three-dimensional virtual person, a three-dimensional real person, and three-dimensional animals and plants, and the three-dimensional real person excludes the active user himself.
5. The image processing method according to claim 1, characterised in that the image processing method also includes:
Processing the scene image and the depth image to extract action information of the active user; and
Rendering the predetermined three-dimensional image according to the action information, so that the predetermined three-dimensional image follows the actions of the active user;
The step of merging the predetermined three-dimensional image with the background area image to obtain the merged image includes:
Merging the rendered predetermined three-dimensional image with the background area image to obtain the merged image.
6. The image processing method according to claim 5, characterised in that the action information includes an expression and/or a limb action of the active user.
7. The image processing method according to claim 1, characterised in that the step of merging the predetermined three-dimensional image with the background area image to obtain the merged image includes:
Comparing the size of the predetermined three-dimensional image with the size of the person region;
When the size of the predetermined three-dimensional image is larger than the size of the person region, reducing the predetermined three-dimensional image, filling it into the person region in the scene image, and fusing to obtain the merged image; and
When the size of the predetermined three-dimensional image is smaller than the size of the person region, enlarging the predetermined three-dimensional image, filling it into the person region in the scene image, and fusing to obtain the merged image; or filling the predetermined three-dimensional image into the person region in the scene image, and filling the gap between the predetermined three-dimensional image and the person region with pixels adjacent to the person region.
8. An image processing apparatus for an electronic installation, characterised in that the image processing apparatus includes:
An imaging device, used to obtain a three-dimensional scene image and a depth image of an active user; and
A processor, used to:
Process the scene image and the depth image to segment a person region in the scene image and a background region other than the person region, so as to obtain a background area image; and
Merge a predetermined three-dimensional image with the background area image to obtain a merged image.
9. The image processing apparatus according to claim 8, characterised in that the imaging device includes a visible light camera and a depth image acquisition component, the depth image acquisition component includes a structured light projector and a structured light camera, and the visible light camera is used to shoot a two-dimensional image of the active user;
The structured light projector is used to project structured light onto the active user;
The structured light camera is used to:
Shoot a structured light image modulated by the active user;
Demodulate phase information corresponding to each pixel of the structured light image to obtain the depth image; and
Process the two-dimensional image and the depth image to obtain the three-dimensional scene image.
10. The image processing apparatus according to claim 9, characterised in that the structured light camera is further used to:
Demodulate the phase information corresponding to each pixel in the structured light image;
Convert the phase information into depth information; and
Generate the depth image according to the depth information.
11. The image processing apparatus according to claim 8, characterised in that the predetermined three-dimensional image includes at least one of a three-dimensional virtual person, a three-dimensional real person, and three-dimensional animals and plants, and the three-dimensional real person excludes the active user himself.
12. The image processing apparatus according to claim 8, characterised in that the processor is further used to:
Process the scene image and the depth image to extract action information of the active user;
Render the predetermined three-dimensional image according to the action information, so that the predetermined three-dimensional image follows the actions of the active user; and
Merge the rendered predetermined three-dimensional image with the background area image to obtain the merged image.
13. The image processing apparatus according to claim 12, characterised in that the action information includes an expression and/or a limb action of the active user.
14. The image processing apparatus according to claim 8, characterised in that the processor is further used to:
Compare the size of the predetermined three-dimensional image with the size of the person region;
When the size of the predetermined three-dimensional image is larger than the size of the person region, reduce the predetermined three-dimensional image, fill it into the person region in the scene image, and fuse to obtain the merged image; and
When the size of the predetermined three-dimensional image is smaller than the size of the person region, enlarge the predetermined three-dimensional image, fill it into the person region in the scene image, and fuse to obtain the merged image; or fill the predetermined three-dimensional image into the person region in the scene image, and fill the gap between the predetermined three-dimensional image and the person region with pixels adjacent to the person region.
15. An electronic installation, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs comprising instructions for performing the image processing method according to any one of claims 1 to 7.
16. A computer-readable recording medium comprising a computer program for use in combination with an electronic installation capable of capturing images, the computer program being executable by a processor to perform the image processing method according to any one of claims 1 to 7.
CN201710812757.4A 2017-09-11 2017-09-11 Image processing method and device, electronic installation and computer-readable recording medium Pending CN107705243A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710812757.4A CN107705243A (en) 2017-09-11 2017-09-11 Image processing method and device, electronic installation and computer-readable recording medium


Publications (1)

Publication Number Publication Date
CN107705243A true CN107705243A (en) 2018-02-16

Family

ID=61172437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710812757.4A Pending CN107705243A (en) 2017-09-11 2017-09-11 Image processing method and device, electronic installation and computer-readable recording medium

Country Status (1)

Country Link
CN (1) CN107705243A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101452582A (en) * 2008-12-18 2009-06-10 北京中星微电子有限公司 Method and device for implementing three-dimensional video specific action
US20150022642A1 (en) * 2013-07-16 2015-01-22 Texas Instruments Incorporated Super-Resolution in Structured Light Imaging
CN106502388A (en) * 2016-09-26 2017-03-15 惠州Tcl移动通信有限公司 A kind of interactive movement technique and head-wearing type intelligent equipment
CN106652291A (en) * 2016-12-09 2017-05-10 华南理工大学 Indoor simple monitoring and alarming system and method based on Kinect
CN106791347A (en) * 2015-11-20 2017-05-31 比亚迪股份有限公司 A kind of image processing method, device and the mobile terminal using the method
CN107025635A (en) * 2017-03-09 2017-08-08 广东欧珀移动通信有限公司 Processing method, processing unit and the electronic installation of image saturation based on the depth of field


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI BO et al.: "3D Printing Technology (National Higher Education '13th Five-Year Plan' Textbook Series)", China Light Industry Press, 30 August 2017 *
WANG HUI: "Digital Holographic Three-Dimensional Display and Detection", Shanghai Jiao Tong University Press, 30 November 2013 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110060205A (en) * 2019-05-08 2019-07-26 北京迈格威科技有限公司 Image processing method and device, storage medium and electronic equipment
CN110675499A (en) * 2019-07-23 2020-01-10 电子科技大学 Three-dimensional modeling method based on binocular structured light three-dimensional scanning system
CN110675499B (en) * 2019-07-23 2023-04-11 电子科技大学 Three-dimensional modeling method based on binocular structured light three-dimensional scanning system
CN111145189A (en) * 2019-12-26 2020-05-12 成都市喜爱科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111145189B (en) * 2019-12-26 2023-08-08 成都市喜爱科技有限公司 Image processing method, apparatus, electronic device, and computer-readable storage medium
CN111223192A (en) * 2020-01-09 2020-06-02 北京华捷艾米科技有限公司 Image processing method and application method, device and equipment thereof
CN111223192B (en) * 2020-01-09 2023-10-03 北京华捷艾米科技有限公司 Image processing method, application method, device and equipment thereof

Similar Documents

Publication Publication Date Title
CN107610077A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107742296A (en) Dynamic image generation method and electronic installation
CN107734267A (en) Image processing method and device
CN107707839A (en) Image processing method and device
CN107707835A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107509045A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107707831A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107707838A (en) Image processing method and device
CN107705243A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107590793A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107610078A (en) Image processing method and device
CN107613223A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107734264A (en) Image processing method and device
CN107644440A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107705278A (en) The adding method and terminal device of dynamic effect
CN107509043A (en) Image processing method and device
CN107454336A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107527335A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107610076A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107613228A (en) The adding method and terminal device of virtual dress ornament
CN107705277A (en) Image processing method and device
CN107592491A (en) Video communication background display methods and device
CN107613239A (en) Video communication background display methods and device
CN107622495A (en) Image processing method and device, electronic installation and computer-readable recording medium
CN107734265A (en) Image processing method and device, electronic installation and computer-readable recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Haibin Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Wusha Haibin Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20180216