CN109840881A - 3D special-effect image generation method, apparatus and device - Google Patents

3D special-effect image generation method, apparatus and device

Info

Publication number
CN109840881A
CN109840881A (application CN201811519475.6A)
Authority
CN
China
Prior art keywords
image
target image
special effect
target
effect processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811519475.6A
Other languages
Chinese (zh)
Other versions
CN109840881B (en)
Inventor
王献冠
郭胜男
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Orbbec Co Ltd
Original Assignee
Shenzhen Orbbec Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Orbbec Co Ltd filed Critical Shenzhen Orbbec Co Ltd
Priority to CN201811519475.6A priority Critical patent/CN109840881B/en
Publication of CN109840881A publication Critical patent/CN109840881A/en
Application granted granted Critical
Publication of CN109840881B publication Critical patent/CN109840881B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

A 3D special-effect image generation method includes: receiving an original image containing depth information; identifying a target image contained in the original image and separating the target image from a background image; performing special-effect processing on the target image according to the depth information of the target image and/or the background image; and fusing the processed target image with the background image to obtain a 3D special-effect image. After the target image is obtained, special-effect processing is applied to it quickly according to the depth information of the target image and the background image, which simplifies the special-effect processing flow and improves processing efficiency.

Description

3D special-effect image generation method, apparatus and device
Technical field
This application belongs to the field of image processing, and in particular relates to a 3D special-effect image generation method, apparatus and device.
Background art
At present, smart devices such as smartphones usually come with social video applications based on RGB cameras. These applications generally shoot video with a color camera and then post-process the captured video with image-processing techniques, producing colorful and entertaining effects that meet users' diverse needs and increase their satisfaction with the applications. However, because a traditional RGB camera can generally only capture two-dimensional images, the captured images lack three-dimensional information. Effects that depend on the three-dimensional information of an image can only be achieved through more complex post-processing, and the missing three-dimensional information often makes the result look unrealistic.
Since Apple released mobile phones and other smart devices equipped with depth cameras, more and more smart devices with 3D sensing cameras have appeared on the market. These devices can perceive the three-dimensional information of objects through a 3D camera, which makes depth-data acquisition straightforward. Unlike a two-dimensional image, a depth image records the distance between objects and the depth sensor, and these distances characterize the spatial distribution of the objects in the scene. Given the many advantages of depth maps, depth-based image processing has attracted wide attention and has been applied to 3D special effects in film and television. However, current algorithms for generating 3D special-effect images are relatively complex and difficult, which is not conducive to improving image-processing efficiency.
Summary of the invention
In view of this, the embodiments of the present application provide a 3D special-effect image generation method, apparatus and device, to solve the problem in the prior art that the algorithms for generating 3D special-effect images are complex and difficult, which is not conducive to improving image-processing efficiency.
A first aspect of the embodiments of the present application provides a 3D special-effect image generation method, which includes:
receiving an original image containing depth information;
identifying a target image contained in the original image, and separating the target image from a background image;
performing special-effect processing on the target image according to the depth information of the target image and/or the background image;
fusing the target image after the special-effect processing with the background image to obtain a 3D special-effect image.
With reference to the first aspect, in a first possible implementation of the first aspect, the step of performing special-effect processing on the target image according to the depth information of the target image and/or the background image includes:
calculating the transparency of each pixel of the target image as P = Z1/Z2 × 100%, according to the depth value Z1 of the target image and the depth value Z2 of the background image;
performing transparency processing on the target image according to the calculated transparency of each pixel.
With reference to the first aspect, in a second possible implementation of the first aspect, the special-effect processing further includes:
constructing a three-dimensional saliency model S_3D = α·E_depth according to the depth values of the target image, and determining the contour of the target image according to the three-dimensional saliency model, where E_depth is the depth map of the target image and α is an adaptive parameter;
constructing a triangle mesh on the contour of the target image;
performing a warping operation on all triangles in the triangle mesh according to the three-dimensional saliency model S_3D.
With reference to the first aspect, in a third possible implementation of the first aspect, the step of performing special-effect processing on the target image according to the depth information of the target image and/or the background image further includes:
obtaining the depth value Z1 of the target image and the depth value Z2 of the background image;
using the ratio of the target-image depth value Z1 to the background-image depth value Z2 as a scaling factor, and performing a scaling operation according to the scaling factor.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, performing the scaling operation according to the scaling factor includes:
determining a scaling matrix according to the scaling factor, and determining, through the scaling matrix, the three-dimensional space coordinates of the pixels of the target image after enlargement or reduction;
performing interpolation according to the interpolation method corresponding to the type of each interpolation pixel in the target image, in combination with the three-dimensional space coordinates after enlargement or reduction, to generate the enlarged or reduced image.
With reference to the first aspect, in a fifth possible implementation of the first aspect, the step of identifying the target image contained in the original image includes:
extracting the voxel group of the target according to a preset depth-value threshold range;
determining an initial contour mask of the target according to the voxel group;
denoising and/or smoothing the edge pixels of the initial contour mask to obtain a filtered contour mask.
With reference to the first aspect, in a sixth possible implementation of the first aspect, the step of fusing the target image after the special-effect processing with the background image to obtain the 3D special-effect image includes:
registering and fusing the pixels of the target image after the special-effect processing with the background image by a three-dimensional point-cloud registration method;
adding texture information to the fused image to obtain the 3D special-effect image.
A second aspect of the embodiments of the present application provides a 3D special-effect image generation apparatus, which includes:
an original-image receiving unit, configured to receive an original image containing depth information;
an image separation unit, configured to identify a target image contained in the original image and separate the target image from a background image;
a special-effect processing unit, configured to perform special-effect processing on the target image according to the depth information of the target image and/or the background image;
a fusion unit, configured to fuse the target image after the special-effect processing with the background image to obtain a 3D special-effect image.
A third aspect of the embodiments of the present application provides a 3D special-effect image generation device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the 3D special-effect image generation method according to any one of the implementations of the first aspect.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the 3D special-effect image generation method according to any one of the implementations of the first aspect.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: an original image containing depth information is received; after the target image contained in the original image is separated from the background image, special-effect processing is performed on the target image according to the depth information of the target image and the background image; the processed target image is then fused with the background image to obtain a 3D special-effect image. After the target image is obtained, special-effect processing is applied to it quickly according to the depth information of the target image and the background image, which simplifies the special-effect processing flow and improves processing efficiency.
Detailed description of the invention
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a 3D special-effect image generation method provided by an embodiment of the present application;
Fig. 2 is a schematic flowchart of a 3D image transparency processing method provided by an embodiment of the present application;
Fig. 3 is a schematic flowchart of a 3D image warping processing method provided by an embodiment of the present application;
Fig. 4 is a schematic flowchart of a 3D image scaling processing method provided by an embodiment of the present application;
Fig. 5 is a schematic diagram of a 3D special-effect image generation apparatus provided by an embodiment of the present application;
Fig. 6 is a schematic diagram of a 3D special-effect image generation device provided by an embodiment of the present application.
Specific embodiment
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it will be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to illustrate the technical solutions described in this application, specific embodiments are described below.
Fig. 1 is a schematic flowchart of a 3D special-effect image generation method provided by an embodiment of the present application, described in detail as follows:
In step S101, an original image containing depth information is received.
Specifically, the original image described herein may be a single image, or an image sequence contained in a captured video. The original image containing depth information can be captured by the 3D camera of a smart device. The 3D camera is not limited to a specific type, as long as it can obtain the depth image and the color image of the target. The device equipped with the 3D camera may be a smart device such as a mobile phone, a tablet or a PC. Portable devices can capture RGBD images more flexibly; for example, a user can capture an RGBD image of a target with a mobile phone and then perform personalized processing according to the 3D special-effect image generation method described herein.
In step S102, the target image contained in the original image is identified, and the target image is separated from the background image.
The target image described herein may be a person or a specific object. To facilitate special-effect processing of the target image, the present application separates the original image into the target image and the background image. The step of identifying the target image contained in the original image may include:
A. determining the image features of the target image;
B. performing feature matching in the original image according to the determined image features, and determining the target image according to the feature-matching result.
For example, when the preset target is a person, portrait features can be set; the set portrait features are matched against the original image, and the image region in the original image containing the portrait features is determined according to the matching result. The determined image region is taken as the target image, so that the target image containing the portrait and the background image without the portrait can be separated.
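For illustration only, the following is a minimal sketch of this feature-matching step, assuming Python with OpenCV and using the stock HOG pedestrian detector as a stand-in for the preset portrait features; the specification does not prescribe a particular detector, so the function name and parameters here are illustrative.

```python
import cv2

def detect_portrait_region(color_image):
    """Find a candidate portrait region in the color image by feature matching.

    OpenCV's default HOG + linear-SVM people detector stands in for the
    'preset portrait features' described above.
    """
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    # Each detection is an (x, y, w, h) rectangle; weights are the SVM scores.
    rects, weights = hog.detectMultiScale(color_image, winStride=(8, 8))
    if len(rects) == 0:
        return None
    # Keep the highest-scoring detection as the target image region.
    best = rects[int(weights.argmax())]
    return tuple(int(v) for v in best)  # (x, y, w, h)
```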
Of course, since the original image described herein contains depth information, the depth information can also be used to separate the target image. For example, when the target image is a human body, detection can be performed by feature extraction using sparse representation, dense representation, spatial-pyramid extraction and similar methods. Owing to the particularity of the human body, human-body pixels differ markedly from other pixels, so the edge pixels between the human body and other objects can be identified. In the depth map of the original image, each pixel has a corresponding depth value; if a group of adjacent pixels have similar depth values, it can be determined that these pixels belong to the same labeled object.
Taking the human body as an example, if the human body is static or its motion amplitude is small, the distances from all pixels of the human-body region to the camera are similar, i.e. their depth values are close. A depth-value threshold range can then be set: when the depth values of adjacent pixels in the human-body image region do not exceed the predetermined threshold range, these adjacent pixel blocks and edge pixel blocks can be labeled as the initial voxel groups belonging to the human body, and these initial voxel groups constitute the initial contour mask of the human body.
After the initial contour mask of the human body is obtained, the edge pixels still need to be processed, including denoising and smoothing, to obtain the final contour mask, i.e. the filtered contour mask. Since the original image captured by the 3D camera may contain defects such as noise, smoothing filters, rank (sorting) filters and the like can be used so that, after noise reduction, the depth image retains reliable depth values. After denoising and smoothing with filters, morphological operations such as dilation and erosion can be applied for further processing, to filter out noise and extract the human-body region accurately. Dilation expands the boundary of the human body outwards and fills small holes inside the body; erosion removes edge points of the human body and eliminates fine noise. In this way the accurate depth values of the edge pixels of the initial contour mask can be determined; the initial voxel groups are then filtered to obtain the final voxel groups, and thus the final contour mask of the human body, separating the human body from the background.
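A minimal sketch of the depth-threshold segmentation and mask clean-up described above, assuming NumPy/OpenCV; the threshold range (here in millimetres) and the kernel size are hypothetical values that would be tuned per scene.

```python
import cv2
import numpy as np

def extract_contour_mask(depth, z_near=800, z_far=1600, kernel_size=5):
    """Extract the filtered contour mask of the target from a depth map.

    Pixels whose depth falls inside the preset threshold range form the
    initial voxel group / initial contour mask; median filtering plus
    morphological opening and closing (erosion and dilation) then remove
    fine noise and fill small holes at the edges.
    """
    # Initial contour mask: pixels inside the depth-value threshold range.
    mask = ((depth >= z_near) & (depth <= z_far)).astype(np.uint8) * 255

    # Rank (median) filtering suppresses salt-and-pepper noise on the mask.
    mask = cv2.medianBlur(mask, kernel_size)

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    # Opening (erosion then dilation) removes tiny noise blobs at the boundary.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Closing (dilation then erosion) fills small holes inside the body region.
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask  # 0 = background, 255 = target
```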
In one embodiment, when multiple consecutive depth maps are processed, or when the human body is occluded by other objects, the final voxel groups belonging to the human body can be identified first; when the human body appears in the next depth map, the target portrait can be tracked across multiple consecutive depth maps by tracking the voxel groups, which is applicable to video portrait processing. Likewise, when the human body is occluded, the filtered contour mask of the human body can be determined quickly from the voxel groups, so that the original image can be matted according to the filtered contour mask to obtain the target image and the background image.
In step S103, special-effect processing is performed on the target image according to the depth information of the target image and/or the background image.
In this application, the special-effect processing applied to the target image may include one or more of transparency, warping and scaling. The step of applying transparency processing to the target image may be as shown in Fig. 2 and includes:
In step S201, according to the depth value Z1 of the target image and the depth value Z2 of the background image, the transparency of each pixel of the target image is calculated as P = Z1/Z2 × 100%;
In step S202, transparency processing is performed on the target image according to the calculated transparency of each pixel.
When transparency processing is applied to the human body, performing depth writing and color blending at the same time increases the computational load and slows processing, and some complex meshes may produce incorrect transparency effects because depth values are missing. If the depth information is discarded, for example when the human body in the foreground is occluded by another object in front of it, the occluding object may be made transparent together with the human body. Therefore, to render the "hollow man" effect more realistically, transparency processing must refer to the depth values. For example, a time-sharing approach can be used when depth writing is enabled: depth writing is switched on without outputting color, the depth information is written into the depth buffer, and normal transparency processing is then performed. The transparency processing can be carried out according to the ratio of the depth value of the target image to that of the background image. Assuming the depth value of the matted human-body image is Z1 and the depth value of the background is Z2, the transparency can be calculated as P = Z1/Z2 × 100%, and a good transparency effect can be achieved for the human body according to P. Moreover, since the depth information of each object pixel is known in advance, transparency processing can be carried out per pixel according to its depth information.
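A minimal sketch of the per-pixel transparency P = Z1/Z2 and the subsequent blend, assuming NumPy and that the target mask and per-pixel depth maps are already available; the text leaves open whether P acts as transparency or opacity, and this sketch treats it as transparency (blend weight 1 − P for the target).

```python
import numpy as np

def apply_depth_transparency(target_rgb, background_rgb,
                             target_depth, background_depth, mask):
    """Blend the target over the background with per-pixel transparency
    P = Z1 / Z2 (Z1: target depth, Z2: background depth)."""
    eps = 1e-6
    p = np.clip(target_depth / (background_depth + eps), 0.0, 1.0)  # transparency
    # Opacity of the target pixel; outside the mask the background shows through.
    alpha = np.where(mask > 0, 1.0 - p, 0.0)[..., None]
    out = alpha * target_rgb.astype(np.float32) \
        + (1.0 - alpha) * background_rgb.astype(np.float32)
    return out.astype(np.uint8)
```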
Fig. 3 is a schematic flowchart of the warping processing applied to the target image provided by an embodiment of the present application, described in detail as follows:
In step S301, a three-dimensional saliency model S_3D = α·E_depth is constructed according to the depth values of the target image, and the contour of the target image is determined according to the three-dimensional saliency model, where E_depth is the depth map of the target image and α is an adaptive parameter;
In step S302, a triangle mesh is constructed on the contour of the target image.
Delaunay triangulation can be used to construct the triangle mesh on the contour of the target image. The triangle mesh consists of several triangles. Delaunay triangulation exploits the properties of the discrete points to connect the three nearest discrete points into triangle edges; it guarantees that the same triangle mesh is obtained no matter which point the computation starts from, simplifies the image mesh and facilitates subsequent processing.
In step S303, a warping operation is performed on all triangles in the triangle mesh according to the three-dimensional saliency model S_3D.
All triangles in the triangle mesh are warped according to the three-dimensional saliency model S_3D, i.e. each triangle translates, rotates or bends the target image according to the corresponding value of the warping matrix. Because depth information and the triangle mesh are added as constraints, both the saliency of each point and the integrity of object edges in the image are taken into account; therefore, even if the saliency of two points on the same triangle edge differs, the final warping result will not break that edge, which guarantees the robustness of the image warping.
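A minimal sketch of the triangulation-and-warp step, assuming SciPy's Delaunay triangulation and per-triangle affine warps with OpenCV; the displacement rule used here (pulling each contour vertex towards the centroid in proportion to its saliency S_3D = α·E_depth) is only an illustrative choice, since the specification does not fix the warping matrix.

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay

def warp_target_by_saliency(target_rgb, target_depth, contour_pts,
                            alpha=0.001, strength=0.5):
    """Warp the target triangle by triangle according to S_3D = alpha * E_depth."""
    h, w = target_depth.shape
    pts = np.asarray(contour_pts, dtype=np.float32)   # (N, 2) contour vertices (x, y)
    mesh = Delaunay(pts)                               # triangle mesh on the contour

    # Per-vertex 3D saliency sampled from the depth map.
    xs = np.clip(pts[:, 0].astype(int), 0, w - 1)
    ys = np.clip(pts[:, 1].astype(int), 0, h - 1)
    saliency = alpha * target_depth[ys, xs].astype(np.float32)

    # Illustrative warp: move each vertex towards the centroid by its saliency.
    centroid = pts.mean(axis=0)
    dst_pts = pts + strength * saliency[:, None] * (centroid - pts)

    out = target_rgb.copy()
    for tri in mesh.simplices:
        src_tri = pts[tri]
        dst_tri = dst_pts[tri].astype(np.float32)
        # Affine transform mapping the original triangle onto the displaced one.
        m = cv2.getAffineTransform(src_tri, dst_tri)
        warped = cv2.warpAffine(target_rgb, m, (w, h))
        # Copy warped pixels only inside the destination triangle.
        tri_mask = np.zeros((h, w), dtype=np.uint8)
        cv2.fillConvexPoly(tri_mask, dst_tri.astype(np.int32), 255)
        out[tri_mask > 0] = warped[tri_mask > 0]
    return out
```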
Fig. 4 is a schematic flowchart of the scaling processing of a 3D image provided by an embodiment of the present application, described in detail as follows:
In step S401, the depth value Z1 of the target image and the depth value Z2 of the background image are obtained.
The enlargement and reduction of the target image can refer to the depth values of the background image. If the background image is far from the 3D camera, its depth value is large, and the enlarged foreground image will appear obviously larger than the background; otherwise the enlarged target image will not look abrupt relative to the background image. Therefore, when enlarging or reducing, the depth values of the foreground and background pixels can be used as calculation parameters. For example, the depth value of the human body obtained from the depth map is Z1, the depth value of the background is Z2, and Z2 > Z1.
In step S402, the ratio of the target-image depth value Z1 to the background-image depth value Z2 is used as the scaling factor, and a scaling operation is performed, which may include:
A. determining a scaling matrix according to the scaling factor, and determining, through the scaling matrix, the three-dimensional space coordinates of the pixels of the target image after enlargement or reduction.
When determining the scaling factor from the depth values and scaling the target image accordingly, the enlargement factor can be calculated from Z1 and Z2 as M1 = Z2/Z1 and the reduction factor as M2 = Z1/Z2. The enlargement or reduction matrix is then derived from the scaling factor, and the three-dimensional space coordinates of each key-point pixel after enlargement or reduction are finally obtained from the enlargement or reduction matrix, forming the spatial nodes of the target image.
B. performing interpolation according to the interpolation method corresponding to the type of each interpolation pixel in the target image, in combination with the three-dimensional space coordinates after enlargement or reduction, to generate the enlarged or reduced image.
Each interpolation pixel in the scaled target image is interpolated according to the interpolation method corresponding to its type, generating the final high-resolution enlarged or reduced image. The interpolation algorithm is applied to the spatial nodes of the target image to obtain the final scaling effect.
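A minimal sketch of the depth-ratio scaling, assuming OpenCV; the choice of bicubic interpolation for enlargement and area interpolation for reduction is an illustrative reading of "the interpolation method corresponding to the pixel type".

```python
import cv2

def scale_target_by_depth(target_rgb, z_target, z_background, enlarge=True):
    """Scale the target image using the depth ratio as the scaling factor.

    Enlargement factor M1 = Z2 / Z1, reduction factor M2 = Z1 / Z2.
    """
    factor = (z_background / z_target) if enlarge else (z_target / z_background)
    h, w = target_rgb.shape[:2]
    new_size = (max(1, int(round(w * factor))), max(1, int(round(h * factor))))
    interpolation = cv2.INTER_CUBIC if enlarge else cv2.INTER_AREA
    return cv2.resize(target_rgb, new_size, interpolation=interpolation)
```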
In step S104, the target image after the special-effect processing is fused with the background image to obtain the 3D special-effect image.
After the special-effect processing such as transparency, warping and scaling described above, the processed target image and the background image can be fused and superimposed, or rendered with texture maps, to achieve the final 3D effect. This step needs to match every pixel of the target image with the background image; a three-dimensional point-cloud registration algorithm (such as ICP) can be used to register and fuse the local depth images. After the point clouds are registered and fused, the three-dimensional point cloud can be drawn into a three-dimensional mesh, forming a mesh model containing vertices, edges, faces, polygons and other elements, which simplifies the rendering process. Once the mesh representation of the three-dimensional model is obtained, texture information needs to be added to the fused image for visualization, so as to finally obtain the complete three-dimensional textured image. The user can then view the processed 3D effect directly on a smart device such as a mobile phone or on another display.
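For illustration, a minimal sketch of the point-cloud registration that precedes fusion, assuming the Open3D library (the specification names only ICP, not a specific implementation) and a default pinhole intrinsic; in practice the calibrated intrinsics of the 3D camera would be used.

```python
import open3d as o3d

def register_and_fuse(target_depth_path, background_depth_path,
                      max_corr_dist=0.05):
    """Align the target and background point clouds with point-to-point ICP,
    then merge them into a single cloud ready for meshing and texturing."""
    intrinsic = o3d.camera.PinholeCameraIntrinsic(
        o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

    # Depth images (e.g. 16-bit PNGs) are lifted to point clouds.
    src_depth = o3d.io.read_image(target_depth_path)
    dst_depth = o3d.io.read_image(background_depth_path)
    source = o3d.geometry.PointCloud.create_from_depth_image(src_depth, intrinsic)
    target = o3d.geometry.PointCloud.create_from_depth_image(dst_depth, intrinsic)

    # ICP estimates the rigid transform that best aligns source to target.
    estimation = o3d.pipelines.registration.TransformationEstimationPointToPoint()
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_corr_dist, estimation_method=estimation)
    source.transform(result.transformation)
    return source + target  # fused point cloud
```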
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present application.
Fig. 5 is a schematic structural diagram of a 3D special-effect image generation apparatus provided by an embodiment of the present application, described in detail as follows:
The 3D special-effect image generation apparatus includes:
an original-image receiving unit 501, configured to receive an original image containing depth information;
an image separation unit 502, configured to identify the target image contained in the original image and separate the target image from the background image;
a special-effect processing unit 503, configured to perform special-effect processing on the target image according to the depth information of the target image and/or the background image;
a fusion unit 504, configured to fuse the target image after the special-effect processing with the background image to obtain a 3D special-effect image.
The 3D special-effect image generation apparatus described in Fig. 5 corresponds to the 3D special-effect image generation method described in Fig. 1.
Fig. 6 is a schematic diagram of a 3D special-effect image generation device provided by an embodiment of the present application. As shown in Fig. 6, the 3D special-effect image generation device 6 of this embodiment includes a processor 60, a memory 61, and a computer program 62 stored in the memory 61 and executable on the processor 60, for example a 3D special-effect image generation program. When the processor 60 executes the computer program 62, the steps in each of the above 3D special-effect image generation method embodiments are implemented. Alternatively, when the processor 60 executes the computer program 62, the functions of the modules/units in each of the above apparatus embodiments are implemented.
Illustratively, the computer program 62 may be divided into one or more modules/units, which are stored in the memory 61 and executed by the processor 60 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and these instruction segments are used to describe the execution process of the computer program 62 in the 3D special-effect image generation device 6. For example, the computer program 62 may be divided into:
an original-image receiving unit, configured to receive an original image containing depth information;
an image separation unit, configured to identify the target image contained in the original image and separate the target image from the background image;
a special-effect processing unit, configured to perform special-effect processing on the target image according to the depth information of the target image and/or the background image;
a fusion unit, configured to fuse the target image after the special-effect processing with the background image to obtain a 3D special-effect image.
The 3D special-effect image generation device 6 may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. The 3D special-effect image generation device may include, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will understand that Fig. 6 is only an example of the 3D special-effect image generation device 6 and does not constitute a limitation on it; the device may include more or fewer components than shown, or combine certain components, or use different components. For example, the 3D special-effect image generation device may also include input/output devices, network access devices, buses, and so on.
The processor 60 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the 3D special-effect image generation device 6, such as a hard disk or memory of the device. The memory 61 may also be an external storage device of the 3D special-effect image generation device 6, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the device. Further, the memory 61 may include both the internal storage unit and the external storage device of the 3D special-effect image generation device 6. The memory 61 is used to store the computer program and other programs and data required by the 3D special-effect image generation device, and may also be used to temporarily store data that has been output or is to be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional units and modules is used as an example. In practical applications, the above functions can be assigned to different functional units and modules as needed, i.e. the internal structure of the apparatus can be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other, and are not used to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
In the above embodiments, each embodiment is described with its own emphasis. For parts not described in detail in one embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal-device embodiments described above are only illustrative; for example, the division of the modules or units is only a logical functional division, and there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the processes in the methods of the above embodiments by instructing relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be appropriately added to or removed from according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments, or replace some of the technical features with equivalents; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present application, and should all be included within the protection scope of the present application.

Claims (10)

1. A 3D special-effect image generation method, characterized by comprising:
receiving an original image containing depth information;
identifying a target image contained in the original image, and separating the target image from a background image;
performing special-effect processing on the target image according to the depth information of the target image and/or the background image;
fusing the target image after the special-effect processing with the background image to obtain a 3D special-effect image.
2. The 3D special-effect image generation method according to claim 1, characterized in that the special-effect processing comprises:
calculating the transparency of each pixel of the target image as P = Z1/Z2 × 100%, according to the depth value Z1 of the target image and the depth value Z2 of the background image;
performing transparency processing on the target image according to the calculated transparency of each pixel.
3. The 3D special-effect image generation method according to claim 1, characterized in that the special-effect processing further comprises:
constructing a three-dimensional saliency model S_3D = α·E_depth according to the depth values of the target image, and determining the contour of the target image according to the three-dimensional saliency model, where E_depth is the depth map of the target image and α is an adaptive parameter;
constructing a triangle mesh on the contour of the target image;
performing a warping operation on all triangles in the triangle mesh according to the three-dimensional saliency model S_3D.
4. The 3D special-effect image generation method according to claim 1, characterized in that the special-effect processing further comprises:
obtaining the depth value Z1 of the target image and the depth value Z2 of the background image;
using the ratio of the target-image depth value Z1 to the background-image depth value Z2 as a scaling factor, and performing a scaling operation according to the scaling factor.
5. The 3D special-effect image generation method according to claim 4, characterized in that performing the scaling operation according to the scaling factor comprises:
determining a scaling matrix according to the scaling factor, and determining, through the scaling matrix, the three-dimensional space coordinates of the pixels of the target image after enlargement or reduction;
performing interpolation according to the interpolation method corresponding to the type of each interpolation pixel in the target image, in combination with the three-dimensional space coordinates after enlargement or reduction, to generate the enlarged or reduced image.
6. The 3D special-effect image generation method according to claim 1, characterized in that the step of identifying the target image contained in the original image comprises:
extracting the voxel group of the target according to a preset depth-value threshold range;
determining an initial contour mask of the target according to the voxel group;
denoising and/or smoothing the edge pixels of the initial contour mask to obtain a filtered contour mask.
7. The 3D special-effect image generation method according to claim 1, characterized in that the step of fusing the target image after the special-effect processing with the background image to obtain the 3D special-effect image comprises:
registering and fusing the pixels of the target image after the special-effect processing with the background image by a three-dimensional point-cloud registration method;
adding texture information to the fused image to obtain the 3D special-effect image.
8. A 3D special-effect image generation apparatus, characterized by comprising:
an original-image receiving unit, configured to receive an original image containing depth information;
an image separation unit, configured to identify a target image contained in the original image and separate the target image from a background image;
a special-effect processing unit, configured to perform special-effect processing on the target image according to the depth information of the target image and/or the background image;
a fusion unit, configured to fuse the target image after the special-effect processing with the background image to obtain a 3D special-effect image.
9. A 3D special-effect image generation device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the 3D special-effect image generation method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the 3D special-effect image generation method according to any one of claims 1 to 7 are implemented.
CN201811519475.6A 2018-12-12 2018-12-12 3D special effect image generation method, device and equipment Active CN109840881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811519475.6A CN109840881B (en) 2018-12-12 2018-12-12 3D special effect image generation method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811519475.6A CN109840881B (en) 2018-12-12 2018-12-12 3D special effect image generation method, device and equipment

Publications (2)

Publication Number Publication Date
CN109840881A true CN109840881A (en) 2019-06-04
CN109840881B CN109840881B (en) 2023-05-05

Family

ID=66883159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811519475.6A Active CN109840881B (en) 2018-12-12 2018-12-12 3D special effect image generation method, device and equipment

Country Status (1)

Country Link
CN (1) CN109840881B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110141237A1 (en) * 2009-12-15 2011-06-16 Himax Technologies Limited Depth map generation for a video conversion system
JP2013162465A (en) * 2012-02-08 2013-08-19 Jvc Kenwood Corp Image processing device, image processing method, and image processing program
JP2013223008A (en) * 2012-04-13 2013-10-28 Canon Inc Image processing device and method
CN103677828A (en) * 2013-12-10 2014-03-26 华为技术有限公司 Coverage drawing method, drawing engine and terminal equipment
CN105245774A (en) * 2015-09-15 2016-01-13 努比亚技术有限公司 Picture processing method and terminal
CN105374019A (en) * 2015-09-30 2016-03-02 华为技术有限公司 A multi-depth image fusion method and device
CN106254854A (en) * 2016-08-19 2016-12-21 深圳奥比中光科技有限公司 The preparation method of 3-D view, Apparatus and system
CN107154032A (en) * 2017-04-20 2017-09-12 腾讯科技(深圳)有限公司 A kind of image processing method and device
CN107230199A (en) * 2017-06-23 2017-10-03 歌尔科技有限公司 Image processing method, device and augmented reality equipment
CN107610041A (en) * 2017-08-16 2018-01-19 南京华捷艾米软件科技有限公司 Video portrait based on 3D body-sensing cameras scratches drawing method and system
CN107610077A (en) * 2017-09-11 2018-01-19 广东欧珀移动通信有限公司 Image processing method and device, electronic installation and computer-readable recording medium
CN108322644A (en) * 2018-01-18 2018-07-24 努比亚技术有限公司 A kind of image processing method, mobile terminal and computer readable storage medium

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110298813A (en) * 2019-06-28 2019-10-01 北京金山安全软件有限公司 Method and device for processing picture and electronic equipment
WO2021031506A1 (en) * 2019-08-22 2021-02-25 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112419328A (en) * 2019-08-22 2021-02-26 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN112544070A (en) * 2020-03-02 2021-03-23 深圳市大疆创新科技有限公司 Video processing method and device
CN111526282A (en) * 2020-03-26 2020-08-11 香港光云科技有限公司 Method and device for shooting with adjustable depth of field based on flight time
CN112037121A (en) * 2020-08-19 2020-12-04 北京字节跳动网络技术有限公司 Picture processing method, device, equipment and storage medium
WO2022037634A1 (en) * 2020-08-19 2022-02-24 北京字节跳动网络技术有限公司 Picture processing method and apparatus, device, and storage medium
WO2022036633A1 (en) * 2020-08-20 2022-02-24 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for image registration
CN112188058A (en) * 2020-09-29 2021-01-05 努比亚技术有限公司 Video shooting method, mobile terminal and computer storage medium
WO2022089168A1 (en) * 2020-10-26 2022-05-05 腾讯科技(深圳)有限公司 Generation method and apparatus and playback method and apparatus for video having three-dimensional effect, and device
CN115082366A (en) * 2021-03-12 2022-09-20 中国移动通信集团广东有限公司 Image synthesis method and system
CN112804516B (en) * 2021-04-08 2021-07-06 北京世纪好未来教育科技有限公司 Video playing method and device, readable storage medium and electronic equipment
CN112804516A (en) * 2021-04-08 2021-05-14 北京世纪好未来教育科技有限公司 Video playing method and device, readable storage medium and electronic equipment
CN113112608A (en) * 2021-04-20 2021-07-13 厦门汇利伟业科技有限公司 Method for automatically establishing three-dimensional model from object graph
WO2023179346A1 (en) * 2022-03-25 2023-09-28 北京字跳网络技术有限公司 Special effect image processing method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN109840881B (en) 2023-05-05

Similar Documents

Publication Publication Date Title
CN109840881A (en) A kind of 3D special efficacy image generating method, device and equipment
CN112581629B (en) Augmented reality display method, device, electronic equipment and storage medium
CN109064390B (en) Image processing method, image processing device and mobile terminal
WO2020119527A1 (en) Human action recognition method and apparatus, and terminal device and storage medium
CN115699114B (en) Method and apparatus for image augmentation for analysis
CN109561296A (en) Image processing apparatus, image processing method, image processing system and storage medium
JP7387202B2 (en) 3D face model generation method, apparatus, computer device and computer program
CN109242961A (en) A kind of face modeling method, apparatus, electronic equipment and computer-readable medium
CN110163831B (en) Method and device for dynamically displaying object of three-dimensional virtual sand table and terminal equipment
CN109308734B (en) 3D character generation method and device, equipment and storage medium thereof
CN111583381B (en) Game resource map rendering method and device and electronic equipment
CN104010180B (en) Method and device for filtering three-dimensional video
CN110047122A (en) Render method, apparatus, electronic equipment and the computer readable storage medium of image
KR20130089649A (en) Method and arrangement for censoring content in three-dimensional images
CN111652791B (en) Face replacement display method, face replacement live broadcast device, electronic equipment and storage medium
CN115439607A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN104574358A (en) Method and apparatus for scene segmentation from focal stack images
CN110062157A (en) Render method, apparatus, electronic equipment and the computer readable storage medium of image
CN112802081A (en) Depth detection method and device, electronic equipment and storage medium
CN108776800A (en) A kind of image processing method, mobile terminal and computer readable storage medium
CN111583398A (en) Image display method and device, electronic equipment and computer readable storage medium
CN104243970A (en) 3D drawn image objective quality evaluation method based on stereoscopic vision attention mechanism and structural similarity
CN110267079B (en) Method and device for replacing human face in video to be played
CN113989434A (en) Human body three-dimensional reconstruction method and device
CN111652025B (en) Face processing and live broadcasting method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 11-13 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant after: Obi Zhongguang Technology Group Co.,Ltd.

Address before: 12 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant before: SHENZHEN ORBBEC Co.,Ltd.

GR01 Patent grant