CN109840881B - 3D special effect image generation method, device and equipment - Google Patents

3D special effect image generation method, device and equipment

Info

Publication number
CN109840881B
Authority
CN
China
Prior art keywords
image
target image
special effect
target
depth
Prior art date
Legal status
Active
Application number
CN201811519475.6A
Other languages
Chinese (zh)
Other versions
CN109840881A (en)
Inventor
王献冠
郭胜男
Current Assignee
Orbbec Inc
Original Assignee
Orbbec Inc
Priority date
Filing date
Publication date
Application filed by Orbbec Inc
Priority to CN201811519475.6A
Publication of CN109840881A
Application granted
Publication of CN109840881B
Legal status: Active


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The 3D special effect image generation method comprises the following steps: receiving an original image including depth information; identifying a target image included in the original image and separating the target image from a background image; performing special effect processing on the target image according to the depth information of the target image and/or the background image; and fusing the processed target image with the background image to obtain a 3D special effect image. After the target image is acquired, special effect processing is carried out rapidly according to the depth information of the target image and the background image, which helps simplify the special effect processing procedure and improve its efficiency.

Description

3D special effect image generation method, device and equipment
Technical Field
The application belongs to the field of image processing, and particularly relates to a method, a device and equipment for generating a 3D special effect image.
Background
At present, intelligent devices such as smartphones are generally equipped with social video applications based on RGB cameras. These applications post-process videos shot by the color camera using image processing techniques, producing colorful and entertaining effects that meet users' diversified needs and improve user satisfaction. However, conventional RGB cameras can generally capture only two-dimensional images, so the captured images lack three-dimensional information. Special effects that require the three-dimensional information of an image must therefore be achieved through complicated post-processing, and the results often look unrealistic because of the missing third dimension.
After Apple introduced mobile phones and other intelligent devices equipped with depth cameras, more and more intelligent devices with 3D sensing cameras have appeared on the market. These devices can sense the three-dimensional information of objects through a 3D camera, which simplifies depth data acquisition. Unlike a two-dimensional image, a depth image records the distance between objects and the depth sensor, and this distance information characterizes the distribution of objects in the scene. Owing to the many advantages of the depth map, image processing based on depth maps has attracted wide attention and has been applied to 3D special effects in fields such as film and television. However, current algorithms for generating 3D special effect images are complex and difficult, which hinders image processing efficiency.
Disclosure of Invention
In view of this, embodiments of the present application provide a method, an apparatus, and a device for generating a 3D special effect image, so as to solve the prior-art problem that algorithms for generating 3D special effect images are complex and difficult, which hinders image processing efficiency.
A first aspect of an embodiment of the present application provides a 3D special effect image generating method, where the 3D special effect image generating method includes:
receiving an original image including depth information;
identifying a target image included in the original image and separating the target image from a background image;
performing special effect processing on the target image according to the depth information of the target image and/or the background image;
and fusing the target image subjected to the special effect processing with a background image to obtain a 3D special effect image.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the step of performing special effect processing on the target image according to depth information of the target image and/or the background image includes:
calculating the transparency P = Z1/Z2 × 100% of each pixel of the target image according to the depth value Z1 of the target image and the depth value Z2 of the background image;
and performing transparency processing on the target image according to the calculated transparency of each pixel.
With reference to the first aspect, in a second possible implementation manner of the first aspect, the special effect processing further includes:
constructing a three-dimensional saliency model S_3D = α·E_depth according to the depth value of the target image, and determining the contour of the target image according to the three-dimensional saliency model, where E_depth is the depth map of the target image and α is an adaptive parameter;
constructing a triangular mesh for the outline of the target image;
and performing a warping operation on all triangles in the triangular mesh according to the three-dimensional saliency model S_3D.
With reference to the first aspect, in a third possible implementation manner of the first aspect, the step of performing special effect processing on the target image according to depth information of the target image and/or the background image further includes:
acquiring the depth value Z1 of the target image and the depth value Z2 of the background image;
and taking the ratio of the depth value Z1 of the target image to the depth value Z2 of the background image as a scaling factor, and performing a scaling operation according to the scaling factor.
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, the performing a scaling operation according to the scaling factor includes:
determining a scaling matrix according to the scaling factor, and determining three-dimensional space coordinates of the pixel points of the target image after being enlarged or reduced through the scaling matrix;
and according to an interpolation mode corresponding to the type of each pixel point to be interpolated in the target image, carrying out interpolation by combining the three-dimensional space coordinates after enlargement or reduction, and generating an enlarged or reduced image.
With reference to the first aspect, in a fifth possible implementation manner of the first aspect, the step of identifying a target image included in the original image includes:
extracting a three-dimensional pixel group of the target according to a preset depth value threshold range;
determining an initial outline mask map of the target according to the three-dimensional pixel group;
and denoising and/or smoothing the edge pixels of the initial contour mask map to obtain a filtered contour mask map.
With reference to the first aspect, in a sixth possible implementation manner of the first aspect, the step of fusing the target image after the special effect processing with a background image to obtain a 3D special effect image includes:
matching and fusing the pixel points of the target image and the background image after special effect processing by a three-dimensional point cloud registration method;
and adding texture information into the fused image to obtain the 3D special effect image.
A second aspect of the embodiments of the present application provides a 3D special effect image generating apparatus, the 3D special effect image generating apparatus including:
an original image receiving unit for receiving an original image including depth information;
an image separation unit for identifying a target image included in the original image and separating the target image from a background image;
the special effect processing unit is used for carrying out special effect processing on the target image according to the depth information of the target image and/or the background image;
and the fusion unit is used for fusing the target image subjected to the special effect processing with the background image to obtain a 3D special effect image.
A third aspect of the embodiments of the present application provides a 3D special effect image generating apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the 3D special effect image generating method according to any one of the first aspects when the computer program is executed.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program, which when executed by a processor implements the steps of the 3D special effect image generation method according to any one of the first aspects.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: after an original image including depth information is received and the target image and background image it contains are separated, special effect processing is performed on the target image according to the depth information of the target image and the background image, and the processed target image is then fused with the background image to obtain a 3D special effect image. Because the special effect processing is performed directly from the depth information of the target image and the background image once the target image is acquired, the special effect processing procedure is simplified and its efficiency is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic implementation flow diagram of a 3D special effect image generating method according to an embodiment of the present application;
fig. 2 is a schematic implementation flow chart of a 3D image transparency processing method according to an embodiment of the present application;
fig. 3 is a schematic implementation flow diagram of a warping processing method for a 3D image according to an embodiment of the present application;
fig. 4 is a schematic implementation flow diagram of a scaling processing method for a 3D image according to an embodiment of the present application;
fig. 5 is a schematic diagram of a 3D special effect image generating apparatus according to an embodiment of the present application;
fig. 6 is a schematic diagram of a 3D special effect image generating apparatus provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to illustrate the technical solutions described in the present application, the following description is made by specific examples.
Fig. 1 is a schematic implementation flow chart of a 3D special effect image generating method according to an embodiment of the present application, which is described in detail below:
in step S101, an original image including depth information is received;
specifically, the original image described in the present application may be a single image, or may be a sequence of images included in the acquired video. The original image including the depth information may be acquired by a 3D camera of the smart device. The 3D camera is not limited to specific types, and can acquire depth images and color images of the target image. The device with the 3D camera can be intelligent devices such as a mobile phone, a PAD, a PC and the like. The RGBD image can be acquired more flexibly by using the portable device, for example, after a user shoots the target RGBD image by using a mobile phone, personalized processing operation can be performed according to the 3D special effect image generation method.
In step S102, a target image included in the original image is identified, and the target image is separated from a background image;
the target image described herein may be a person, or may also be a specific object. In order to facilitate special effect processing on the target image, the original image is subjected to separation processing, and the target image and the background image are obtained after separation. Wherein, the step of identifying the target image included in the original image may include:
a, determining image characteristics of a target image;
and B, performing feature matching in the original image according to the determined image features, and determining a target image according to a feature matching result.
For example, when the target is a person, the person feature may be set, the set person feature may be matched with the original image, an image area including the person feature in the original image may be determined according to the matching result, and the determined image area may be used as the target image, so that the target image of the person and the background image without the person may be separated.
Of course, since the original image described in the present application includes depth information, the target image may also be separated with the help of the depth information. For example, when the target image is a human body, detection may be performed by feature extraction using sparse representation, dense representation, spatial pyramid extraction, or the like. Owing to the distinctiveness of the human body, its pixels differ noticeably from other pixels, so the edge pixels between the human body and other objects can be identified. In the depth map of the original image, each pixel has a corresponding depth value; if a group of adjacent pixels have similar depth values, it can be determined that they belong to the same object.
Taking a human body as an example: if the body is static or moves only slightly, all pixels of the body region lie at similar distances from the camera, i.e. have similar depth values. A depth value threshold range can then be set; when the depth values of adjacent pixels in the body image region do not exceed the preset threshold range, those adjacent pixel blocks and the edge pixel blocks can be marked as an initial three-dimensional pixel group belonging to the human body, and this initial three-dimensional pixel group forms the initial contour mask map of the human body.
After the initial contour mask map of the human body is obtained, the edge pixels still need to be processed (denoising, smoothing, and so on) to obtain the final, filtered contour mask map. Because the original image acquired by the 3D camera contains noise, smoothing filters, order-statistics filters, and the like can be applied; through noise reduction the depth image retains reliable depth values. After the denoising and smoothing treatment with these filters, further processing such as dilation and erosion can be applied to filter residual noise and extract the human body region accurately. The dilation operation expands the body boundary outward and fills small holes inside the body; the erosion operation removes body edge points and eliminates tiny noise. In this way, accurate depth values corresponding to the edge pixels of the initial contour mask map can be determined, the initial three-dimensional pixel group is filtered into the final three-dimensional pixel group, the final contour mask map of the human body is obtained, and the human body is separated from the background.
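As an illustrative sketch of this segmentation step (assuming OpenCV and NumPy; the depth range, kernel size, and function names are illustrative assumptions, not the patent's implementation), the threshold-plus-morphology pipeline might look as follows:

```python
import cv2
import numpy as np

def extract_contour_mask(depth, z_min, z_max, ksize=5):
    """Sketch: build a filtered contour mask from a depth map by keeping
    pixels inside a preset depth threshold range, then cleaning the mask
    with median filtering, dilation, and erosion."""
    # Initial three-dimensional pixel group: depth within [z_min, z_max].
    mask = ((depth >= z_min) & (depth <= z_max)).astype(np.uint8) * 255

    # Order-statistics (median) filter suppresses speckle noise in the mask.
    mask = cv2.medianBlur(mask, ksize)

    # Dilation expands the body boundary and fills small interior holes;
    # erosion removes stray edge points and tiny noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    mask = cv2.dilate(mask, kernel)
    mask = cv2.erode(mask, kernel)
    return mask

# Hypothetical usage: depth in millimetres, body assumed 0.5 m to 1.5 m away.
# depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)
# mask = extract_contour_mask(depth, z_min=500, z_max=1500)
```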
In one embodiment, when processing a sequence of continuous depth maps, or when the human body is partially occluded by other objects, the final three-dimensional pixel group belonging to the human body can be identified first; when the body appears in the next depth map, the target portrait across the consecutive depth maps can be determined by tracking that three-dimensional pixel group, which applies to video portrait processing. When the body is occluded, the filtered contour mask map of the body can be determined quickly from the three-dimensional pixel group, so that the original image is matted according to the filtered contour mask map to obtain the target image and the background image.
In step S103, special effect processing is performed on the target image according to the depth information of the target image and/or the background image;
in the present application, the special effect processing manner on the target image may include one or more of transparency, warping, and scaling. The step of performing the transparency process on the target image may, as shown in fig. 2, include:
In step S201, the transparency P = Z1/Z2 × 100% of each pixel of the target image is calculated according to the depth value Z1 of the target image and the depth value Z2 of the background image;
In step S202, a transparency process is performed on the target image according to the calculated transparency of each pixel.
When transparency processing is performed on a human body, carrying out depth writing and color blending at the same time increases the computational difficulty and slows down processing. If depth information is discarded instead, some complex meshes produce erroneous transparency effects because depth values are missing; for example, if the human body in the foreground is occluded by another object in front of it and no three-dimensional information is available, the occluder turns transparent together with the body. Therefore, to render a transparent person realistically, the transparency processing must refer to the depth value information. For example, transparency processing with depth writing enabled can be implemented by a two-pass method: first enable depth writing without outputting color, writing the depth information into a depth buffer; then perform normal transparency processing, where the transparency can be computed as a percentage of the depth values of the target image and the background image. Assume the depth value of the matted human body image is Z1 and the depth value of the background is Z2; the transparency is then P = Z1/Z2 × 100%, and the body achieves a convincing transparency effect according to P. Further, since the depth information of each target pixel is known in advance, the transparency processing can be carried out at pixel level according to the per-pixel depth information.
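A minimal NumPy sketch of this pixel-level transparency (the array names, the clipping to [0, 1], and the use of P directly as the blend weight are assumptions, not the patent's exact formulation):

```python
import numpy as np

def transparent_composite(target_rgb, background_rgb, z1, z2, mask):
    """Sketch: per-pixel transparency P = Z1/Z2, used to blend the matted
    target over the background. target_rgb/background_rgb are HxWx3 float
    arrays, z1/z2 are HxW depth maps, mask marks target pixels."""
    # Transparency P = Z1 / Z2 per pixel, guarded against division by zero.
    p = np.clip(z1 / np.maximum(z2, 1e-6), 0.0, 1.0)

    # Apply the effect only inside the target; background pixels keep P = 0.
    alpha = np.where(mask, p, 0.0)[..., np.newaxis]

    # Alpha-blend: the closer the target's depth is to the background's,
    # the more strongly its color shows through.
    return alpha * target_rgb + (1.0 - alpha) * background_rgb
```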
Fig. 3 is a schematic implementation flow chart of performing a warping process on a target image according to an embodiment of the present application, which is described in detail below:
In step S301, a three-dimensional saliency model S_3D = α·E_depth is constructed according to the depth value of the target image, and the contour of the target image is determined according to the three-dimensional saliency model, where E_depth is the depth map of the target image and α is an adaptive parameter;
in step S302, a triangular mesh is constructed for the outline of the target image;
a triangular mesh may be constructed for the contours of the target image using Delaunay trigonometry. The triangle mesh comprises a plurality of triangles, and the Delaunay triangle method utilizes the property of discrete points to connect the three closest discrete points into a triangle, and ensures that the same triangle mesh can be obtained no matter which point is calculated, so that the image mesh tends to be simplified, and the subsequent processing is convenient.
In step S303, a warping operation is performed on all triangles in the triangular mesh according to the three-dimensional saliency model S_3D.
Warping all triangles in the triangular mesh according to the three-dimensional saliency model S_3D means that each triangle applies translation, rotation, bending, and similar operations to the target image according to the values of its corresponding warping matrix. Because depth information and the triangular mesh are added as constraints, the saliency of each point is taken into account together with the integrity of object edges in the image. Even if two points on the same edge of a triangle have inconsistent saliency, the final warping result will not break that edge, which guarantees the robustness of the image warping.
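As a toy sketch only (the patent's per-triangle warping matrices are not specified, so this simplification displaces mesh vertices by a sinusoidal field damped by the saliency S_3D = α·E_depth, keeping salient points nearly rigid; all parameters are illustrative assumptions):

```python
import numpy as np

def warp_mesh_vertices(vertices, depth_map, alpha=0.05, strength=10.0):
    """Toy warp: each vertex moves by an amount weighted down by its
    saliency, so salient (high S_3D) regions deform less and object
    edges are less likely to break."""
    warped = vertices.astype(np.float64).copy()
    for i, (x, y) in enumerate(vertices):
        s3d = alpha * depth_map[int(y), int(x)]       # saliency at the vertex
        weight = 1.0 / (1.0 + s3d)                    # salient -> rigid
        warped[i, 0] += strength * weight * np.sin(y / 30.0)  # horizontal bend
    return warped
```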
Fig. 4 is a schematic implementation flow chart of a scaling process of a 3D image according to an embodiment of the present application, which is described in detail below:
In step S401, the depth value Z1 of the target image and the depth value Z2 of the background image are acquired;
Zooming the target image in or out can refer to the depth information of the background image: if the background is far from the 3D camera, its depth value is large, and an enlarged foreground image will clearly stand out against it; conversely, the enlarged target image will not look abrupt relative to the background. Accordingly, when performing the zoom-in or zoom-out operation, the depth values of the foreground and background pixels can be taken as calculation parameters. For example, the depth value Z1 of the human body and the depth value Z2 of the background are obtained from the depth map, with Z2 > Z1.
In step S402, the ratio of the target image depth value Z1 to the background image depth value Z2 is taken as a scaling factor, and the scaling operation according to the scaling factor may include:
a, determining a scaling matrix according to the scaling factor, and determining three-dimensional space coordinates of the pixel points of the target image after being enlarged or reduced according to the scaling matrix;
determining a scaling factor based on the depth value information and scaling the target image by the scaling factor may be based on Z 1 And Z 2 The amplification factor is calculated as: m is M 1 =Z 2 /Z 1 The reduction factor is: m is M 2 =Z 1 /Z 2 Then further respectively calculating an enlarged or reduced matrix according to the zoom factor, and finally obtaining each key according to the enlarged matrix or the reduced matrixAnd the three-dimensional space coordinates of the point pixels after being enlarged or reduced are formed, so that space nodes of the target image are formed.
And B, interpolating according to an interpolation mode corresponding to the type of each pixel point to be interpolated in the target image and combining the three-dimensional space coordinates after enlarging or shrinking to generate an enlarged or shrinking image.
Each pixel point to be interpolated in the scaled target image is interpolated according to the interpolation mode corresponding to its type, generating the final high-resolution enlarged or reduced image. The space nodes of the target image are interpolated with an interpolation algorithm to obtain the final scaling effect.
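A sketch of the depth-ratio scaling under assumptions (a single scalar factor and uniform bilinear interpolation stand in for the patent's scaling matrix over 3D coordinates and its type-dependent interpolation modes):

```python
import cv2

def depth_ratio_scale(target_rgb, z1, z2, enlarge=True):
    """Sketch: magnification factor M1 = Z2/Z1, reduction factor M2 = Z1/Z2,
    applied to the matted target image with bilinear interpolation."""
    factor = (z2 / z1) if enlarge else (z1 / z2)
    h, w = target_rgb.shape[:2]
    new_size = (max(1, int(round(w * factor))), max(1, int(round(h * factor))))
    # cv2.resize interpolates pixel values at the scaled coordinates.
    return cv2.resize(target_rgb, new_size, interpolation=cv2.INTER_LINEAR)

# e.g. a body at Z1 = 800 mm against a background at Z2 = 2000 mm
# gives M1 = 2.5, i.e. a 2.5x enlargement of the foreground.
```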
In step S104, the target image after the special effect processing is fused with a background image, so as to obtain a 3D special effect image.
After the transparency, warping, scaling, and other special effects are processed, the processed target image and the background image can be fused and superimposed, or rendered with a texture map, to realize the final 3D effect. This step needs to match each pixel point of the target image and the background image; the local depth images can be registered and fused through a three-dimensional point cloud registration algorithm (such as ICP). After the point clouds are registered and fused, the three-dimensional point cloud can be drawn into a three-dimensional mesh, forming a mesh model comprising vertices, edges, faces, polygons, and other elements, which simplifies rendering. Once the mesh representation of the three-dimensional model is in place, texture information is added to the fused image for visualization, finally yielding a complete three-dimensional textured image. The user can view the processed 3D effect directly on a smart device such as a mobile phone or on another display.
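A sketch of the point cloud registration step using the Open3D library, consistent with the patent's ICP example (the random point clouds are placeholders for clouds back-projected from the target and background depth maps; the distance threshold is an assumption):

```python
import numpy as np
import open3d as o3d

# Placeholder point clouds; in practice these would be back-projected
# from the target and background depth maps using the camera intrinsics.
source = o3d.geometry.PointCloud()
source.points = o3d.utility.Vector3dVector(np.random.rand(500, 3))
target = o3d.geometry.PointCloud()
target.points = o3d.utility.Vector3dVector(np.random.rand(500, 3))

# Point-to-point ICP: finds the rigid transform aligning source to target.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(result.transformation)  # 4x4 matrix used to fuse the registered clouds
```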
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not limit the implementation process of the embodiments of the present application in any way.
Fig. 5 is a schematic structural diagram of a 3D special effect image generating device according to an embodiment of the present application, which is described in detail below:
the 3D special effect image generation device comprises:
an original image receiving unit 501 for receiving an original image including depth information;
an image separation unit 502 for identifying a target image included in the original image and separating the target image from a background image;
a special effect processing unit 503, configured to perform special effect processing on the target image according to depth information of the target image and/or the background image;
and a fusion unit 504, configured to fuse the target image after the special effect processing with a background image, so as to obtain a 3D special effect image.
The 3D special effect image generating apparatus shown in fig. 5 corresponds to the 3D special effect image generating method shown in fig. 1.
Fig. 6 is a schematic diagram of a 3D special effect image generating apparatus according to an embodiment of the present application. As shown in fig. 6, the 3D special effects image generation apparatus 6 of this embodiment includes: a processor 60, a memory 61 and a computer program 62, such as a 3D special effects image generation program, stored in said memory 61 and executable on said processor 60. The processor 60, when executing the computer program 62, implements the steps of the various 3D special effects image generation method embodiments described above. Alternatively, the processor 60, when executing the computer program 62, performs the functions of the modules/units of the apparatus embodiments described above.
By way of example, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program 62 in the 3D special effects image generating apparatus 6. For example, the computer program 62 may be partitioned into:
an original image receiving unit for receiving an original image including depth information;
an image separation unit for identifying a target image included in the original image and separating the target image from a background image;
the special effect processing unit is used for carrying out special effect processing on the target image according to the depth information of the target image and/or the background image;
and the fusion unit is used for fusing the target image subjected to the special effect processing with the background image to obtain a 3D special effect image.
The 3D special effect image generating device 6 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The 3D special effect image generating device may include, but is not limited to, a processor 60 and a memory 61. It will be appreciated by those skilled in the art that fig. 6 is merely an example of the 3D special effect image generating device 6 and does not constitute a limitation of it; the device may include more or fewer components than illustrated, combine certain components, or use different components. For example, the 3D special effect image generating device may further include input-output devices, network access devices, a bus, and so on.
The processor 60 may be a central processing unit (Central Processing Unit, CPU), other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 61 may be an internal storage unit of the 3D special effects image generating apparatus 6, for example, a hard disk or a memory of the 3D special effects image generating apparatus 6. The memory 61 may also be an external storage device of the 3D special effect image generating device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card) or the like, which are provided on the 3D special effect image generating device 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the 3D special effects image generation apparatus 6. The memory 61 is used for storing the computer program and other programs and data required for the 3D special effect image generating apparatus. The memory 61 may also be used for temporarily storing data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not detailed or illustrated in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of each method embodiment described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer-readable medium may be added to or removed from as required by legislation and patent practice in each jurisdiction; for example, in certain jurisdictions, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (4)

1. A method for generating a 3D special effect image, comprising:
receiving an original image including depth information;
identifying a target image included in the original image and separating the target image from a background image;
performing special effect processing on the target image according to the depth information of the target image and/or the background image;
fusing the target image subjected to the special effect processing with a background image to obtain a 3D special effect image;
the special effect processing comprises the following steps:
calculating the transparency P = Z1/Z2 × 100% of each pixel of the target image according to the depth value Z1 of the target image and the depth value Z2 of the background image;
performing transparency processing on the target image according to the calculated transparency of each pixel and the pixel-level depth information;
the special effect processing further includes:
constructing a three-dimensional saliency model S_3D = α·E_depth according to the depth value of the target image, and determining the contour of the target image according to the three-dimensional saliency model, where E_depth is the depth map of the target image and α is an adaptive parameter;
constructing a triangular mesh for the outline of the target image;
performing a warping operation on all triangles in the triangular mesh according to the three-dimensional saliency model S_3D;
the special effect processing further includes:
acquiring the depth value Z1 of the target image and the depth value Z2 of the background image;
taking the ratio of the depth value Z1 of the target image to the depth value Z2 of the background image as a scaling factor, and performing a scaling operation according to the scaling factor;
the scaling operation according to the scaling factor comprises:
determining a scaling matrix according to the scaling factor, and determining three-dimensional space coordinates of the pixel points of the target image after being enlarged or reduced through the scaling matrix;
according to the interpolation mode corresponding to the type of each pixel point to be interpolated in the target image, carrying out interpolation by combining the three-dimensional space coordinates after enlargement or reduction to generate an enlarged or reduced image;
the step of identifying the target image included in the original image includes:
extracting a three-dimensional pixel group of the target according to a preset depth value threshold range;
determining an initial outline mask map of the target according to the three-dimensional pixel group;
denoising and/or smoothing the edge pixels of the initial contour mask map to obtain a filtered contour mask map;
the step of fusing the target image after the special effect processing with a background image to obtain a 3D special effect image comprises the following steps:
matching and fusing the pixel points of the target image and the background image after special effect processing by a three-dimensional point cloud registration method;
and adding texture information into the fused image to obtain the 3D special effect image.
2. A 3D special effect image generating apparatus, comprising:
an original image receiving unit for receiving an original image including depth information;
an image separation unit for identifying a target image included in the original image and separating the target image from a background image;
the special effect processing unit is used for carrying out special effect processing on the target image according to the depth information of the target image and/or the background image;
the fusion unit is used for fusing the target image subjected to the special effect processing with a background image to obtain a 3D special effect image;
the special effect processing unit includes:
a transparency calculating subunit, configured to calculate the transparency P = Z1/Z2 × 100% of each pixel of the target image according to the depth value Z1 of the target image and the depth value Z2 of the background image;
A transparent processing subunit, configured to perform transparent processing on the target image according to the calculated transparency of each pixel and depth information at a pixel level;
a contour determination subunit, configured to construct a three-dimensional saliency model S_3D = α·E_depth according to the depth value of the target image, and determine the contour of the target image according to the three-dimensional saliency model, where E_depth is the depth map of the target image and α is an adaptive parameter;
a triangular mesh construction subunit, configured to construct a triangular mesh for the outline of the target image;
a warping operation subunit, configured to perform a warping operation on all triangles in the triangular mesh according to the three-dimensional saliency model S_3D;
a depth acquisition subunit, configured to acquire the depth value Z1 of the target image and the depth value Z2 of the background image;
a scaling operation subunit, configured to take the ratio of the depth value Z1 of the target image to the depth value Z2 of the background image as a scaling factor, and perform a scaling operation according to the scaling factor;
the scaling operation subunit includes:
the three-dimensional space coordinate determining module is used for determining a scaling matrix according to the scaling factor, and determining three-dimensional space coordinates of the pixel points of the target image after being enlarged or reduced through the scaling matrix;
the image generation module is used for carrying out interpolation by combining the three-dimensional space coordinates after the enlargement or reduction according to the interpolation mode corresponding to the type of each pixel point to be interpolated in the target image to generate an enlarged or reduced image;
the target image separation unit includes:
a three-dimensional pixel group extraction subunit, configured to extract a three-dimensional pixel group of the target according to a preset depth value threshold range;
an initial contour mask map determining subunit, configured to determine an initial contour mask map of the target according to the three-dimensional pixel group;
the denoising subunit is used for denoising and/or smoothing the edge pixels of the initial contour mask image to obtain a filtered contour mask image;
the fusion unit includes:
the matching fusion subunit is used for matching and fusing the pixel points of the target image and the background image after the special effect processing by a three-dimensional point cloud registration method;
and the texture adding subunit is used for adding texture information to the fused image to obtain a 3D special effect image.
3. A 3D special effect image generating device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the 3D special effect image generating method according to claim 1 when executing the computer program.
4. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the steps of the 3D special effects image generation method of claim 1.
CN201811519475.6A 2018-12-12 2018-12-12 3D special effect image generation method, device and equipment Active CN109840881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811519475.6A CN109840881B (en) 2018-12-12 2018-12-12 3D special effect image generation method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811519475.6A CN109840881B (en) 2018-12-12 2018-12-12 3D special effect image generation method, device and equipment

Publications (2)

Publication Number Publication Date
CN109840881A CN109840881A (en) 2019-06-04
CN109840881B true CN109840881B (en) 2023-05-05

Family

ID=66883159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811519475.6A Active CN109840881B (en) 2018-12-12 2018-12-12 3D special effect image generation method, device and equipment

Country Status (1)

Country Link
CN (1) CN109840881B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110298813A (en) * 2019-06-28 2019-10-01 北京金山安全软件有限公司 Method and device for processing picture and electronic equipment
CN112419328B (en) * 2019-08-22 2023-08-04 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
WO2021174389A1 (en) * 2020-03-02 2021-09-10 深圳市大疆创新科技有限公司 Video processing method and apparatus
CN111526282A (en) * 2020-03-26 2020-08-11 香港光云科技有限公司 Method and device for shooting with adjustable depth of field based on flight time
CN112037121A (en) * 2020-08-19 2020-12-04 北京字节跳动网络技术有限公司 Picture processing method, device, equipment and storage medium
CN116322902A (en) * 2020-08-20 2023-06-23 上海联影医疗科技股份有限公司 Image registration system and method
CN112188058A (en) * 2020-09-29 2021-01-05 努比亚技术有限公司 Video shooting method, mobile terminal and computer storage medium
CN112272295B (en) * 2020-10-26 2022-06-10 腾讯科技(深圳)有限公司 Method for generating video with three-dimensional effect, method for playing video, device and equipment
CN115082366B (en) * 2021-03-12 2024-07-19 中国移动通信集团广东有限公司 Image synthesis method and system
CN112804516B (en) * 2021-04-08 2021-07-06 北京世纪好未来教育科技有限公司 Video playing method and device, readable storage medium and electronic equipment
CN113112608A (en) * 2021-04-20 2021-07-13 厦门汇利伟业科技有限公司 Method for automatically establishing three-dimensional model from object graph
CN114677386A (en) * 2022-03-25 2022-06-28 北京字跳网络技术有限公司 Special effect image processing method and device, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013162465A (en) * 2012-02-08 2013-08-19 Jvc Kenwood Corp Image processing device, image processing method, and image processing program
JP2013223008A (en) * 2012-04-13 2013-10-28 Canon Inc Image processing device and method
CN105374019A (en) * 2015-09-30 2016-03-02 华为技术有限公司 A multi-depth image fusion method and device
CN106254854A (en) * 2016-08-19 2016-12-21 深圳奥比中光科技有限公司 The preparation method of 3-D view, Apparatus and system
CN107154032A (en) * 2017-04-20 2017-09-12 腾讯科技(深圳)有限公司 A kind of image processing method and device
CN107610041A (en) * 2017-08-16 2018-01-19 南京华捷艾米软件科技有限公司 Video portrait based on 3D body-sensing cameras scratches drawing method and system
CN108322644A (en) * 2018-01-18 2018-07-24 努比亚技术有限公司 A kind of image processing method, mobile terminal and computer readable storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8610758B2 (en) * 2009-12-15 2013-12-17 Himax Technologies Limited Depth map generation for a video conversion system
CN103677828B (en) * 2013-12-10 2017-02-22 华为技术有限公司 Coverage drawing method, drawing engine and terminal equipment
CN105245774B (en) * 2015-09-15 2018-12-21 努比亚技术有限公司 A kind of image processing method and terminal
CN107230199A (en) * 2017-06-23 2017-10-03 歌尔科技有限公司 Image processing method, device and augmented reality equipment
CN107610077A (en) * 2017-09-11 2018-01-19 广东欧珀移动通信有限公司 Image processing method and device, electronic installation and computer-readable recording medium

Also Published As

Publication number Publication date
CN109840881A (en) 2019-06-04

Similar Documents

Publication Publication Date Title
CN109840881B (en) 3D special effect image generation method, device and equipment
CN109064428B (en) Image denoising processing method, terminal device and computer readable storage medium
WO2020119527A1 (en) Human action recognition method and apparatus, and terminal device and storage medium
WO2020001168A1 (en) Three-dimensional reconstruction method, apparatus, and device, and storage medium
CN107851321B (en) Image processing method and dual-camera system
US20110148868A1 (en) Apparatus and method for reconstructing three-dimensional face avatar through stereo vision and face detection
CN111311482B (en) Background blurring method and device, terminal equipment and storage medium
US8902229B2 (en) Method and system for rendering three dimensional views of a scene
CN110163831B (en) Method and device for dynamically displaying object of three-dimensional virtual sand table and terminal equipment
EP2259224A2 (en) Image processing apparatus, image processing method, and program
CN107563974B (en) Image denoising method and device, electronic equipment and storage medium
CN104010180B (en) Method and device for filtering three-dimensional video
CN111724481A (en) Method, device, equipment and storage medium for three-dimensional reconstruction of two-dimensional image
CN111583381B (en) Game resource map rendering method and device and electronic equipment
CN108961283A (en) Based on the corresponding image distortion method of feature and device
CN111275824A (en) Surface reconstruction for interactive augmented reality
CN111311481A (en) Background blurring method and device, terminal equipment and storage medium
CN113379815A (en) Three-dimensional reconstruction method and device based on RGB camera and laser sensor and server
CN113570725A (en) Three-dimensional surface reconstruction method and device based on clustering, server and storage medium
WO2024002064A1 (en) Method and apparatus for constructing three-dimensional model, and electronic device and storage medium
EP3497667A1 (en) Apparatus, method, and computer program code for producing composite image
CN112102169A (en) Infrared image splicing method and device and storage medium
KR20120118462A (en) Concave surface modeling in image-based visual hull
CN112884817B (en) Dense optical flow calculation method, dense optical flow calculation device, electronic device, and storage medium
CN114723796A (en) Three-dimensional point cloud generation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 11-13 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant after: Obi Zhongguang Technology Group Co.,Ltd.

Address before: 12 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant before: SHENZHEN ORBBEC Co.,Ltd.

GR01 Patent grant