CN107623832A - Video background replacement method, device and mobile terminal - Google Patents
- Publication number
- CN107623832A (application CN201710824539.2A)
- Authority
- CN
- China
- Prior art keywords
- user
- image
- background
- video
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
This application discloses a video background replacement method, a device, and a mobile terminal. The method includes: obtaining the current video picture of a second user who is in a video call with a first user; when it is determined that a third user other than the second user is present in the current video picture, obtaining a three-dimensional background image of the scene where the first user is located and a depth image of the first user; processing the three-dimensional background image and the depth image of the first user to extract the person region of the first user from the three-dimensional background image and obtain a person region image; and replacing the video background of the first user according to a preset video background and the person region image. In this way, the video background is replaced automatically during the video call, so that the third user cannot see the scene where the first user is located, the first user's personal privacy is prevented from being leaked, and the user experience is improved.
Description
Technical field
The present application relates to the field of communication technologies, and in particular to a video background replacement method, a device, and a mobile terminal.
Background
With the development of science and technology, the functions of terminals such as mobile phones and tablet computers have become increasingly powerful. For example, more and more terminals are equipped with cameras, with which users can take photos, record videos, video chat, and so on.
When a user video-chats with another party through the camera, the video picture shows not only the user but also the environment where the user is located, which may reveal some of the user's personal privacy. Therefore, during a video call, the user usually only wants the other party to see the user's own picture rather than the video background, and in particular does not want unrelated users to see the picture of the environment where the user is located.
However, during a two-party video call, when the user finds that other people appear in the other party's video picture, the user has to close the video call manually, which is inconvenient to operate; and if the call is not closed in time, the video picture may already have been seen. Therefore, it is of great significance to prevent the user's personal privacy information from being leaked during a video call, to protect the user's privacy, and to improve the user experience.
Summary of the invention
The purpose of the present application is to solve at least one of the above technical problems to at least some extent.
Therefore, a first objective of the present application is to propose a video background replacement method. During a video call, the method replaces the video background automatically, so that a third user cannot see the scene where the first user is located, personal privacy such as the first user's surroundings is prevented from being leaked, and the user experience is improved.
A second objective of the present application is to propose a video background replacement device.
A third objective of the present application is to propose a computer-readable storage medium.
A fourth objective of the present application is to propose a mobile terminal.
A fifth objective of the present application is to propose a computer program.
The video background replacement method of the embodiments of the first aspect of the present application includes: obtaining the current video picture of a second user who is in a video call with a first user; when it is determined that a third user other than the second user is present in the current video picture, obtaining a three-dimensional background image of the scene where the first user is located and a depth image of the first user; processing the three-dimensional background image and the depth image of the first user to extract the person region of the first user in the three-dimensional background image and obtain a person region image; and replacing the video background of the first user according to a preset video background and the person region image.
According to the video background replacement method of the embodiments of the present application, during the video call between the first user and the second user, when it is determined that a third user is present in the second user's video picture, the three-dimensional background image of the scene where the first user is located and the depth image of the first user are obtained; the three-dimensional background image and the depth image are processed to extract the person region of the first user in the three-dimensional background image and obtain a person region image; and the video background of the first user is replaced according to a preset video background and the person region image. In this way, the video background is replaced automatically during the video call, so that the third user cannot see the scene where the first user is located, personal privacy such as the first user's surroundings is prevented from being leaked, and the user experience is improved.
The video background replacement device of the embodiments of the second aspect of the present application includes: a first acquisition module, configured to obtain the current video picture of a second user who is in a video call with a first user; an image collection module, configured to obtain, when it is determined that a third user other than the second user is present in the current video picture, a three-dimensional background image of the scene where the first user is located and a depth image of the first user; a first processing module, configured to process the three-dimensional background image and the depth image of the first user to extract the person region of the first user in the three-dimensional background image and obtain a person region image; and a replacement module, configured to replace the video background of the first user according to a preset video background and the person region image.
According to the video background replacement device of the embodiments of the present application, during the video call between the first user and the second user, when it is determined that a third user is present in the second user's video picture, the three-dimensional background image of the scene where the first user is located and the depth image of the first user are obtained; the three-dimensional background image and the depth image are processed to extract the person region of the first user in the three-dimensional background image and obtain a person region image; and the video background of the first user is replaced according to a preset video background and the person region image. In this way, the video background is replaced automatically during the video call, so that the third user cannot see the scene where the first user is located, personal privacy such as the first user's surroundings is prevented from being leaked, and the user experience is improved.
The embodiments of the third aspect of the present application provide one or more non-volatile computer-readable storage media containing computer-executable instructions. When the computer-executable instructions are executed by one or more processors, the processors perform the video background replacement method of the embodiments of the first aspect of the present application.
The mobile terminal of the embodiments of the fourth aspect of the present application includes a memory and a processor. Computer-readable instructions are stored in the memory, and when the instructions are executed by the processor, the processor performs the video background replacement method of the embodiments of the first aspect of the present application.
According to the mobile terminal of the embodiments of the present application, during the video call between the first user and the second user, when it is determined that a third user is present in the second user's video picture, the three-dimensional background image of the scene where the first user is located and the depth image of the first user are obtained; the three-dimensional background image and the depth image are processed to extract the person region of the first user in the three-dimensional background image and obtain a person region image; and the video background of the first user is replaced according to a preset video background and the person region image. In this way, the video background is replaced automatically during the video call, so that the third user cannot see the scene where the first user is located, personal privacy such as the first user's surroundings is prevented from being leaked, and the user experience is improved.
The embodiments of the fifth aspect of the present application provide a computer program product. When the instructions in the computer program product are executed by a processor, the video background replacement method of the embodiments of the first aspect of the present application is performed.
Additional aspects and advantages of the present application will be set forth in part in the following description, will in part become apparent from the following description, or will be learned through practice of the present application.
Brief description of the drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of the embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a flowchart of a video background replacement method according to an embodiment of the present application;
Fig. 2 is a detailed flowchart of obtaining the depth image of the first user;
Fig. 3 is a detailed flowchart of demodulating the phase information corresponding to each pixel of the structured light image to obtain the depth image of the first user;
Fig. 4(a) to Fig. 4(e) are scene schematic diagrams of structured light measurement according to an embodiment of the present application;
Fig. 5(a) and Fig. 5(b) are scene schematic diagrams of structured light measurement according to an embodiment of the present application;
Fig. 6 is a detailed flowchart of processing the three-dimensional background image and the depth image of the first user to extract the person region of the first user in the three-dimensional background image and obtain the person region image;
Fig. 7 is a flowchart of a video background replacement method according to another embodiment of the present application;
Fig. 8 is a structural schematic diagram of a video background replacement device according to an embodiment of the present application;
Fig. 9 is a structural schematic diagram of a video background replacement device according to another embodiment of the present application;
Fig. 10 is a structural schematic diagram of a video background replacement device according to yet another embodiment of the present application;
Fig. 11 is a structural schematic diagram of a video background replacement device according to a further embodiment of the present application;
Fig. 12 is a schematic diagram of an image processing circuit according to an embodiment of the present application.
Detailed description of the embodiments
The embodiments of the present application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the accompanying drawings are exemplary; they are intended to explain the present application and should not be construed as limiting it.
The video background replacement method, device, mobile terminal, and computer-readable storage medium of the embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a video background replacement method according to an embodiment of the present application. The video background replacement method of this embodiment is applied in a terminal, where the terminal may be a hardware device with any of various operating systems, such as a mobile phone, a tablet computer, or an intelligent wearable device.
As shown in Fig. 1, the video background replacement method includes the following steps.
S11: obtain the current video picture of a second user who is in a video call with a first user.
Specifically, during the video call between the first user and the second user, the terminal of the first user obtains the current video picture of the second user from the video display interface.
For example, suppose the first user is user A and the second user is user B. During the video call between user A and user B, user A's terminal obtains user B's current video picture from the video display interface.
S12: when it is determined that a third user other than the second user is present in the current video picture, obtain a three-dimensional background image of the scene where the first user is located and a depth image of the first user.
As an exemplary embodiment, after the current video picture of the second user is obtained, whether a third user other than the second user is present in the current video picture can be determined by face recognition. Specifically, face recognition may be performed on the current video picture to obtain a face recognition result, and whether the current video picture contains faces other than the second user's face is judged according to the result; if it is judged that the current video picture contains faces other than the second user's face, it is determined that a third user other than the second user is present in the current video picture.
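The check of S12 can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the bounding boxes stand in for the output of any face detector run on the second user's current video picture, and the decision is simply whether more than one face is found.

```python
def third_user_present(face_boxes):
    """face_boxes: bounding boxes (x, y, w, h) returned by a face
    detector run on the second user's current video picture.
    If more than one face is detected, someone besides the second
    user is in the frame, so a third user is present."""
    return len(face_boxes) > 1

# Example: two faces detected in the remote frame -> third user present
boxes = [(40, 30, 64, 64), (160, 28, 60, 60)]
print(third_user_present(boxes))        # True
print(third_user_present(boxes[:1]))    # False
```

A real terminal would combine this count with the second user's face model (as in steps S71 to S74 below) rather than relying on the count alone.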
In order to accurately obtain the three-dimensional background image of the scene where the first user is located and the depth image of the first user, as an exemplary embodiment, both the three-dimensional background image and the depth image may be obtained by structured light.
The three-dimensional background image may be a color image, and the depth image of the first user contains the first user's depth information. The scene range of the three-dimensional background image is consistent with that of the depth image, and for each pixel in the three-dimensional background image, the depth information of the corresponding pixel can be found in the depth image.
As an exemplary embodiment, structured light may be projected onto the scene where the first user is located, a structured light image of the scene may be captured, the depth image of the scene may be obtained from the structured light image, and the three-dimensional background image of the scene may be generated from the depth image and the color information of the scene.
As an exemplary embodiment, as shown in Fig. 2, the process of obtaining the depth image of the first user may include:
S21: project structured light onto the first user.
S22: capture the structured light image modulated by the first user.
S23: demodulate the phase information corresponding to each pixel of the structured light image to obtain the depth image of the first user.
Specifically, while the first user is in a video call with the second user through the terminal, the structured light projector in the terminal of the first user projects structured light onto the first user; the structured light camera in the terminal then captures the structured light image modulated by the first user, and the phase information corresponding to each pixel of the structured light image is demodulated to obtain the depth image of the first user.
Specifically, after the structured light projector projects structured light of a certain pattern onto the face and body of the first user, a structured light image modulated by the first user is formed on the surface of the first user's face and body. The structured light camera captures the modulated structured light image, and the structured light image is then demodulated to obtain the depth image of the first user. The pattern of the structured light may be laser stripes, Gray codes, sinusoidal stripes, non-uniform speckles, and so on.
As an exemplary embodiment, as shown in Fig. 3, the process of demodulating the phase information corresponding to each pixel of the structured light image to obtain the depth image of the first user may include:
S31: demodulate the phase information corresponding to each pixel in the structured light image.
S32: convert the phase information into depth information.
S33: generate the depth image of the first user according to the depth information.
Specifically, compared with unmodulated structured light, the phase of the modulated structured light has changed, so the structured light shown in the structured light image is distorted, and the change in phase characterizes the depth information of the object. Therefore, the structured light camera first demodulates the phase information corresponding to each pixel in the structured light image, and then calculates the depth information from the phase information, thereby obtaining the depth image of the first user.
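The conversion of S32 can be sketched as follows. This is a simplified small-angle fringe-projection formula under illustrative parameters, not the patent's calibrated conversion: the unwrapped phase difference relative to the reference plane is scaled into a depth by the geometry of the setup.

```python
import numpy as np

def phase_to_depth(dphi, L=1.0, p=0.01, d=0.1):
    """Simplified fringe-projection conversion (small-angle
    approximation): depth relative to the reference plane is taken
    proportional to the unwrapped phase difference dphi.
    L: camera-to-reference-plane distance, p: fringe period on the
    reference plane, d: projector-camera baseline (all illustrative;
    in practice these come from calibration)."""
    return dphi * L * p / (2.0 * np.pi * d)

dphi = np.array([0.0, np.pi, 2.0 * np.pi])
print(phase_to_depth(dphi))   # increases monotonically, 0 at dphi = 0
```

The exact formula in a real system depends on the calibrated geometric and internal parameters described below; only the proportionality to the phase difference is essential here.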
In order to make the process of collecting the depth image of the face and body of the first user by structured light clearer to those skilled in the art, its concrete principle is illustrated below by taking a widely used grating projection technique (fringe projection technique) as an example. The grating projection technique belongs to surface structured light in the broad sense.
As shown in Fig. 4(a), when surface structured light is used for projection, sinusoidal stripes are first generated by computer programming and projected onto the measured object by the structured light projector; the structured light camera then captures the degree to which the stripes are bent after being modulated by the object, and the bent stripes are demodulated to obtain the phase, which is then converted into depth information to obtain the depth image. To avoid errors or error coupling, the structured light camera and the structured light projector must be calibrated before depth information is collected with structured light; the calibration includes the calibration of geometric parameters (for example, the relative position between the structured light camera and the structured light projector), the calibration of the internal parameters of the structured light camera, the calibration of the internal parameters of the structured light projector, and so on.
Specifically, in the first step, sinusoidal stripes are generated by computer programming. Since the distorted stripes will later be used to obtain the phase, for example by the four-step phase-shifting method, four stripe patterns with phase differences of π/2 are generated here; the structured light projector then projects the four patterns onto the measured object (the mask shown in Fig. 4(a)) in a time-sharing manner, and the structured light camera collects the image on the left of Fig. 4(b) while reading the stripes of the reference plane shown on the right of Fig. 4(b).
In the second step, phase recovery is carried out. The structured light camera calculates the modulated phase map from the four collected modulated stripe patterns (i.e., structured light images); what is obtained at this point is a wrapped phase map. Because the result of the four-step phase-shifting algorithm is calculated by the arctangent function, the phase of the modulated structured light is limited to [-π, π]; that is, whenever the modulated phase exceeds [-π, π], it wraps around again. The resulting principal phase values are shown in Fig. 4(c).
During phase recovery, jump-removal (unwrapping) processing is required to recover the wrapped phase into a continuous phase. As shown in Fig. 4(d), the left side is the modulated continuous phase map, and the right side is the reference continuous phase map.
In the third step, the phase difference (i.e., the phase information) is obtained by subtracting the reference continuous phase from the modulated continuous phase. This phase difference characterizes the depth information of the measured object relative to the reference plane. The phase difference is then substituted into the phase-to-depth conversion formula (the parameters involved in the formula are obtained by calibration) to obtain the three-dimensional model of the measured object as shown in Fig. 4(e).
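The phase recovery of the second step can be sketched on synthetic data, under the standard four-step assumption that the patterns are shifted by π/2: with intensities I_k = A + B·cos(φ + kπ/2), the wrapped phase is arctan2(I₄ − I₂, I₁ − I₃).

```python
import numpy as np

# Synthetic four-step phase shifting: four fringe images whose phase
# is shifted by pi/2 each, I_k = A + B*cos(phi + k*pi/2), k = 0..3.
phi_true = np.linspace(-0.9 * np.pi, 0.9 * np.pi, 200)  # within (-pi, pi)
A, B = 0.5, 0.4
I = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]

# Four-step formula: I0 - I2 = 2B*cos(phi), I3 - I1 = 2B*sin(phi),
# so the arctangent recovers the wrapped phase.
phi_wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])

print(np.allclose(phi_wrapped, phi_true))   # True: phase recovered
```

Because the test phase here stays inside (-π, π), no unwrapping is needed; on a real object, `np.unwrap` (or a 2-D unwrapping algorithm) would then remove the ±π jumps before the subtraction of the third step.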
It should be understood that, in actual applications, depending on the concrete application scene, the structured light employed in the embodiments of the present application may be any other pattern besides the above grating.
As a possible implementation, speckle structured light may also be used in the present application to collect the depth information of the first user.
Specifically, the method of obtaining depth information with speckle structured light uses a diffraction element that is essentially a flat plate. The diffraction element has a relief diffraction structure with a particular phase distribution, and its cross section has a stepped relief structure with two or more concave-convex steps. The thickness of the substrate in the diffraction element is approximately 1 micron, the heights of the steps are non-uniform, and the heights may range from 0.5 micron to 0.9 micron. The structure shown in Fig. 5(a) is the local diffraction structure of the collimating beam-splitting element of this embodiment; Fig. 5(b) is a cross-sectional side view along section A-A, with the abscissa and ordinate in microns. The speckle pattern generated by speckle structured light is highly random, and the pattern changes with distance. Therefore, before depth information is obtained with speckle structured light, the speckle patterns in space must first be calibrated. For example, within a range of 0 to 4 meters from the structured light camera, a reference plane is taken every 1 centimeter, so that 400 speckle images are saved after calibration; the smaller the calibration spacing, the higher the precision of the obtained depth information. Then, the structured light projector projects the speckle structured light onto the measured object (i.e., the first user), and the height differences of the surface of the measured object change the speckle pattern of the speckle structured light projected onto it. After the structured light camera captures the speckle pattern (i.e., the structured light image) projected onto the measured object, the speckle pattern is cross-correlated one by one with the 400 speckle images saved in the earlier calibration, yielding 400 correlation images. The position of the measured object in space shows a peak in the correlation images, and the depth information of the measured object can be obtained by superimposing the peaks and performing an interpolation operation.
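The matching step can be sketched as follows. This is an illustrative zero-mean normalized correlation of the captured speckle pattern against a small stack of calibrated reference images; in the embodiment there would be 400 planes spaced 1 cm apart, here just a few random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small stack of calibrated reference speckle images, one per
# reference plane (illustrative; the embodiment stores 400 of them).
depths_cm = [100, 101, 102, 103]
refs = [rng.random((32, 32)) for _ in depths_cm]

def match_depth(speckle, refs, depths_cm):
    """Zero-mean normalized correlation of the captured speckle
    pattern against each calibrated reference plane; the plane with
    the highest correlation score gives the depth."""
    s = speckle - speckle.mean()
    scores = [np.sum(s * (r - r.mean())) /
              (np.linalg.norm(s) * np.linalg.norm(r - r.mean()))
              for r in refs]
    return depths_cm[int(np.argmax(scores))]

# A pattern captured near the 102 cm plane: that plane's speckle
# plus a little sensor noise.
captured = refs[2] + 0.05 * rng.random((32, 32))
print(match_depth(captured, refs, depths_cm))   # 102
```

A real implementation correlates per pixel window to build the 400 correlation images and interpolates between peaks; the global score above only shows the winner-takes-all principle.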
An ordinary diffraction element produces many diffracted beams after diffracting a light beam, but the intensities of the diffracted beams differ greatly, and the risk of injury to human eyes is also large. Even if the diffracted light is diffracted again, the uniformity of the obtained beams is low, so the effect of projecting onto the measured object with beams diffracted by an ordinary diffraction element is poor. In this embodiment a collimating beam-splitting element is used. This element not only collimates non-collimated light but also splits it: the non-collimated light reflected by the mirror emerges from the collimating beam-splitting element as multiple collimated beams at different angles, and the cross-sectional areas and energy fluxes of the emerging collimated beams are approximately equal, so the projection effect of the speckle light diffracted from these beams is better. At the same time, the emitted laser light is dispersed over multiple beams, further reducing the risk of injuring human eyes; and compared with other uniformly arranged structured light, speckle structured light consumes less power while achieving the same collection effect.
S13: process the three-dimensional background image and the depth image of the first user to extract the person region of the first user in the three-dimensional background image and obtain the person region image.
In an embodiment of the present application, as shown in Fig. 6, the process of processing the three-dimensional background image and the depth image of the first user to extract the person region of the first user in the three-dimensional background image and obtain the person region image may include:
S61: identify the face region in the three-dimensional background image.
S62: obtain the depth information corresponding to the face region from the depth image of the first user.
S63: determine the depth range of the person region according to the depth information of the face region.
S64: determine, according to the depth range of the person region, the person region that is connected to the face region and falls within the depth range, so as to obtain the person region image.
The face region in the three-dimensional background image is identified with a trained deep learning model, and the depth information of the face region can then be determined according to the correspondence between the three-dimensional background image and the depth image. Because the face region contains features such as the nose, eyes, ears, and lips, the depth data corresponding to each feature of the face region in the depth image are different; for example, when the face faces the structured light camera, in the depth image captured by the camera, the depth data corresponding to the nose may be smaller while the depth data corresponding to the ears may be larger. Therefore, the depth information of the face region may be a single value or a range of values. When the depth information of the face region is a single value, the value may be obtained by averaging the depth data of the face region, or by taking the median of the depth data of the face region.
Because the person region contains the face region — in other words, the person region and the face region lie within a certain depth range together — after the depth information of the face region is determined, the depth range of the person region can be set according to the depth information of the face region, and the person region that falls within this depth range and is connected to the face region is then extracted according to the depth range to obtain the person region image.
In this way, the person region image can be extracted from the three-dimensional background image according to the depth information. Because the acquisition of depth information is not affected by environmental factors such as illumination and color temperature, the extracted person region image is more accurate.
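Steps S61 to S64 can be sketched on a toy depth map. The face seed, the depth margin, and the 4-neighbour flood fill are illustrative choices: a pixel belongs to the person region only if it falls inside the depth range derived from the face depth and is connected to the face region.

```python
import numpy as np
from collections import deque

def person_mask(depth, face_seed, margin=0.5):
    """Flood-fill from the face seed over 4-connected pixels whose
    depth lies within [face_depth - margin, face_depth + margin]
    (S63/S64): the region must be both in range and connected."""
    face_depth = depth[face_seed]
    in_range = np.abs(depth - face_depth) <= margin
    mask = np.zeros(depth.shape, dtype=bool)
    q = deque([face_seed])
    mask[face_seed] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < depth.shape[0] and 0 <= nx < depth.shape[1]
                    and in_range[ny, nx] and not mask[ny, nx]):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

# Toy scene: background at 3.0 m, person at 1.0 m around the face
# seed, and a disconnected object at 1.0 m that must be excluded.
depth = np.full((6, 8), 3.0)
depth[1:5, 1:4] = 1.0          # person, connected to the face seed
depth[1:3, 6:8] = 1.0          # same depth, but not connected
mask = person_mask(depth, face_seed=(2, 2))
print(mask.sum())              # 12 person pixels; the far object is excluded
```

The connectivity requirement is what keeps the same-depth but disconnected object out of the person region image.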
S14: replace the video background of the first user according to the preset video background and the person region image.
The preset video background may be a video background set by default in the terminal, or a video background preconfigured in the terminal by the user according to the user's needs. The preset video background may be a solid background of a preset color or a virtual scene background; this embodiment does not limit the preset video background. It should also be understood that the preset video background may be a two-dimensional background image or a three-dimensional background image, which this embodiment likewise does not limit.
Specifically, after the person region image is obtained from the three-dimensional background image, the preset video background and the person region image can be merged to generate the new video background of the first user. The new video background of the first user is then shown in the first user's video picture in the video display interface of the video call. In this way, the video background is replaced automatically during the video call, so that the third user cannot see the scene where the first user is located, personal privacy such as the first user's surroundings is prevented from being leaked, the first user's personal privacy is protected, and the user experience is improved.
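The merge of S14 amounts to a per-pixel composite: person pixels come from the person region image, everything else from the preset background. A minimal sketch with a boolean person mask (names and shapes are illustrative):

```python
import numpy as np

def replace_background(frame, mask, preset_bg):
    """frame: H x W x 3 current picture of the first user;
    mask: H x W boolean person-region mask from S13;
    preset_bg: H x W x 3 preset video background.
    Person pixels are kept; background pixels are replaced."""
    return np.where(mask[..., None], frame, preset_bg)

frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1:3, 1:3] = 200                              # the "person"
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
preset_bg = np.full((4, 4, 3), 30, dtype=np.uint8)  # solid preset background

out = replace_background(frame, mask, preset_bg)
print(out[2, 2, 0], out[0, 0, 0])                  # 200 30
```

With a soft (alpha) mask instead of a boolean one, the same line becomes a weighted blend, which hides the hard silhouette edge.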
According to the video background replacement method provided by the embodiments of the present application, during the video call between the first user and the second user, when it is determined that a third user is present in the second user's video picture, the three-dimensional background image of the scene where the first user is located and the depth image of the first user are obtained; the three-dimensional background image and the depth image are processed to extract the person region of the first user in the three-dimensional background image and obtain a person region image; and the video background of the first user is replaced according to a preset video background and the person region image. In this way, the video background is replaced automatically during the video call, so that the third user cannot see the scene where the first user is located, personal privacy such as the first user's surroundings is prevented from being leaked, and the user experience is improved.
In one embodiment of the application, in order to accurately determine in current video picture whether there is except second user it
The 3rd outer user, as shown in fig. 7, this method can also include:
S71: obtaining a three-dimensional face model of the second user.
The three-dimensional face model is established by the terminal of the second user by projecting structured light onto the second user.
Specifically, during the video call, the terminal of the first user obtains the three-dimensional face model of the second user by receiving it from the terminal of the second user.
The process by which the terminal of the second user (hereinafter referred to as the second user terminal for ease of description) establishes the three-dimensional face model through structured light is as follows: the structured light projector in the second user terminal projects structured light onto the second user; the structured light camera then captures a structured light image of the second user and determines the depth image of the second user from the structured light image; finally, the three-dimensional face model of the second user is generated from the depth image of the second user.
S72: performing face recognition on the current video picture to obtain the face regions in the current video picture.
S73: judging, according to the three-dimensional face model of the second user, whether the face regions in the current video picture include a face region other than the face region of the second user; if so, performing step S74.
S74: determining that a third user other than the second user is present in the current video picture.
As an exemplary embodiment, after the face regions in the current video picture are obtained, it may be judged whether the face feature information in each face region matches the three-dimensional face model of the second user. If not, it is determined that a face region other than that of the second user is present in the current video picture; if so, it is determined that only the face region of the second user is present in the current video picture.
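The matching decision in this exemplary embodiment can be sketched as a simple descriptor comparison. The cosine-similarity matcher, the descriptor vectors, and the 0.6 threshold below are all illustrative assumptions — the patent specifies matching against a three-dimensional face model, not any particular feature representation:

```python
import numpy as np

def has_intruder(face_descriptors, known_descriptor, threshold=0.6):
    """Return True if any detected face fails to match the known second
    user's descriptor (hypothetical cosine-similarity matcher)."""
    known = known_descriptor / np.linalg.norm(known_descriptor)
    for desc in face_descriptors:
        d = desc / np.linalg.norm(desc)
        if float(np.dot(d, known)) < threshold:
            return True  # a face region that is not the second user
    return False
```

A real system would compute such descriptors from each detected face region and from the stored three-dimensional face model before comparing them.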
On the basis of the above embodiments, in one embodiment of the application, in order to intelligently restore the user's original video background and save the user the trouble of restoring it manually, after the video background of the first user has been replaced according to the preset video background and the person region image, the video background of the first user is switched back to the original video background when it is detected that only the second user is present in the current video picture.
That is, after the video background of the first user has been switched to the preset video background, once it is detected again that only the second user is present in the current video picture of the second user, i.e. once the intruder into the second user's video (the third user) is detected to have left, the video background of the first user can be intelligently switched back to the original video background. The user therefore does not need to restore the original video background manually, which improves the user experience.
On the basis of the above embodiments, in one embodiment of the application, in order to allow the user to switch back to the original video background conveniently, after the video background of the first user has been replaced according to the preset video background and the person region image, when it is detected that only the second user is present in the current video picture, a prompt asking whether to switch the video background of the first user back to the original video background is displayed, and the video background of the first user is switched back to the original video background upon receiving a confirmation instruction of the first user to restore the video background.
In order to implement the above embodiments, the application further proposes a video background replacement device.
Fig. 8 is a schematic structural diagram of a video background replacement device according to one embodiment of the application.
As shown in Fig. 8, the video background replacement device of the embodiment of the application may include a first acquisition module 110, an image acquisition module 120, a first processing module 130, and a replacement module 140, wherein:
The first acquisition module 110 is configured to obtain the current video picture of the second user in a video call with the first user.
The image acquisition module 120 is configured to, when it is determined that a third user other than the second user is present in the current video picture, obtain the three-dimensional background image of the scene where the first user is located and obtain the depth image of the first user.
The first processing module 130 is configured to process the three-dimensional background image and the depth image of the first user to extract the person region of the first user in the three-dimensional background image and obtain a person region image.
The replacement module 140 is configured to replace the video background of the first user according to a preset video background and the person region image.
In one embodiment of the application, on the basis of Fig. 8 and as shown in Fig. 9, the image acquisition module 120 may include a structured light projector 121 and a structured light camera 122, wherein:
The structured light projector 121 is configured to project structured light onto the first user.
The structured light camera 122 is configured to capture the structured light image modulated by the first user, and to demodulate the phase information corresponding to each pixel of the structured light image to obtain the depth image of the first user.
As an exemplary embodiment, the structured light camera 122 is specifically configured to: demodulate the phase information corresponding to each pixel in the structured light image, convert the phase information into depth information, and generate the depth image of the first user from the depth information.
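The phase-to-depth conversion can be illustrated under a deliberately simplified model. Real phase-shift profilometry involves phase unwrapping and a calibrated geometric model; the linear scale factor `k` below is a made-up calibration constant used only to show the shape of the computation:

```python
import numpy as np

def phase_to_depth(phase, ref_phase, k=50.0):
    """Convert per-pixel demodulated phase (radians) to depth (mm).
    Simplified linear model: depth is proportional to the deviation from
    the reference-plane phase; k is an assumed calibration constant."""
    return k * (np.asarray(phase) - ref_phase)
```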
In one embodiment of the application, the first processing module 130 is specifically configured to: identify the face region in the three-dimensional background image; obtain depth information corresponding to the face region from the depth image of the first user; determine the depth range of the person region according to the depth information of the face region; and determine, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range, so as to obtain the person region image.
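The depth-range step of the first processing module 130 can be sketched as follows. The connected-component check ("connected with the face region") is omitted for brevity, and the 300 mm margin used to widen the face depth into a person depth range is an assumed tolerance, not a value from the source:

```python
import numpy as np

def extract_person_mask(depth, face_box, margin=300.0):
    """Segment the person by depth.  depth: HxW array in mm;
    face_box: (x0, y0, x1, y1) face region from face recognition.
    Takes the median depth inside the face region, widens it into a
    range, and keeps pixels whose depth falls within that range."""
    x0, y0, x1, y1 = face_box
    face_depth = np.median(depth[y0:y1, x0:x1])
    lo, hi = face_depth - margin, face_depth + margin
    return (depth >= lo) & (depth <= hi)
```

A full implementation would additionally keep only the connected component of this mask that touches the face region, as the embodiment describes.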
In one embodiment of the application, in order to accurately determine whether a third user is present in the current video picture of the second user, on the basis of Fig. 8 and as shown in Fig. 10, the device may further include a second acquisition module 150, an identification module 160, and a judging module 170, wherein:
The second acquisition module 150 is configured to obtain the three-dimensional face model of the second user.
The three-dimensional face model is established by the terminal of the second user by projecting structured light onto the second user.
The identification module 160 is configured to perform face recognition on the current video picture to obtain the face regions in the current video picture.
The judging module 170 is configured to judge, according to the three-dimensional face model of the second user, whether the face regions in the current video picture include a face region other than the face region of the second user.
As an exemplary embodiment, the judging module 170 is specifically configured to judge whether the face feature information in each face region in the current video picture matches the three-dimensional face model of the second user; if not, it determines that a face region other than that of the second user is present in the current video picture; if so, it determines that only the face region of the second user is present in the current video picture.
The image acquisition module 120 is further configured to, when it is judged that the face regions in the current video picture include a face region other than that of the second user, determine that a third user other than the second user is present in the current video picture, and perform the step of obtaining the three-dimensional background image of the scene where the first user is located.
It should be noted that the structures of the second acquisition module 150, the identification module 160, and the judging module 170 in the device embodiment shown in Fig. 10 may also be included in the device embodiment of Fig. 9, which is not limited in this application.
In one embodiment of the application, in order to allow the user to switch the video background back to the original video background conveniently, on the basis of Fig. 8 and as shown in Fig. 11, the device may further include:
a second processing module 180, configured to switch the video background of the first user back to the original video background when it is detected that only the second user is present in the current video picture; or, when it is detected that only the second user is present in the current video picture, to display a prompt asking whether to switch the video background of the first user back to the original video background, and to switch the video background of the first user back to the original video background upon receiving a confirmation instruction of the first user to restore the video background.
It should be noted that the structure of the second processing module 180 in the device embodiment shown in Fig. 11 may also be included in the device embodiments of Figs. 9-10, which is not limited in this application.
It should be noted that the foregoing explanation of the video background replacement method embodiments also applies to the video background replacement device of these embodiments; the implementation principle is similar and is not repeated here.
According to the video background replacement device of the embodiment of the application, during the video call between the first user and the second user, when it is determined that a third user is present in the second user's video picture, the three-dimensional background image of the scene where the first user is located is obtained, along with the depth image of the first user; the three-dimensional background image and the depth image of the first user are processed to extract the person region of the first user in the three-dimensional background image and obtain a person region image; and the video background of the first user is replaced according to a preset video background and the person region image. Thus, during the video call, the video background is replaced automatically, so that the third user cannot view the scene where the first user is located; personal privacy such as the first user's surroundings is kept from being leaked, the personal privacy of the first user is protected, and the user experience is improved.
In order to implement the above embodiments, the application further proposes a mobile terminal, which includes the video background replacement device of the second-aspect embodiments of the application.
According to the mobile terminal of the embodiment of the application, during the video call between the first user and the second user, when it is determined that a third user is present in the second user's video picture, the three-dimensional background image of the scene where the first user is located is obtained, along with the depth image of the first user; the three-dimensional background image and the depth image of the first user are processed to extract the person region of the first user in the three-dimensional background image and obtain a person region image; and the video background of the first user is replaced according to a preset video background and the person region image. Thus, during the video call, the video background is replaced automatically, so that the third user cannot view the scene where the first user is located; personal privacy such as the first user's surroundings is kept from being leaked, the personal privacy of the first user is protected, and the user experience is improved.
The embodiment of the application further provides one or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processor(s) to perform the foregoing video background replacement method.
In order to implement the above embodiments, the application further proposes a mobile terminal.
The above mobile terminal includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 12 is a schematic diagram of the image processing circuit according to one embodiment of the application. As shown in Fig. 12, for ease of illustration, only the aspects of the image processing technology related to the embodiment of the application are shown.
As shown in Fig. 12, the image processing circuit of the mobile terminal 1200 includes an imaging device 10, an ISP processor 30, and a control logic device 40. The imaging device 10 may include the image acquisition module 120.
Specifically, the image acquisition module 120 may include the structured light projector 121 and the structured light camera 122. The structured light projector 121 projects structured light onto the first user and the scene where the first user is located. The structured light pattern may be laser stripes, a Gray code, sinusoidal stripes, a randomly arranged speckle pattern, or the like. The structured light camera 122 may include an image sensor 1221 and a lens 1222, where the number of lenses 1222 may be one or more. The image sensor 1221 is configured to capture the structured light image that the structured light projector 121 projects onto the first user. The structured light image may be sent by the image acquisition module 120 to the ISP processor 30 for demodulation, phase recovery, phase-information calculation, and other processing to obtain the depth information of the first user.
The image sensor 1221 is also configured to capture the structured light image that the structured light projector 121 projects onto the measured object in the scene where the first user is located, and to send the structured light image to the ISP processor 30, which demodulates it to obtain the depth information of the measured object. Meanwhile, the image sensor 1221 may also capture the color information of the measured object. Of course, the structured light image and the color information of the measured object may also be captured by two separate image sensors 1221.
Taking speckle structured light as an example, the ISP processor 30 demodulates the structured light image as follows: the speckle image of the measured object is extracted from the structured light image; image-data calculation is performed on the speckle image of the measured object and a reference speckle image according to a predetermined algorithm to obtain the displacement of each speckle point of the speckle image on the measured object relative to the corresponding reference speckle point in the reference speckle image; the depth value of each speckle point of the speckle image is then calculated by triangulation, and the depth information of the measured object is obtained from these depth values.
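The triangulation step admits a compact illustration: once the displacement (disparity) of a speckle point against the reference image is known, depth follows from the classic relation Z = f·b/d. The parameter names below are illustrative; a calibrated system would also account for the reference-plane geometry:

```python
def speckle_depth(disparity_px, focal_px, baseline_mm):
    """Triangulation: convert a speckle point's measured displacement
    (disparity, in pixels) into depth via Z = f * b / d.
    focal_px: focal length in pixels; baseline_mm: projector-camera
    baseline in mm.  Both are assumed calibration values."""
    if disparity_px <= 0:
        raise ValueError("speckle point not matched")
    return focal_px * baseline_mm / disparity_px
```

Larger displacements correspond to nearer points, which is why the person in the foreground separates cleanly from the background.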
Of course, the depth image information may also be obtained by binocular vision or by a time-of-flight (TOF) based method, which is not limited here; any method by which the depth information of the measured object can be obtained or calculated falls within the scope of this embodiment.
After the ISP processor 30 receives the color information of the measured object captured by the image sensor 1221, it may process the image data corresponding to that color information. The ISP processor 30 analyses the image data to obtain image statistics that can be used to determine one or more control parameters of the imaging device 10. The image sensor 1221 may include a color filter array (such as a Bayer filter); the image sensor 1221 may obtain the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that can be processed by the ISP processor 30.
The ISP processor 30 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits; the ISP processor 30 may perform one or more image processing operations on the raw image data and collect image statistics about the image data. The image processing operations may be carried out at the same or different bit-depth precisions.
The ISP processor 30 may also receive pixel data from the image sensor 1221. The image sensor 1221 may be part of a memory device, a storage device, or an independent dedicated memory in the electronic equipment, and may include a DMA (Direct Memory Access) feature.
When the raw image data is received, the ISP processor 30 may perform one or more image processing operations.
After the ISP processor 30 obtains the color information and the depth information of the measured object, the two may be fused to obtain a three-dimensional image. The features of the measured object may be extracted by at least one of an appearance-contour extraction method or a contour-feature extraction method, for example by the active shape model (ASM) method, the active appearance model (AAM) method, principal component analysis (PCA), or the discrete cosine transform (DCT) method, which are not limited here. Registration and feature fusion are then performed on the features extracted from the depth information and the features extracted from the color information. The fusion referred to here may directly combine the features extracted from the depth information and the color information, or may combine identical features from the different images after weighting; other fusion methods are also possible. Finally, the three-dimensional image is generated from the fused features.
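The weighted variant of the fusion described above can be sketched in a few lines; the equal default weighting is an assumption, as the source does not fix the weights:

```python
import numpy as np

def fuse_features(depth_feat, color_feat, w_depth=0.5):
    """Weighted fusion of registered depth and color feature vectors.
    Assumes both vectors describe the same (already registered) feature;
    w_depth is an assumed weight, not a value from the source."""
    return w_depth * np.asarray(depth_feat) + (1.0 - w_depth) * np.asarray(color_feat)
```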
The image data of the three-dimensional image may be sent to the video memory 20 for additional processing before being displayed. The ISP processor 30 receives the processed data from the video memory 20 and performs image-data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data of the three-dimensional image may be output to the display 60 for viewing by the user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 30 may also be sent to the video memory 20, and the display 60 may read the image data from the video memory 20. In one embodiment, the video memory 20 may be configured to implement one or more frame buffers. The output of the ISP processor 30 may also be sent to the encoder/decoder 50 to encode or decode the image data. The encoded image data may be saved and decompressed before being displayed on the display 60. The encoder/decoder 50 may be implemented by a CPU or GPU or a coprocessor.
The image statistics determined by the ISP processor 30 may be sent to the control logic device 40. The control logic device 40 may include a processor and/or a microcontroller executing one or more routines (such as firmware), and the one or more routines may determine the control parameters of the imaging device 10 according to the received image statistics.
The steps of implementing the video background replacement method with the image processing technology of Fig. 12 are as follows:
S1': obtaining the current video picture of the second user in a video call with the first user;
S2': when it is determined that a third user other than the second user is present in the current video picture, obtaining the three-dimensional background image of the scene where the first user is located, and obtaining the depth image of the first user;
S3': processing the three-dimensional background image and the depth image of the first user to extract the person region of the first user in the three-dimensional background image and obtain a person region image;
S4': replacing the video background of the first user according to a preset video background and the person region image.
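Once steps S1'-S3' have produced a person region image, step S4' reduces to per-pixel compositing. A minimal sketch, assuming an H×W boolean person mask and H×W×3 frames:

```python
import numpy as np

def replace_background(frame, person_mask, preset_bg):
    """Final compositing step (S4'): keep the person pixels from the
    current frame, replace everything else with the preset background.
    frame, preset_bg: HxWx3 arrays; person_mask: HxW boolean array."""
    mask3 = person_mask[..., None]          # broadcast mask over channels
    return np.where(mask3, frame, preset_bg)
```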
It should be noted that the foregoing explanation of the video background replacement method embodiments also applies to the mobile terminal of this embodiment; the implementation principle is similar and is not repeated here.
The application also provides a computer program product which, when the instructions in the computer program product are executed by a processor, performs the foregoing video background replacement method.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that the specific features, structures, materials, or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Moreover, the described specific features, structures, materials, or characteristics may be combined in an appropriate manner in any one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine the different embodiments or examples described in this specification and the features of the different embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the application, "multiple" means at least two, such as two or three, unless specifically defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood to represent a module, fragment, or portion of code including one or more executable instructions for implementing the steps of a specific logical function or process, and the scope of the preferred embodiments of the application includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially simultaneously or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the application belong.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered list of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, device, or equipment (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, device, or equipment). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, device, or equipment. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection portion with one or more wirings (an electronic device), a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that the parts of the application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
Those of ordinary skill in the art will appreciate that all or part of the steps carried by the above method embodiments may be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the application may be integrated in one processing module, or the units may exist physically separately, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the application have been shown and described above, it is to be understood that the above embodiments are exemplary and are not to be construed as limiting the application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the application.
Claims (12)
1. A video background replacement method, characterized in that the method comprises the following steps:
obtaining a current video picture of a second user in a video call with a first user;
when it is determined that a third user other than the second user is present in the current video picture, obtaining a three-dimensional background image of the scene where the first user is located, and obtaining a depth image of the first user;
processing the three-dimensional background image and the depth image of the first user to extract a person region of the first user in the three-dimensional background image and obtain a person region image; and
replacing a video background of the first user according to a preset video background and the person region image.
2. The method according to claim 1, characterized in that obtaining the depth image of the first user comprises:
projecting structured light onto the first user;
capturing a structured light image modulated by the first user; and
demodulating phase information corresponding to each pixel of the structured light image to obtain the depth image of the first user.
3. The method according to claim 1, characterized in that processing the three-dimensional background image and the depth image of the first user to extract the person region of the first user in the three-dimensional background image and obtain the person region image comprises:
identifying a face region in the three-dimensional background image;
obtaining depth information corresponding to the face region from the depth image of the first user;
determining a depth range of the person region according to the depth information of the face region; and
determining, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range, so as to obtain the person region image.
4. The method according to claim 1, characterized in that the method further comprises:
obtaining a three-dimensional face model of the second user, wherein the three-dimensional face model is established by a terminal of the second user by projecting structured light onto the second user;
performing face recognition on the current video picture to obtain face regions in the current video picture; and
judging, according to the three-dimensional face model of the second user, whether the face regions in the current video picture include a face region other than the face region of the second user;
wherein, if it is judged that the face regions in the current video picture include a face region other than the face region of the second user, it is determined that a third user other than the second user is present in the current video picture, and the step of obtaining the three-dimensional background image of the scene where the first user is located is performed.
5. The method according to any one of claims 1-4, characterized in that, after replacing the video background of the first user according to the preset video background and the person region image, the method further comprises:
when it is detected that only the second user is present in the current video picture, switching the video background of the first user back to the original video background; or
when it is detected that only the second user is present in the current video picture, displaying a prompt asking whether to switch the video background of the first user back to the original video background, and, upon receiving a confirmation instruction of the first user to restore the video background, switching the video background of the first user back to the original video background.
6. A video background replacement device, characterized by comprising:
a first acquisition module, configured to obtain a current video picture of a second user in a video call with a first user;
an image acquisition module, configured to, when it is determined that a third user other than the second user is present in the current video picture, obtain a three-dimensional background image of the scene where the first user is located and obtain a depth image of the first user;
a first processing module, configured to process the three-dimensional background image and the depth image of the first user to extract a person region of the first user in the three-dimensional background image and obtain a person region image; and
a replacement module, configured to replace a video background of the first user according to a preset video background and the person region image.
7. The device according to claim 6, characterized in that the image acquisition module comprises a structured light projector and a structured light camera, wherein:
the structured light projector is configured to project structured light onto the first user; and
the structured light camera is configured to capture a structured light image modulated by the first user, and to demodulate phase information corresponding to each pixel of the structured light image to obtain the depth image of the first user.
8. The device according to claim 6, characterized in that the first processing module is specifically configured to:
identify a face region in the three-dimensional background image;
obtain depth information corresponding to the face region from the depth image of the first user;
determine a depth range of the person region according to the depth information of the face region; and
determine, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range, so as to obtain the person region image.
9. The device according to claim 6, characterised in that the device further comprises:
a second acquisition module, configured to obtain a three-dimensional face model of the second user, wherein the three-dimensional face model is established by the second user's terminal by projecting structured light onto the second user;
an identification module, configured to perform face recognition on the current video picture to obtain the face regions in the current video picture; and
a judgement module, configured to judge, according to the three-dimensional face model of the second user, whether the face regions in the current video picture include a face region other than that of the second user;
wherein the image capture module is further configured to, when it is judged that the face regions in the current video picture include a face region other than that of the second user, determine that a third user other than the second user is present in the current video picture, and perform the step of obtaining the three-dimensional background image of the scene where the first user is located.
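The judgement step of claim 9 amounts to checking whether any detected face fails to match the second user's face model. A sketch of that decision, assuming each detected face has been reduced to a feature vector compared against a reference vector derived from the 3D face model (the names, distance metric, and threshold are all illustrative assumptions):

```python
import numpy as np

def third_user_present(face_embeddings, second_user_embedding, threshold=0.6):
    """Return True if the current video picture contains a face other than
    the second user's: any face whose embedding distance to the second
    user's reference exceeds `threshold` is treated as a non-match."""
    ref = np.asarray(second_user_embedding, dtype=np.float64)
    for emb in face_embeddings:
        dist = float(np.linalg.norm(np.asarray(emb, dtype=np.float64) - ref))
        if dist > threshold:
            return True  # at least one face does not match: a third user
    return False
```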
10. The device according to any one of claims 6-9, characterised in that the device further comprises:
a second processing module, configured to, when it is detected that only the second user is present in the current video picture, switch the video background of the first user back to the original video background; or, when it is detected that only the second user is present in the current video picture, display a prompt asking whether to switch the video background of the first user back to the original video background, and, upon receiving the first user's confirmation instruction to restore the video background, switch the video background of the first user back to the original video background.
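Claim 10 defines two restore behaviours: automatic switch-back, or a confirmation prompt, once only the second user remains in the picture. A sketch of that decision as a small selector (the function name, return values, and the `auto_restore` flag are illustrative assumptions):

```python
def select_background(faces_in_frame, auto_restore=True):
    """Choose what the first user's side should show.

    Returns 'preset' while a third user may be present (more than one
    face), 'original' when only the second user remains and restore is
    automatic, or 'prompt' when the first user must confirm the restore.
    """
    only_second_user = (faces_in_frame == 1)
    if not only_second_user:
        return "preset"
    return "original" if auto_restore else "prompt"
```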
11. One or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to perform the video background replacement method according to any one of claims 1 to 5.
12. A mobile terminal, comprising a memory and a processor, wherein computer-readable instructions are stored in the memory and, when executed by the processor, cause the processor to perform the video background replacement method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710824539.2A CN107623832A (en) | 2017-09-11 | 2017-09-11 | Video background replacement method, device and mobile terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107623832A true CN107623832A (en) | 2018-01-23 |
Family
ID=61089508
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710824539.2A Pending CN107623832A (en) | 2017-09-11 | 2017-09-11 | Video background replacement method, device and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107623832A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108765529A (en) * | 2018-05-04 | 2018-11-06 | 北京比特智学科技有限公司 | Video generation method and device |
CN110266994A (en) * | 2019-06-26 | 2019-09-20 | 广东小天才科技有限公司 | A kind of video call method, video conversation apparatus and terminal |
CN110298862A (en) * | 2018-03-21 | 2019-10-01 | 广东欧珀移动通信有限公司 | Method for processing video frequency, device, computer readable storage medium and computer equipment |
CN111614930A (en) * | 2019-02-22 | 2020-09-01 | 浙江宇视科技有限公司 | Video monitoring method, system, equipment and computer readable storage medium |
CN112615979A (en) * | 2020-12-07 | 2021-04-06 | 江西欧迈斯微电子有限公司 | Image acquisition method, image acquisition apparatus, electronic apparatus, and storage medium |
CN113411537A (en) * | 2021-06-25 | 2021-09-17 | Oppo广东移动通信有限公司 | Video call method, device, terminal and storage medium |
CN115482308A (en) * | 2022-11-04 | 2022-12-16 | 平安银行股份有限公司 | Image processing method, computer device, and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102663810A (en) * | 2012-03-09 | 2012-09-12 | 北京航空航天大学 | Full-automatic modeling approach of three dimensional faces based on phase deviation scanning |
CN104378553A (en) * | 2014-12-08 | 2015-02-25 | 联想(北京)有限公司 | Image processing method and electronic equipment |
CN104982029A (en) * | 2012-12-20 | 2015-10-14 | 微软技术许可有限责任公司 | Camera With Privacy Modes |
CN105793857A (en) * | 2013-12-12 | 2016-07-20 | 微软技术许可有限责任公司 | Access tracking and restriction |
CN105872448A (en) * | 2016-05-31 | 2016-08-17 | 宇龙计算机通信科技(深圳)有限公司 | Display method and device of video images in video calls |
2017-09-11: CN CN201710824539.2A patent/CN107623832A/en, status: Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107623817B (en) | Video background processing method, device and mobile terminal | |
CN107623832A (en) | Video background replacement method, device and mobile terminal | |
CN107592490A (en) | Video background replacement method, device and mobile terminal | |
CN107734267B (en) | Image processing method and device | |
CN107529096A (en) | Image processing method and device | |
CN107682607A (en) | Image acquiring method, device, mobile terminal and storage medium | |
CN107610077A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107493428A (en) | Filming control method and device | |
CN107734264B (en) | Image processing method and device | |
WO2019047985A1 (en) | Image processing method and device, electronic device, and computer-readable storage medium | |
CN107707835A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107509043B (en) | Image processing method, image processing apparatus, electronic apparatus, and computer-readable storage medium | |
CN107509045A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107707831A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107707838A (en) | Image processing method and device | |
CN107705278A (en) | The adding method and terminal device of dynamic effect | |
CN107610078A (en) | Image processing method and device | |
CN107592491B (en) | Video communication background display method and device | |
CN107613239B (en) | Video communication background display method and device | |
CN107682656B (en) | Background image processing method, electronic device, and computer-readable storage medium | |
CN107622496A (en) | Image processing method and device | |
CN107613228A (en) | The adding method and terminal device of virtual dress ornament | |
CN107613383A (en) | Video volume adjusting method, device and electronic installation | |
CN107610076A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107622192A (en) | Video pictures processing method, device and mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20180123 |