CN113362434A - Image processing method and device, electronic equipment and storage medium - Google Patents

Image processing method and device, electronic equipment and storage medium

Info

Publication number
CN113362434A
CN113362434A
Authority
CN
China
Prior art keywords
target
texture feature
point
texture
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110597268.8A
Other languages
Chinese (zh)
Inventor
武珊珊 (Wu Shanshan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110597268.8A
Publication of CN113362434A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Abstract

The disclosure relates to an image processing method, an image processing device, an electronic device and a storage medium, relates to the technical field of computers, and aims to solve the problem that the application range of a special effect plug-in is limited. The method comprises the following steps: respectively acquiring a first video frame in a first video stream and a second video frame in a second video stream; in response to a selection operation of a target account, determining a first object in the first video frame as a target object; performing key point identification on the target object to obtain corresponding texture feature information; and performing texture feature replacement on a second object in the second video stream by using the texture feature information to obtain a target video stream containing a target texture feature replacement image of the second object. In this way, the two video streams can present the facial texture features of the same object together with the body dynamic features of different objects, and various special effects can be presented in different application scenarios, which solves the problem that a special effect plug-in is limited to a single application scenario.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of multimedia technology, shooting multimedia files with an intelligent terminal has gradually become part of people's daily lives. Accordingly, when a shot multimedia file is edited on the intelligent terminal, a corresponding special effect is usually added to the multimedia file in order to make it more attractive.
In the prior art, when editing a multimedia file, an intelligent terminal generally adds a corresponding special effect to the multimedia file to be shot by calling a special effect plug-in. However, because such a special effect plug-in can only be applied in a single scene, its application range is limited.
Therefore, it is necessary to design a new image processing method to solve the above problems.
Disclosure of Invention
The embodiment of the disclosure provides an image processing method and device, electronic equipment and a storage medium, which are used for solving the problem that the application range of a special effect plug-in is limited in the prior art.
The specific technical scheme provided by the embodiment of the disclosure is as follows:
in a first aspect, an image processing method includes:
respectively acquiring a first video frame in a first video stream and a second video frame in a second video stream, wherein the first video frame comprises a first object, the second video frame comprises a second object, and the first object and the second object have the same category attribute;
in response to a selection operation of a target account, determining that the first object is a target object, wherein the target object is an object for replacing texture feature information of the second object;
performing key point identification on the target object to obtain texture feature information corresponding to the target object;
and performing texture feature replacement on the second object by adopting the texture feature information to obtain a target texture feature replacement image for the second object.
Optionally, after the obtaining a first video frame in a first video stream and a second video frame in a second video stream respectively, when the first video stream includes a plurality of first objects and the second video stream includes a plurality of second objects, determining that the first object is a target object in response to a selection operation of a target account, including:
in response to a selection operation of the target account, determining one of the plurality of first objects as a target object;
performing texture feature replacement on the second object by using the texture feature information to obtain a target texture feature replacement image for the second object, including:
and performing texture feature replacement on at least one second object in the plurality of second objects by adopting the texture feature information to obtain a corresponding target texture feature replacement image.
Optionally, the performing key point identification on the target object to obtain texture feature information corresponding to the target object includes:
performing key point identification on the target object to obtain each first key point of the target object;
respectively determining first extension points respectively associated with the first key points by adopting a central point extension algorithm;
and connecting the first key points with the first extension points to obtain a first grid of the target object, and using the texture feature information of the region to which the first grid belongs as the texture feature information of the target object.
Optionally, before the performing texture feature replacement on the second object by using the texture feature information, the method further includes:
performing key point identification on the second object to obtain each second key point of the second object;
respectively determining second expansion points respectively associated with the second key points by adopting a central point expansion algorithm;
and connecting the second key points and the second extension points to obtain a second grid of the second object, and using the texture feature information of the region to which the second grid belongs as the texture feature information of the second object.
Optionally, one target keypoint is one of the first keypoints, and one target extension point associated with the one target keypoint is one of the first extension points, or one target keypoint is one of the second keypoints, and one target extension point associated with the one target keypoint is one of the second extension points;
the method further comprises obtaining the respective target keypoints by:
based on preset association relations among all target key points, obtaining all feature central points and edge points associated with all the feature central points from all the target key points, wherein one feature central point represents the center of a feature area, and one edge point associated with one feature central point represents the perimeter of the feature area;
for each feature central point, respectively executing the following operations: and acquiring a first line segment between one feature central point of the feature central points and the corresponding edge point, expanding the first line segment outwards based on a preset threshold value to acquire a second line segment, and outputting an end point forming the second line segment as a target expansion point of the feature central point.
Optionally, the preset threshold is determined by:
determining the distance between one feature central point in each feature central point and the edge point corresponding to the adjacent feature central point;
and determining the selectable interval of the preset threshold value and the preset threshold value based on the ratio of the distance to the length of the first line segment.
Optionally, the performing texture feature replacement on the second object by using the texture feature information to obtain a target texture feature replacement image for the second object includes:
respectively acquiring corresponding relations between the first key points and the second key points;
and performing texture feature replacement on the second object based on the corresponding relation and the texture feature information of the target object to obtain a target texture feature replacement image of the second object.
Optionally, after obtaining the target texture feature replacement image for the second object, the method further includes:
for each video frame in the second video stream, performing the following operations:
replacing the second object contained in one of the video frames by using the target texture characteristic replacement image to obtain a corresponding target video frame;
and splicing the obtained target video frames according to a time sequence to obtain a target video stream.
Optionally, the first video stream is captured by a first camera, and the second video stream is captured by a second camera, where the first camera is a front camera and the second camera is a rear camera; or, the first camera device is a rear camera device, and the second camera device is a front camera device;
then after obtaining the target video stream, further comprising:
respectively displaying the first video stream and the target video stream in different windows; or,
and displaying the first video stream and the target video stream in different areas in the same window.
In a second aspect, an image processing apparatus includes:
an obtaining unit, configured to obtain a first video frame in a first video stream and a second video frame in a second video stream, respectively, where the first video frame includes a first object, the second video frame includes a second object, and the first object and the second object have the same category attribute;
a determining unit, configured to determine, in response to a selection operation of a target account, that the first object is a target object, where the target object is an object used to replace texture feature information of the second object;
the identification unit is used for identifying key points of the target object to obtain texture feature information corresponding to the target object;
and the replacing unit is used for replacing the texture characteristics of the second object by adopting the texture characteristic information to obtain a target texture characteristic replacing image aiming at the second object.
Optionally, after the first video frame in the first video stream and the second video frame in the second video stream are respectively obtained, when the first video stream includes a plurality of first objects and the second video stream includes a plurality of second objects, the first object is determined to be a target object in response to a selection operation of a target account, and the determining unit is configured to:
in response to a selection operation of the target account, determining one of the plurality of first objects as a target object;
performing texture feature replacement on the second object by using the texture feature information to obtain a target texture feature replacement image for the second object, including:
and performing texture feature replacement on at least one second object in the plurality of second objects by adopting the texture feature information to obtain a corresponding target texture feature replacement image.
Optionally, the identifying unit is configured to perform key point identification on the target object to obtain texture feature information corresponding to the target object, and is configured to:
performing key point identification on the target object to obtain each first key point of the target object;
respectively determining first extension points respectively associated with the first key points by adopting a central point extension algorithm;
and connecting the first key points with the first extension points to obtain a first grid of the target object, and using the texture feature information of the region to which the first grid belongs as the texture feature information of the target object.
Optionally, before performing texture feature replacement on the second object by using the texture feature information, the replacement unit is further configured to:
performing key point identification on the second object to obtain each second key point of the second object;
respectively determining second expansion points respectively associated with the second key points by adopting a central point expansion algorithm;
and connecting the second key points and the second extension points to obtain a second grid of the second object, and using the texture feature information of the region to which the second grid belongs as the texture feature information of the second object.
Optionally, one target keypoint is one of the first keypoints, and one target extension point associated with the one target keypoint is one of the first extension points, or one target keypoint is one of the second keypoints, and one target extension point associated with the one target keypoint is one of the second extension points;
the identifying unit is further configured to obtain each target keypoint by:
based on preset association relations among all target key points, obtaining all feature center points and edge points associated with all the feature center points from all the target key points, wherein one feature center point represents the center of a feature area, and one edge point associated with one feature center point represents the perimeter of the feature area;
for each feature central point, respectively executing the following operations: and acquiring a first line segment between one feature central point of the feature central points and the corresponding edge point, expanding the first line segment outwards based on a preset threshold value to acquire a second line segment, and outputting an end point forming the second line segment as a target expansion point of the feature central point.
Optionally, the preset threshold is determined by:
determining the distance between one feature central point in each feature central point and the edge point corresponding to the adjacent feature central point;
and determining the selectable interval of the preset threshold value and the preset threshold value based on the ratio of the distance to the length of the first line segment.
Optionally, the texture feature information is adopted to perform texture feature replacement on the second object, so as to obtain a target texture feature replacement image for the second object, and the replacement unit is configured to:
respectively acquiring corresponding relations between the first key points and the second key points;
and performing texture feature replacement on the second object based on the corresponding relation and the texture feature information of the target object to obtain a target texture feature replacement image of the second object.
Optionally, after obtaining the target texture feature replacement image for the second object, the replacing unit is further configured to:
for each video frame in the second video stream, performing the following operations:
replacing the second object contained in one of the video frames by using the target texture characteristic replacement image to obtain a corresponding target video frame;
and splicing the obtained target video frames according to a time sequence to obtain a target video stream.
Optionally, the first video stream is captured by a first camera, and the second video stream is captured by a second camera, where the first camera is a front camera and the second camera is a rear camera; or, the first camera device is a rear camera device, and the second camera device is a front camera device;
then, after obtaining the target video stream, the replacing unit is further configured to:
respectively displaying the first video stream and the target video stream in different windows; or,
and displaying the first video stream and the target video stream in different areas in the same window.
In a third aspect, a computer device comprises:
a memory for storing executable instructions;
a processor configured to read and execute executable instructions stored in the memory to implement the method of any of the first aspect.
In a fourth aspect, a computer-readable storage medium, wherein instructions, when executed by a processor, enable the processor to perform the method of any of the first aspect.
In a fifth aspect, a computer program product comprises executable instructions that, when executed by a processor, are capable of implementing the method of any one of the first aspect.
In the embodiment of the disclosure, a first video frame in a first video stream and a second video frame in a second video stream are respectively obtained; a first object in the first video frame is determined as a target object in response to a selection operation of the target account, the target object being an object for replacing the texture feature information of a second object; key point identification is performed on the target object to obtain corresponding texture feature information, and texture feature replacement is performed on the second object in the second video stream by using the texture feature information to obtain a target texture feature replacement image for the second object. In this way, the obtained texture feature information of the target object is accurately transferred onto the second object through the identified key points, so that the obtained target texture feature replacement image looks more realistic and natural. Moreover, the two video streams can present the facial texture features of the same object together with the body dynamic features of different objects, various special effects can be presented in different application scenarios, an additional application scenario is provided for the target account, and interestingness and entertainment are improved.
Drawings
Fig. 1A and 1B are schematic diagrams of an application scenario in an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart illustrating an image processing method according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of an application scenario in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a scenario for determining a target object according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating obtaining target key points for a standard face image according to an embodiment of the present disclosure;
fig. 6A and 6B are schematic flow diagrams illustrating obtaining a target extension point in the embodiment of the present disclosure;
fig. 6C is a schematic flow chart illustrating obtaining textural feature information of a target object according to an embodiment of the disclosure;
FIG. 7 is a schematic diagram of obtaining target extension points for a standard face image in an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of obtaining target extension points for a standard face image in an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of obtaining target extension points for a standard face image in an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of obtaining target extension points for a standard face image in an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of obtaining each target extension point for a standard face image in an embodiment of the present disclosure;
FIG. 12 is a schematic diagram of a grid obtained for a standard face image in an embodiment of the present disclosure;
FIG. 13 is a schematic flow chart illustrating a process of obtaining a target texture feature replacement image according to an embodiment of the present disclosure;
FIG. 14A, FIG. 14B and FIG. 14C are schematic diagrams of an application scenario in an embodiment of the present disclosure;
FIG. 15 is a schematic diagram of a logic architecture of an image processing apparatus according to an embodiment of the present disclosure;
fig. 16 is a block diagram of an embodiment of an image processing apparatus according to the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some embodiments, but not all embodiments, of the technical solutions of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments described in the present disclosure without any creative effort belong to the protection scope of the technical solution of the present disclosure.
The terms "first," "second," and the like in the description and in the claims of the present disclosure and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or described herein.
The image processing method provided by the embodiment of the disclosure can be executed by an electronic device with image processing capability, and the electronic device can be various possible terminal devices or a server, for example. The terminal device may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart television, a smart wearable device, and the like, but is not limited thereto; the server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like. In application software installed in electronic equipment, an object corresponding to the image processing method provided by the embodiment of the disclosure can be embedded to perform a texture feature replacement function, and a user can shoot and obtain the facial texture feature of the same object and the shape dynamic features of different objects in two video streams by using the function, so that the entertainment is enhanced.
In order to solve the problem that the application range of a special effect plug-in is limited in the prior art, in the embodiment of the disclosure, a first video frame in a first video stream and a second video frame in a second video stream are respectively obtained; determining a first object in the first video frame as a target object in response to the selection operation of the target account, wherein the target object is an object for replacing the texture feature information of the second object; and performing key point identification on the target object to obtain corresponding texture feature information, and performing texture feature replacement on the second object in the second video stream by adopting the texture feature information to obtain a target texture feature replacement image for the second object, so as to replace the image based on the target texture feature.
Then, replacing images by adopting the target texture features, respectively replacing the texture features of a second object in each video frame of a second video stream to obtain corresponding target video frames, and splicing each video frame according to a time sequence to obtain a target video stream.
In the following description of the preferred embodiments of the present disclosure, reference is made to the accompanying drawings, which are included to provide a further understanding of the disclosure, and it is to be understood that the preferred embodiments described herein are for the purpose of illustration and explanation only and are not intended to limit the disclosure, and that the features of the embodiments and examples of the disclosure may be combined with each other without conflict.
In the embodiment of the disclosure, a multimedia file shooting interface is jumped to in response to a first click operation of a target account on a main interface of an application program; and responding to a second click operation of the target account on the multimedia file shooting interface, and presenting the image of the video to be shot in the corresponding mode.
For example, referring to FIG. 1A, target account A is taken as an example.
Assume that target account a has clicked on the main interface of application a.
Then in response to a first click operation by the target account a on the main interface of the application a, a jump is made to the capture multimedia file interface a.
Assume, as shown in FIG. 1B, that target account A is still being used as an example.
Assume that the target account a performs a second click operation on the capture multimedia file interface a.
And responding to a second click operation of the target account A on the shooting multimedia file interface A, and presenting the acquired first video stream and the acquired second video stream on the shooting multimedia file interface A.
Assuming that the shooting multimedia file interface a is divided into two windows, namely a window 1 and a window 2, the acquired first video stream is presented in the window 1, and the acquired second video stream is presented in the window 2.
Referring to fig. 2, in the embodiment of the present disclosure, a specific flow of an image processing method is as follows:
step 200: respectively acquiring a first video frame in a first video stream and a second video frame in a second video stream, wherein the first video frame comprises a first object, the second video frame comprises a second object, and the first object and the second object have the same category attribute.
In the disclosed embodiment, a first video stream is captured by a first camera and a second video stream is captured by a second camera, wherein the first camera is a front camera and the second camera is a rear camera; alternatively, the first image pickup device is a rear image pickup device and the second image pickup device is a front image pickup device.
First, a first video frame containing a first object is obtained from a first video stream, and a second video frame containing a second object is obtained from a second video stream, wherein the first object and the second object have the same category attribute.
Optionally, in the embodiment of the present disclosure, the first video frame and the second video frame may have the same shooting time or different shooting times.
For example, referring to FIG. 3, target account A is still used as an example.
It is assumed that the first video stream and the second video stream are respectively captured by different image capturing devices on the same electronic apparatus a.
Assume that the acquired first video stream is presented in window 1 and the acquired second video stream is presented in window 2, that the first object contained in the first video stream is Xiaoming, and that the second object contained in the second video stream is Xiaoxiang.
Acquiring a frame of video frame from the first video stream, and recording the frame as a first video frame, wherein the first video frame comprises a first object, namely Xiaoming;
and acquiring a frame of video frame from the second video stream, and recording the frame as a second video frame, wherein the second video frame comprises a second object, namely Xiaoxiang.
Alternatively, the first video frame and the second video frame may have the same shooting time or different shooting times.
Specifically, in response to a third click operation of the target account on the multimedia file shooting interface A, a time-advanced or time-delayed multimedia file is obtained by adjusting the preset shooting conditions of the first video stream and/or the second video stream, which adds to the entertainment.
Step 210: and in response to the selection operation of the target account, determining that the first object is the target object, wherein the target object is an object for replacing the texture feature information of the second object.
In the embodiment of the disclosure, the target account performs a selection operation in the multimedia file shooting interface, determines the first object as an object for replacing texture feature information of the second object, and then takes the first object as the target object.
For example, referring to FIG. 4, target account A is still used as an example.
Assume that the target account performs a selection operation (e.g., clicking on the screen) on the shooting multimedia file interface and selects the first object (e.g., Xiaoming) as the target object.
A first object (i.e., Xiaoming) in the first video frame is determined as the target object in response to the selection operation of the target account, the target object being an object for replacing texture feature information of a second object in the second video frame.
Step 220: and identifying key points of the target object to obtain texture feature information corresponding to the target object.
In the embodiment of the present disclosure, before the step 220 is executed, first, a keypoint identification algorithm and a center point expansion algorithm are introduced.
1. Key point identification algorithm
In the embodiment of the present disclosure, in the face key point recognition algorithm, a total of 101 target key points are recognized, which can be distinguished by using labels 0 to 100.
As shown in fig. 5, fig. 5 is a schematic diagram of a standard face image according to an embodiment of the present disclosure, in which the 101 target key points of the face are marked. There are 19 target key points marking the face contour, with labels 0-18; 20 target key points marking the eyebrow contours, of which 10 mark the right eyebrow with labels 19-28 and 10 mark the left eyebrow with labels 29-38; 26 target key points marking the eye contours (including the eyeballs), of which 12 mark the contour of the left eye with labels 39-50, 1 marks the center of the left eyeball with label 95, 12 mark the contour of the right eye with labels 51-62, and 1 marks the center of the right eyeball with label 96; 12 target key points marking the contour of the nose with labels 63-74 and 4 marking the nose bridge with labels 97-100; and 20 target key points marking the mouth contour with labels 75-94. Among them, the white target key points are the main key points marking the main positions, such as the eyeball centers, the nose tip and the mouth.
In the embodiment of the present disclosure, these main key points are recorded as feature center points; a feature center point represents the center of a feature area, and the edge points associated with the feature center point represent the perimeter of the feature area. For example, the target key point with label 96, which marks the center of the right eyeball, is a feature center point, and the edge points associated with this feature center point are the target key points with labels 51 to 62, which mark the contour of the right eye.
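For reference, the label layout described above can be recorded as a small Python dictionary, as in the sketch below. The index ranges are taken directly from the description; the dictionary itself, the variable names and the feature-center mapping are only an illustrative convention for this document, not part of the claimed method.

```python
# Index layout of the 101 face target key points (labels 0-100) described above.
FACE_KEYPOINT_LABELS = {
    "face_contour":         list(range(0, 19)),    # 19 points, labels 0-18
    "right_eyebrow":        list(range(19, 29)),   # 10 points, labels 19-28
    "left_eyebrow":         list(range(29, 39)),   # 10 points, labels 29-38
    "left_eye_contour":     list(range(39, 51)),   # 12 points, labels 39-50
    "right_eye_contour":    list(range(51, 63)),   # 12 points, labels 51-62
    "nose_contour":         list(range(63, 75)),   # labels 63-74
    "mouth_contour":        list(range(75, 95)),   # 20 points, labels 75-94
    "left_eyeball_center":  [95],                  # feature center point
    "right_eyeball_center": [96],                  # feature center point
    "nose_bridge":          list(range(97, 101)),  # 4 points, labels 97-100
}

# A feature center point and the edge points associated with it, e.g. label 95
# (left eyeball center) is associated with the left eye contour, labels 39-50.
FEATURE_CENTERS = {
    95: FACE_KEYPOINT_LABELS["left_eye_contour"],
    96: FACE_KEYPOINT_LABELS["right_eye_contour"],
}

assert sum(len(v) for v in FACE_KEYPOINT_LABELS.values()) == 101
```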
2. Center point expansion algorithm
In the embodiment of the present disclosure, referring to fig. 6A, each target extension point for a standard face image may be obtained through steps 600 to 610.
Step 600: based on preset association relations among all target key points, all feature center points and edge points associated with all the feature center points are obtained from all the target key points, wherein one feature center point represents the center of a feature area, and one edge point associated with one feature center point represents the perimeter of the feature area.
For example, referring to fig. 5, the contour of the left eye is marked by 12 target key points with labels 39-50, and the center of the left eyeball is marked by 1 target key point with label 95.
Based on the preset association relationship among the target key points, the target key point with label 95 is acquired as the feature center point of the left eye, and the target key points with labels 39-50, which mark the contour of the left eye, are acquired as the edge points associated with that feature center point.
Step 610: for each feature central point, the following operations are respectively executed: the method comprises the steps of obtaining a first line segment between one feature central point and a corresponding edge point in each feature central point, expanding the first line segment outwards based on a preset threshold value to obtain a second line segment, and outputting an end point forming the second line segment as a target expansion point of the feature central point.
In the embodiment of the disclosure, one feature central point of each feature central point is obtained, and a connection line is connected with the associated edge point to obtain the first line segment.
For example, referring to fig. 7, an example is given in which a target key point (reference numeral 95) for marking the center point of the left eye eyeball and one target key point (reference numeral 39) for marking the outline of the left eye are expanded to the outside.
The target key point marked with the mark number 95 is connected with the target key point marked with the mark number 39 to obtain a first line segment, which is marked as a line segment 1.
In the embodiment of the disclosure, after each target key point is obtained based on the key point identification algorithm, the identified face region is refined, according to the preset association relationship among the target key points, into triangles that are as small as possible, so that drastic changes in a triangle's area caused by changes in its vertex positions are weakened; then, the obtained target key points are connected with the obtained target extension points to obtain more detailed and comprehensive texture feature information, so that the replaced texture features in the target texture feature replacement image look more realistic and natural.
In the embodiment of the present disclosure, referring to fig. 6B, the step 700 to the step 710 are performed to determine a preset threshold, and then the first line segment is expanded outwards based on the preset threshold, so as to obtain a second line segment.
Specifically, the specific process for determining the preset threshold value is as follows:
step 700: and determining the distance between one feature central point of each feature central point and the edge point corresponding to the adjacent feature central point.
For example, referring to fig. 8, the target key point (labeled 95) for marking the center point of the left eye eyeball and one target key point (labeled 39) for marking the contour of the left eye are expanded to the outside.
The target key point with label 95 (i.e., one feature center point) is connected with the target key point with label 0 (i.e., an edge point corresponding to the adjacent feature region, the face contour) to obtain the distance between the target key point with label 95 and the target key point with label 0.
Step 710: and determining the selectable interval of the preset threshold value and the preset threshold value based on the ratio of the distance to the length of the first line segment.
In the embodiment of the present disclosure, the determined distance is divided by the length of the first line segment to obtain the ratio between them, and then the preset threshold is determined based on this ratio.
For example, referring to fig. 9, the target key point (reference numeral 95) for marking the center point of the left eye eyeball and the target key point (reference numeral 39) for marking the contour of the left eye are expanded to the outside.
Assuming that the target key point of the mark number 95 is connected with the target key point of the mark number 39, the obtained first line segment (denoted as line segment 1) has a length value of 5mm, and the distance between the target key point of the mark number 95 and the target key point of the mark number 0 is 9 mm.
Since 9/5 is 1.8, the selectable interval of the preset threshold is [1, 1.8].
Optionally, the determined preset threshold is not greater than the determined ratio, which ensures that the obtained target extension point is located within the identified face region; an effective target extension point is thus obtained, the identified face region is further refined, and more accurate and fine facial texture features are obtained.
In the embodiment of the present disclosure, when step 610 is executed, based on the obtained first line segment and the preset threshold determined from the selectable interval, the first line segment may be expanded outward to obtain a second line segment, and the end point forming the second line segment is then used as the target extension point for the corresponding feature center point.
For example, referring to fig. 10, the target key point (reference numeral 95, i.e., the feature center point) for marking the center point of the left eye eyeball and the target key point (reference numeral 39, i.e., the edge point associated with the feature center point) for marking the contour of the left eye are still extended outward as an example.
Assume that the selectable interval of the determined preset threshold is [1, 1.8].
Again assume that the preset threshold is determined to be 1.5.
Line segment 1 is expanded outwards by a factor of 1.5 to obtain a second line segment, denoted line segment 2; the end point forming line segment 2 is used as the target extension point for this feature center point, that is, the target extension point for the target key point with label 95 is denoted as the target extension point with label 140.
In the embodiment of the present disclosure, by the above method, each feature center point is connected to its associated edge points to obtain the corresponding first line segments; each first line segment is expanded outwards based on the preset threshold, and the end point forming each second line segment is used as a target extension point for the corresponding feature center point, so that a more detailed and comprehensive recognition result of the face region of the standard face image can be obtained. As shown in fig. 11, fig. 11 is a schematic diagram illustrating the determined target extension points.
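As a concrete illustration of the center point expansion algorithm, the following is a minimal sketch assuming 2D coordinates stored as NumPy arrays: the selectable interval of the preset threshold is derived from the ratio described in steps 700-710, and the target extension point is the outer end point of the second line segment. The function names and the example coordinates are illustrative only.

```python
import numpy as np

def threshold_interval(center, edge, neighbor_edge):
    """Selectable interval [1, ratio] for the preset threshold, where ratio is the
    distance from the feature center point to the edge point of the adjacent
    feature region divided by the length of the first line segment (center -> edge)."""
    first_len = np.linalg.norm(np.asarray(edge, float) - np.asarray(center, float))
    neighbor_dist = np.linalg.norm(np.asarray(neighbor_edge, float) - np.asarray(center, float))
    return 1.0, neighbor_dist / first_len

def extend_point(center, edge, threshold):
    """Expand the first line segment (center -> edge) outward by `threshold` and
    return the end point of the second line segment (the target extension point)."""
    center = np.asarray(center, dtype=float)
    edge = np.asarray(edge, dtype=float)
    return center + threshold * (edge - center)

# Worked example matching the description: |line segment 1| = 5 mm, distance to the
# face-contour edge point = 9 mm, so the interval is [1, 1.8]; with threshold 1.5
# the target extension point lies 7.5 mm from the feature center point.
center = (0.0, 0.0)      # e.g. label 95, left eyeball center (coordinates made up)
eye_edge = (5.0, 0.0)    # e.g. label 39, left eye contour
face_edge = (9.0, 0.0)   # e.g. label 0, face contour
lo, hi = threshold_interval(center, eye_edge, face_edge)   # (1.0, 1.8)
target_extension = extend_point(center, eye_edge, 1.5)     # array([7.5, 0.])
```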
In the embodiment of the present disclosure, when step 220 is executed, referring to fig. 6C, the texture feature information corresponding to the target object is obtained by executing steps 800 to 820.
Step 800: and carrying out key point identification on the target object to obtain each first key point of the target object.
In the embodiment of the present disclosure, when step 800 is executed, a face keypoint model may be used to directly perform face keypoint identification on a target object, so as to obtain each first keypoint for the target object.
It should be noted that, when the face keypoint model is used for keypoint identification of the target object a, the set of identified first keypoints should match the set of face keypoints in the standard face image; that is, the number of first keypoints equals the number of target keypoints, for example, 101 first keypoints. However, because the face of the target object a differs to some extent from the standard face in the standard face image (for example, the eyes may not be the same size), the positions of the identified first keypoints of the target object a differ from the positions of the target keypoints in the standard face image, but the labels correspond one to one. When a partial area of the face in an image is blocked, or the eyes are closed, or a side face rather than a front face is captured, the number of identified first keypoints may be fewer than 101.
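Because occlusion, closed eyes or a side face may leave fewer than 101 identified first keypoints, the label correspondence used later for texture feature replacement can be restricted to labels detected for both objects. A minimal sketch, assuming each keypoint detector returns a dict mapping labels to (x, y) coordinates (a hypothetical interface, not a specific library):

```python
def common_label_correspondence(first_keypoints, second_keypoints):
    """Keep only the labels identified for both the target object and the second
    object, and return the per-label correspondence between their coordinates.

    first_keypoints / second_keypoints: dict {label: (x, y)}, e.g. {95: (241.0, 180.5), ...}
    """
    shared_labels = sorted(set(first_keypoints) & set(second_keypoints))
    return {label: (first_keypoints[label], second_keypoints[label])
            for label in shared_labels}
```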
Step 810: and respectively determining first extension points respectively associated with the first key points by adopting a central point extension algorithm.
In the embodiment of the present disclosure, when step 810 is executed, based on the obtained first keypoints and a preset threshold, first extension points respectively associated with the first keypoints may be respectively determined.
Step 820: and connecting each first key point with each first extension point to obtain a first grid of the target object, and taking the texture feature information of the region to which the first grid belongs as the texture feature information of the target object.
In the embodiment of the present disclosure, after the first key points and the first extension points are obtained, the obtained first key points and the first extension points are connected to obtain a three-dimensional mesh surface map for the target object.
Optionally, a triangulation algorithm (Delaunay triangulation) may be used to connect each first keypoint with each first extension point, so as to obtain the first mesh for the target object.
For example, referring to fig. 12, fig. 12 is a schematic diagram illustrating that a corresponding first mesh is drawn for each first keypoint and each first extension point obtained from a standard face image according to an embodiment of the present disclosure.
In the embodiment of the present disclosure, when obtaining each first key point and each first extension point, in order to reduce the influence of the head pose of the target object on texture feature recognition, the face region of the target object may be divided into triangles that are as small as possible, so as to reduce drastic changes in a triangle's area caused by changes in its vertex positions; in this way, the obtained texture feature recognition result for the face region of the target object is more detailed and comprehensive.
In the embodiment of the present disclosure, in step 820, after the first mesh for the face region of the target object is obtained, the texture feature information of the region to which the first mesh belongs is obtained by using a feature extraction algorithm, and the texture feature information is used as the texture feature information of the face region of the target object.
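A minimal sketch of the meshing and texture extraction just described, assuming the key points and extension points are available as N×2 coordinate arrays and that SciPy and OpenCV are used for the triangulation and cropping. The description does not prescribe specific libraries, so this is only one possible realization; the function names are illustrative.

```python
import numpy as np
import cv2
from scipy.spatial import Delaunay

def build_first_mesh(keypoints, extension_points):
    """Connect the first key points and first extension points with Delaunay
    triangulation; returns the stacked points and the triangles as index triples."""
    points = np.vstack([keypoints, extension_points]).astype(np.float32)
    return points, Delaunay(points).simplices          # shapes (N, 2) and (M, 3)

def mesh_texture_info(image, points, triangles):
    """Texture feature information of the region to which the mesh belongs:
    for each triangle, the cropped image patch, a mask of the triangle area,
    and the triangle's vertex coordinates."""
    info = []
    for tri in triangles:
        pts = points[tri].astype(np.int32)
        x, y, w, h = cv2.boundingRect(pts)
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.fillConvexPoly(mask, pts - np.array([x, y], dtype=np.int32), 255)
        info.append((image[y:y + h, x:x + w], mask, pts))
    return info
```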
Similarly, the method for extracting the textural feature information of the face of the target object is adopted to identify key points of the second object, so as to obtain each second key point of the second object; respectively determining second expansion points respectively associated with the second key points by adopting a central point expansion algorithm based on the second key points; then, connecting each second key point with each second extension point to obtain a second grid of the second object, and using the texture feature information of the area to which the second grid belongs as the texture feature information of the second object; so that, when step 230 is executed, the texture feature information corresponding to the target object may be adopted to perform texture feature replacement on the second object based on the corresponding relationship between each first key point and each second key point, thereby obtaining a target texture feature replacement image of the second object.
Step 230: and performing texture feature replacement on the second object by adopting the texture feature information to obtain a target texture feature replacement image for the second object.
In the embodiment of the present disclosure, when step 230 is executed, as shown in fig. 13, corresponding functions may be implemented by executing steps 2301 to 2302.
Step 2301: and respectively acquiring the corresponding relation between each first key point and each second key point.
In the embodiment of the present disclosure, the obtained second coordinate information corresponding to each second key point and each second extension point of the second object is extracted by using the same method as that for obtaining the first coordinate information corresponding to each first key point and each first extension point of the target object, so that a preset corresponding relationship exists between the first coordinate information and the second coordinate information.
Step 2302: and performing texture feature replacement on the second object based on the corresponding relation and the texture feature information of the target object to obtain a target texture feature replacement image of the second object.
For example, referring to FIGS. 14A-14C, target account A is still used as an example.
Assume that in response to the third click operation of the target account A, it is determined that the facial texture feature information of Xiaoxiang in window 2 is to be replaced with the facial texture feature information of Xiaoming in window 1.
Key point identification is performed on Xiaoming's facial region to obtain the corresponding first key points, and based on the obtained first key points, the first extension points respectively associated with the first key points are determined by using the center point extension algorithm; then, each first key point and each first extension point are connected to obtain a first mesh of Xiaoming's face, and the texture feature information of the region to which the first mesh belongs is obtained by using a feature extraction algorithm as the texture feature information of Xiaoming's face, as shown in fig. 14A.
Similarly, key point identification is performed on Xiaoxiang's facial region in window 2 to obtain the corresponding second key points, and based on the obtained second key points, the second extension points respectively associated with the second key points are determined by using the center point extension algorithm; then, each second key point and each second extension point are connected to obtain a second mesh of Xiaoxiang's face, and the texture feature information of the region to which the second mesh belongs is obtained by using a feature extraction algorithm as the texture feature information of Xiaoxiang's face, as shown in fig. 14B.
Then, based on the correspondence between each first key point and each second key point obtained respectively, and on the texture feature information of Xiaoming's facial region, the texture feature information of Xiaoxiang's facial region is replaced, so as to obtain a target texture feature replacement image of Xiaoxiang, as shown in fig. 14C.
According to the embodiment of the disclosure, each first key point of the target object and each second key point of the second object are obtained through the key point identification algorithm, and the texture feature information of the target object can then be mapped onto the face of the second object more accurately based on the correspondence between the first key points and the second key points.
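The description does not spell out how the texture is actually transferred once the correspondence is known; one common realization of step 2302 is a per-triangle affine warp from the first mesh onto the second mesh, sketched below with OpenCV under the assumption that both meshes share the same triangle index list (obtained from the label correspondence). It is a sketch rather than the claimed implementation; boundary blending (for example with cv2.seamlessClone) could additionally be applied to make the seams less visible.

```python
import numpy as np
import cv2

def replace_texture_features(src_img, dst_img, src_points, dst_points, triangles):
    """Warp the texture of the target object (src) onto the second object (dst)
    triangle by triangle and return the target texture feature replacement image."""
    out = dst_img.copy()
    for tri in triangles:
        src_tri = np.float32(src_points[tri])   # 3 vertices in the first mesh
        dst_tri = np.float32(dst_points[tri])   # corresponding vertices in the second mesh
        sx, sy, sw, sh = cv2.boundingRect(src_tri)
        dx, dy, dw, dh = cv2.boundingRect(dst_tri)
        # Affine transform mapping the source triangle onto the destination triangle.
        M = cv2.getAffineTransform(np.float32(src_tri - (sx, sy)),
                                   np.float32(dst_tri - (dx, dy)))
        warped = cv2.warpAffine(src_img[sy:sy + sh, sx:sx + sw], M, (dw, dh),
                                flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_REFLECT)
        # Paste only the pixels that lie inside the destination triangle.
        mask = np.zeros((dh, dw), dtype=np.uint8)
        cv2.fillConvexPoly(mask, np.int32(dst_tri - (dx, dy)), 255)
        roi = out[dy:dy + dh, dx:dx + dw]
        roi[mask > 0] = warped[mask > 0]
    return out
```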
In the embodiment of the present disclosure, after obtaining the target texture feature replacement image for the second object, the following operations are further performed on each video frame in the second video stream, respectively, by the above method:
and operation one, replacing the image by using the target texture features, and performing texture feature replacement on a second object contained in one video frame in each video frame to obtain a corresponding target video frame.
And operation II, splicing all the obtained target video frames according to a time sequence to obtain a target video stream.
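A minimal sketch of operations one and two above, assuming swap_frame is a callable that applies the target texture feature replacement to a single frame and that OpenCV is used to read the second video stream and splice the target video frames in time order; the paths, codec and frame-rate handling are illustrative assumptions rather than part of the described method.

```python
import cv2

def build_target_video_stream(second_stream_path, output_path, swap_frame):
    """Replace the second object in every video frame of the second video stream
    and splice the resulting target video frames into the target video stream."""
    cap = cv2.VideoCapture(second_stream_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    writer = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break                                # end of the second video stream
        target_frame = swap_frame(frame)         # one texture-replaced target video frame
        if writer is None:
            h, w = target_frame.shape[:2]
            writer = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*"mp4v"),
                                     fps, (w, h))
        writer.write(target_frame)               # frames are written in time order
    cap.release()
    if writer is not None:
        writer.release()
```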
In the embodiment of the disclosure, by using the image processing method, in the target video stream and the first video stream, the facial texture feature information of the first object can be presented, and simultaneously, the body dynamic features of different objects can be presented, so that various special effects can be presented in different application scenes, an application scene is provided for more target accounts, and interestingness is increased.
Optionally, in this embodiment of the present disclosure, the first video stream is captured by a first camera and the second video stream is captured by a second camera, where the first camera is a front camera and the second camera is a rear camera; or, the first camera is a rear camera and the second camera is a front camera. After the target video stream is obtained, the first video stream and the target video stream can be displayed in different windows respectively, or in different areas of the same window. This overcomes the defects in the prior art that a special effect plug-in can only add a special effect in a single scene and that only one video stream can be presented at a time, so that the target account can freely shoot video streams that meet its own needs.
In the embodiment of the disclosure, after a first video frame in a first video stream and a second video frame in a second video stream are respectively acquired, when the first video stream contains a plurality of first objects and the second video stream contains a plurality of second objects, one first object in the plurality of first objects is determined to be a target object in response to a selection operation of a target account; and then, performing texture feature replacement on at least one second object in the plurality of second objects by adopting the texture feature information of the target object to obtain a corresponding target texture feature replacement image.
Therefore, the method can realize the key point identification of multiple persons to obtain the texture characteristic information corresponding to each person, and then adopt the texture characteristic information of the target object to carry out texture characteristic replacement on at least one second object in the multiple second objects, so that at least two objects are presented in the obtained target video stream and the first video stream, wherein the two objects have the same facial texture characteristic information, and the two objects have different body dynamic characteristics, and the two video streams presented in different areas have more interestingness.
It should be noted that, in the embodiment of the present disclosure, only the facial texture feature replacement is described in detail, and in practical application, the scheme is not limited to replacing the facial texture feature, and may also extract texture features for any part such as a limb in a target object, so as to provide more interesting experiences for a target account.
Based on the same inventive concept, referring to fig. 15, an embodiment of the present disclosure provides a computer apparatus (e.g., an image processing device) including:
an obtaining unit 1510, configured to obtain a first video frame in a first video stream and a second video frame in a second video stream, respectively, where the first video frame includes a first object, the second video frame includes a second object, and the first object and the second object have the same category attribute;
a determining unit 1520, configured to determine, in response to a selection operation of a target account, that the first object is a target object, and the target object is an object used to replace texture feature information of the second object;
the identifying unit 1530 is configured to perform key point identification on the target object to obtain texture feature information corresponding to the target object;
a replacing unit 1540, configured to perform texture feature replacement on the second object by using the texture feature information, so as to obtain a target texture feature replacement image for the second object.
Optionally, after the first video frame in the first video stream and the second video frame in the second video stream are respectively obtained, when a plurality of first objects are included in the first video stream and a plurality of second objects are included in the second video stream, then in response to a selection operation of a target account, it is determined that the first object is a target object, and the determining unit 1520 is configured to:
in response to a selection operation of the target account, determining one of the plurality of first objects as a target object;
performing texture feature replacement on the second object by using the texture feature information to obtain a target texture feature replacement image for the second object, where the replacement unit 1540 is configured to:
and performing texture feature replacement on at least one second object in the plurality of second objects by adopting the texture feature information to obtain a corresponding target texture feature replacement image.
Optionally, the identifying unit 1530 is configured to perform the keypoint identification on the target object to obtain texture feature information corresponding to the target object, and:
performing key point identification on the target object to obtain each first key point of the target object;
respectively determining first extension points respectively associated with the first key points by adopting a central point extension algorithm;
and connecting the first key points with the first extension points to obtain a first grid of the target object, and using the texture feature information of the region to which the first grid belongs as the texture feature information of the target object.
Optionally, before performing texture feature replacement on the second object by using the texture feature information, the replacing unit 1540 is further configured to:
performing key point identification on the second object to obtain each second key point of the second object;
respectively determining second extension points respectively associated with the second key points by adopting a central point extension algorithm;
and connecting the second key points and the second extension points to obtain a second grid of the second object, and using the texture feature information of the region to which the second grid belongs as the texture feature information of the second object.
Optionally, one target key point is one of the first key points and one target extension point associated with that target key point is one of the first extension points; or one target key point is one of the second key points and one target extension point associated with that target key point is one of the second extension points;
the identifying unit 1530 is further configured to obtain the respective target key points by:
based on preset association relationships among the target key points, obtaining the feature center points and the edge points associated with each feature center point from the target key points, where one feature center point represents the center of a feature area, and one edge point associated with one feature center point represents the boundary of the feature area;
for each feature center point, the following operations are respectively performed: acquiring a first line segment between the feature center point and a corresponding edge point, extending the first line segment outward based on a preset threshold to obtain a second line segment, and outputting the far endpoint of the second line segment as a target extension point of the feature center point.
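A minimal sketch of this operation is given below, under the assumption that the preset threshold is expressed as a fraction of the first segment's length (the default value used here is purely illustrative):

```python
import numpy as np

def target_extension_points(feature_center, edge_points, preset_threshold=0.3):
    """For one feature center point, extend each center-to-edge segment outward.

    feature_center: (2,) coordinate of the feature center point.
    edge_points: (K, 2) coordinates of the edge points on the boundary of the feature area.
    preset_threshold: fraction of the first segment's length by which it is extended.
    Returns the (K, 2) far endpoints of the second segments, i.e. the target extension points.
    """
    c = np.asarray(feature_center, dtype=np.float32)
    e = np.asarray(edge_points, dtype=np.float32)
    # first segment: center -> edge point; second segment reaches preset_threshold further out
    return e + preset_threshold * (e - c)
```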
Optionally, the preset threshold is determined by:
determining the distance between one of the feature center points and the edge point corresponding to an adjacent feature center point;
and determining a selectable interval for the preset threshold, and the preset threshold itself, based on the ratio of the distance to the length of the first line segment.
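How the selectable interval might be derived is sketched below; the particular bound used (keeping the extended segment short of the adjacent area's edge point) is an assumption consistent with, but not dictated by, the description above.

```python
import numpy as np

def threshold_interval(feature_center, edge_point, neighbor_edge_point):
    """Derive a selectable interval for the preset threshold of one feature area.

    The first segment runs from the feature center point to its own edge point; the
    distance from the center point to the adjacent area's edge point limits how far
    the segment can be extended without reaching the neighboring feature area.
    """
    c = np.asarray(feature_center, dtype=np.float32)
    first_len = np.linalg.norm(np.asarray(edge_point, dtype=np.float32) - c)
    dist = np.linalg.norm(np.asarray(neighbor_edge_point, dtype=np.float32) - c)
    upper = max(dist / first_len - 1.0, 0.0)  # based on the ratio of the distance to the first segment's length
    return 0.0, upper  # the preset threshold is then picked inside this interval
```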
Optionally, when performing texture feature replacement on the second object by using the texture feature information to obtain a target texture feature replacement image for the second object, the replacing unit 1540 is configured to:
respectively acquiring corresponding relations between the first key points and the second key points;
and performing texture feature replacement on the second object based on the corresponding relation and the texture feature information of the target object to obtain a target texture feature replacement image of the second object.
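A sketch of one common way to apply the replacement once the key point correspondences are known is shown below: each triangle of the target object's grid is warped onto the matching triangle of the second object. The OpenCV-based affine warping is an illustrative implementation choice, not something mandated by the disclosure.

```python
import cv2
import numpy as np

def replace_texture(target_img, second_img, target_pts, second_pts, triangles):
    """Warp the target object's texture onto the second object, triangle by triangle.

    target_pts / second_pts: corresponding (N, 2) point arrays (key points plus
    extension points) of the target object and the second object.
    triangles: (T, 3) vertex indices shared by both point sets.
    """
    out = second_img.copy()
    for tri in triangles:
        src = np.float32(target_pts[tri])
        dst = np.float32(second_pts[tri])
        # affine transform mapping the target triangle onto the second object's triangle
        m = cv2.getAffineTransform(src, dst)
        warped = cv2.warpAffine(target_img, m, (second_img.shape[1], second_img.shape[0]))
        mask = np.zeros(second_img.shape[:2], dtype=np.uint8)
        cv2.fillConvexPoly(mask, dst.astype(np.int32), 255)
        out[mask > 0] = warped[mask > 0]
    return out
```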
Optionally, after obtaining the target texture feature replacement image for the second object, the replacing unit 1540 is further configured to:
for each video frame in the second video stream, performing the following operations:
replacing the second object contained in one of the video frames with the target texture feature replacement image to obtain a corresponding target video frame;
and splicing the obtained target video frames according to a time sequence to obtain a target video stream.
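The per-frame replacement and temporal splicing could look roughly like the sketch below; the `replace_in_frame` helper, file paths, and codec are placeholders assumed for illustration.

```python
import cv2

def build_target_stream(second_stream_path, out_path, replace_in_frame):
    """Apply the replacement to every frame of the second video stream, in time order.

    replace_in_frame: callable that takes one video frame and returns the corresponding
    target video frame with the second object replaced (assumed to exist elsewhere).
    """
    cap = cv2.VideoCapture(second_stream_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(replace_in_frame(frame))  # target video frames, spliced in time order
    cap.release()
    writer.release()
```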
Optionally, the first video stream is captured by a first camera device and the second video stream is captured by a second camera device, where the first camera device is a front camera and the second camera device is a rear camera; or the first camera device is a rear camera and the second camera device is a front camera;
then, after the target video stream is obtained, the replacing unit 1540 is further configured to:
respectively displaying the first video stream and the target video stream in different windows; or,
and displaying the first video stream and the target video stream in different areas in the same window.
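Displaying the two streams in different areas of the same window could be as simple as the following sketch (a left/right split is only one of many possible layouts):

```python
import cv2
import numpy as np

def split_screen(first_frame, target_frame):
    """Compose one window frame showing the two streams side by side in different areas."""
    h = min(first_frame.shape[0], target_frame.shape[0])
    left = cv2.resize(first_frame, (first_frame.shape[1] * h // first_frame.shape[0], h))
    right = cv2.resize(target_frame, (target_frame.shape[1] * h // target_frame.shape[0], h))
    return np.hstack([left, right])
```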
Based on the same inventive concept, referring to fig. 16, an embodiment of the present disclosure provides a computer device, for example, an electronic device 1600, which may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like.
Referring to fig. 16, electronic device 1600 may include one or more of the following components: processing component 1602, memory 1604, power component 1606, multimedia component 1608, audio component 1610, input/output (I/O) interface 1612, sensor component 1614, and communications component 1616.
The processing component 1602 generally controls overall operation of the electronic device 1600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1602 may include one or more processors 1620 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 1602 can include one or more modules that facilitate interaction between the processing component 1602 and other components. For example, the processing component 1602 can include a multimedia module to facilitate interaction between the multimedia component 1608 and the processing component 1602.
The memory 1604 is configured to store various types of data to support operation at the electronic device 1600. Examples of such data include instructions for any application or method operating on the electronic device 1600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1604 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power supply component 1606 provides power to the various components of the electronic device 1600. The power components 1606 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 1600.
The multimedia component 1608 includes a screen that provides an output interface between the electronic device 1600 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touch, slide, and gesture actions on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1608 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 1600 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 1610 is configured to output and/or input an audio signal. For example, the audio component 1610 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 1600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 1604 or transmitted via the communications component 1616. In some embodiments, audio component 1610 further includes a speaker for outputting audio signals.
The I/O interface 1612 provides an interface between the processing component 1602 and peripheral interface modules, such as keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
Sensor assembly 1614 includes one or more sensors for providing various aspects of status assessment for electronic device 1600. For example, sensor assembly 1614 may detect an open/closed state of electronic device 1600, the relative positioning of components, such as a display and keypad of electronic device 1600, a change in position of electronic device 1600 or a component of electronic device 1600, the presence or absence of user contact with electronic device 1600, orientation or acceleration/deceleration of electronic device 1600, and a change in temperature of electronic device 1600. The sensor assembly 1614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communications component 1616 is configured to facilitate communications between the electronic device 1600 and other devices in a wired or wireless manner. The electronic device 1600 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 1616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 1616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 1600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing any of the methods performed by the computer apparatus of the above-described embodiments.
Based on the same inventive concept, the disclosed embodiments provide a computer-readable storage medium, wherein when instructions in the computer-readable storage medium are executed by a processor, the computer-readable storage medium can perform any one of the methods performed by the computer device in the above embodiments.
Based on the same inventive concept, the disclosed embodiments provide a computer program product comprising executable instructions, which when executed by a processor, can implement any one of the methods performed by the computer device as in the above embodiments.
In summary, in the embodiments of the present disclosure, a first video frame in a first video stream and a second video frame in a second video stream are respectively obtained; in response to a selection operation of a target account, a first object in the first video frame is determined as a target object, the target object being an object used to replace the texture feature information of a second object; key point identification is performed on the target object to obtain the corresponding texture feature information, and the texture feature information is used to perform texture feature replacement on the second object in the second video stream to obtain a target texture feature replacement image for the second object. In this way, the texture feature information of the target object is accurately mapped onto the second object through the identified key points, so that the resulting target texture feature replacement image appears more realistic and natural. Consequently, the two video streams present the facial texture features of the same object together with the body dynamic characteristics of different objects, various special effects can be presented in different application scenes, an additional application scene is provided for the target account, and interest and entertainment value are enhanced.
In addition, in the embodiments of the present disclosure, after the target key points are obtained based on a key point identification algorithm, the identified face region is refined according to the preset association relationships among the target key points and divided into triangles that are as small as possible, which weakens the drastic changes in triangle area that would otherwise be caused by shifts in the vertex positions of large triangles; the obtained target key points are then connected with the target extension points, so that more detailed and comprehensive texture feature information is obtained and the replaced texture features in the target texture feature replacement image appear more real and natural.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present disclosure have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the disclosure.
It will be apparent to those skilled in the art that various changes and modifications may be made to the disclosed embodiments without departing from the spirit and scope of the disclosed embodiments. Thus, if such modifications and variations of the embodiments of the present disclosure fall within the scope of the claims of the present disclosure and their equivalents, the present disclosure is also intended to encompass such modifications and variations.

Claims (10)

1. An image processing method, comprising:
respectively acquiring a first video frame in a first video stream and a second video frame in a second video stream, wherein the first video frame comprises a first object, the second video frame comprises a second object, and the first object and the second object have the same category attribute;
in response to a selection operation of a target account, determining that the first object is a target object, wherein the target object is an object for replacing texture feature information of the second object;
performing key point identification on the target object to obtain texture feature information corresponding to the target object;
and performing texture feature replacement on the second object by adopting the texture feature information to obtain a target texture feature replacement image for the second object.
2. The method of claim 1, wherein after the obtaining of the first video frame in the first video stream and the second video frame in the second video stream, respectively, when the first video stream includes a plurality of first objects and the second video stream includes a plurality of second objects, determining that the first object is the target object in response to a selection operation of the target account comprises:
in response to a selection operation of the target account, determining one of the plurality of first objects as a target object;
performing texture feature replacement on the second object by using the texture feature information to obtain a target texture feature replacement image for the second object, including:
and performing texture feature replacement on at least one second object in the plurality of second objects by adopting the texture feature information to obtain a corresponding target texture feature replacement image.
3. The method of claim 2, wherein the performing the keypoint identification on the target object to obtain the texture feature information corresponding to the target object comprises:
performing key point identification on the target object to obtain each first key point of the target object;
respectively determining first extension points respectively associated with the first key points by adopting a central point extension algorithm;
and connecting the first key points with the first extension points to obtain a first grid of the target object, and using the texture feature information of the region to which the first grid belongs as the texture feature information of the target object.
4. The method of claim 2, wherein before the performing texture feature replacement on the second object by using the texture feature information, the method further comprises:
performing key point identification on the second object to obtain each second key point of the second object;
respectively determining second extension points respectively associated with the second key points by adopting a central point extension algorithm;
and connecting the second key points and the second extension points to obtain a second grid of the second object, and using the texture feature information of the region to which the second grid belongs as the texture feature information of the second object.
5. The method according to claim 3 or 4, wherein one target key point is one of the first key points and one target extension point associated with the one target key point is one of the first extension points, or one target key point is one of the second key points and one target extension point associated with the one target key point is one of the second extension points;
the method further comprises obtaining the respective target key points by:
based on preset association relations among all target key points, obtaining all feature central points and edge points associated with all the feature central points from all the target key points, wherein one feature central point represents the center of a feature area, and one edge point associated with one feature central point represents the boundary of the feature area;
for each feature central point, respectively executing the following operations: acquiring a first line segment between one feature central point of the feature central points and the corresponding edge point, expanding the first line segment outwards based on a preset threshold value to acquire a second line segment, and outputting an end point forming the second line segment as a target extension point of the feature central point.
6. The method of claim 5, wherein the preset threshold is determined by:
determining the distance between one feature central point among the feature central points and the edge point corresponding to an adjacent feature central point;
and determining a selectable interval of the preset threshold, and the preset threshold, based on the ratio of the distance to the length of the first line segment.
7. An image processing apparatus characterized by comprising:
an obtaining unit, configured to obtain a first video frame in a first video stream and a second video frame in a second video stream, respectively, where the first video frame includes a first object, the second video frame includes a second object, and the first object and the second object have the same category attribute;
a determining unit, configured to determine, in response to a selection operation of a target account, that the first object is a target object, where the target object is an object used to replace texture feature information of the second object;
the identification unit is used for identifying key points of the target object to obtain texture feature information corresponding to the target object;
and the replacing unit is used for replacing the texture characteristics of the second object by adopting the texture characteristic information to obtain a target texture characteristic replacing image aiming at the second object.
8. An electronic device, comprising:
a memory for storing executable instructions;
a processor for reading and executing executable instructions stored in the memory to implement the method of any one of claims 1-6.
9. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor, enable the processor to perform the method of any of claims 1-6.
10. A computer program product comprising executable instructions capable, when executed by a processor, of performing the method of any one of claims 1 to 6.
CN202110597268.8A 2021-05-31 2021-05-31 Image processing method and device, electronic equipment and storage medium Pending CN113362434A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110597268.8A CN113362434A (en) 2021-05-31 2021-05-31 Image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110597268.8A CN113362434A (en) 2021-05-31 2021-05-31 Image processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113362434A true CN113362434A (en) 2021-09-07

Family

ID=77528253

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110597268.8A Pending CN113362434A (en) 2021-05-31 2021-05-31 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113362434A (en)


Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102572391A (en) * 2011-12-09 2012-07-11 深圳市万兴软件有限公司 Method and device for genius-based processing of video frame of camera
CN106534944A (en) * 2016-11-30 2017-03-22 北京锤子数码科技有限公司 Video display method and device
CN108416832A (en) * 2018-01-30 2018-08-17 腾讯科技(深圳)有限公司 Display methods, device and the storage medium of media information
CN108765272A (en) * 2018-05-31 2018-11-06 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and readable storage medium storing program for executing
CN111444743A (en) * 2018-12-27 2020-07-24 北京奇虎科技有限公司 Video portrait replacing method and device
CN109872297A (en) * 2019-03-15 2019-06-11 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN110070551A (en) * 2019-04-29 2019-07-30 北京字节跳动网络技术有限公司 Rendering method, device and the electronic equipment of video image
CN112101073A (en) * 2019-06-18 2020-12-18 北京陌陌信息技术有限公司 Face image processing method, device, equipment and computer storage medium
CN111652791A (en) * 2019-06-26 2020-09-11 广州虎牙科技有限公司 Face replacement display method, face replacement display device, live broadcast method, live broadcast device, electronic equipment and storage medium
CN112351327A (en) * 2019-08-06 2021-02-09 北京字节跳动网络技术有限公司 Face image processing method and device, terminal and storage medium
CN110619670A (en) * 2019-08-12 2019-12-27 北京百度网讯科技有限公司 Face interchange method and device, computer equipment and storage medium
CN110536164A (en) * 2019-08-16 2019-12-03 咪咕视讯科技有限公司 Display methods, video data handling procedure and relevant device
CN111753784A (en) * 2020-06-30 2020-10-09 广州酷狗计算机科技有限公司 Video special effect processing method and device, terminal and storage medium
CN111726536A (en) * 2020-07-03 2020-09-29 腾讯科技(深圳)有限公司 Video generation method and device, storage medium and computer equipment
CN112750176A (en) * 2020-09-10 2021-05-04 腾讯科技(深圳)有限公司 Image processing method and device, electronic equipment and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114666622A (en) * 2022-04-02 2022-06-24 北京字跳网络技术有限公司 Special effect video determination method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN106570110B (en) Image duplicate removal method and device
CN106664376B (en) Augmented reality device and method
CN106792004B (en) Content item pushing method, device and system
CN111553864B (en) Image restoration method and device, electronic equipment and storage medium
CN105357425B (en) Image capturing method and device
JP2016531362A (en) Skin color adjustment method, skin color adjustment device, program, and recording medium
US20170332137A1 (en) Real-time content filtering and replacement
WO2015001437A1 (en) Image processing method and apparatus, and electronic device
CN113395542B (en) Video generation method and device based on artificial intelligence, computer equipment and medium
CN109672830B (en) Image processing method, image processing device, electronic equipment and storage medium
WO2022227393A1 (en) Image photographing method and apparatus, electronic device, and computer readable storage medium
CN112991553B (en) Information display method and device, electronic equipment and storage medium
CN106791535B (en) Video recording method and device
CN110211211B (en) Image processing method, device, electronic equipment and storage medium
WO2022198934A1 (en) Method and apparatus for generating video synchronized to beat of music
CN112257552B (en) Image processing method, device, equipment and storage medium
CN112188091B (en) Face information identification method and device, electronic equipment and storage medium
CN112509005A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112148404A (en) Head portrait generation method, apparatus, device and storage medium
CN111340691A (en) Image processing method, image processing device, electronic equipment and storage medium
CN109145878B (en) Image extraction method and device
CN111144266A (en) Facial expression recognition method and device
CN111368127A (en) Image processing method, image processing device, computer equipment and storage medium
CN113362434A (en) Image processing method and device, electronic equipment and storage medium
CN104902318A (en) Playing control method and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination