CN109788311B - Character replacement method, electronic device, and storage medium - Google Patents
Classifications
- Processing Or Creating Images
- Image Analysis
Abstract
The invention relates to a character replacement method, an electronic device, and a storage medium. The method determines a first video resource; determines a first person in the first video resource; determines a first face region of the first person; determines a second person corresponding to the first person, the second person being different from the first person; determines first attribute information of each feature point in the first face region; determines second attribute information of each feature point in a second face region of the corresponding second person; adjusts the second face region according to the first attribute information and the second attribute information; and replaces the content of the first face region with the content of the adjusted second face region. Because the first face region of the first person in the first video resource is replaced with the adjusted second face region of the second person, a character's appearance can be changed after the video resource has been produced, improving user participation and interactivity.
Description
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a character replacement method, an electronic device, and a storage medium.
Background
At present, in video resources such as movies, television, animations, cartoons, and games, character images are fixed: once a video resource has been produced, a character can only have the appearance it was produced with, and that appearance cannot be changed.
Presenting character images in this unchangeable way reduces the interest of a video resource, and the participation and interactivity between the video resource and the user are insufficient.
Disclosure of Invention
Technical problem to be solved
In order to improve the interactivity of video resources, the invention provides a character replacement method, an electronic device, and a storage medium.
(II) technical scheme
In order to achieve the above purpose, the main technical solution adopted by the invention is as follows:
a method of character replacement, the method comprising:
s101, determining a first video resource;
s102, determining a first person in the first video resource;
s103, determining a first face area of the first person;
s104, determining a second person corresponding to the first person, wherein the second person is different from the first person;
s105, determining first attribute information of each feature point in the first face area;
s106, determining second attribute information of each feature point in a second face area of a corresponding second person;
s107, adjusting the second face area according to the first attribute information and the second attribute information;
s108, replacing the content of the first face area with the content of the adjusted second face area;
the first attribute information and the second attribute information each comprise the position, resolution, color, brightness, contrast, area, perimeter, and length-width ratio (aspect ratio) of each feature point.
Optionally, the S102 includes:
s102-1, determining the total occurrence time of each person in the first video resource;
s102-2, sequencing all the people in the first video resource according to the total occurrence duration from long to short;
s102-3, determining the preset number of people ranked in the front as first people;
when the number of the first people is 1, the number of the second people is 1;
when the number of the first people is multiple, the number of the second people is the same as that of the first people, each second person corresponds to one unique first person, and the second people are different from the corresponding first people.
Optionally, the S104 includes:
monitoring whether at least one replacement resource is triggered;
when at least one alternative resource is triggered, determining a second person from the triggered alternative resource;
wherein the at least one replacement resource is triggered, comprising:
at least one stored photo is selected; or,
at least one stored second video asset is selected; or,
at least one stored photo is clicked on; or,
at least one stored second video asset is clicked on; or,
at least one photo is uploaded; or,
at least one second video asset is uploaded; or,
at least one picture is taken instantaneously; or,
at least one second video asset is shot instantly;
the second video asset is different from the first video asset.
Optionally, the first video resource is a dynamic image resource, where the dynamic image is a movie, a television program, an animation, a game, a self-shot video, an advertisement video, or a short video;
the second video resource is likewise a dynamic image resource, where the dynamic image is a movie, a television program, an animation, a game, a self-shot video, an advertisement video, or a short video.
Optionally, the determining the second person from the triggered alternative resource includes:
determining the person selected by the user in the triggered alternative resources as a second person; or,
when the triggered replacement resource is a picture, identifying all people in the triggered replacement resource, calculating the area of each person, and determining a second person according to the area of each person; or,
and when the triggered alternative resource is the second video resource, identifying all the persons in the triggered alternative resource, and determining the second person according to the importance degree of each person.
Optionally, the importance degree of each person is determined as follows:
for any person i, determining all frames in which person i appears;
the importance level of any character i is determined according to the following formula:
where W_i is the importance degree of person i, n_i is the total number of frames in which person i appears, N is the total number of frames of the second video resource, j identifies a frame in which person i appears, a_ij is the area of person i in frame j, A_j is the total area of all persons in frame j, s_j is the total effective area of frame j, b_ij is the area of the face of person i in frame j, B_j is the total area of all faces in frame j, and m_j is the content effective area of frame j.
Optionally, the feature points include forehead, eyebrow, eyes, nose, mouth.
Optionally, the S107 includes:
for any one of the characteristic points k,
s107-1, obtaining first attribute information of any feature point in a first face region, where the first attribute information includes a first position of any feature point k, a first resolution of any feature point k, a first color of any feature point k, a first brightness of any feature point k, a first contrast of any feature point k, a first area of any feature point k, a first perimeter of any feature point k, and a first length-width ratio of any feature point k;
s107-2, acquiring second attribute information of any feature point in a second face region, wherein the second attribute information comprises a second position of any feature point k, a second resolution of any feature point k, a second color of any feature point k, a second brightness of any feature point k, a second contrast of any feature point k, a second area of any feature point k, a second perimeter of any feature point k, and a second length-width ratio of any feature point k;
s107-3, adjusting the second position to the first position;
s107-4, if the second resolution is larger than the first resolution, adjusting the second resolution to the first resolution; otherwise, the second resolution is not adjusted;
s107-5, if the color information is (R, G, B), adjusting the second color information to:
where R is the red channel value, G is the green channel value, and B is the blue channel value; R_1, G_1, and B_1 are the red, green, and blue channel values in the first color information; and R_2, G_2, and B_2 are the red, green, and blue channel values in the second color information;
s107-6, adjusting the second brightness to the first brightness;
s107-7, if the second contrast is larger than the first contrast, adjusting the second contrast to the first contrast; otherwise, the second contrast is not adjusted;
s107-8, if the second area is larger than the first area and the second perimeter is larger than the first perimeter, reducing the area of any feature point k in the second face region according to the second length-width ratio until the perimeter of any feature point k in the reduced second face region is equal to the first perimeter;
if the second area is larger than the first area and the second perimeter is not larger than the first perimeter, reducing the area of any feature point k in the second face region according to the second length-width ratio until the area of any feature point k in the reduced second face region is equal to the first area;
if the second area is equal to the first area, the second perimeter and the second length-width ratio are not adjusted;
if the second area is smaller than the first area and the second circumference is larger than the first circumference, amplifying the area of any feature point k in the second face region according to the second length-width ratio until the area of any feature point k in the amplified second face region is equal to the first area;
and if the second area is smaller than the first area and the second perimeter is not larger than the first perimeter, amplifying the area of any feature point k in the second face region according to the second length-width ratio until the perimeter of any feature point k in the amplified second face region is equal to the first perimeter.
Optionally, the S108 includes:
replacing the image of the first face area with the adjusted image of the second face area;
and the attribute information of the replaced image is the attribute characteristic of the adjusted second face area.
In order to achieve the above purpose, the main technical solution adopted by the present invention further comprises:
an electronic device comprising a memory, a processor, a bus and a computer program stored on the memory and executable on the processor, the processor implementing a method as claimed in any one of the above methods when executing the program.
In order to achieve the above purpose, the main technical solution adopted by the present invention further comprises:
a computer storage medium having stored thereon a computer program which, when executed by a processor, implements a method as in any one of the above methods.
(III) advantageous effects
The invention has the beneficial effects that: determining a first video resource; determining a first person in a first video resource; determining a first face region of a first person; determining a second person corresponding to the first person, the second person being different from the first person; determining first attribute information of each feature point in a first face region; determining second attribute information of each feature point in a second face area of a corresponding second person; adjusting a second face area according to the first attribute information and the second attribute information; replacing the content of the first face area with the content of the adjusted second face area; the first attribute information and the second attribute information comprise the position, resolution, color, brightness, contrast, area, perimeter and length-width ratio of each feature point, so that the image change of people after video resource production is realized, and the participation and the interactivity are improved.
Drawings
Fig. 1 is a schematic flowchart of a character replacement method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to improve interactivity of video resources, the proposal provides a person replacing method, electronic equipment and a storage medium, and a first video resource is determined; determining a first person in a first video resource; determining a first face region of a first person; determining a second person corresponding to the first person, the second person being different from the first person; determining first attribute information of each feature point in a first face region; determining second attribute information of each feature point in a second face area of a corresponding second person; adjusting a second face area according to the first attribute information and the second attribute information; replacing the content of the first face area with the content of the adjusted second face area; the first attribute information and the second attribute information comprise the position, resolution, color, brightness, contrast, area, perimeter and length-width ratio of each feature point, so that the image change of people after video resource production is realized, and the participation and the interactivity are improved.
Referring to fig. 1, the implementation flow of the character replacement method provided in this embodiment is as follows:
s101, determining a first video resource.
The first video resource is a dynamic image resource.
For example, the moving image is a movie, or a television, or an animation, or a game, or a self-timer video, or an advertisement video, or a small video.
For convenience of description, the first video resource is taken as the animation a in this embodiment and the following embodiments. For other forms of the first video asset, this embodiment will not be illustrated.
S102, determining a first person in the first video resource.
The number of first persons in this step may be one or more. The number of first persons is not limited in this embodiment.
In this step, there are various ways to determine the first person, for example, if the user clicks one person, the person clicked by the user is determined as the first person.
For another example, if the user clicks a plurality of characters, all the characters clicked by the user are determined as the first character.
As another example, the first person is determined by:
s102-1, determining the total occurrence time of each person in the first video resource.
S102-2, sorting all the people in the first video resource according to the total occurrence time length from long to short.
S102-3, determining the preset number of people ranked at the top as the first people.
For example, suppose the preset number is 2 and there are 4 persons in animation A: person 1, person 2, person 3, and person 4. The total duration for which each person appears in animation A is determined: T1 for person 1, T2 for person 2, T3 for person 3, and T4 for person 4. If T4 > T2 > T1 = T3, sorting all persons in animation A by total appearance duration from long to short gives the order: person 4, person 2, person 1, person 3. The top 2 persons (person 4 and person 2) are then each determined as a first person.
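The selection in S102-1 through S102-3 can be sketched as follows; the duration values and the preset number below are hypothetical illustrations, not values from the patent:

```python
# Sketch of S102-1..S102-3: pick the first persons by total on-screen duration.
def select_first_persons(total_durations, preset_number):
    """Sort persons by total appearance duration (longest first) and
    return the top `preset_number` persons as the first persons."""
    ranked = sorted(total_durations, key=total_durations.get, reverse=True)
    return ranked[:preset_number]

# Hypothetical total durations (seconds) for the four persons in animation A,
# chosen so that T4 > T2 > T1 = T3 as in the worked example.
durations = {"person 1": 120, "person 2": 300, "person 3": 120, "person 4": 450}
first_persons = select_first_persons(durations, preset_number=2)
# -> ["person 4", "person 2"]
```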
S103, determining a first human face area of the first person.
And if the number of the first person is 1, determining the face area of the first person. If the number of the first people is 2, the face area of each first person is determined.
There are many schemes for determining the face region, and this embodiment is not limited.
And S104, determining a second person corresponding to the first person.
Wherein the second person is different from the first person.
That is, when the number of the first person is 1, the number of the second person is 1, and the second person is different from the first person. When the number of the first people is multiple, the number of the second people is the same as that of the first people, each second person corresponds to one unique first person, and the second people are different from the corresponding first people.
For example, when the number of first persons is 2 (e.g., A and B), the number of second persons is also 2 (e.g., C and D), each second person corresponds to a unique first person (e.g., C corresponds to A and D corresponds to B), and each second person is different from its corresponding first person (C is different from A, and D is different from B). This embodiment only requires these differences; whether C is the same as B, or A the same as D, is not limited.
The specific implementation manner of the step is as follows: monitoring whether at least one replacement resource is triggered; and when at least one alternative resource is triggered, determining a second person from the triggered alternative resource.
The state of the replacement resource may be stored, uploaded, or instantly shot. In addition, the replacement resource may be a photo or a second video resource. The term "second video resource" is used only to distinguish it from the first video resource in S101; "first" and "second" merely denote resources at different stages and have no other meaning: the first video resource is the resource containing the character to be replaced, and the second video resource is the resource containing the replacement character.
Therefore, the at least one alternative resource in this embodiment may be at least one stored photo, or at least one stored second video resource, or at least one uploaded photo, or at least one uploaded second video resource, or at least one immediately taken photo, or at least one immediately taken second video resource.
Based on this, determining that at least one replacement resource is triggered if the following events are monitored to occur includes:
the at least one stored photo is selected by a user, or the at least one stored second video resource is selected by a user, or the at least one stored photo is clicked by a user, or the at least one stored second video resource is clicked by a user, or the at least one photo is uploaded, or the at least one second video resource is uploaded, or the at least one photo is taken instantly, or the at least one second video resource is taken instantly.
Furthermore, the implementation manner of determining the second person from the triggered alternative resource may be: and determining the person selected by the user in the triggered alternative resource as a second person.
Or, when the triggered alternative resource is a picture, determining the implementation manner of the second person from the triggered alternative resource may be: and identifying all the characters in the triggered replacement resources, calculating the area of each character, and determining a second character according to the area of each character.
For example, according to the order of the area of each person from large to small, the preset number of persons ranked first are determined as the second persons.
The preset number here is the same as the preset number used when determining the first persons in S102.
For example, the 2 persons having the largest areas are taken as the second persons.
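The area-based selection just described can be sketched similarly; the person names and pixel areas are hypothetical:

```python
# Sketch: when the triggered replacement resource is a photo, rank the
# identified persons by area (largest first) and take the preset number —
# the same preset number used when determining the first persons in S102.
def select_second_persons_by_area(person_areas, preset_number):
    ranked = sorted(person_areas, key=person_areas.get, reverse=True)
    return ranked[:preset_number]

# Hypothetical pixel areas of the persons detected in the photo.
areas = {"C": 5200, "D": 4800, "E": 1500}
second_persons = select_second_persons_by_area(areas, preset_number=2)
# -> ["C", "D"]
```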
In addition, when the triggered alternative resource is the second video resource, the implementation manner of determining the second person from the triggered alternative resource may be: and identifying all the persons in the triggered replacement resource, and determining a second person according to the importance degree of each person.
For example, according to the ranking of the importance degree of each person from high to low, the preset number of persons ranked first are determined as the second persons.
The preset number here is the same as the preset number when the first person is determined in S102.
For example, 2 persons having a higher degree of importance are set as the second persons.
The calculation method for the degree of importance includes but is not limited to:
for any person i, it is determined that all frames of any person i exist.
The importance level of any person i is determined according to the following formula:
where W_i is the importance degree of person i, n_i is the total number of frames in which person i appears, N is the total number of frames of the second video resource, j identifies a frame in which person i appears, a_ij is the area of person i in frame j, A_j is the total area of all persons in frame j, s_j is the total effective area of frame j, b_ij is the area of the face of person i in frame j, B_j is the total area of all faces in frame j, and m_j is the content effective area of frame j.
The total effective area differs from the content effective area. The total effective area is the area occupied by the video picture: if the video has a border, with the content played inside the border and the border itself blank, the total effective area is the area inside the border. The content effective area is the area occupied by the content depicted in the video: if the content played inside the border consists of a blue-sky background with 2 persons standing in front of it, the content effective area is the area of everything other than the blue sky (i.e., the 2 persons). One way to calculate the content effective area is to first identify the background area of frame j and then compute: content effective area = total effective area − background area.
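The relation "content effective area = total effective area − background area" can be sketched with a boolean background mask; the 4×6 frame below is a hypothetical example, and background detection itself is outside the sketch:

```python
# Sketch: compute the content effective area of a frame from a background
# mask, following "content effective area = total effective area - background area".
# `background_mask` marks background pixels as True; any border outside the
# playing area is assumed to be excluded from the total effective area already.
def content_effective_area(total_effective_area, background_mask):
    background_area = sum(row.count(True) for row in background_mask)
    return total_effective_area - background_area

# Hypothetical 4x6 playing area: 16 background (blue-sky) pixels and
# 8 pixels belonging to the persons in front of it.
mask = [
    [True, True, True, True, True, True],
    [True, True, False, False, True, True],
    [True, False, False, False, False, True],
    [True, True, False, False, True, True],
]
area = content_effective_area(total_effective_area=24, background_mask=mask)
# 24 total - 16 background = 8 content pixels
```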
Taking a video with 5 frames as an example, for any person i, it is determined that all frames (e.g., frame 1 and frame 3) of the person i exist.
The importance of the character i is determined according to the following formula:
where W_i is the importance degree of person i; n_i is the total number of frames in which person i appears (n_i = 2); N is the total number of frames of the second video resource (N = 5); j identifies a frame in which person i appears (j = frame 1 or j = frame 3); a_ij is the area of person i in frame j (e.g., the area of person i in frame 1 and in frame 3); A_j is the total area of all persons in frame j (e.g., the total area of all persons in frame 1 and in frame 3); s_j is the total effective area of frame j (e.g., the total effective area of frame 1 and of frame 3); b_ij is the area of the face of person i in frame j (e.g., the face area of person i in frame 1 and in frame 3); B_j is the total area of all faces in frame j (e.g., the total area of all faces in frame 1 and in frame 3); and m_j is the content effective area of frame j (e.g., the content effective area of frame 1 and of frame 3).
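The patent's importance formula is given as an image that is not reproduced in this text; only its variable definitions survive. Purely as an illustration, an equally weighted combination of the named ratios (appearance frequency n_i/N, body-area ratios a_ij/A_j and a_ij/s_j, face-area ratios b_ij/B_j and b_ij/m_j) can be computed as below. This is an assumed stand-in, not the patent's actual formula:

```python
# Illustrative only: the patent defines the importance degree W_i through a
# formula over n_i, N, a_ij, A_j, s_j, b_ij, B_j and m_j, but the formula
# itself is an image not reproduced in this text. The equally weighted
# average below is an assumed stand-in, NOT the patent's actual formula.
def importance(frames, total_frames):
    """frames: one dict per frame in which person i appears, with keys
    a (person area), A (total person area), s (total effective area),
    b (face area), B (total face area), m (content effective area)."""
    n_i = len(frames)
    per_frame = [
        (f["a"] / f["A"] + f["a"] / f["s"] + f["b"] / f["B"] + f["b"] / f["m"]) / 4
        for f in frames
    ]
    return (n_i / total_frames) * (sum(per_frame) / n_i)

# The 5-frame example from the text: person i appears in frames 1 and 3.
# All area values are hypothetical.
frames_i = [
    {"a": 100, "A": 200, "s": 400, "b": 20, "B": 40, "m": 300},  # frame 1
    {"a": 150, "A": 150, "s": 400, "b": 30, "B": 30, "m": 350},  # frame 3
]
w_i = importance(frames_i, total_frames=5)
```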
In addition, when there are a plurality of first persons, the determination method of the corresponding relationship between the second person and the first person is not limited in this embodiment. The person may be designated manually, or the second person ranked first may be associated with the first person ranked first.
And S105, determining first attribute information of each feature point in the first face area.
Wherein the characteristic points comprise forehead, eyebrow, eye, nose and mouth.
The first attribute information includes position, resolution, color, brightness, contrast, area, perimeter, length-width ratio of each feature point.
Such as forehead position, resolution, color, brightness, contrast, area, perimeter, aspect ratio in the first face region. The position, resolution, color, brightness, contrast, area, perimeter, length-width ratio of eyebrows in the first face region. The position, resolution, color, brightness, contrast, area, perimeter, aspect ratio of the eyes in the first face region. The position, resolution, color, brightness, contrast, area, perimeter, aspect ratio of the nose in the first face region. The position, resolution, color, brightness, contrast, area, perimeter, aspect ratio of the mouth in the first face region.
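The attribute information gathered per feature point in S105 (and again in S106) can be sketched as a simple record type; the field types and example values are assumptions for illustration:

```python
from dataclasses import dataclass

# Sketch of the per-feature-point attribute record used by S105/S106.
# One record is produced for each feature point (forehead, eyebrow, eye,
# nose, mouth) of a face region; the types are illustrative assumptions.
@dataclass
class FeaturePointAttributes:
    position: tuple        # (x, y) location within the face region
    resolution: float      # e.g. pixels per unit length
    color: tuple           # (R, G, B)
    brightness: float
    contrast: float
    area: float
    perimeter: float
    aspect_ratio: float    # the "length-width ratio" in the text

FEATURE_POINTS = ("forehead", "eyebrow", "eye", "nose", "mouth")

# Hypothetical first attribute information for one first face region.
first_attrs = {
    name: FeaturePointAttributes((0, 0), 72.0, (128, 90, 80), 0.5, 1.0, 120.0, 44.0, 1.6)
    for name in FEATURE_POINTS
}
```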
In addition, when there are a plurality of first persons, in S103, a first face area of each first person is determined, and in this step, first attribute information of each feature point in each first face area is determined.
And S106, determining second attribute information of each feature point in the second face area of the corresponding second person.
The "second" in the second attribute information is used only to distinguish it from the first attribute information in S105. That is, "first" and "second" merely denote the attribute information of different faces and have no other meaning: the first attribute information is the attribute information of each feature point on the face of the character to be replaced, and the second attribute information is the attribute information of each feature point on the face of the replacement character.
The feature points in S106 are the same as those in S105, and are forehead, eyebrow, eye, nose, and mouth.
The second attribute information in S106 is the same as the first attribute information in S105, and is the position, resolution, color, brightness, contrast, area, perimeter, length-width ratio of each feature point. That is, the second attribute information includes the position, resolution, color, brightness, contrast, area, perimeter, and aspect ratio of each feature point.
Therefore, the second attribute information of each feature point in the second face region determined in step S106 is the position, resolution, color, brightness, contrast, area, perimeter, length-width ratio of the forehead in the second face region. The position, resolution, color, brightness, contrast, area, perimeter, length-width ratio of eyebrows in the second face region. The position, resolution, color, brightness, contrast, area, perimeter, length-to-width ratio of the eyes in the second face region. The position, resolution, color, brightness, contrast, area, perimeter, length to width ratio of the nose in the second face region. The position, resolution, color, brightness, contrast, area, perimeter, length-width ratio of the mouth in the second face region.
In addition, when there are a plurality of first persons, in S104, a second person corresponding to each first person is determined, and in this step, second attribute information of each feature point in the second face area of each second person is determined.
And S107, adjusting the second face area according to the first attribute information and the second attribute information.
When a plurality of first persons exist, the step adjusts the second face area of the second person corresponding to the first person according to the first attribute information of each feature point in the first face area of each first person and the second attribute information of each feature point in the second face area of the second person corresponding to the first person.
That is, when the first person is p and q, the second person corresponding to p is p ', and the second person corresponding to q is q', the step adjusts the second face area of p 'according to the first attribute information of each feature point in the face area of p and the second attribute information of each feature point in the second face area of p' corresponding to p. And adjusting the second face area of q 'according to the first attribute information of each feature point in the face area of q and the second attribute information of each feature point in the second face area of q' corresponding to q.
For the first attribute information of each feature point in any first face region and the second attribute information of each feature point in the corresponding second face region, the implementation manner of the step is as follows: for any one of the characteristic points k,
s107-1, acquiring first attribute information of any feature point in the first face region, wherein the first attribute information comprises a first position of any feature point k, a first resolution of any feature point k, a first color of any feature point k, a first brightness of any feature point k, a first contrast of any feature point k, a first area of any feature point k, a first perimeter of any feature point k, and a first length-width ratio of any feature point k.
S107-2, second attribute information of any feature point in the second face region is obtained, and the second attribute information comprises a second position of any feature point k, a second resolution of any feature point k, a second color of any feature point k, a second brightness of any feature point k, a second contrast of any feature point k, a second area of any feature point k, a second perimeter of any feature point k, and a second length-width ratio of any feature point k.
S107-3, adjusting the second position to the first position.
S107-4, if the second resolution is larger than the first resolution, adjusting the second resolution to the first resolution; otherwise, the second resolution is not adjusted.
S107-5, if the color information is (R, G, B), adjusting the second color information to:
wherein R is a red channel value, G is a green channel value, and B is a blue channel value; R1, G1, and B1 are the red, green, and blue channel values in the first color information; and R2, G2, and B2 are the red, green, and blue channel values in the second color information;
S107-6, the second brightness is adjusted to the first brightness.
S107-7, if the second contrast is larger than the first contrast, adjusting the second contrast to the first contrast; otherwise, the second contrast is not adjusted.
S107-8, if the second area is larger than the first area and the second perimeter is larger than the first perimeter, reducing the area of any feature point k in the second face region according to the second length-width ratio until the perimeter of any feature point k in the reduced second face region is equal to the first perimeter.
If the second area is larger than the first area and the second perimeter is not larger than the first perimeter, reducing the area of any feature point k in the second face region according to the second length-width ratio until the area of any feature point k in the reduced second face region is equal to the first area.
If the second area is equal to the first area, the second perimeter and the second length-width ratio are not adjusted.
And if the second area is smaller than the first area and the second circumference is larger than the first circumference, amplifying the area of any characteristic point k in the second face region according to the second length-width ratio until the area of any characteristic point k in the amplified second face region is equal to the first area.
And if the second area is smaller than the first area and the second perimeter is not larger than the first perimeter, amplifying the area of any feature point k in the second face region according to a second length-width ratio until the perimeter of any feature point k in the amplified second face region is equal to the first perimeter.
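The per-feature-point adjustment in S107-1 through S107-8 can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: feature-point attributes are stored in a plain dict, scaling is uniform (so area scales with the square of the perimeter scale factor), and the color step S107-5 is omitted because its formula is not reproduced in this text. All names are hypothetical.

```python
# Hypothetical sketch of S107-3 through S107-8; data layout and names are assumptions.

def adjust_feature_point(first, second):
    """Adjust one feature point's attributes in the second face region
    toward the corresponding feature point in the first face region."""
    adjusted = dict(second)

    # S107-3: move the feature point to the first position.
    adjusted["position"] = first["position"]

    # S107-4: clamp resolution down to the first resolution, never up.
    if second["resolution"] > first["resolution"]:
        adjusted["resolution"] = first["resolution"]

    # S107-6: take the first region's brightness.
    adjusted["brightness"] = first["brightness"]

    # S107-7: clamp contrast down to the first contrast, never up.
    if second["contrast"] > first["contrast"]:
        adjusted["contrast"] = first["contrast"]

    # S107-8: scale the feature point uniformly (preserving the second
    # length-width ratio) until either area or perimeter matches.
    if second["area"] > first["area"]:
        if second["perimeter"] > first["perimeter"]:
            # Shrink until the perimeter equals the first perimeter.
            adjusted["perimeter"] = first["perimeter"]
            adjusted["area"] = second["area"] * (first["perimeter"] / second["perimeter"]) ** 2
        else:
            # Shrink until the area equals the first area.
            adjusted["area"] = first["area"]
            adjusted["perimeter"] = second["perimeter"] * (first["area"] / second["area"]) ** 0.5
    elif second["area"] < first["area"]:
        if second["perimeter"] > first["perimeter"]:
            # Enlarge until the area equals the first area.
            adjusted["area"] = first["area"]
            adjusted["perimeter"] = second["perimeter"] * (first["area"] / second["area"]) ** 0.5
        else:
            # Enlarge until the perimeter equals the first perimeter.
            adjusted["perimeter"] = first["perimeter"]
            adjusted["area"] = second["area"] * (first["perimeter"] / second["perimeter"]) ** 2
    # If the areas are equal, perimeter and ratio are left unchanged.
    return adjusted
```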
Through S107, the position, resolution, color, brightness, contrast, area, perimeter, and length-width ratio of the forehead, eyebrows, eyes, nose, and mouth in the second face region can be adjusted, so that the adjusted second face region fits the first face region more closely, the replaced content better matches the overall style of the first video resource, and the replacement effect is ensured.
And S108, replacing the content of the first face area with the content of the adjusted second face area.
When a plurality of first persons exist, the content of the first face area of each first person is replaced by the content of the adjusted second face area of the corresponding second person.
That is, when the first persons are p and q, the second person corresponding to p is p', and the second person corresponding to q is q', this step replaces the content of the first face region of p with the content of the adjusted second face region of p', and replaces the content of the first face region of q with the content of the adjusted second face region of q'.
For the first face area of any first person and the second face area of the corresponding second person, this step is implemented as follows: the image of the first face area is replaced with the adjusted image of the second face area, and the attribute information of the replaced image is the attribute features of the adjusted second face area.
For example, the forehead of the first face region is replaced by the forehead of the second face region, and the position, resolution, color, brightness, contrast, area, perimeter, and aspect ratio of the replaced forehead are the attribute features of the forehead in the second face region adjusted in S107.
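At the pixel level, the replacement in S108 amounts to copying the adjusted second-face content into the first face region. A minimal NumPy sketch, assuming both regions are rectangular patches of the same shape; the region geometry, alignment, and any blending used by the patent are not specified here, and the function name is hypothetical.

```python
import numpy as np

def replace_face_region(frame, top, left, adjusted_patch):
    """Overwrite the first face region of `frame` (starting at row `top`,
    column `left`) with the adjusted second face region `adjusted_patch`.
    Returns a new frame; the input frame is left unmodified."""
    h, w = adjusted_patch.shape[:2]
    out = frame.copy()
    out[top:top + h, left:left + w] = adjusted_patch
    return out
```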
By the above method, the face of a person in the first video resource can be replaced with the face of the user, so that a character's appearance can be changed even after the video resource has been produced, improving participation and interactivity.
In addition, in order to avoid the face being obtrusive and uncoordinated due to color, resolution and the like after replacement, when the face is replaced, the positions, resolution, color, brightness, contrast, area, circumference and length-width ratio of the forehead, eyebrows, eyes, nose and mouth on the face of the user can be adjusted, so that the replacement effect is improved.
It should be noted that "first" and "second" in this embodiment and subsequent embodiments are only serial numbers, and are used to distinguish different people, faces, attribute information, positions, resolutions, colors, luminances, contrasts, areas, circumferences, length-width ratios, and the like, and have no other meaning.
The method provided by the invention comprises the steps of determining a first video resource; determining a first person in a first video resource; determining a first face region of a first person; determining a second person corresponding to the first person, the second person being different from the first person; determining first attribute information of each feature point in a first face region; determining second attribute information of each feature point in a second face area of a corresponding second person; adjusting a second face area according to the first attribute information and the second attribute information; replacing the content of the first face area with the content of the adjusted second face area; the attribute information comprises the position, resolution, color, brightness, contrast, area, perimeter and length-width ratio of each feature point, so that the image change of the character after the video resource is made is realized, and the participation and the interactivity are improved.
Referring to fig. 2, the present embodiment provides an electronic device, including: a memory 201, a processor 202, a bus 203, and a computer program stored on the memory 201 and executable on the processor 202.
The processor 202 implements the following method when executing the program:
S101, determining a first video resource;
S102, determining a first person in a first video resource;
S103, determining a first face area of a first person;
S104, determining a second person corresponding to the first person, wherein the second person is different from the first person;
S105, determining first attribute information of each feature point in the first face area;
S106, determining second attribute information of each feature point in a second face area of a corresponding second person;
S107, adjusting a second face area according to the first attribute information and the second attribute information;
S108, replacing the content of the first face area with the content of the adjusted second face area;
the first attribute information and the second attribute information include the position, resolution, color, brightness, contrast, area, perimeter, length-width ratio of each feature point.
Optionally, S102, includes:
S102-1, determining the total occurrence time of each person in the first video resource;
S102-2, sequencing all the people in the first video resource according to the total occurrence duration from long to short;
S102-3, determining the preset number of people ranked in the front as first people;
when the number of the first people is 1, the number of the second people is 1;
when the number of the first people is multiple, the number of the second people is the same as that of the first people, each second person corresponds to one unique first person, and the second people are different from the corresponding first people.
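Steps S102-1 through S102-3 select the characters with the longest total on-screen presence. A small sketch, assuming the per-person appearance durations have already been measured; the function name and data layout are assumptions for illustration.

```python
def pick_first_persons(durations, preset_count):
    """durations: dict mapping a person identifier to that person's total
    occurrence duration in the first video resource (e.g. in seconds).
    Returns the `preset_count` persons with the longest total duration,
    ordered from long to short (S102-2, S102-3)."""
    ranked = sorted(durations, key=durations.get, reverse=True)
    return ranked[:preset_count]
```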
Optionally, S104 includes:
monitoring whether at least one replacement resource is triggered;
when at least one alternative resource is triggered, determining a second person from the triggered alternative resource;
wherein the at least one replacement resource is triggered, comprising:
at least one stored photo is selected; or,
at least one stored second video asset is selected; or,
at least one stored photo is clicked on; or,
at least one stored second video asset is clicked on; or,
at least one photo is uploaded; or,
at least one second video asset is uploaded; or,
at least one picture is taken instantaneously; or,
at least one second video asset is shot instantly;
the second video asset is different from the first video asset.
Optionally, the first video resource is a dynamic image resource, and the dynamic image is a movie, a TV program, an animation, a game, a self-portrait video, an advertisement video, or a short video;
the second video resource is a dynamic image resource, and the dynamic image is a movie, a TV program, an animation, a game, a self-portrait video, an advertisement video, or a short video.
Optionally, determining the second person from the triggered alternative resource includes:
determining the person selected by the user in the triggered alternative resources as a second person; or,
when the triggered replacement resource is a picture, identifying all people in the triggered replacement resource, calculating the area of each person, and determining a second person according to the area of each person; or,
and when the triggered alternative resource is the second video resource, identifying all the persons in the triggered alternative resource, and determining the second person according to the importance degree of each person.
Alternatively, the importance level of each character is determined by:
for any person i, determining all frames in which the person i appears;
the importance level of any person i is determined according to the following formula:
wherein Wi is the degree of importance of any character i, ni is the total number of frames in which any character i exists, N is the total number of frames of the second video resource, j is the identification of a frame in which any character i exists, aij is the area of any character i in frame j, Aj is the total area of all characters in frame j, sj is the total effective area of frame j, bij is the area of the face of any character i in frame j, Bj is the total area of all faces in frame j, and mj is the content effective area of frame j.
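The importance formula itself is an image in the original patent and is not reproduced in this text; only its variables are defined above. As a purely hypothetical illustration of how such per-frame ratios could be aggregated — this is NOT the patent's formula — one might average the person's share of character area and face area over the frames in which the person appears, weighted by how often the person appears:

```python
def importance(frames_with_i, total_frames):
    """Hypothetical importance score for a person i; the patent's actual
    formula is not given here. frames_with_i: one dict per frame j in which
    person i appears, with keys a (person area), A (total character area),
    b (person face area), B (total face area)."""
    n_i = len(frames_with_i)  # n_i: frames in which person i appears
    if n_i == 0:
        return 0.0
    # Average of the person's area share and face-area share per frame.
    share = sum((f["a"] / f["A"] + f["b"] / f["B"]) / 2 for f in frames_with_i) / n_i
    # Weight by the fraction of all frames in which the person appears.
    return (n_i / total_frames) * share
```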
Optionally, the feature points include forehead, eyebrow, eye, nose, mouth.
Optionally, S107 includes:
for any one of the characteristic points k,
S107-1, acquiring first attribute information of any feature point in a first face region, wherein the first attribute information comprises a first position of any feature point k, a first resolution of any feature point k, a first color of any feature point k, a first brightness of any feature point k, a first contrast of any feature point k, a first area of any feature point k, a first perimeter of any feature point k and a first length-width ratio of any feature point k;
S107-2, acquiring second attribute information of any feature point in a second face region, wherein the second attribute information comprises a second position of any feature point k, a second resolution of any feature point k, a second color of any feature point k, a second brightness of any feature point k, a second contrast of any feature point k, a second area of any feature point k, a second perimeter of any feature point k and a second length-width ratio of any feature point k;
S107-3, adjusting the second position to the first position;
S107-4, if the second resolution is larger than the first resolution, adjusting the second resolution to the first resolution; otherwise, the second resolution is not adjusted;
S107-5, if the color information is (R, G, B), adjusting the second color information to:
wherein R is a red channel value, G is a green channel value, and B is a blue channel value; R1, G1, and B1 are the red, green, and blue channel values in the first color information; and R2, G2, and B2 are the red, green, and blue channel values in the second color information;
S107-6, adjusting the second brightness to the first brightness;
S107-7, if the second contrast is larger than the first contrast, adjusting the second contrast to the first contrast; otherwise, the second contrast is not adjusted;
S107-8, if the second area is larger than the first area and the second perimeter is larger than the first perimeter, reducing the area of any feature point k in the second face region according to the second length-width ratio until the perimeter of any feature point k in the reduced second face region is equal to the first perimeter;
if the second area is larger than the first area and the second perimeter is not larger than the first perimeter, reducing the area of any feature point k in the second face region according to a second length-width ratio until the area of any feature point k in the reduced second face region is equal to the first area;
if the second area is equal to the first area, the second perimeter and the second length-width ratio are not adjusted;
if the second area is smaller than the first area and the second perimeter is larger than the first perimeter, amplifying the area of any feature point k in the second face region according to a second length-width ratio until the area of any feature point k in the amplified second face region is equal to the first area;
and if the second area is smaller than the first area and the second perimeter is not larger than the first perimeter, amplifying the area of any feature point k in the second face region according to a second length-width ratio until the perimeter of any feature point k in the amplified second face region is equal to the first perimeter.
Optionally, S108 includes:
replacing the image of the first face area with the adjusted image of the second face area;
and the attribute information of the replaced image is the attribute characteristic of the adjusted second face area.
The electronic device provided by the embodiment determines a first video resource; determining a first person in a first video resource; determining a first face region of a first person; determining a second person corresponding to the first person, the second person being different from the first person; determining first attribute information of each feature point in a first face region; determining second attribute information of each feature point in a second face area of a corresponding second person; adjusting a second face area according to the first attribute information and the second attribute information; replacing the content of the first face area with the content of the adjusted second face area; the attribute information comprises the position, resolution, color, brightness, contrast, area, perimeter and length-width ratio of each feature point, so that the image change of the character after the video resource is made is realized, and the participation and the interactivity are improved.
The present embodiment provides a computer storage medium storing a computer program which, when executed, performs the following operations:
S101, determining a first video resource;
S102, determining a first person in a first video resource;
S103, determining a first face area of a first person;
S104, determining a second person corresponding to the first person, wherein the second person is different from the first person;
S105, determining first attribute information of each feature point in the first face area;
S106, determining second attribute information of each feature point in a second face area of a corresponding second person;
S107, adjusting a second face area according to the first attribute information and the second attribute information;
S108, replacing the content of the first face area with the content of the adjusted second face area;
the first attribute information and the second attribute information include the position, resolution, color, brightness, contrast, area, perimeter, length-width ratio of each feature point.
Optionally, S102, includes:
S102-1, determining the total occurrence time of each person in the first video resource;
S102-2, sequencing all the people in the first video resource according to the total occurrence duration from long to short;
S102-3, determining the preset number of people ranked in the front as first people;
when the number of the first people is 1, the number of the second people is 1;
when the number of the first people is multiple, the number of the second people is the same as that of the first people, each second person corresponds to one unique first person, and the second people are different from the corresponding first people.
Optionally, S104 includes:
monitoring whether at least one replacement resource is triggered;
when at least one alternative resource is triggered, determining a second person from the triggered alternative resource;
wherein the at least one replacement resource is triggered, comprising:
at least one stored photo is selected; or,
at least one stored second video asset is selected; or,
at least one stored photo is clicked on; or,
at least one stored second video asset is clicked on; or,
at least one photo is uploaded; or,
at least one second video asset is uploaded; or,
at least one picture is taken instantaneously; or,
at least one second video asset is shot instantly;
the second video asset is different from the first video asset.
Optionally, the first video resource is a dynamic image resource, and the dynamic image is a movie, a TV program, an animation, a game, a self-portrait video, an advertisement video, or a short video;
the second video resource is a dynamic image resource, and the dynamic image is a movie, a TV program, an animation, a game, a self-portrait video, an advertisement video, or a short video.
Optionally, determining the second person from the triggered alternative resource includes:
determining the person selected by the user in the triggered alternative resources as a second person; or,
when the triggered replacement resource is a picture, identifying all people in the triggered replacement resource, calculating the area of each person, and determining a second person according to the area of each person; or,
and when the triggered alternative resource is the second video resource, identifying all the persons in the triggered alternative resource, and determining the second person according to the importance degree of each person.
Alternatively, the importance level of each character is determined by:
for any person i, determining all frames in which the person i appears;
the importance level of any person i is determined according to the following formula:
wherein Wi is the degree of importance of any character i, ni is the total number of frames in which any character i exists, N is the total number of frames of the second video resource, j is the identification of a frame in which any character i exists, aij is the area of any character i in frame j, Aj is the total area of all characters in frame j, sj is the total effective area of frame j, bij is the area of the face of any character i in frame j, Bj is the total area of all faces in frame j, and mj is the content effective area of frame j.
Optionally, the feature points include forehead, eyebrow, eye, nose, mouth.
Optionally, S107 includes:
for any one of the characteristic points k,
S107-1, acquiring first attribute information of any feature point in a first face region, wherein the first attribute information comprises a first position of any feature point k, a first resolution of any feature point k, a first color of any feature point k, a first brightness of any feature point k, a first contrast of any feature point k, a first area of any feature point k, a first perimeter of any feature point k and a first length-width ratio of any feature point k;
S107-2, acquiring second attribute information of any feature point in a second face region, wherein the second attribute information comprises a second position of any feature point k, a second resolution of any feature point k, a second color of any feature point k, a second brightness of any feature point k, a second contrast of any feature point k, a second area of any feature point k, a second perimeter of any feature point k and a second length-width ratio of any feature point k;
S107-3, adjusting the second position to the first position;
S107-4, if the second resolution is larger than the first resolution, adjusting the second resolution to the first resolution; otherwise, the second resolution is not adjusted;
S107-5, if the color information is (R, G, B), adjusting the second color information to:
wherein R is a red channel value, G is a green channel value, and B is a blue channel value; R1, G1, and B1 are the red, green, and blue channel values in the first color information; and R2, G2, and B2 are the red, green, and blue channel values in the second color information;
S107-6, adjusting the second brightness to the first brightness;
S107-7, if the second contrast is larger than the first contrast, adjusting the second contrast to the first contrast; otherwise, the second contrast is not adjusted;
S107-8, if the second area is larger than the first area and the second perimeter is larger than the first perimeter, reducing the area of any feature point k in the second face region according to the second length-width ratio until the perimeter of any feature point k in the reduced second face region is equal to the first perimeter;
if the second area is larger than the first area and the second perimeter is not larger than the first perimeter, reducing the area of any feature point k in the second face region according to a second length-width ratio until the area of any feature point k in the reduced second face region is equal to the first area;
if the second area is equal to the first area, the second perimeter and the second length-width ratio are not adjusted;
if the second area is smaller than the first area and the second perimeter is larger than the first perimeter, amplifying the area of any feature point k in the second face region according to a second length-width ratio until the area of any feature point k in the amplified second face region is equal to the first area;
and if the second area is smaller than the first area and the second perimeter is not larger than the first perimeter, amplifying the area of any feature point k in the second face region according to a second length-width ratio until the perimeter of any feature point k in the amplified second face region is equal to the first perimeter.
Optionally, S108 includes:
replacing the image of the first face area with the adjusted image of the second face area;
and the attribute information of the replaced image is the attribute characteristic of the adjusted second face area.
The computer storage medium provided by the embodiment determines a first video resource; determining a first person in a first video resource; determining a first face region of a first person; determining a second person corresponding to the first person, the second person being different from the first person; determining first attribute information of each feature point in a first face region; determining second attribute information of each feature point in a second face area of a corresponding second person; adjusting a second face area according to the first attribute information and the second attribute information; replacing the content of the first face area with the content of the adjusted second face area; the attribute information comprises the position, resolution, color, brightness, contrast, area, perimeter and length-width ratio of each feature point, so that the image change of the character after the video resource is made is realized, and the participation and the interactivity are improved.
It is to be understood that the invention is not limited to the specific arrangements and instrumentality described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions or change the order between the steps after comprehending the spirit of the present invention.
It should also be noted that the exemplary embodiments mentioned in this patent describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
Finally, it should be noted that: the above-mentioned embodiments are only used for illustrating the technical solution of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A character replacement method, comprising:
S101, determining a first video resource;
S102, determining a first person in the first video resource;
S103, determining a first face area of the first person;
S104, determining a second person corresponding to the first person, wherein the second person is different from the first person;
S105, determining first attribute information of each feature point in the first face area;
S106, determining second attribute information of each feature point in a second face area of a corresponding second person;
S107, adjusting the second face area according to the first attribute information and the second attribute information;
S108, replacing the content of the first face area with the content of the adjusted second face area;
the first attribute information and the second attribute information comprise the position, resolution, color, brightness, contrast, area, perimeter and length-width ratio of each feature point;
the S107 comprises:
for any one of the characteristic points k,
S107-1, obtaining first attribute information of any feature point in a first face region, where the first attribute information includes a first position of any feature point k, a first resolution of any feature point k, a first color of any feature point k, a first brightness of any feature point k, a first contrast of any feature point k, a first area of any feature point k, a first perimeter of any feature point k, and a first length-width ratio of any feature point k;
S107-2, acquiring second attribute information of any feature point in a second face region, wherein the second attribute information comprises a second position of any feature point k, a second resolution of any feature point k, a second color of any feature point k, a second brightness of any feature point k, a second contrast of any feature point k, a second area of any feature point k, a second perimeter of any feature point k, and a second length-width ratio of any feature point k;
S107-3, adjusting the second position to the first position;
S107-4, if the second resolution is larger than the first resolution, adjusting the second resolution to the first resolution; otherwise, the second resolution is not adjusted;
S107-5, if the color information is (R, G, B), adjusting the second color information to:
wherein R is a red channel value, G is a green channel value, and B is a blue channel value; R1, G1, and B1 are the red, green, and blue channel values in the first color information; and R2, G2, and B2 are the red, green, and blue channel values in the second color information;
S107-6, adjusting the second brightness to the first brightness;
S107-7, if the second contrast is larger than the first contrast, adjusting the second contrast to the first contrast; otherwise, the second contrast is not adjusted;
S107-8, if the second area is larger than the first area and the second perimeter is larger than the first perimeter, reducing the area of any feature point k in the second face region according to the second length-width ratio until the perimeter of any feature point k in the reduced second face region is equal to the first perimeter;
if the second area is larger than the first area and the second perimeter is not larger than the first perimeter, reducing the area of any feature point k in the second face region according to the second length-width ratio until the area of any feature point k in the reduced second face region is equal to the first area;
if the second area is equal to the first area, the second perimeter and the second length-width ratio are not adjusted;
if the second area is smaller than the first area and the second circumference is larger than the first circumference, amplifying the area of any feature point k in the second face region according to the second length-width ratio until the area of any feature point k in the amplified second face region is equal to the first area;
and if the second area is smaller than the first area and the second perimeter is not larger than the first perimeter, amplifying the area of any feature point k in the second face region according to the second length-width ratio until the perimeter of any feature point k in the amplified second face region is equal to the first perimeter.
2. The method according to claim 1, wherein the S102 comprises:
S102-1, determining the total occurrence time of each person in the first video resource;
S102-2, sequencing all the people in the first video resource according to the total occurrence duration from long to short;
S102-3, determining the preset number of people ranked in the front as first people;
when the number of the first people is 1, the number of the second people is 1;
when the number of the first people is multiple, the number of the second people is the same as that of the first people, each second person corresponds to one unique first person, and the second people are different from the corresponding first people.
3. The method of claim 1, wherein S104 comprises:
monitoring whether at least one replacement resource is triggered;
and when at least one replacement resource is triggered, determining the second person from the triggered replacement resource;
wherein the at least one replacement resource is triggered when:
at least one stored photo is selected; or
at least one stored second video resource is selected; or
at least one stored photo is clicked; or
at least one stored second video resource is clicked; or
at least one photo is uploaded; or
at least one second video resource is uploaded; or
at least one photo is captured in real time; or
at least one second video resource is captured in real time;
wherein the second video resource is different from the first video resource.
4. The method of claim 3, wherein the first video resource is a dynamic image resource, the dynamic image resource being a movie, a TV show, an animation, a game, a selfie video, an advertisement video, or a short video;
and the second video resource is a dynamic image resource, the dynamic image resource being a movie, a TV show, an animation, a game, a selfie video, an advertisement video, or a short video.
5. The method of claim 4, wherein determining the second person from the triggered replacement resource comprises:
determining the person selected by the user in the triggered replacement resource as the second person; or
when the triggered replacement resource is a photo, identifying all persons in the triggered replacement resource, calculating the area of each person, and determining the second person according to the area of each person; or
when the triggered replacement resource is a second video resource, identifying all persons in the triggered replacement resource, and determining the second person according to the importance degree of each person.
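For the photo branch, claim 5 only says the second person is determined "according to the area of each person" without fixing the rule. One plausible reading, sketched here as an assumption rather than the claimed method, is to pick the detected person occupying the largest pixel area:

```python
def pick_second_person_by_area(person_areas: dict[str, float]) -> str:
    """Photo case of claim 5 under one reading: choose the detected person
    with the largest area (the mapping from person ID to pixel area is
    assumed to come from a person-detection step)."""
    return max(person_areas, key=person_areas.get)
```

Other readings (e.g. the person closest to a target area) would fit the claim language equally well.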
6. The method of claim 5, wherein the importance degree of each person is determined by:
for any person i, determining all frames in which person i exists;
determining the importance degree of person i according to the following formula:
wherein W_i is the importance degree of person i; N_i is the total number of frames in which person i exists; N is the total number of frames of the second video resource; j identifies a frame in which person i exists; a_ij is the area of person i in frame j; A_j is the total area of all persons in frame j; s_j is the total effective area of frame j; b_ij is the area of the face of person i in frame j; B_j is the total area of all faces in frame j; and m_j is the effective content area of frame j.
7. The method of claim 1, wherein the feature points include the forehead, eyebrows, eyes, nose, and mouth.
8. The method of claim 1, wherein S108 comprises:
replacing the image of the first face area with the adjusted image of the second face area;
wherein the attribute information of the replacement image is the attribute characteristics of the adjusted second face area.
9. An electronic device comprising a memory, a processor, a bus, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the method of any one of claims 1 to 8.
10. A computer storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910082617.5A CN109788311B (en) | 2019-01-28 | 2019-01-28 | Character replacement method, electronic device, and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109788311A CN109788311A (en) | 2019-05-21 |
CN109788311B true CN109788311B (en) | 2021-06-04 |
Family
ID=66502789
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910082617.5A Active CN109788311B (en) | 2019-01-28 | 2019-01-28 | Character replacement method, electronic device, and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109788311B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111047930B (en) * | 2019-11-29 | 2021-07-16 | 联想(北京)有限公司 | Processing method and device and electronic equipment |
CN111131776A (en) * | 2019-12-20 | 2020-05-08 | 中译语通文娱科技(青岛)有限公司 | Intelligent video object replacement system based on Internet of things |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102196245A (en) * | 2011-04-07 | 2011-09-21 | 北京中星微电子有限公司 | Video play method and video play device based on character interaction |
CN104376589A (en) * | 2014-12-04 | 2015-02-25 | 青岛华通国有资本运营(集团)有限责任公司 | Method for replacing movie and TV play figures |
CN105118082A (en) * | 2015-07-30 | 2015-12-02 | 科大讯飞股份有限公司 | Personalized video generation method and system |
CN106792147A (en) * | 2016-12-08 | 2017-05-31 | 天脉聚源(北京)传媒科技有限公司 | A kind of image replacement method and device |
CN108347578A (en) * | 2017-01-23 | 2018-07-31 | 腾讯科技(深圳)有限公司 | The processing method and processing device of video image in video calling |
CN108471544A (en) * | 2018-03-28 | 2018-08-31 | 北京奇艺世纪科技有限公司 | A kind of structure video user portrait method and device |
CN108650555A (en) * | 2018-05-15 | 2018-10-12 | 优酷网络技术(北京)有限公司 | The displaying of video clip, the generation method of interactive information, player and server |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10446189B2 (en) * | 2016-12-29 | 2019-10-15 | Google Llc | Video manipulation with face replacement |
CN106909923A (en) * | 2017-02-20 | 2017-06-30 | 汪爱民 | A kind of vehicles peccancy processing system |
- 2019-01-28: application CN201910082617.5A filed in CN; granted as CN109788311B; status: Active
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||