CN113947549A - Self-photographing video decoration prop edge processing method and related product - Google Patents
- Publication number: CN113947549A
- Application number: CN202111235238.9A
- Authority: CN (China)
- Legal status: Granted
Classifications
- G06T 5/94 — Image enhancement or restoration; dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
- G06T 11/40 — 2D [Two Dimensional] image generation; filling a planar surface by adding surface attributes, e.g. colour or texture
- G06T 7/90 — Image analysis; determination of colour characteristics
Abstract
The present disclosure provides a self-timer video decoration prop edge processing method and related products. The method includes the following steps: after starting a self-timer video application, a terminal acquires a decoration prop selected by a user; the terminal captures a first picture, superimposes the decoration prop on the first picture to obtain a second picture, and displays the second picture; the terminal identifies the lower edge line of the decoration prop in the second picture, divides the second picture into an upper part and a lower part with the lower edge line as the boundary, obtains a plurality of RGB values of a plurality of pixel points of the lower part adjacent to the boundary, extracts the m RGB values that are not within a set range from the plurality of RGB values, groups the pixel points whose RGB values are identical and contiguous among the m RGB values into one region to obtain at least two regions, and performs covering processing on the at least two regions to obtain a third picture. The technical scheme provided by the application has the advantage of improving user experience.
Description
Technical Field
The invention relates to the technical field of media, and in particular to a self-photographing video decoration prop edge processing method and a related product.
Background
A short video is a form of internet content distribution: video content, generally shorter than one minute, distributed on new internet media.
Existing decoration props applied to short videos (i.e. props that do not replace the main position of the human face, such as hats and other ornaments) do not process the edge position after being superimposed on the face region, so the edge position has a relatively large visual impact on the decoration prop as a whole. For example, when a relatively large hat is worn on the head but the hair still shows outside the hat, the visual effect is greatly degraded, which harms the user experience.
Disclosure of Invention
The embodiment of the invention provides a self-shooting video decoration prop edge processing method and a related product, which can process edge positions, improve picture quality and improve user experience.
In a first aspect, an embodiment of the present invention provides a method for processing an edge of a self-portrait video decoration prop, where the method includes the following steps:
after starting a self-shooting video application, a terminal acquires a decoration prop selected by a user;
the terminal captures a first picture, superimposes the decoration prop on the first picture to obtain a second picture, and displays the second picture;
the terminal identifies the lower edge line of the decoration prop in the second picture, divides the second picture into an upper part and a lower part with the lower edge line as the boundary, obtains a plurality of RGB values of a plurality of pixel points of the lower part adjacent to the boundary, extracts the m RGB values that are not within a set range from the plurality of RGB values, groups the pixel points whose RGB values are identical and contiguous among the m RGB values into one region to obtain at least two regions, and performs covering processing on the at least two regions to obtain a third picture.
Optionally, the setting range specifically includes:
a face region of the first picture is identified, the RGB values of all pixel points of the face region are extracted, the number of pixel points having each RGB value is counted, the first RGB value with the largest count is determined as the central value of the set range, and the set range is obtained by expanding a set interval around the first RGB value as the central value.
Optionally, the step of performing coverage processing on the at least two areas to obtain a third picture specifically includes:
performing a covering operation on each of the at least two regions to obtain a third picture, where the covering operation may specifically include: extracting a region; determining the pixel points of the lower part whose RGB values are identical to and contiguous with those of the region as the expanded region of the region; extracting a plurality of pixel points outside the expanded region that are adjacent to the pixel points at the edge of the expanded region; and, if the RGB values of the plurality of pixel points all fall within the set range, adjusting the RGB values of the pixel points of the expanded region to an RGB value within the set range to complete the covering processing.
Optionally, the step of performing coverage processing on the at least two areas to obtain a third picture specifically includes:
performing a covering operation on each of the at least two regions to obtain a third picture, where the covering operation may specifically include: extracting a region; determining the pixel points of the lower part whose RGB values are identical to and contiguous with those of the region as the expanded region of the region; extracting w pixel points outside the expanded region that are adjacent to the pixel points at the edge of the expanded region; if, among the RGB values of the w pixel points, there are x pixel points whose RGB values are not within the set range, determining that the remaining w-x pixel points are contiguous pixel points lying on one side of the expanded region; connecting the two end pixel points of the w-x pixel points to form, together with the contiguous w-x pixel points, an edge region; and, if the edge region overlaps the expanded region, adjusting the RGB values of all pixel points in the overlapping part of the edge region and the expanded region to an RGB value within the set range to complete the covering processing.
Optionally, the method further includes:
and displaying the second picture and the third picture through left and right split screens.
In a second aspect, a terminal is provided, which includes: a processor, a camera and a display screen,
the display screen is used for acquiring the decoration prop selected by the user after the self-shooting video application is started;
the camera is used for collecting a first picture,
the processor is used for superimposing the decoration prop on the first picture to obtain a second picture and displaying the second picture, and is further used for identifying the lower edge line of the decoration prop in the second picture, dividing the second picture into an upper part and a lower part with the lower edge line as the boundary, obtaining a plurality of RGB values of a plurality of pixel points of the lower part adjacent to the boundary, extracting the m RGB values that are not within a set range from the plurality of RGB values, grouping the pixel points whose RGB values are identical and contiguous among the m RGB values into one region to obtain at least two regions, and performing covering processing on the at least two regions to obtain a third picture.
Optionally, the processor is specifically configured to identify a face region of the first picture, extract the RGB values of all pixel points of the face region, count the number of pixel points having each RGB value, determine the first RGB value with the largest count as the central value of the set range, and expand a set interval around the first RGB value as the central value to obtain the set range.
Optionally, the processor is specifically configured to perform a covering operation on each of the at least two regions to obtain a third picture, where the covering operation specifically includes: extracting a region; determining the pixel points of the lower part whose RGB values are identical to and contiguous with those of the region as the expanded region of the region; extracting a plurality of pixel points outside the expanded region that are adjacent to the pixel points at the edge of the expanded region; and, if the RGB values of the plurality of pixel points all fall within the set range, adjusting the RGB values of the pixel points of the expanded region to an RGB value within the set range to complete the covering processing.
Optionally, the processor is specifically configured to perform a covering operation on each of the at least two regions to obtain a third picture, where the covering operation specifically includes: extracting a region; determining the pixel points of the lower part whose RGB values are identical to and contiguous with those of the region as the expanded region of the region; extracting w pixel points outside the expanded region that are adjacent to the pixel points at the edge of the expanded region; if, among the RGB values of the w pixel points, there are x pixel points whose RGB values are not within the set range, determining that the remaining w-x pixel points are contiguous pixel points lying on one side of the expanded region; connecting the two end pixel points of the w-x pixel points to form, together with the contiguous w-x pixel points, an edge region; and, if the edge region overlaps the expanded region, adjusting the RGB values of all pixel points in the overlapping part of the edge region and the expanded region to an RGB value within the set range to complete the covering processing.
Optionally, the terminal is: a smart phone or a tablet computer.
In a third aspect, a computer-readable storage medium is provided, which stores a program for electronic data exchange, wherein the program causes a terminal to execute the method provided in the first aspect.
The embodiment of the invention has the following beneficial effects:
It can be seen that after the second picture is obtained, the lower edge line of the decoration prop is determined, the second picture is divided into an upper part and a lower part along the lower edge line, and the RGB values of the pixel points of the lower part are checked to determine whether they are within the set range; the pixel points of the lower part that are not within the set range are then covered to obtain a third picture. In this way the edge position can be covered, yielding a third picture with a better visual effect.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a terminal.
Fig. 2 is a flow chart of a self-timer video decoration prop edge processing method.
Fig. 3 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of the invention and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, result, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 provides a terminal, which may specifically be a smart phone, a tablet computer, a computer or a server; the smart phone may be a terminal running an iOS or Android system. The terminal may specifically include: a processor, a memory, a camera and a display screen. These components may be connected through a bus or in other ways; the present application does not limit the specific connection manner.
Referring to fig. 2, fig. 2 provides a self-portrait video decoration prop edge processing method, which is executed by the terminal shown in fig. 1 and includes the following steps:
step S201, after starting a self-timer video application, a terminal collects decorative props selected by a user;
step S202, the terminal collects a first picture, the decorative prop is superposed on the first picture to obtain a second picture, and the second picture is displayed;
step S203, the terminal identifies a lower edge line of the decorative prop of the second picture, divides the second picture into an upper part and a lower part by taking the lower edge line as a boundary, obtains a plurality of RGB values of a plurality of pixel points of the lower part adjacent to the boundary, extracts m RGB values that are not within a set range from the plurality of RGB values, divides the same and continuous RGB values in the m RGB values into one region to obtain at least two regions, and performs a covering process on the at least two regions to obtain a third picture.
After the second picture is obtained, the lower edge line of the decoration prop is determined and the second picture is divided into an upper part and a lower part along the lower edge line. Among the pixel points of the lower part adjacent to the boundary, the m RGB values that are not within the set range are extracted, the pixel points with identical and contiguous RGB values are grouped into one region to obtain at least two regions, and the at least two regions are covered. In this way the edge position can be covered, yielding a third picture with a better visual effect.
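The grouping step above can be sketched as follows. This is a minimal 1-D illustration, assuming a single row of pixel points just below the lower edge line; the patent does not fix a data layout, and the function and variable names are our own:

```python
def group_regions(row, in_range):
    """Split a row of RGB pixel points just below the boundary into regions:
    contiguous pixel points that are outside the set range and share the same
    RGB value form one region.  row: list of (r, g, b); in_range: predicate.
    Returns a list of (start_index, end_index_exclusive, rgb) regions."""
    regions = []
    i = 0
    while i < len(row):
        if in_range(row[i]):          # skip pixel points already within the set range
            i += 1
            continue
        j = i
        while j < len(row) and row[j] == row[i] and not in_range(row[j]):
            j += 1                    # extend while identical and contiguous
        regions.append((i, j, row[i]))
        i = j
    return regions

skin = (250, 200, 185)
hair = (30, 25, 20)
row = [skin, hair, hair, (60, 50, 40), skin, hair]
in_skin = lambda p: all(abs(c - s) <= 5 for c, s in zip(p, skin))
print(group_regions(row, in_skin))
# → [(1, 3, (30, 25, 20)), (3, 4, (60, 50, 40)), (5, 6, (30, 25, 20))]
```

Note that two hair-colored runs separated by in-range skin pixels form two separate regions, which matches the "at least two regions" wording of step S203.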
The set range may specifically be a range of skin RGB values and may be set by the user. In practical applications, the set range may also be determined from the first picture. The specific determination method includes: identifying the face region of the first picture, extracting the RGB values of all pixel points of the face region, counting the number of pixel points having each RGB value, determining the first RGB value with the largest count (for a face region, most of the area is skin) as the central value of the set range, and expanding a set interval around the first RGB value as the central value to obtain the set range.
For example, if the first RGB value is (248, 197, 183) and the set interval is ±5, the corresponding set range may be: [243, 253], [192, 202], [178, 188].
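The set-range computation can be sketched in NumPy as follows. This is a non-authoritative sketch: the function name, array layout and clipping to [0, 255] are assumptions, not part of the patent.

```python
import numpy as np

def skin_range_from_face(face_pixels, interval=5):
    """Estimate the 'set range': take the most frequent RGB value in the face
    region as the central value and widen it by +/- interval per channel.
    face_pixels: (N, 3) uint8 array of face-region pixel points."""
    # Count occurrences of each distinct RGB triple and pick the mode.
    colors, counts = np.unique(face_pixels.reshape(-1, 3), axis=0,
                               return_counts=True)
    center = colors[np.argmax(counts)].astype(int)
    # Expand the set interval around the central value, channel by channel.
    low = np.clip(center - interval, 0, 255)
    high = np.clip(center + interval, 0, 255)
    return low, high

# Worked example matching the numbers in the patent:
face = np.array([[248, 197, 183]] * 10 + [[120, 90, 80]] * 3, dtype=np.uint8)
low, high = skin_range_from_face(face)
print(low, high)   # → [243 192 178] [253 202 188]
```

Casting the central value to `int` before subtracting avoids uint8 underflow when the mode is close to 0.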
Optionally, performing covering processing on the at least two regions to obtain a third picture specifically includes:
performing a covering operation on each of the at least two regions to obtain a third picture, where the covering operation may specifically include: extracting a region; determining the pixel points of the lower part whose RGB values are identical to and contiguous with those of the region as the expanded region of the region; extracting a plurality of pixel points outside the expanded region that are adjacent to the pixel points at the edge of the expanded region; and, if the RGB values of the plurality of pixel points all fall within the set range, adjusting the RGB values of the pixel points of the expanded region to an RGB value within the set range to complete the covering processing.
According to the above technical scheme, prominent noise pixel points within the face position are covered. For example, when a small amount of hair lies at the edge of the face, the outer pixel points adjacent to the edge pixel points of the expanded region formed by that hair all belong to the face region, and those adjacent pixel points are within the set range; the small amount of hair can therefore be covered with the skin RGB value to remove the noise pixel points (i.e. a skin-smoothing effect), improving the quality of the third picture.
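A minimal sketch of this covering operation on a 2-D pixel grid follows. It is our own simplification: the patent does not specify the expansion or adjacency scheme, so 4-connectivity, the dict-based image and all names are assumptions.

```python
def cover_region(img, seed, set_center, in_range):
    """Covering operation, roughly as described: flood-fill from a seed pixel
    over contiguous pixel points with the same RGB value (the 'expanded
    region'); if every outside pixel adjacent to the region's edge is within
    the set range, overwrite the region with an RGB value from the set range.
    img: dict (x, y) -> rgb.  Returns True if the region was covered."""
    target = img[seed]
    region, stack, seen = set(), [seed], {seed}
    while stack:
        x, y = p = stack.pop()
        region.add(p)
        for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if q in img and q not in seen and img[q] == target:
                seen.add(q)
                stack.append(q)
    # Pixel points just outside the expanded region, adjacent to its edge.
    outside = {q for (x, y) in region
               for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
               if q in img and q not in region}
    if all(in_range(img[q]) for q in outside):
        for p in region:
            img[p] = set_center   # adjust to an RGB value within the set range
        return True
    return False

skin = (248, 197, 183)
img = {(x, y): skin for x in range(3) for y in range(3)}
img[(1, 1)] = (30, 25, 20)   # isolated stray hair pixel
ok = cover_region(img, (1, 1), skin, lambda p: p == skin)
print(ok, img[(1, 1)])   # → True (248, 197, 183)
```

Because every neighbor of the stray pixel is skin-colored, the check passes and the pixel is recolored, reproducing the "small amount of hair at the edge of the face" case described above.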
Optionally, the method may further include:
and displaying the second picture and the third picture in split screens (for example, the second picture in a left split screen and the third picture in a right split screen).
Optionally, performing covering processing on the at least two regions to obtain a third picture specifically includes:
performing a covering operation on each of the at least two regions to obtain a third picture, where the covering operation may specifically include: extracting a region; determining the pixel points of the lower part whose RGB values are identical to and contiguous with those of the region as the expanded region of the region; extracting w pixel points outside the expanded region that are adjacent to the pixel points at the edge of the expanded region; if, among the RGB values of the w pixel points, there are x pixel points whose RGB values are not within the set range, determining that the remaining w-x pixel points are contiguous pixel points lying on one side (left side or right side) of the expanded region; connecting the two end pixel points of the w-x pixel points to form, together with the contiguous w-x pixel points, an edge region; and, if the edge region overlaps the expanded region, adjusting the RGB values of all pixel points in the overlapping part of the edge region and the expanded region to an RGB value within the set range to complete the covering processing.
This technical scheme performs face-slimming processing on the edge of one side of the face to obtain a better third picture. In a face picture, some noise points may lie outside the face range while most of the area where they sit is within the face edge region; the w-x pixel points are therefore taken as the edge line of the face, and if that edge line lies on one side, the overlap between its span (i.e. the edge region) and the expanded region is covered, yielding a better face picture and improving the quality of the third picture.
Referring to fig. 3, fig. 3 provides a terminal including: a camera, a processor and a display screen,
the display screen is used for acquiring the decoration prop selected by the user after the self-shooting video application is started;
the camera is used for collecting a first picture,
the processor is used for superimposing the decoration prop on the first picture to obtain a second picture and displaying the second picture, and is further used for identifying the lower edge line of the decoration prop in the second picture, dividing the second picture into an upper part and a lower part with the lower edge line as the boundary, obtaining a plurality of RGB values of a plurality of pixel points of the lower part adjacent to the boundary, extracting the m RGB values that are not within a set range from the plurality of RGB values, grouping the pixel points whose RGB values are identical and contiguous among the m RGB values into one region to obtain at least two regions, and performing covering processing on the at least two regions to obtain a third picture.
The processor may also be used to perform alternatives, refinements or details of the method as shown in fig. 2, which are not described in detail here.
An embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the self-portrait video decoration prop edge processing methods described in the above method embodiments.
Embodiments of the present invention also provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to execute part or all of the steps of any one of the self-portrait video decoration prop edge processing methods described in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules illustrated are not necessarily required to practice the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above embodiments of the present invention are described in detail, and the principle and the implementation of the present invention are explained by applying specific embodiments, and the above description of the embodiments is only used to help understanding the method of the present invention and the core idea thereof; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
Claims (10)
1. A self-timer video decoration prop edge processing method is characterized by comprising the following steps:
after starting a self-shooting video application, a terminal acquires a decoration prop selected by a user;
the terminal captures a first picture, superimposes the decoration prop on the first picture to obtain a second picture, and displays the second picture;
the terminal identifies the lower edge line of the decoration prop in the second picture, divides the second picture into an upper part and a lower part with the lower edge line as the boundary, obtains a plurality of RGB values of a plurality of pixel points of the lower part adjacent to the boundary, extracts the m RGB values that are not within a set range from the plurality of RGB values, groups the pixel points whose RGB values are identical and contiguous among the m RGB values into one region to obtain at least two regions, and performs covering processing on the at least two regions to obtain a third picture.
2. The method according to claim 1, wherein the setting of the range specifically comprises:
a face region of the first picture is identified, the RGB values of all pixel points of the face region are extracted, the number of pixel points having each RGB value is counted, the first RGB value with the largest count is determined as the central value of the set range, and the set range is obtained by expanding a set interval around the first RGB value as the central value.
3. The method according to claim 2, wherein the step of performing the overlay processing on the at least two areas to obtain the third picture specifically comprises:
performing a covering operation on each of the at least two regions to obtain a third picture, wherein the covering operation specifically includes: extracting a region; determining the pixel points of the lower part whose RGB values are identical to and contiguous with those of the region as the expanded region of the region; extracting a plurality of pixel points outside the expanded region that are adjacent to the pixel points at the edge of the expanded region; and, if the RGB values of the plurality of pixel points all fall within the set range, adjusting the RGB values of the pixel points of the expanded region to an RGB value within the set range to complete the covering processing.
4. The method according to claim 2, wherein the step of performing the overlay processing on the at least two areas to obtain the third picture specifically comprises:
performing a covering operation on each of the at least two regions to obtain a third picture, wherein the covering operation specifically includes: extracting a region; determining the pixel points of the lower part whose RGB values are identical to and contiguous with those of the region as the expanded region of the region; extracting w pixel points outside the expanded region that are adjacent to the pixel points at the edge of the expanded region; if, among the RGB values of the w pixel points, there are x pixel points whose RGB values are not within the set range, determining that the remaining w-x pixel points are contiguous pixel points lying on one side of the expanded region; connecting the two end pixel points of the w-x pixel points to form, together with the contiguous w-x pixel points, an edge region; and, if the edge region overlaps the expanded region, adjusting the RGB values of all pixel points in the overlapping part of the edge region and the expanded region to an RGB value within the set range to complete the covering processing.
5. The method of claim 1, further comprising:
displaying the second picture and the third picture in a split screen manner;
A terminal, the terminal comprising a processor, a camera and a display screen, wherein:
the display screen is configured to acquire the decorative prop selected by the user after the self-photographing video application is started;
the camera is configured to capture a first picture; and
the processor is configured to: superimpose the decorative prop on the first picture to obtain a second picture and display the second picture; identify a lower edge line of the decorative prop in the second picture; divide the second picture into an upper part and a lower part with the lower edge line as the boundary; obtain a plurality of RGB values of a plurality of pixel points of the lower part adjacent to the boundary; extract, from the plurality of RGB values, m RGB values that are not within a set range; divide the RGB values among the m RGB values that are the same and continuous into one region, obtaining at least two regions; and perform covering processing on the at least two regions to obtain a third picture.
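The processor's region-splitting step (collect the RGB values just below the boundary, keep the m values outside the set range, then group values that are both identical and contiguous) can be sketched as follows. `regions_below_edge` is a hypothetical name, and a single pixel row stands in for the band adjacent to the boundary.

```python
def regions_below_edge(row_rgb, center, tol):
    """Sketch of the region-splitting step: take the pixel points of the
    lower part adjacent to the boundary (one row here), keep the m RGB
    values that are not within the set range, and group values that are
    identical AND contiguous into one region each."""
    # indices whose RGB value falls outside the set range (the m values)
    out = [i for i, rgb in enumerate(row_rgb)
           if any(abs(int(c) - int(m)) > tol for c, m in zip(rgb, center))]
    regions, cur = [], []
    for i in out:
        if cur and i == cur[-1] + 1 and tuple(row_rgb[i]) == tuple(row_rgb[cur[-1]]):
            cur.append(i)                 # same RGB value and continuous
        else:
            if cur:
                regions.append(cur)
            cur = [i]
    if cur:
        regions.append(cur)
    return regions
```

Each returned index run is one of the "at least two regions" that the covering processing then handles separately.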
6. The terminal of claim 5,
the processor is specifically configured to: identify a face region of the first picture; extract the RGB values of all pixel points of the face region; count the number of pixel points for each RGB value; determine the first RGB value, namely the RGB value with the largest count, as the center value of the set range; and extend a set interval around the first RGB value as the center value to obtain the set range.
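A minimal sketch of this set-range construction, assuming the face region is given as a flat list of RGB tuples and the "set interval" is a per-channel half-width; `set_range_from_face` and `interval` are illustrative names.

```python
from collections import Counter

def set_range_from_face(face_pixels, interval):
    """Claim-6 sketch: count the pixel points of each RGB value in the face
    region, take the most frequent value (the "first RGB value") as the
    centre, and widen it by `interval` per channel to obtain the set range."""
    counts = Counter(map(tuple, face_pixels))
    center = counts.most_common(1)[0][0]      # RGB value with the largest count
    low = tuple(max(0, c - interval) for c in center)
    high = tuple(min(255, c + interval) for c in center)
    return center, (low, high)
```

The returned bounds play the role of the "set range" used by the covering operations: skin-tone pixels fall inside it, prop-edge artifacts fall outside.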
7. The terminal of claim 5,
the processor is specifically configured to perform a covering operation on each of the at least two regions to obtain the third picture, wherein the covering operation specifically comprises: extracting a region; determining the pixel points in the lower part whose RGB values are the same as and continuous with those of the region as an expanded region of the region; extracting a plurality of pixel points outside the expanded region that are adjacent to the pixel points at the edge of the expanded region; and, if the RGB values of the plurality of pixel points all fall within the set range, adjusting the RGB values of the pixel points of the expanded region to RGB values within the set range to complete the covering processing.
8. The terminal of claim 5,
the processor is specifically configured to perform a covering operation on each of the at least two regions to obtain the third picture, wherein the covering operation specifically comprises: extracting a region; determining the pixel points in the lower part whose RGB values are the same as and continuous with those of the region as an expanded region of the region; extracting w pixel points outside the expanded region that are adjacent to the pixel points at the edge of the expanded region; if, among the w pixel points, there are x pixel points whose RGB values are not within the set range, determining that the remaining w-x pixel points are continuous and located on one side of the expanded region; connecting the two end pixel points of the w-x pixel points to form an edge region together with the continuous w-x pixel points; and, if the edge region overlaps the expanded region, adjusting the RGB values of all pixel points in the overlapping part of the edge region and the expanded region to RGB values within the set range to complete the covering processing.
9. The terminal according to any one of claims 5 to 8, wherein the terminal is a smart phone or a tablet computer.
10. A computer-readable storage medium storing a program for electronic data exchange, wherein the program causes a terminal to perform the method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111235238.9A CN113947549B (en) | 2021-10-22 | 2021-10-22 | Self-shooting video decoration prop edge processing method and related product |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111235238.9A CN113947549B (en) | 2021-10-22 | 2021-10-22 | Self-shooting video decoration prop edge processing method and related product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113947549A true CN113947549A (en) | 2022-01-18 |
CN113947549B CN113947549B (en) | 2022-10-25 |
Family
ID=79332458
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111235238.9A Active CN113947549B (en) | 2021-10-22 | 2021-10-22 | Self-shooting video decoration prop edge processing method and related product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113947549B (en) |
Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104899853A (en) * | 2014-03-04 | 2015-09-09 | 腾讯科技(深圳)有限公司 | Image region dividing method and device |
US20160117832A1 (en) * | 2014-10-23 | 2016-04-28 | Ricoh Company, Ltd. | Method and apparatus for separating foreground image, and computer-readable recording medium |
WO2016107638A1 (en) * | 2014-12-29 | 2016-07-07 | Keylemon Sa | An image face processing method and apparatus |
CN106023204A (en) * | 2016-05-20 | 2016-10-12 | 陕西师范大学 | Method and system for removing mosquito noise based on edge detection algorithm |
CN106372602A (en) * | 2016-08-31 | 2017-02-01 | 华平智慧信息技术(深圳)有限公司 | Method and device for processing video file |
CN108320265A (en) * | 2018-01-31 | 2018-07-24 | 努比亚技术有限公司 | A kind of image processing method, terminal and computer readable storage medium |
CN109618211A (en) * | 2018-12-04 | 2019-04-12 | 深圳市子瑜杰恩科技有限公司 | Short video prop editing method and related product |
CN109640170A (en) * | 2018-12-04 | 2019-04-16 | 深圳市子瑜杰恩科技有限公司 | Speed processing method of self-shooting video and related product |
CN109658328A (en) * | 2018-11-26 | 2019-04-19 | 深圳艺达文化传媒有限公司 | Animal ear processing method for selfie video and related product |
CN109671014A (en) * | 2018-11-26 | 2019-04-23 | 深圳艺达文化传媒有限公司 | Braid superposition method for selfie video and related product |
CN109688452A (en) * | 2018-12-04 | 2019-04-26 | 深圳市子瑜杰恩科技有限公司 | Prop superposition method for paginated display and related product |
CN109697746A (en) * | 2018-11-26 | 2019-04-30 | 深圳艺达文化传媒有限公司 | Cartoon avatar superposition method for selfie video and related product |
CN109712103A (en) * | 2018-11-26 | 2019-05-03 | 深圳艺达文化传媒有限公司 | Eye processing method for Thor picture in selfie video and related product |
CN109712104A (en) * | 2018-11-26 | 2019-05-03 | 深圳艺达文化传媒有限公司 | Exposure method for selfie video cartoon avatar and related product |
CN109712066A (en) * | 2018-11-26 | 2019-05-03 | 深圳艺达文化传媒有限公司 | Animal ear adding method for selfie video and related product |
CN109740431A (en) * | 2018-11-26 | 2019-05-10 | 深圳艺达文化传媒有限公司 | Eyebrow processing method for selfie video avatar picture and related product |
US20190244060A1 (en) * | 2018-02-02 | 2019-08-08 | Nvidia Corporation | Domain Stylization Using a Neural Network Model |
WO2020082731A1 (en) * | 2018-10-26 | 2020-04-30 | 平安科技(深圳)有限公司 | Electronic device, credential recognition method and storage medium |
CN112365516A (en) * | 2020-11-11 | 2021-02-12 | 华中科技大学 | Virtual and real occlusion processing method in augmented reality |
CN112396562A (en) * | 2020-11-17 | 2021-02-23 | 中山大学 | Disparity map enhancement method based on RGB and DVS image fusion in high-dynamic-range scene |
CN113096022A (en) * | 2019-12-23 | 2021-07-09 | RealMe重庆移动通信有限公司 | Image blurring processing method and device, storage medium and electronic equipment |
Patent Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104899853A (en) * | 2014-03-04 | 2015-09-09 | 腾讯科技(深圳)有限公司 | Image region dividing method and device |
US20160117832A1 (en) * | 2014-10-23 | 2016-04-28 | Ricoh Company, Ltd. | Method and apparatus for separating foreground image, and computer-readable recording medium |
WO2016107638A1 (en) * | 2014-12-29 | 2016-07-07 | Keylemon Sa | An image face processing method and apparatus |
CN106023204A (en) * | 2016-05-20 | 2016-10-12 | 陕西师范大学 | Method and system for removing mosquito noise based on edge detection algorithm |
CN106372602A (en) * | 2016-08-31 | 2017-02-01 | 华平智慧信息技术(深圳)有限公司 | Method and device for processing video file |
CN108320265A (en) * | 2018-01-31 | 2018-07-24 | 努比亚技术有限公司 | A kind of image processing method, terminal and computer readable storage medium |
US20190244060A1 (en) * | 2018-02-02 | 2019-08-08 | Nvidia Corporation | Domain Stylization Using a Neural Network Model |
WO2020082731A1 (en) * | 2018-10-26 | 2020-04-30 | 平安科技(深圳)有限公司 | Electronic device, credential recognition method and storage medium |
CN109740431A (en) * | 2018-11-26 | 2019-05-10 | 深圳艺达文化传媒有限公司 | Eyebrow processing method for selfie video avatar picture and related product |
CN109671014A (en) * | 2018-11-26 | 2019-04-23 | 深圳艺达文化传媒有限公司 | Braid superposition method for selfie video and related product |
CN109697746A (en) * | 2018-11-26 | 2019-04-30 | 深圳艺达文化传媒有限公司 | Cartoon avatar superposition method for selfie video and related product |
CN109712103A (en) * | 2018-11-26 | 2019-05-03 | 深圳艺达文化传媒有限公司 | Eye processing method for Thor picture in selfie video and related product |
CN109712104A (en) * | 2018-11-26 | 2019-05-03 | 深圳艺达文化传媒有限公司 | Exposure method for selfie video cartoon avatar and related product |
CN109712066A (en) * | 2018-11-26 | 2019-05-03 | 深圳艺达文化传媒有限公司 | Animal ear adding method for selfie video and related product |
CN109658328A (en) * | 2018-11-26 | 2019-04-19 | 深圳艺达文化传媒有限公司 | Animal ear processing method for selfie video and related product |
CN109688452A (en) * | 2018-12-04 | 2019-04-26 | 深圳市子瑜杰恩科技有限公司 | Prop superposition method for paginated display and related product |
CN109640170A (en) * | 2018-12-04 | 2019-04-16 | 深圳市子瑜杰恩科技有限公司 | Speed processing method of self-shooting video and related product |
CN109618211A (en) * | 2018-12-04 | 2019-04-12 | 深圳市子瑜杰恩科技有限公司 | Short video prop editing method and related product |
CN113096022A (en) * | 2019-12-23 | 2021-07-09 | RealMe重庆移动通信有限公司 | Image blurring processing method and device, storage medium and electronic equipment |
CN112365516A (en) * | 2020-11-11 | 2021-02-12 | 华中科技大学 | Virtual and real occlusion processing method in augmented reality |
CN112396562A (en) * | 2020-11-17 | 2021-02-23 | 中山大学 | Disparity map enhancement method based on RGB and DVS image fusion in high-dynamic-range scene |
Non-Patent Citations (4)
Title |
---|
A. FARRUKH et al.: "Automated segmentation of skin-tone regions in video sequences", IEEE Students Conference, ISCON '02. Proceedings *
MAMTA MITTAL et al.: "An Efficient Edge Detection Approach to Provide Better Edge Connectivity for Image Analysis", IEEE Access *
LIU, Tenglong: "Research on object segmentation algorithms in video face replacement", China Masters' Theses Full-text Database, Information Science and Technology *
GU, Biaozhun: "Research and application of several video special effects for ordinary users", China Masters' Theses Full-text Database, Information Science and Technology *
Also Published As
Publication number | Publication date |
---|---|
CN113947549B (en) | 2022-10-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105744292A (en) | Video data processing method and device | |
CN107295352B (en) | Video compression method, device, equipment and storage medium | |
CN111476735B (en) | Face image processing method and device, computer equipment and readable storage medium | |
CN109658420A (en) | Face changing method for short video and related product | |
CN109646950B (en) | Image processing method and device applied to game scene and terminal | |
CN109740431B (en) | Eyebrow processing method of head portrait picture of self-shot video and related product | |
CN110267079B (en) | Method and device for replacing human face in video to be played | |
CN109658328A (en) | Animal ear processing method for selfie video and related product | |
CN113947549B (en) | Self-shooting video decoration prop edge processing method and related product | |
CN108010038B (en) | Live-broadcast dress decorating method and device based on self-adaptive threshold segmentation | |
CN109726632A (en) | Background recommended method and Related product | |
CN110414596B (en) | Video processing method, video processing device, model training method, model training device, storage medium and electronic device | |
CN109034059B (en) | Silence type face living body detection method, silence type face living body detection device, storage medium and processor | |
CN109640170B (en) | Speed processing method of self-shooting video, terminal and storage medium | |
CN109697746A (en) | Cartoon avatar superposition method for selfie video and related product | |
CN108230328B (en) | Method and device for acquiring target object and robot | |
CN109671138B (en) | Double overlapping method for head portrait background of self-photographing video and related product | |
CN109712103B (en) | Eye processing method for self-shot video Thor picture and related product | |
CN109658327B (en) | Self-photographing video hair style generation method and related product | |
CN109639962B (en) | Self-timer short video mode selection method and related product | |
CN109671014A (en) | Braid superposition method for selfie video and related product | |
CN108898081B (en) | Picture processing method and device, mobile terminal and computer readable storage medium | |
CN109712104A (en) | Exposure method for selfie video cartoon avatar and related product | |
CN108924411B (en) | Photographing control method and device | |
CN109712066A (en) | Animal ear adding method for selfie video and related product | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||