CN109740431B - Eyebrow processing method of head portrait picture of self-shot video and related product - Google Patents


Info

Publication number
CN109740431B
CN109740431B (application CN201811431497.7A)
Authority
CN
China
Prior art keywords
picture
region
line
determining
eyebrow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811431497.7A
Other languages
Chinese (zh)
Other versions
CN109740431A (en)
Inventor
张磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yida Culture Media Co ltd
Original Assignee
Shenzhen Yida Culture Media Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yida Culture Media Co ltd filed Critical Shenzhen Yida Culture Media Co ltd
Priority to CN201811431497.7A priority Critical patent/CN109740431B/en
Publication of CN109740431A publication Critical patent/CN109740431A/en
Application granted granted Critical
Publication of CN109740431B publication Critical patent/CN109740431B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides an eyebrow processing method for the head portrait picture of a self-shot video, and a related product. The method includes the following steps: when a terminal determines that a self-shooting page of a self-shooting video application has been entered, a selected head portrait picture is determined; the terminal collects a first picture and determines a face region of the first picture; when the terminal identifies that the head portrait picture contains eyebrows, the person's eyebrow region of the face region is identified, the pixel points of that region in the first picture are set to a transparent color to obtain a second picture, and the head portrait picture is superimposed on the second picture to obtain a third picture. The technical scheme provided by the application has the advantage of a high degree of user experience.

Description

Eyebrow processing method of head portrait picture of self-shot video and related product
Technical Field
The invention relates to the technical field of culture media, and in particular to an eyebrow processing method for the head portrait picture of a self-shot video and a related product.
Background
A short video is a mode of internet content distribution: video content, generally within 1 minute in length, distributed on new internet media.
In an existing short video application, if the selected head portrait picture contains eyebrows, the picture is superimposed directly onto the captured frame. For a person's head, the picture's eyebrows therefore cannot coincide with the person's own eyebrows, so the frame shows 4 eyebrows, which affects the effect of the picture and the user's experience.
Disclosure of Invention
The embodiment of the invention provides an eyebrow processing method for the head portrait picture of a self-shot video, and a related product, which perform transparency processing on the person's eyebrow region of the captured picture, avoid the appearance of 4 eyebrows, and thereby improve the user experience.
In a first aspect, an embodiment of the present invention provides an eyebrow processing method for the head portrait picture of a self-shot video, where the method includes the following steps:
when the terminal determines to enter a self-shooting page of a self-shooting video application, determining a selected head portrait picture;
the method comprises the steps that a terminal collects a first picture and determines a face area of the first picture;
when the terminal identifies that the head portrait picture contains eyebrows, the person's eyebrow region of the face region is identified, the pixel points of that region in the first picture are set to a transparent color to obtain a second picture, and the head portrait picture is superimposed on the second picture to obtain a third picture.
Optionally, the identifying the eyebrow region of the person in the face region specifically includes:
determining a vertical center line of the face region; determining the RGB value of each pixel point in the face region and retaining the black pixel points; combining black pixel points whose mutual distance is within a set distance into one region, thereby obtaining a plurality of regions; and searching the plurality of regions for 2 regions that are within a set area and symmetrical about the vertical center line, thereby determining the eyebrow regions.
Optionally, the setting the pixel point of the character eyebrow region of the first picture to a transparent color to obtain the second picture specifically includes:
setting the transparency of the pixel points in the person's eyebrow region of the first picture to level 0.
Optionally, the method further includes:
displaying the third picture and the second picture in a split-screen manner.
In a second aspect, a terminal is provided, which includes: a processor, a camera and a display screen,
the display screen is used for determining a selected head portrait picture when a self-timer page of a self-timer video application is entered;
the camera is used for collecting a first picture,
the processor is used for determining the face region of the first picture; when the head portrait picture contains eyebrows, identifying the person's eyebrow region of the face region, setting the pixel points of that region in the first picture to a transparent color to obtain a second picture, and superimposing the head portrait picture on the second picture to obtain a third picture.
Optionally,
the processor is specifically configured to: determine a vertical center line of the face region; determine the RGB value of each pixel point in the face region and retain the black pixel points; combine black pixel points whose mutual distance is within a set distance into one region, obtaining a plurality of regions; and search the plurality of regions for 2 regions that are within a set area and symmetrical about the vertical center line, thereby determining the eyebrow regions.
Optionally, the processor is further configured to set the transparency of the pixel points in the person's eyebrow region of the first picture to level 0.
Optionally, the processor is further configured to control the display screen to display the third picture and the second picture in a split screen manner.
Optionally, the terminal is: a smart phone or a tablet computer.
In a third aspect, a computer-readable storage medium is provided, which stores a program for electronic data exchange, wherein the program causes a terminal to execute the method provided in the first aspect.
The embodiment of the invention has the following beneficial effects:
according to the technical scheme, when the self-portrait page of the self-portrait application is determined, the selected head portrait picture is determined, the face area is determined after the first picture is collected, when the head portrait picture contains eyebrows, the eyebrow area of the face area is identified, the eyebrow area is subjected to transparency processing to obtain a second picture, and then the second picture is superposed with the head portrait picture to obtain a third picture, so that 4 eyebrows do not appear in the third picture, the visual effect of the pictures is improved, and the user experience is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a terminal.
Fig. 2 is a flowchart illustrating an eyebrow processing method for the head portrait picture of a self-shot video.
Fig. 3 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of the invention and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, result, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 provides a terminal, which may specifically be a smart phone, a tablet computer, a computer, or a server; the smart phone may be a terminal running the iOS or Android system. The terminal may specifically include: a processor, a memory, a camera and a display screen, where these components may be connected through a bus or in other ways; the application does not limit the specific connection manner.
Referring to fig. 2, fig. 2 provides an eyebrow processing method for the head portrait picture of a self-shot video, which is performed by the terminal shown in fig. 1. As shown in fig. 2, the method includes the following steps:
step S201, when the terminal determines to enter a self-timer page of a self-timer video application, determining a selected head portrait picture;
step S202, a terminal collects a first picture and determines a face area of the first picture;
Step S203, when the terminal identifies that the head portrait picture contains eyebrows, identifying the person's eyebrow region of the face region, setting the pixel points of that region in the first picture to a transparent color to obtain a second picture, and superimposing the head portrait picture on the second picture to obtain a third picture.
In the technical scheme provided by the application, when entry into the self-shooting page of the self-shooting application is determined, the selected head portrait picture is determined. After the first picture is collected, the face region is determined. When the head portrait picture is determined to contain eyebrows, the eyebrow region of the face region is identified and made transparent to obtain the second picture, and the head portrait picture is then superimposed on the second picture to obtain the third picture. In this way, 4 eyebrows do not appear in the third picture, which improves the visual effect of the picture and the user experience.
Optionally, the face region of the first picture may be determined by a face recognition algorithm, including but not limited to: the Baidu face recognition algorithm, the Google face recognition algorithm, and the like.
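The patent leaves the concrete face-recognition algorithm open (Baidu and Google algorithms are cited only as examples). As a purely illustrative stand-in, the face region can be approximated by the bounding box of skin-colored pixels; the RGB rule below is a common heuristic and an assumption, not something the patent specifies:

```python
import numpy as np

def rough_face_box(rgb):
    """Naive stand-in for a face detector: bounding box of skin-colored
    pixels. A real deployment would use a trained face detector."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    # Classic rule-based skin test (assumed thresholds, not from the patent)
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (r - b > 15)
    ys, xs = np.nonzero(skin)
    if len(xs) == 0:
        return None
    return (xs.min(), ys.min(),
            xs.max() - xs.min() + 1, ys.max() - ys.min() + 1)  # x, y, w, h
```

The returned box would then bound the search for the eyebrow regions described below.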
Optionally, the manner in which the terminal identifies that the head portrait picture contains eyebrows may specifically include: determining whether the head portrait picture carries an eyebrow identifier, and if it does, determining that the head portrait picture contains eyebrows; alternatively, the head portrait picture may be marked directly by the user or by the manufacturer providing it.
Optionally, the identifying the eyebrow region of the person in the face region may specifically include:
determining a vertical center line of the face region; determining the RGB value of each pixel point in the face region and retaining the black pixel points; combining black pixel points whose mutual distance is within a set distance (small, for example 1 mm) into one region, thereby obtaining a plurality of regions; and searching the plurality of regions for 2 regions that are within a set area (moderate: smaller than a hair region and larger than a mole region, for example 5 x 20 mm) and symmetrical about the vertical center line, thereby determining the eyebrow regions.
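The region-growing search described above can be sketched as follows. The darkness threshold, 8-connected grouping, area bounds, symmetry tolerance, and the use of pixel units instead of millimetres are all illustrative assumptions:

```python
import numpy as np
from collections import deque

def find_eyebrow_regions(rgb, dark_thresh=60, min_area=4, max_area=400, sym_tol=3):
    """Keep near-black pixels, group neighbouring ones into regions,
    then return the pair of regions symmetric about the vertical
    center line of the face box, or None if no such pair exists."""
    h, w, _ = rgb.shape
    dark = (rgb < dark_thresh).all(axis=2)       # "black" pixel points
    seen = np.zeros((h, w), bool)
    regions = []
    for y in range(h):
        for x in range(w):
            if dark[y, x] and not seen[y, x]:
                q, pts = deque([(y, x)]), []     # flood-fill one region
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    pts.append((cy, cx))
                    for ny in range(cy - 1, cy + 2):
                        for nx in range(cx - 1, cx + 2):
                            if (0 <= ny < h and 0 <= nx < w
                                    and dark[ny, nx] and not seen[ny, nx]):
                                seen[ny, nx] = True
                                q.append((ny, nx))
                if min_area <= len(pts) <= max_area:   # "set area" filter
                    regions.append(pts)
    mid = w / 2.0                                # vertical center line
    for i in range(len(regions)):
        for j in range(i + 1, len(regions)):
            ci = sum(x for _, x in regions[i]) / len(regions[i])
            cj = sum(x for _, x in regions[j]) / len(regions[j])
            if abs((mid - ci) - (cj - mid)) <= sym_tol:  # symmetric pair
                return regions[i], regions[j]
    return None
```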
Optionally, the setting the pixel point of the person eyebrow region of the first picture to a transparent color to obtain the second picture specifically may include:
and setting the transparency (alpha) of the pixel points in the human eyebrow area of the first picture to 0 order.
Optionally, the method may further include:
and displaying the third picture and the second picture in a split screen mode.
Optionally, the method may further include:
and if the third picture comprises double chin, carrying out flattening treatment on the double chin in the third picture to obtain a fourth picture.
The above leveling the double chin in the third picture may specifically include:
obtaining the grain lines (wrinkle lines) of the third picture; determining a region having 2 adjacent grain lines whose mutual distance is smaller than a set distance as a region to be determined; constructing lambda equidistant lines parallel to the grain lines within the region to be determined; extracting the picture strips between the equidistant lines and comparing them to determine whether they are consistent. If they are consistent, the region to be determined is a non-double-chin region (in a non-double-chin region the strips match; the comparison can be realized by an existing picture comparison method). If they are inconsistent (in a double-chin region the flesh is unevenly distributed, so the strips differ), the region to be determined is a double-chin region. In that case an upper equidistant line is constructed in the region above the double-chin region, the gamma picture strip between the upper equidistant line and the upper grain line is extracted, the gamma strip is extended downwards lambda-1 times (that is, each equidistant strip is replaced with the gamma strip), and the double-chin region is replaced, completing the flattening processing and obtaining the fourth picture.
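The replacement step can be sketched in simplified form. The sketch below assumes horizontal grain lines given as row indices and lambda = 4 equidistant strips; it extracts the gamma strip just above the band and tiles it downward over the band, which is the core of the scheme:

```python
import numpy as np

def flatten_double_chin(img, top, bottom, lam=4):
    """Replace the band between the upper grain line (row `top`) and the
    lower grain line (row `bottom`) by tiling the strip taken just above
    the band. Horizontal lines and lam=4 are illustrative assumptions."""
    band_h = bottom - top
    strip_h = max(1, band_h // lam)
    gamma = img[top - strip_h: top]      # the gamma strip above the band
    out = img.copy()
    y = top
    while y < bottom:                    # tile gamma downward over the band
        h = min(strip_h, bottom - y)
        out[y: y + h] = gamma[:h]
        y += h
    return out
```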
The obtaining of the third picture grain line may specifically include:
marking the RGB value of each pixel point of the third picture with a color, so that different RGB values receive different colors; determining line segments of a different color within a same-color region; if the number of different-color line segments within the same color exceeds 2, obtaining the distance between the different-color line segments; and if the distance is smaller than a set distance threshold, determining the different-color line segments to be grain lines.
As for the grain lines: they divide the skin into a plurality of areas. Because those areas are all skin, their RGB values are the same, so when they are marked with colors the skin receives a single color. The grain lines, however, differ from the skin in RGB value because of folding and lighting, so after marking their color is inconsistent with the color of the skin area, and each forms a line segment. In addition, a double chin has at least 2 grain lines, so their number is also 2 or more. The grain lines can therefore be distinguished by this method.
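The grain-line search can be sketched as follows. Treating the most common color as the skin color, the thresholds, and restricting line segments to horizontal rows are illustrative simplifications of the patent's color-marking scheme:

```python
import numpy as np

def find_grain_rows(rgb, diff_thresh=40, min_run=3):
    """Report rows containing a horizontal run of pixel points whose
    color is inconsistent with the dominant (skin) color."""
    flat = rgb.reshape(-1, 3)
    colours, counts = np.unique(flat, axis=0, return_counts=True)
    skin = colours[counts.argmax()].astype(int)      # most common color
    rows = []
    for y in range(rgb.shape[0]):
        differs = np.abs(rgb[y].astype(int) - skin).sum(axis=1) > diff_thresh
        run = best = 0
        for d in differs:                # longest differing run in this row
            run = run + 1 if d else 0
            best = max(best, run)
        if best >= min_run:
            rows.append(y)
    return rows
```

Two returned rows closer than the set distance threshold would then bound a candidate double-chin band.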
Referring to fig. 3, fig. 3 provides a terminal including: a camera, a processor and a display screen,
the display screen is used for determining a selected head portrait picture when a self-timer page of a self-timer video application is entered;
the camera is used for collecting a first picture,
the processor is used for determining the face region of the first picture; when the head portrait picture contains eyebrows, identifying the person's eyebrow region of the face region, setting the pixel points of that region in the first picture to a transparent color to obtain a second picture, and superimposing the head portrait picture on the second picture to obtain a third picture.
An embodiment of the present invention also provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, the computer program causing a computer to execute a part or all of the steps of any one of the methods for processing eyebrows of an avatar picture of a self-timer video as set forth in the above method embodiments.
Embodiments of the present invention also provide a computer program product including a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform part or all of the steps of any one of the methods for processing eyebrows of an avatar picture of a self-timer video as set forth in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules illustrated are not necessarily required to practice the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above embodiments of the present invention are described in detail, and the principle and the implementation of the present invention are explained by applying specific embodiments, and the above description of the embodiments is only used to help understanding the method of the present invention and the core idea thereof; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. An eyebrow processing method for an avatar picture of a self-timer video, the method comprising the steps of:
when the terminal determines to enter a self-shooting page of a self-shooting video application, determining a selected head portrait picture;
the terminal collects a first picture and determines a face region of the first picture;
when the terminal identifies that the head portrait picture contains eyebrows, identifying the person's eyebrow region of the face region, setting the pixel points of that region in the first picture to a transparent color to obtain a second picture, and superimposing the head portrait picture on the second picture to obtain a third picture;
if the third picture contains a double chin, performing flattening processing on the double chin in the third picture to obtain a fourth picture;
the flattening of the double chin in the third picture specifically comprises:
obtaining the grain lines of the third picture; determining a region having 2 adjacent grain lines whose mutual distance is smaller than a set distance as a region to be determined; constructing lambda equidistant lines of the grain lines within the region to be determined; extracting the picture strips between the equidistant lines and comparing them to determine whether they are consistent; if they are consistent, determining the region to be determined to be a non-double-chin region; if they are inconsistent, determining the region to be determined to be a double-chin region, constructing an upper equidistant line in the region above the double-chin region, extracting the gamma picture strip between the upper equidistant line and the upper grain line, extending the gamma strip downwards lambda-1 times, and replacing the double-chin region to complete the flattening processing and obtain the fourth picture;
obtaining the third picture grain line specifically includes:
marking the RGB value of each pixel point of the third picture with a color, so that different RGB values receive different colors; determining line segments of a different color within a same-color region; if the number of different-color line segments within the same color exceeds 2, obtaining the distance between the different-color line segments; and if the distance is smaller than a set distance threshold, determining the different-color line segments to be grain lines.
2. The method according to claim 1, wherein the identifying the eyebrow region of the person in the face region specifically comprises:
determining a vertical center line of the face region; determining the RGB value of each pixel point in the face region and retaining the black pixel points; combining black pixel points whose mutual distance is within a set distance into one region, thereby obtaining a plurality of regions; and searching the plurality of regions for 2 regions that are within a set area and symmetrical about the vertical center line, thereby determining the eyebrow regions.
3. The method of claim 1, wherein the setting the pixel points in the eyebrow area of the person in the first picture to transparent color to obtain the second picture specifically comprises:
and setting the transparency of the pixel points in the character eyebrow area of the first picture to 0 order.
4. The method of claim 1, further comprising:
and displaying the third picture and the second picture in a split screen mode.
5. A terminal, the terminal comprising: a processor, a camera and a display screen, which is characterized in that,
the display screen is used for determining a selected head portrait picture when a self-timer page of a self-timer video application is entered;
the camera is used for collecting a first picture,
the processor is used for determining the face region of the first picture; when the head portrait picture contains eyebrows, identifying the person's eyebrow region of the face region, setting the pixel points of that region in the first picture to a transparent color to obtain a second picture, and superimposing the head portrait picture on the second picture to obtain a third picture;
if the third picture contains a double chin, performing flattening processing on the double chin in the third picture to obtain a fourth picture;
the flattening of the double chin in the third picture specifically comprises:
obtaining the grain lines of the third picture; determining a region having 2 adjacent grain lines whose mutual distance is smaller than a set distance as a region to be determined; constructing lambda equidistant lines of the grain lines within the region to be determined; extracting the picture strips between the equidistant lines and comparing them to determine whether they are consistent; if they are consistent, determining the region to be determined to be a non-double-chin region; if they are inconsistent, determining the region to be determined to be a double-chin region, constructing an upper equidistant line in the region above the double-chin region, extracting the gamma picture strip between the upper equidistant line and the upper grain line, extending the gamma strip downwards lambda-1 times, and replacing the double-chin region to complete the flattening processing and obtain the fourth picture;
obtaining the third picture grain line specifically includes:
marking the RGB value of each pixel point of the third picture with a color, so that different RGB values receive different colors; determining line segments of a different color within a same-color region; if the number of different-color line segments within the same color exceeds 2, obtaining the distance between the different-color line segments; and if the distance is smaller than a set distance threshold, determining the different-color line segments to be grain lines.
6. The terminal of claim 5,
the processor is specifically configured to: determine a vertical center line of the face region; determine the RGB value of each pixel point in the face region and retain the black pixel points; combine black pixel points whose mutual distance is within a set distance into one region, obtaining a plurality of regions; and search the plurality of regions for 2 regions that are within a set area and symmetrical about the vertical center line, thereby determining the eyebrow regions.
7. The terminal of claim 5,
the processor is further configured to set the transparency of the pixel points in the person's eyebrow region of the first picture to level 0.
8. The terminal of claim 5,
the processor is further used for controlling the display screen to display the third picture and the second picture in a split screen mode.
9. A terminal according to any of claims 5-8,
the terminal is as follows: a smart phone or a tablet computer.
10. A computer-readable storage medium storing a program for electronic data exchange, wherein the program causes a terminal to perform the method as provided in any one of claims 1-4.
CN201811431497.7A 2018-11-26 2018-11-26 Eyebrow processing method of head portrait picture of self-shot video and related product Active CN109740431B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811431497.7A CN109740431B (en) 2018-11-26 2018-11-26 Eyebrow processing method of head portrait picture of self-shot video and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811431497.7A CN109740431B (en) 2018-11-26 2018-11-26 Eyebrow processing method of head portrait picture of self-shot video and related product

Publications (2)

Publication Number Publication Date
CN109740431A CN109740431A (en) 2019-05-10
CN109740431B true CN109740431B (en) 2021-11-16

Family

ID=66359148

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811431497.7A Active CN109740431B (en) 2018-11-26 2018-11-26 Eyebrow processing method of head portrait picture of self-shot video and related product

Country Status (1)

Country Link
CN (1) CN109740431B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113947549B (en) * 2021-10-22 2022-10-25 深圳国邦信息技术有限公司 Self-shooting video decoration prop edge processing method and related product

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102306293A (en) * 2011-07-29 2012-01-04 南京多伦科技有限公司 Method for judging driver exam in actual road based on facial image identification technology
CN105224910A (en) * 2015-08-28 2016-01-06 华中师范大学 A kind of system and method for training common notice
CN107679497A (en) * 2017-10-11 2018-02-09 齐鲁工业大学 Video face textures effect processing method and generation system
CN107818305A (en) * 2017-10-31 2018-03-20 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium


Also Published As

Publication number Publication date
CN109740431A (en) 2019-05-10

Similar Documents

Publication Publication Date Title
US9367756B2 (en) Selection of representative images
CN106507021A (en) Method for processing video frequency and terminal device
CN107590474B (en) Unlocking control method and related product
CN105279778A (en) Method and terminal for picture color filling
KR102325829B1 (en) Recommendation method for face-wearing products and device therefor
CN109740431B (en) Eyebrow processing method of head portrait picture of self-shot video and related product
CN113079329A (en) Matting method, related device and matting system
CN105808190A (en) Display screen display method and terminal equipment
CN109658328A (en) From animal head ear processing method and the Related product of shooting the video
US20180336243A1 (en) Image Search Method, Apparatus and Storage Medium
CN109712103B (en) Eye processing method for self-shot video Thor picture and related product
CN108010038B (en) Live-broadcast dress decorating method and device based on self-adaptive threshold segmentation
CN109658327B (en) Self-photographing video hair style generation method and related product
CN114979487B (en) Image processing method and device, electronic equipment and storage medium
CN109697746A (en) Self-timer video cartoon head portrait stacking method and Related product
CN109671138B (en) Double overlapping method for head portrait background of self-photographing video and related product
CN108475341B (en) Three-dimensional image recognition method and terminal
CN111260537A (en) Image privacy protection method and device, storage medium and camera equipment
CN109640170B (en) Speed processing method of self-shooting video, terminal and storage medium
CN109639962B (en) Self-timer short video mode selection method and related product
CN115222621A (en) Image correction method, electronic device, storage medium, and computer program product
CN113947549B (en) Self-shooting video decoration prop edge processing method and related product
CN114028812A (en) Information prompting method and device, terminal equipment and readable storage medium
CN109671014A (en) From the plait stacking method and Related product to shoot the video
CN109547850B (en) Video shooting error correction method and related product

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant